Category: News
can you help me
Please help me to solve y'''' + y''' - y'' - y' = 2x + 2sin(x) and then to compare in the Simulink environment. simulink, matlab, differential equations MATLAB Answers — New Questions
Completing DFSR SYSVOL migration of domains that use Entra ID passwordless SSO
Heya folks, Ned here again. A customer recently reached out to me in the comments section of the well-worn Streamlined Migration of FRS to DFSR SYSVOL article, asking about a problem he was seeing with a single DC that wouldn’t complete the process. Today I’ll explain how to fix the issue introduced by a very modern authentication add-on.
Background
Decades after Windows 2000 first shipped and introduced the world to Domain Controllers, the File Replication Service, and SYSVOL, Azure released the Entra ID Passwordless security key sign-in to on-premises resources. With it, Entra ID can issue Kerberos ticket-granting tickets for your Active Directory domain. Users can sign in to Windows with modern credentials, such as FIDO2 security keys, and then access traditional Active Directory-based resources. Kerberos Service Tickets and authorization continue to be controlled by your on-premises DCs.
Under the covers, this works by an admin provisioning an Entra DC. A pseudo-Domain Controller, as it were, named “AzureADKerberos”. It’s not a real physical or virtual DC, but simply a computer object in AD pretending to be a DC so that Entra ID works in this scenario.
The Issue
So, with that in mind, you’re following the DFSR migration steps and notice that one domain controller named “AzureADKerberos” is not migrating, but instead always stays in the ‘Start’ state:
dfsrmig.exe /getmigrationstate
The following domain controllers have not reached Global state (‘Eliminated’):
Domain Controller (Local Migration State) – DC Type
===================================================
AzureADKerberos (‘Start’) – Writable DC
Migration has not yet reached a consistent state on all domain controllers.
State information might be stale due to Active Directory Domain Services latency.
Since this isn’t a real domain controller, it’s not participating in FRS or DFSR SYSVOL replication. It doesn’t even have the AD leaf object and links to do so! But DFSRMIG doesn’t know this, it just sees a DC and therefore thinks it must be migrated.
Fixing the issue
OK, so how do we fix this pseudo-domain controller blocking the migration? It’s pretty straightforward once you understand how migration state works under the covers. For that, take a look at Verifying the State of SYSVOL Migration and the equally well-worn AskDS blog post DFSR SYSVOL Migration FAQ: Useful trivia that may save your follicles.
Anyway, let’s do this:
1. Log on to one of your DCs as a domain admin.
2. Run ADSIEDIT.MSC, right click ADSI Edit and connect to the ‘Default Naming Context’.
3. Navigate to the Domain Controllers OU.
4. Right-click the “AzureADKerberos” computer object and click New > Object.
5. In ‘Select a class’, choose msDFSR-LocalSettings and click Next.
6. In ‘Value’, type DFSR-LocalSettings and click Next.
7. Click Finish.
8. Right-click the new ‘DFSR-LocalSettings’ leaf object and click Properties.
9. Scroll to ‘msDFSR-Flags’ and set it to a value of 48.
10. Click OK and OK, then close ADSIEDIT.MSC.
11. Allow AD replication to complete.
12. Continue your migration until it completes and verify that all DCs are now in the Global (‘Eliminated’) state by running:
dfsrmig.exe /getmigrationstate
All domain controllers have migrated successfully to the Global state (‘Eliminated’).
Migration has reached a consistent state on all domain controllers.
Succeeded.
Last thoughts
Technical debt is a real pita, but you already knew that! Just be glad that you’re finally getting that old FRS system out and moving to DFSR for your SYSVOL.
That streamlined migration article has 702,000 views even after being migrated from the old TechNet blog platform. But the old dog still learns new tricks :).
Until next time,
Ned Pyle
SuperRAG – How to achieve higher accuracy with Retrieval Augmented Generation
One of the most common use cases for generative AI is Retrieval Augmented Generation (RAG). RAG enables you to inform the LLM about your business data without the need to retrain it. It happens in 3 basic steps:
Retrieving relevant documents based on a query or chat message from your user. This is usually done over a search-enabled vector store such as Azure AI Search by creating embeddings of the query and performing a vector or hybrid search.
Augmenting the LLM prompt with the retrieved documents to provide the required context and grounding data.
Generating a response to the user’s question from the LLM based on the augmented prompt.
Research shows that if the answer to the user’s question is not in the first 5 documents in the prompt, the likelihood of generating a correct answer drops significantly. For this reason, most RAG applications only return the top 5 search results and use them to augment the prompt. This works well in most use cases but is entirely reliant on the retrieval step to return the correct document. What if the answer to the user’s question is not in the first 5 documents? How can you increase the number of retrieved documents without diluting the ability of the LLM to answer the question?
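To make these steps concrete, here is a minimal sketch of the traditional top-5 flow in Python, assuming an Azure AI Search index and an Azure OpenAI deployment; the index name, content field, deployment name, and environment variables are illustrative placeholders rather than anything defined in this article.

import os
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

search_client = SearchClient(
    endpoint=os.environ["SEARCH_ENDPOINT"],
    index_name="docs-index",  # hypothetical index name
    credential=AzureKeyCredential(os.environ["SEARCH_KEY"]),
)
openai_client = AzureOpenAI(
    azure_endpoint=os.environ["AOAI_ENDPOINT"],
    api_key=os.environ["AOAI_KEY"],
    api_version="2024-02-01",
)

def answer_with_rag(question: str) -> str:
    # 1. Retrieve: keep only the top 5 search results.
    results = search_client.search(search_text=question, top=5)
    context = "\n\n".join(doc["content"] for doc in results)  # "content" field name is an assumption
    # 2. Augment: put the retrieved text into the prompt as grounding data.
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
    # 3. Generate: ask the chat model for the final answer.
    response = openai_client.chat.completions.create(
        model="gpt-35-turbo",  # deployment name is an assumption
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content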
Introducing SuperRAG – More powerful than a vector store!
SuperRAG involves retrieving 50 (or some other large number of) documents in the retrieval step and then iterating through them to see if they answer the user’s question. Each document is then scored on this relevance and the relevant parts are extracted. The extracts and scores are then sorted, and the top five are used to augment the prompt in the traditional RAG manner.
The benefit of this approach is that it can dramatically increase the amount of information retrieved and increase the chances of finding the correct answer. A vector search, which is commonly used in RAG applications, excels at making semantic connections like synonym recognition and misspellings, but doesn’t really understand intent the way a human or LLM does. So, by retrieving many more documents and letting an LLM like GPT-3.5 decide if the document answers the question, we can achieve higher accuracy with our generated answers.
One drawback to this approach is that it can be slower and more expensive than traditional RAG. Because we must send each document to the LLM, we incur a latency penalty and increased token cost; however, the latency can be mitigated to some degree by evaluating the documents in parallel. Provisioned Throughput Units (PTUs) can also help lower the latency and, if fully used around the clock, lower the token costs.
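To illustrate the evaluation step and the parallelism mentioned above, the sketch below scores each retrieved document concurrently, reusing the openai_client from the earlier sketch. The prompt wording, deployment name, and score_document helper are assumptions for illustration, not the article’s exact implementation.

import json
from concurrent.futures import ThreadPoolExecutor

def score_document(question: str, doc_id: int, text: str) -> dict:
    # Ask the model to judge one document and return a JSON relevance score.
    prompt = (
        f"Question: {question}\n"
        f"Candidate text {doc_id}: {text}\n"
        'Respond in JSON with "id", "confidence" (0 to 1), and "relevant_text".'
    )
    response = openai_client.chat.completions.create(
        model="gpt-35-turbo",  # deployment name is an assumption
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

def score_all(question: str, documents: list[str], max_workers: int = 10) -> list[dict]:
    # Evaluate every retrieved document concurrently instead of one at a time.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [
            pool.submit(score_document, question, i, text)
            for i, text in enumerate(documents, start=1)
        ]
        return [f.result() for f in futures]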
Let’s see it in action
In this example we will try to answer this question:
‘Does the applicant have any significant illnesses in his medical history?’
With these two sample documents:
‘Please use application form 354-01 to enter applicants’ medical history, significant illnesses and other symptoms.’
‘Mr. John Doe, a 35-year-old non-smoker, is applying for a life insurance policy. He works as an accountant and leads a low-risk lifestyle. He exercises regularly and maintains a healthy diet. His medical history reveals no significant illnesses, and his family history is also clear of any hereditary diseases. He is interested in a policy with a coverage amount of $500,000’
If we do a cosine similarity comparison of the vector representations for this text (like we would for a traditional vector search), we would get the following results:
# Determine the Cosine Similarity of the query and answers (to understand semantics vs intent)
question_emb = generate_embedding('Does the applicant have any significant illnesses in his medical history?')
answer_1_emb = generate_embedding('Please use application form 354-01 to enter applicants medical history, significant illnesses and other symptoms.')
answer_2_emb = generate_embedding('Mr. John Doe, a 35-year-old non-smoker, is applying for a life insurance policy. He works as an accountant and leads a low-risk lifestyle. He exercises regularly and maintains a healthy diet. His medical history reveals no significant illnesses, and his family history is also clear of any hereditary diseases. He is interested in a policy with a coverage amount of $500,000')
print("Cosine Similarity of Question to Answer 1:", 1 - cosine(question_emb, answer_1_emb))
print("Cosine Similarity of Question to Answer 2:", 1 - cosine(question_emb, answer_2_emb))
Cosine Similarity of Question to Answer 1: 0.5595185612023936
Cosine Similarity of Question to Answer 2: 0.39874486454438407
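For completeness, the snippet above relies on a generate_embedding helper and a cosine distance function that are not shown; one plausible way to define them is sketched below, reusing the openai_client from the earlier sketch. The embedding deployment name is an assumption.

from scipy.spatial.distance import cosine  # cosine distance, so similarity = 1 - cosine(a, b)

def generate_embedding(text: str) -> list[float]:
    # Return the embedding vector for a piece of text from Azure OpenAI.
    result = openai_client.embeddings.create(
        model="text-embedding-ada-002",  # deployment name is an assumption
        input=text,
    )
    return result.data[0].embedding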
So, a traditional vector search would rank document 1 higher, indicating it is more relevant to the question. This is obviously incorrect. Document 1 does not answer the intent of the question, but document 2 does.
Instead of just using cosine similarity, let’s now use our LLM to evaluate the documents as well. Here is the prompt we’ll use:
I am going to supply you with a set of potential answers and your goal is to determine which of them is best able to answer the question: Does the applicant have any significant illnesses in his medical history? Please respond in JSON format with a "confidence" score for each example indicating your confidence that the text answers the question, as well as the "id" of the text.
Please also include a field called "relevant_text" which includes the text that is relevant to being able to answer the question.
Each example will include an answer id as well as the text for the potential answer, separated by a colon.
1: Please use application form 354-01 to enter applicants medical history, significant illnesses and other symptoms.
2: Mr. John Doe, a 35-year-old non-smoker, is applying for a life insurance policy. He works as an accountant and leads a low-risk lifestyle. He exercises regularly and maintains a healthy diet. His medical history reveals no significant illnesses, and his family history is also clear of any hereditary diseases. He is interested in a policy with a coverage amount of $500,000
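As a sketch of how this prompt might be sent to the model in a single batched call, the code below assembles the prompt from the two candidate texts and parses the JSON reply; the variables holding the candidate documents, the deployment name, and the parsing are illustrative assumptions.

import json

question = "Does the applicant have any significant illnesses in his medical history?"
candidates = {1: answer_1_text, 2: answer_2_text}  # hypothetical variables holding the two documents

prompt = (
    "I am going to supply you with a set of potential answers and your goal is to determine "
    f"which of them is best able to answer the question: {question}\n"
    'Please respond in JSON format with a "confidence" score and "id" for each example, plus a '
    '"relevant_text" field with the text that is relevant to answering the question.\n'
    "Each example will include an answer id as well as the text for the potential answer, separated by a colon.\n"
    + "\n".join(f"{i}: {text}" for i, text in candidates.items())
)

reply = openai_client.chat.completions.create(
    model="gpt-35-turbo",  # deployment name is an assumption
    messages=[{"role": "user", "content": prompt}],
)
scores = json.loads(reply.choices[0].message.content)["answers"]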
By using this prompt and GPT-3.5 to evaluate the documents, we can see that document 2 is much more relevant to answering the user’s question:
{
"answers": [
{
"id": 1,
"confidence": 0.1,
"relevant_text": "Please use application form 354-01 to enter applicants medical history, significant illnesses and other symptoms."
},
{
"id": 2,
"confidence": 0.9,
"relevant_text": "His medical history reveals no significant illnesses, and his family history is also clear of any hereditary diseases."
}
]
}
Now, if we were to scale this from 2 documents to 50, 100, or 1,000 documents depending on our business needs, we could dramatically improve the accuracy of our RAG application. Since each document is given a confidence score, we can easily re-sort the results and pass on the most relevant documents to our LLM to generate the answer.
The big benefit of using SuperRAG is that not only can you drastically increase the amount of data you retrieve, but you can also extract the parts of each document that are relevant to answering the question. This makes your final prompt much more focused, giving your generated answer much higher precision.
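Putting those two points together, a minimal sketch of the re-sorting and prompt-assembly step might look like the following, assuming scores is the list of id/confidence/relevant_text results produced by the evaluation step and reusing the openai_client from the earlier sketches.

def build_final_prompt(question: str, scores: list[dict], keep: int = 5) -> str:
    # Sort the scored documents, keep the top extracts, and build a focused prompt.
    top = sorted(scores, key=lambda s: s["confidence"], reverse=True)[:keep]
    context = "\n\n".join(s["relevant_text"] for s in top)
    return (
        "Answer the question using only the extracts below.\n\n"
        f"Extracts:\n{context}\n\n"
        f"Question: {question}"
    )

final_prompt = build_final_prompt(question, scores)
answer = openai_client.chat.completions.create(
    model="gpt-35-turbo",  # deployment name is an assumption
    messages=[{"role": "user", "content": final_prompt}],
).choices[0].message.content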
If you’d like to learn more about SuperRAG or see a complete example, check out this GitHub repo.
Partner Blog | Microsoft partners accelerate industrial transformation with AI at Hannover Messe 2024
Hannover Messe 2024 is a global industrial trade fair that welcomes more than 130,000 attendees and brings together companies from mechanical engineering, electrical engineering, digital industries, and the energy sector to showcase solutions for a sustainable industry. This year’s event took place in April and focused on how to form a connected industrial ecosystem through the theme Energizing a Sustainable Industry.
Together with our partners, Microsoft is helping customers revolutionize manufacturing and build a more sustainable future. As one of more than 4,000 participating companies at HM2024, we were proud to highlight many of our partners’ latest innovations.
Microsoft partners have successfully leveraged artificial intelligence (AI) capabilities in Microsoft Cloud for Manufacturing across four key areas:
Continue reading here
Azure Lab Services – Lab Plan Outage
Azure Lab Services is experiencing an outage that is affecting Lab Plans, but not Lab Accounts. This outage intermittently impacts all operations in the following regions:
Australia East
East US
North Europe
South Central US
Southeast Asia
UAE North
UK South
West Europe
Impacted customers are encouraged to use unaffected regions as a workaround. We apologize for any inconvenience this may cause.
Update 5/13: A potential hotfix is being tested. We also have temporarily disabled lab schedules which means that VMs will not automatically start/stop based on schedules. Please refer back to this blog post for updates.
Dashboard blocks for External mode
Hello, I’m able to use switches (from Dashboard blocks) to change parameters in a model actively running on a target card, using External mode. I would also like to have simple visual elements in my model (as viewed from the host computer screen) for measurement. I am currently using a Display block to show the output, but using Gauges or other more visible elements would be great.
For the moment I am using Simulink with Arduino and TI C2000.
Any suggestions would be appreciated! dashboard blocks, external mode, arduino, c2000 MATLAB Answers — New Questions
How can I simulate it in MATLAB?
I want to simulate the joint spectral amplitude, which is based on the pump envelope and the phase-matching function. The lambdaP value is 1550 nm. I wrote the code below, but my alpha value is 0 and I don’t know why!
I would like a picture like the one I attached.
clear;
close all;
%……………….parameters……………
lambdaP = 1.55e-6; % pump wavelength
nelithiumP = 2.142; % refractive index lithium niobate for pump lambda
lambdaS = 0.775e-6 ; % signal wavelength
nelithiumS = 2.186; % refractive index lithium niobate for signal lambda
lambdaI = 0.775e-6 ; % idler wavelength
nelithiumI = 2.186; % refractive index lithium niobate for idler lambda
kP = 2*pi*(nelithiumP/lambdaP);
kS = 2*pi*(nelithiumS/lambdaS);
kI = 2*pi*(nelithiumI/lambdaI);
capital_lambda = 17.6e-6;
Deltak =(kP-kS-kI-(2*pi/capital_lambda));
%Deltak = linspace(a-1e7,a+1e7,100); % phase matching condition
L = 2e-2;
sigmaP=10e-9;
%deltat=50e-12;
sigmaP = deltat/(2*sqrt(log(2))); % pulse width
c = 3e8; % speed of light in vacuum
fS = c/lambdaS; fP = c/lambdaP; fI = c/lambdaI;
wS = 2*pi*fS; wP = 2*pi*fP; wI = 2*pi*fI; % angular frequency
%alpha=exp(-(2*(pi^2)*(sigmaP^2))*((wP-(wS+wI))^2));
%b=wS+wI-wP;
alphaa = exp(-((wS+wI-wP)/sigmaP)); % pump envelope function
phi = (sinc(Deltak.*L./2)).*exp(1i.*Deltak.*L./2); % phase matching function
f = alphaa*phi; % joint spectral amplitude
joint_spectral_intensity = abs((f).^2); % joint spectral intensity
jsa, jsi, phase matching, pump envelope, waveguide, quantum, spdc, optic, quantumoptic, nonlinearoptic MATLAB Answers — New Questions
Is “simset” not available in MATLAB R2020b anymore ?
I am trying to use "simset" inside a function and it is not assigning the "hidden workspace" to the workspace in my model; it keeps trying to grab the base workspace variables, and therefore a lot of my variables in the model are not being found. simset, simulink, matlab MATLAB Answers — New Questions
FYI – Current Feature Work: Monaco Editor + Modern Chart Improvements
Hi,
Last week I published 2 videos in which the Microsoft Access team shows their current work on two new features for Access that will be launched in a few months: Monaco SQL Editor + Modern Charts Improvements.
Servus
Karl
****************
Access Forever
Access News
Access DevCon
Access-Entwickler-Konferenz AEK
New Blog | Microsoft Entra delivers increased transparency
Seventy-five percent of cybersecurity professionals say the current threat landscape is the most challenging it has been in the last five years, according to the 2023 ISC2 Cybersecurity Workforce Study. You’re probably on the hook to secure access for your organization – preventing identity attacks and securing least privilege access. And we know it’s intense.
One of the ways we strive to assist you is by providing accurate, reliable, and timely information to monitor and optimize the strength of your identity and network access security posture. This transparency gives you visibility that’s necessary to assess performance, tenant health, and your plan for improvements.
In 2024, we’ve released a series of innovations reinforcing our commitment to transparency. This blog recaps these improvements for you in three parts:
Transparency in updates: Helping you know what’s new and coming soon for Microsoft Entra;
Transparency in adoption: Providing recommendations and license utilization insights; and
Transparency in operations: Tailored insights on SLA performance, scenario health, and sign-ins.
Plus, all features highlighted in this blog are demonstrated in our video, Trust via Transparency.
I hope these added capabilities help maximize the value you receive from Microsoft Entra as you consider, deploy, and measure the progress of your Zero Trust approach.
Read the full post here: Microsoft Entra delivers increased transparency
By Shobhit Sahay
Working in Windows 11
Want to get quick tips to help you optimize productivity, collaboration, and security? Find them all in one place on the Windows Community YouTube channel. With these 1-3 minute videos, learn how to get savvy with Windows 11 features: Widgets, live captions, and Quick Assist, to name a few that have been covered already.
These are helpful. When can I expect new videos?
We try to drop a new tip the fourth week of every month! If there’s a Windows 11 feature that you’d like to know more about, please do leave a comment here or on the video pages on our YouTube channel!
Catch up on the latest tips
Here’s a list of tips from the last six months:
Tabs, automatic file saves, dark mode and more – see the new look and learn how to Get more out of Notepad.
Want to better understand Copilot in Windows on your device and see how it works? Watch How to get started with Copilot in Windows.
If your eyes are hurting from screen brightness or text is difficult to see, learn how to Navigate the color and contrast settings.
Just getting started with Windows 11? Get a quick overview on Day 1 with Windows 11.
Want to stay organized while multi-tasking? See how to Use and create multiple desktops with ease.
Did your last restart make your device feel slower? Learn how to get it back up to speed and Quickly navigate apps, files, and settings.
Migrate Android device administrator Tap Scheduler (GMS enabled) to Android Enterprise
Hi All,
I am looking for some information on migrating an Android device administrator Tap Scheduler to Android Enterprise. We had to enable the GMS option at the time of enrollment, as it didn’t let us enroll unless the option was enabled.
As per the article below, will support for Teams panels like Tap Schedulers end in September 2025? Or, as GMS is enabled, will it end in August 2024? https://techcommunity.microsoft.com/t5/intune-customer-success/microsoft-intune-ending-support-for-android-device-administrator/ba-p/3915443
I tried the traditional way of creating the compliance policy and setting the device platform restrictions to block device administrator. However, there was no option to resolve the compliance like we get on a regular Android device. Any idea if we can still migrate the device to Android Enterprise this way? Or is AOSP the way forward?
New Blog | Get visibility into your curated external assets with enhanced generative AI capabilities
By Sushma Raja
Finding, tracking, and managing all the assets found within an organization’s vast – and often unknown – digital attack surface can be a daunting task. A lack of knowing and monitoring all your assets, including shadow IT, leads to security gaps that can be exploited by attackers.
Understanding and documenting your entire attack surface with relevant asset tracking is critical to securing your environment. This highlights the importance of adding an external attack surface management (EASM) tool to your security stack.
EASM solutions are designed to provide a view of your digital attack surface from the outside in, enabling organizations to see exactly what attackers browsing the internet see when they come across an asset owned by your organization. Microsoft Defender EASM discovers and maps both known and unknown assets from an external perspective just as an attacker would see as they look to find a way to compromise an organization.
Enhanced Defender EASM functionality in Microsoft Copilot for Security
In November 2023, we announced new Defender EASM capabilities in Microsoft Copilot for Security that help security teams understand their attack surface and the pervasive CVEs within it, and get assistance with remediation prioritization with the help of generative AI. The attack surface snapshot that Copilot users receive when using the prompts is, by default, generated from a library of pre-built attack surfaces that Microsoft has discovered for thousands of organizations. From our daily scans of the internet, Defender EASM discovers and searches for an organization’s attack surface based on publicly available information.
The results of prompts pulled from an organization’s pre-built attack surface are intended to give customers high-level visibility into their external assets and associated vulnerabilities. So far, they have been used by Early Access customers to achieve this visibility. One customer reported that they were able to identify unknown assets and remediate major vulnerabilities based on information gathered from EASM.
Now, we are thrilled to share enhanced functionality with these capabilities, which allows customers to directly connect their seeded and curated Defender EASM resource to Copilot for Security. With the curated Defender EASM integration, Copilot users can leverage generative AI to get comprehensive, up-to-date information about their external attack surface, analyzing assets that go above and beyond their pre-built attack surface.
Setting up is simple. In the configuration menu of Copilot for Security, turn on the Defender External Attack Surface Management skills and then click on the Settings icon to enter your resource information. Once this information is entered, your future prompts in Copilot will utilize information from your configured EASM resource.
Read the full post here: Get visibility into your curated external assets with enhanced generative AI capabilities
Improvements to tackle spam in Outlook
Happy Monday Microsoft 365 Insiders!
We’re excited to introduce several improvements to help you tackle spam in Outlook, ensuring a safer and more secure email experience. These enhancements come as a result of our continuous efforts to prioritize user safety and enhance productivity.
Learn more about how these improvements can benefit you and your organization in our latest blog by Nina Arjarasumpun, Principal Product Manager on the Outlook team: Improvements to tackle spam in Outlook
Thanks!
Perry Sjogren
Microsoft 365 Insider Social Media Manager
Become a Microsoft 365 Insider and gain exclusive access to new features and help shape the future of Microsoft 365. Join Now: Windows | Mac | iOS | Android
Column Validation Single Line Text with blank or multiple email addresses separated by semicolon
I have a SharePoint Online list with a single line text column named Cc… Email (I didn’t create that one) that I have been trying to write validation for so that it only allows valid email addresses. The address field can be empty, a single entry, or multiple entries separated by a semicolon. I’ve tried everything including ChatGPT and Copilot to find a formula that will work correctly. Nothing seems to work and it seems like this should be fairly simple. What am I missing?
=OR(
ISBLANK([Cc… Email]),
AND(
NOT(ISERROR(FIND(" ", [Cc… Email]))),
ISERROR(FIND(";;", [Cc… Email])),
ISERROR(FIND(";", [Cc… Email], LEN([Cc… Email])-1)),
ISERROR(FIND("@", [Cc… Email])),
ISERROR(FIND("@", [Cc… Email], FIND("@", [Cc… Email])+1)),
ISERROR(FIND(".", MID([Cc… Email], FIND("@", [Cc… Email])+1, LEN([Cc… Email])))) = FALSE,
ISERROR(FIND(" ", MID([Cc… Email], FIND("@", [Cc… Email])+1, LEN([Cc… Email]))))
)
)
Intune Graph API intermittent 500 Errors
Hi Everyone, CTO at Drata here.
We have a Microsoft Intune integration to help our mutual customers automate the evidence collection around their devices for their security compliance programs.
Everything has been working fine until around April 8th 2024. My team has been researching the issue, troubleshooting with customers, reviewing our own code, opening tickets with Microsoft support, etc, etc.
Nothing has worked until this point, so I’m getting involved with the team, and we need your help!!!
Example:
https://graph.microsoft.com/v1.0/deviceManagement/deviceConfigurations/{GUID_HERE}/deviceStatuses?$filter=id+eq+'{GUID_HERE}'
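For reference, a minimal Python sketch of making this call with a short exponential-backoff retry around the intermittent 500 responses is shown below; the access token acquisition and GUID values are placeholders and are not part of the original report.

import time
import requests

def get_device_statuses(token: str, config_id: str, status_id: str, retries: int = 4) -> dict:
    url = (
        "https://graph.microsoft.com/v1.0/deviceManagement/deviceConfigurations/"
        f"{config_id}/deviceStatuses"
    )
    params = {"$filter": f"id eq '{status_id}'"}
    headers = {"Authorization": f"Bearer {token}"}
    for attempt in range(retries):
        response = requests.get(url, headers=headers, params=params, timeout=30)
        if response.status_code < 500:
            response.raise_for_status()  # surface 4xx errors immediately
            return response.json()
        # Back off and retry on intermittent 5xx responses.
        time.sleep(2 ** attempt)
    response.raise_for_status()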
We can even replicate the issue in the Graph Explorer. In either v1.0 or Beta option, we can get 500 errors.
I can’t find anything on a status page around this issue. Is this a known issue by the Intune Graph API team?
Myself or someone on my team would LOVE to work with a T3 Support Engineer, PM, or TPM from the team to dive in and solve this together for our mutual customers.
Daniel Marashlian
Cannot run enable-remotemailbox for existing user in AD within a hybrid environment
I have an on prem Exchange 2019 with a hybrid setup. When I create a new user in AD, I can run enable-remotemailbox [identity] -primarysmtpaddress [email address removed for privacy reasons] -remoteroutingaddress [email address removed for privacy reasons]
However, I have some users that are just in AD with mail enabled that I would like to migrate to Exchange Online via hybrid. When I run the above command I get:
This task does not support recipients of this type. The specified recipient [me] is
of type UserMailbox. Please make sure that this recipient matches the required recipient type for this task.
In the exchange properties, it is set mailbox type ‘user’, and under email address I do not see the box/option ‘set as remote routing address:’.
What am I doing wrong? I can currently access the account’s email through webmail, so I know there is email working.
Lesson Learned #485: Index Recomendation or the Importance of Index Selection in SQL Server
Today, I worked on a service request that caught my attention regarding missing index recommendations given by SQL Server. I want to share some findings and lessons learned about creating and optimizing indexes.
We have the following script:
CREATE Table Review
( ID INT Primary Key Identity(1,1),
Age INT,
TypeMember CHAR(4))
INSERT INTO Review (Age, TypeMember) values(1,'TYP1')
INSERT INTO Review (Age, TypeMember) values(2,'TYP2')
INSERT INTO Review (Age, TypeMember) values(1,'TYP0')
INSERT INTO Review (Age, TypeMember) select Age, TypeMember from Review
After running the last INSERT multiple times, we ended up with 12 million rows in the Review table.
When we executed the following query with some conditions:
select top 100 * from Review
WHERE TypeMember='TYP0' and Age=1
I assumed that the recommended index would be:
CREATE NONCLUSTERED INDEX [TypeMember_Age]
ON [dbo].[Review] ([TypeMember],[Age])
However, the recommendation instead placed the Age column first and the TypeMember column second:
CREATE NONCLUSTERED INDEX [Age_TypeMember]
ON [dbo].[Review] ([Age],[TypeMember])
This led me to investigate more about the effectiveness of composite indexes.
When defining indexes, especially composite ones, it is important to consider several factors:
Unique Values in Columns (Cardinality):
Cardinality refers to the number of unique values in a column. Columns with high cardinality (many unique values) are usually more effective for indexing.
Column Size/Width:
Columns with smaller size or width are more efficient to index. This is because they occupy less space and search operations can be faster.
Column Data Type:
Data types can influence performance. For example, searching in an INT column generally has a lower cost than searching in a CHAR column.
This case underscores the importance of considering these factors when defining our indexes. Our initial intuition about the index structure might not always be correct; instead, we should rely on analysis and statistics provided by SQL Server.
In the end, choosing the right index can have a significant impact on the performance of our queries. Pay attention to SQL Server’s recommendations and adjust your indexes based on tests and observations.
I hope you find this information useful!
Enjoy!
Intelligent app on Azure Container Apps Landing Zone Accelerator
AI apps are on the rise, with LLM capabilities made easier to integrate into apps through Azure OpenAI. Azure Container Apps helps developers focus on building AI apps faster in a serverless container environment, without worrying about container orchestration, server configuration, and deployment details.
To fast-track your journey to production with AI applications, it’s crucial to implement your solutions adhering to the most effective practices in security, monitoring, networking, and operational excellence.
This blog post will show you how to leverage the Azure Container Apps (ACA) Landing Zone Accelerator (LZA) to deploy AI apps in a production-grade secure baseline.
App Overview
To demonstrate the deployment, the Java Azure AI reference template is used. It provides a complete end-to-end solution demonstrating the Retrieval-Augmented Generation (RAG) pattern running in Azure, using Azure AI Search for retrieval and Azure OpenAI large language models to power ChatGPT-style and Q&A experiences.
The business scenario showcased in the sample is a B2E intelligent chat app to help employees answer questions about the company benefits plan, internal policies, as well as job descriptions and roles. The repo includes sample PDF documents in the data folder so it’s ready to try end to end. Furthermore, it provides:
Chat and Q&A interfaces
Various options to help users evaluate the trustworthiness of responses with citations, tracking of source content, etc.
Possible approaches for data preparation, prompt construction, and orchestration of interaction between model (ChatGPT) and retriever (Azure AI Search)
Possible AI orchestration implementation using the plain Java Open AI sdk or the Java Semantic Kernel sdk
Settings directly in the UX to tweak the behavior and experiment with options
App Architecture
The API app is implemented as a Spring Boot 2.7.x app using the Microsoft JDK. It provides ask and chat APIs which are used by the chat web app. It’s responsible for implementing the RAG pattern, orchestrating the interaction between the LLM model (OpenAI – ChatGPT) and the retriever (Azure AI Search).
The Chat Web App is built in React and deployed as a static web app on nginx. Furthermore, nginx acts as a reverse proxy for API calls to the API app. This also solves the CORS issue.
The indexer app is implemented as a Spring Boot 2.7.x app using the Microsoft JDK. It is responsible for indexing the data into Azure AI Search and is triggered by new BlobUploaded messages from Service Bus. The indexer is also responsible for chunking the documents into smaller pieces, embedding them, and storing them in the index. Azure Document Intelligence is used to extract text from PDF documents (including tables and images).
Azure AI Search is used as RAG retrieval system. Different search options are available: you have traditional full text (with semantic search) search, or vector-based search and finally you can opt for hybrid search which brings together the best of the previous ones.
Event Grid System topic is used to implement a real time mechanism to trigger the indexer app when a new document is uploaded to the blob storage. It’s responsible for reading BlobUploaded notification from azure storage container and push a message to the service bus queue containing the blob url.
Deployment Architecture
The Java Azure AI reference template is deployed on top of the Azure Container Apps Landing Zone Accelerator ‘internal scenario’ infrastructure. Furthermore, the Azure services required to implement the E2E chat with your data solution, are deployed following the Landing Zone Accelerator (LZA) security, monitoring, networking and operational best practices.
From Networking standpoint, Landing zone uses a hub and spoke model with container apps connected to supporting services securely via private end points.
All the traffic is secured within the LZA hub and spoke networks, and public access is disabled for all Azure services involved in the solution.
All the resources have diagnostic monitoring configured to send logs and metrics to the Log Analytics workspace deployed in the spoke vnet.
The solution is designed to be regionally highly available by enabling zone redundancy for the Azure services that support it. Azure OpenAI doesn’t provide a built-in mechanism to support zone redundancy. You need to deploy more Azure OpenAI instances in the same or a different region and use a load balancer to distribute the traffic.
You can implement the load balancing logic in the client app or in a dedicated container running in ACA, or you can use an Azure service like API Management, which also provides support for advanced Azure OpenAI scenarios like cost charge-back, rate limiting, and retry policies. In this sample the resiliency logic is implemented in the client app, using the default OpenAI Java SDK retry capabilities to overcome transient failures with the Azure OpenAI chat endpoint, and retry with exponential backoff to handle throttling errors raised by the embeddings endpoint during the document ingestion process.
For more detailed guidance about Azure OpenAI resiliency and performance best practices from a Well-Architected Framework perspective, see here.
Deployment
The deployment is done in two parts: 1) deploy the infrastructure and 2) deploy the application.
You can provision the infrastructure using “azd provision”, which will automatically provision the container apps in a secure baseline along with the supporting services (Azure AI Search, Azure Document Intelligence, Azure Storage, Azure Event Grid, Azure Service Bus) required by the app, following the best practices provided by the ACA LZA infrastructure.
To deploy the app, connect to the jumpbox using bastion and follow the pre-requisites before using “azd deploy” to build and deploy the app.
Run ./scripts/prepdocs.sh to ingest the predefined documents in the data folder. Allow a few minutes for the documents to be ingested into the Azure AI Search index. You can check the status of the ingestion in the Azure portal in the indexer app log stream.
From your local browser, connect to the public Azure Application Gateway using HTTPS. To retrieve the App Gateway public IP address, go to the Azure portal and search for the application gateway resource in the spoke resource group. In the overview page, copy the “Frontend public IP address” and paste it in your browser.
Special thanks to Davide Antelmo for authoring the detailed guidance to deploy the AI app to Azure Container Apps in a landing zone environment.
Resources: https://aka.ms/java-ai-aca-accelerator
Microsoft Entra delivers increased transparency
Seventy-five percent of cybersecurity professionals say the current threat landscape is the most challenging it has been in the last five years, according to the 2023 ISC2 Cybersecurity Workforce Study. You’re probably on the hook to secure access for your organization – preventing identity attacks and securing least privilege access. And we know it’s intense.
One of the ways we strive to assist you is by providing accurate, reliable, and timely information to monitor and optimize the strength of your identity and network access security posture. This transparency gives you visibility that’s necessary to assess performance, tenant health, and your plan for improvements.
In 2024, we’ve released a series of innovations reinforcing our commitment to transparency. This blog recaps these improvements for you in three parts:
Transparency in updates: Helping you know what’s new and coming soon for Microsoft Entra;
Transparency in adoption: Providing recommendations and license utilization insights; and
Transparency in operations: Tailored insights on SLA performance, scenario health, and sign-ins.
Plus, all features highlighted in this blog are demonstrated in our video, Trust via Transparency.
I hope these added capabilities help maximize the value you receive from Microsoft Entra as you consider, deploy, and measure the progress of your Zero Trust approach.
Transparency in updates
In the world of technology, change is constant. In 2023, we released over 100 Microsoft Entra updates and new capabilities and communicated this information across announcements, quarterly blogs, and multiple docs locations. Our first investment area of transparency aims to streamline this communication, helping you find and filter the product update information most relevant to you.
What’s New hub in Microsoft Entra admin center
“What’s New” in Microsoft Entra gives a clear and complete view of Entra product innovation so you can stay informed, evaluate the latest innovations, and eliminate the need to manually track updates. Product updates are categorized into Roadmap and Change Announcements. The roadmap includes public previews and recent general availability releases, while Change Announcements detail modifications to existing features.
Learn more: Introducing “What’s New” in Microsoft Entra – Microsoft Community Hub
Transparency in adoption
The second investment area, transparency in adoption, focuses on helping you get more value from your Microsoft Entra licenses, giving you visibility to intelligent recommendations for improving configurations and protecting your organization.
Microsoft Entra license utilization insights
Microsoft Entra license utilization insights help you optimize your Entra licenses, as well as stay compliant by getting insights into the current usage. Today, you can see usage and licenses for Entra ID capabilities such as Conditional Access and risk-based Conditional Access. In the future, we will expand the license utilization insights to other products in the Microsoft Entra product line.
Learn more: Introducing Microsoft Entra license utilization insights – Microsoft Community Hub
Microsoft Entra recommendations
Microsoft Entra recommendations can serve as a trusted advisor for enhancing your security posture and improving employee productivity. With Microsoft Entra recommendations, you get personalized and actionable insights based on best practices and industry standards to help you secure your organization. Plus, we’ve made updates to Identity Secure Score, which you can find on the Microsoft Entra recommendations blade.
Transparency in operations
Transparency in operations focuses on what we’re doing to help customers see how available and resilient Microsoft Entra really is, to hold us accountable when issues arise so we can keep improving, and to understand when they have actions to take within their tenant to improve its health. Let’s look at recently announced functionality in reporting, health, and monitoring:
Tenant-level SLA reporting
Monthly tenant-level SLA reporting enables you to monitor your tenant’s performance against our Entra ID SLA promise of 99.99% availability in authenticating users and issuing tokens within your tenant.
Learn more: Tenant health transparency and observability – Microsoft Community Hub
Precomputed health metric streams
These new health metrics isolate relevant signals from activity logs and provide pre-computed, low-latency aggregates every 15 minutes for specific high-value observability scenarios. The first scenarios we’ve enabled are multifactor authentication (MFA), sign-ins for managed or compliant devices, and Security Assertion Markup Language (SAML) sign-ins. We’re starting with authentication-related scenarios because they are mission-critical to all our customers, but other scenarios in areas like entitlement management, directory configuration, and app health will be added in time, along with intelligent alerting capabilities in response to anomalous patterns in the data.
Learn more: Tenant health transparency and observability – Microsoft Community Hub
Copilot-assisted assessments
As our third example of our commitment to transparency in operations, we can help you understand how users interact with your organization’s resources. Microsoft Copilot for Security is embedded in Microsoft Entra so you can more efficiently assess identities and access, plus investigate and resolve identity risks and even complete complex tasks. A great example of this assistance is asking Copilot to give you sign-in logs for a specific user for a specific amount of time, saving you the reporting time.
Learn more: Microsoft Entra adds identity skills to Copilot for Security – Microsoft Community Hub
Tell us what you think
For my team, transparency isn’t a buzzword; it’s our commitment. As we continue to enhance Microsoft Entra, earning your trust through transparency remains our guiding star.
We look forward to you trying these new capabilities and hopefully making them part of your ongoing experience to reduce complexity and effectively manage your identity and network access security solutions. I’d be happy to hear your feedback and ideas, either in the comments below or via the “Provide Feedback” link on the Microsoft Entra admin center home page.
Best regards,
Shobhit Sahay
Learn more about Microsoft Entra
Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.
Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community