Category: News
I am trying to run SPM12 with MATLAB R2023b on macOS, but I get the following error when I type spm, even though I have already installed Xcode 14 from the App Store:
>> spm
Error using spm_check_installation>check_basic
SPM uses a number of MEX files, which are compiled functions.
These need to be compiled for the various platforms on which SPM
is run. It seems that the compiled files for your computer platform
are missing or not compatible. See
https://en.wikibooks.org/wiki/SPM/Installation_on_64bit_Mac_OS_(Intel)
for information about how to compile MEX files for MACA64
in MATLAB 23.2.0.2391609 (R2023b) Update 2.
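For reference, the linked wikibook addresses this by recompiling the MEX files from the SPM source. A minimal sketch of the usual sequence, assuming SPM12 is installed at ~/spm12 and that the Xcode command line tools and a matlab executable on the PATH are available (the paths are assumptions, not from the original post):

cd ~/spm12/src
make distclean && make && make install
make external-distclean && make external && make external-install

Restarting MATLAB and running spm again should then pick up the freshly compiled MACA64 MEX files.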
spm12 MATLAB Answers — New Questions
Optimization problem – Group assignment based on preferences
Hi everyone! I’m organising an event for which participants will be taking part in two rounds of workshops. When registering, they list their top 4 preferences. I’m looking for a way to assign them as best I can to the individual workshops, keeping in mind that each workshop has a limit of 25 participants per round.
With the help of other posts on here and ChatGPT, I put together a test dataset, but Solver said there was no feasible solution.
One thing I already noticed is that I do not know how to make it so that each workshop can be assigned both in round 1 and round 2, so 50 people in total can take part in each workshop, but not at the same time.
In general, this is my first time doing any such optimization and would greatly appreciate any help.
(As a new member to this community, I’m not allowed to upload my data set I just found out…)
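For what it's worth, here is a minimal sketch of how the two-round structure can be modeled with binary variables in MATLAB's problem-based optimization. All sizes, the preference scoring, and the use of Optimization Toolbox are assumptions for illustration, not details from the post:

% x(p,w,r) = 1 if participant p attends workshop w in round r
nP = 100; nW = 8; nR = 2; cap = 25;      % illustrative sizes
pref = randi([0 4], nP, nW);             % placeholder: 4 = 1st choice ... 1 = 4th, 0 = unlisted

x = optimvar('x', nP, nW, nR, 'Type', 'integer', 'LowerBound', 0, 'UpperBound', 1);
prob = optimproblem('ObjectiveSense', 'maximize');
prob.Objective = sum(sum(sum(x .* repmat(pref, 1, 1, nR))));   % total preference score

% Each participant attends exactly one workshop in each round
prob.Constraints.onePerRound = reshape(sum(x, 2), nP, nR) == 1;
% No participant attends the same workshop in both rounds
prob.Constraints.noRepeat = reshape(sum(x, 3), nP, nW) <= 1;
% At most 25 participants per workshop per round (so up to 50 per workshop overall)
prob.Constraints.capacity = reshape(sum(x, 1), nW, nR) <= cap;

sol = solve(prob);                        % uses intlinprog under the hood
assignment = round(sol.x);                % assignment(p,w,r) == 1 -> p does w in round r

The capacity constraint is what lets each workshop run in both rounds: the 25-person limit is enforced per workshop per round, so up to 50 people in total can take a workshop without attending at the same time. Infeasibility usually means the == 1 attendance constraints collide with capacity, which is worth checking against the test data.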
All day bookings are 25 hours and prevent bookings being made next day
As the title states, and as I’ve seen in many other posts, this is still an ongoing issue. This is my second post about this issue, with no resolution so far.
The simplest way to explain is we have a desk booking system set up here.
When ‘Desk 1’ is booked all day, for example on Monday, this shows in the calendar as 00:00 Monday to 23:59 Monday. GREAT! But when you look deeper into this and turn off the ‘All Day’ slider, the finish time changes to 0:59 the next morning.
This means that when ‘Desk 1’ is being booked for Tuesday, THE BOOKING SITE CALENDAR SAYS IT ISN’T AVAILABLE.
We could work around this by setting each booking 8am-8pm, but there are more than 5 desks, so you lose sight of what has been booked until you open the ‘view by staff’ view for the day.
I can see forum posts going back around 2-3 years with this issue and our clients are tired of it not working. As are we with trying to sort this un-bookable booking system out.
Has anyone found any way around this? We’ve changed time zones, business hours, every setting under staff and services with no progress except maybe a few days of not having a full office so people just pick and choose a desk that they can actually book to save another headache. We’ve even set a new booking calendar up with many variations of setting up staff and services.
THIS HAUNTS MY DREAMS PLEASE FIX IT OR HELP ME UNDERSTAND WHY ALL DAY MEANS 24 HOURS AND 59 MINUTES.
Mail flow issue Exchange 2016
Email is backing up in the Exchange Submission Queue and the following errors are displayed in Queue Viewer:
432 4.3.2 STOREDRV.Deliver; mailbox database thread limit exceeded
Mail flow is stopped, and even after adding the values below, the issue persists.
Any suggestions?
<add key="MailboxDeliveryThrottlingEnabled" value="False" />
<add key="RecipientThreadLimit" value="2" />
<add key="MaxMailboxDeliveryPerMdbConnections" value="3" />
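For context (an assumption based on common guidance for this error, not stated in the post): keys like these normally go inside the <appSettings> element of MSExchangeDelivery.exe.config in the Exchange Bin folder, with the Microsoft Exchange Mailbox Transport Delivery service restarted afterwards. A sketch of the placement:

<!-- %ExchangeInstallPath%Bin\MSExchangeDelivery.exe.config (assumed location) -->
<configuration>
  <appSettings>
    <add key="MailboxDeliveryThrottlingEnabled" value="False" />
    <add key="RecipientThreadLimit" value="2" />
    <add key="MaxMailboxDeliveryPerMdbConnections" value="3" />
  </appSettings>
</configuration>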
Project End Date Doesn’t Change with Progress Update
I’m having an issue where Tasks 7 and 8 had an FS relationship at the creation of the schedule, but Task 8 has actually started ahead of time and has progress, as does Task 7. My project should now finish early, but Task 8’s progress only shows starting at the completion of Task 7, so the overall schedule completion date does not change. Task 8 shows as a split task on the Gantt Chart. Any help would be appreciated.
New: AI in Microsoft Teams and Teams Phone blog post
Check out the new Supercharge Your Business: Simplify communications with AI in Microsoft Teams and Teams Phone blog! This article highlights how AI-powered features in Microsoft Teams and Teams Phone can enhance business communication and collaboration. This new AI improves audio and video quality, helps with meeting transcriptions, and offers tools like Microsoft Copilot for real-time insights and meeting recaps. These features streamline communication, support hybrid meetings, and ensure clarity in calls, making Teams a versatile platform for businesses of all sizes.
We are always looking to connect with small or medium businesses who are using Copilot, so comment to let us know some ways you are integrating Copilot into your company.
Grow your Business with Copilot for Microsoft 365 – August 2024
Welcome back to Grow Your Business with Copilot for Microsoft 365, a monthly series designed to empower small and midsized businesses to harness the power of AI at work.
My team works with a wide range of small and midsized businesses. And while each is unique in their own way, we’ve found that regardless of size, industry, or market, they basically want the same thing: to grow. To attract more customers. To boost revenue. To scale efficiently.
Make sure to also check out our weekly Copilot productivity series that just launched, as well as the new Copilot Success Kit, your one-stop shop for getting ready and implementing Copilot.
PKSHA Technology – Embracing AI
Staying on the cutting edge – PKSHA Technology is doing just that by using the power of Copilot for Microsoft 365 to grow their business and evangelize AI to their customers so they can do the same.
PKSHA Technology is a midsized company based in Tokyo, Japan. PKSHA develops algorithmic solutions and AI technologies that help companies become more efficient and improve their processes – they believe algorithms can solve some of the world’s biggest challenges. With effective roll out techniques, PKSHA leveraged Copilot to create new hire shortcuts, improve their customer management, and shorten the process from product roadmap to feature enhancements.
Onboarding Shortcuts with AI
As PKSHA experienced rapid growth and hired new employees, like most businesses they found pain points in the onboarding process. It was difficult to ensure new hires had access to, or could find, the information they needed. Onboarding new employees and getting them up to speed can also be a very demanding process for your current employees.
With the help of Copilot, PKSHA employees task Copilot with searching for the information they need. This ultimately shortens the time for new hires from their first day on the job to making a true impact! It also frees up time for those tasked with onboarding them into their role, taking advantage of the fact that much of the company’s internal intel is now at their fingertips with Copilot.
There are many ways that Copilot can help accelerate onboarding. For example, while attending a team meeting, a new hire can use Copilot to ask clarifying questions: the “personal chat” with Copilot allows you to ask questions about the meeting without interrupting its flow. As a new hire, creating documents, proposals, or papers can be hard while you are still learning the tone, voice, and preferred format of your new company. Using Copilot in Word, you can reference other documents to get to your first draft faster. Managers are also able to use Copilot to create onboarding documents and processes much faster, helping employees orient themselves in their new organization.
Customer Management
High-touch customer service can be a very time-consuming task that requires thorough preparation and detailed follow-up communications. Prior to Copilot, PKSHA Customer Success specialist Ms. Takeuchi would spend hours preparing information before calls, then transcribing notes and documenting follow-up actions afterwards. Now, she uses Copilot to quickly assemble materials in advance, organize to-dos, and share action tasks with customers immediately after meetings. With her administrative workload considerably reduced by Copilot in Teams, Ms. Takeuchi is able to dedicate more time to her customers and the activities that matter most, maximizing care, attention, and service quality.
Product Development
A streamlined customer feedback loop that feeds into an issues list and ultimately product enhancements… sounds like an operational dream. With Copilot, PKSHA is making that dream closer to a reality. The PKSHA team leverages Copilot in Teams and Excel to gather customer intel and feedback. Using Copilot in Teams, they summarize and organize the product feedback they receive, easily surfacing product needs and creating a centralized log of possible product improvements. This process creates a shared knowledge base that team members across their product groups can reference, instead of disparate information silos, resulting in greater coordination and faster delivery of product enhancements. In parallel, the customer success team also uses Copilot in Excel to identify trends in the customer data. These trends help the team create meaningful recommendations for their customers. With Copilot, the team overall saves up to 4 hours of time spent on data analysis.
Creating AI Champions
When introducing any new technology tools in the workplace, it’s crucial to have the right adoption plan in place. Often a pilot group is part of any successful roll out plan. The pilot approach is baked into PKSHA’s vision for their company. PKSHA utilizes new AI solutions internally first to better evaluate how they can solve client needs with those AI solutions. In order to both test and drive the internal adoption of AI, PKSHA created their Future Work Black Belt Team. Creating an AI leadership team is a best practice that Microsoft has witnessed across its Copilot customer base. Read more details about how to stand up your own AI Council here.
Accelerating AI innovation with Copilot
The productivity and collaboration benefits of Copilot enable the team at PKSHA to focus more on their core mission of creating better AI solutions and technologies. Just like PKSHA is all about harnessing the power of algorithms to solve some of the world’s biggest challenges, Copilot gives them the power to fuel their innovation, creativity and efficiency amidst their AI development.
We are so excited to see PKSHA and other small and medium companies harness the power of Copilot to grow! Tune in next month for another example of how Copilot helps unlock more value and opportunity. If your company has used Copilot for Microsoft 365 to grow and you’d like to share your story, we’d love to feature you! Comment below to let us know you’re interested and a member from our team will get in touch!
Want to try out some of the ways PKSHA used Copilot for Microsoft 365? Check out the following resources:
Get started with Copilot in Teams or learn how to identify insights with Copilot in Excel
Visit Scenario Library use case on using Copilot in onboarding
Check out the new SMB Success Kit and accelerate your Copilot adoption today
Read the full PKSHA story here
For adoption content visit Microsoft 365 Adoption – Get Started
For the latest SMB AI insights follow Microsoft 365 blog
Angela Byers
Microsoft
Senior Director, Copilot & Growth Marketing for SMB
Meet the team
The monthly series, Grow Your Business with Copilot for Microsoft 365, is brought to you by the SMB Copilot marketing team at Microsoft. From entrepreneurs to coffee connoisseurs, they work passionately behind the scenes, sharing the magic of Copilot products with small and medium businesses everywhere. Always ready with a smile, a helping hand, and a clever campaign, they’re passionate about helping YOUR business grow!
Microsoft Tech Community – Latest Blogs
Announcing a new way to build technical skills: 30 Day Plans on Microsoft Learn
Are you hoping to learn new technical skills to excel in your current job or prepare for a new career? Is your organization looking to upskill employees in AI and other critical technologies? Microsoft Learn’s 30 Day Plans are a great option for skilling up quickly in a variety of specific fields and topics, including AI, data science, security, and more. As AI becomes increasingly embedded in all sectors of the economy, expanding your portfolio of skills is a smart investment. In fact, professionals with AI skills earn 21% more on average than those without.
Curated by Microsoft subject matter experts, 30 Day Plans are designed to be completed in one month or less so you can reach your learning goals sooner. Each Plan is also aligned to a Microsoft Certification exam or Microsoft Applied Skills assessment so you can prove your expertise by earning a verified Microsoft Credential.
With carefully designed learning outcomes, clear milestones, and automated nudges, 30 Day Plans help keep you focused and on track. That way, your next step is always clear and your goal attainable. Plus, you can pursue the training whenever, wherever, and however works best for you!
30 Day Plan topics cover an array of subjects, including:
AI: Azure AI Engineer, Azure AI Language, Copilot for Microsoft 365
Security: Security Operations Analyst, Get AI-Ready with Microsoft 365 Admin
Data: Azure Data Fundamentals, Make Your Data AI Ready with Microsoft Fabric
Get a jump-start on your individual or organizational technical skilling goals with 30 Day Plans on Microsoft Learn. Our skill-specific learning content is ready to go anytime you are.
Learn more about Plans on Microsoft Learn
Microsoft Tech Community – Latest Blogs
Optimization Live Editor task Error “Your objective function must return a scalar value”
Hi, I’m trying to maximize a function with a genetic algorithm or patternsearch using the Optimization Live Editor task. But it confuses me that the error "Your objective function must return a scalar value" always occurs, even though I have already checked the output of my objective function. Can somebody tell me how to fix this problem? I would appreciate any help!
I checked the output of my objective function as follows:
input = [0 0.5];
MaxSidelobe = FindBestPlacingGA(input);
TF = isscalar(MaxSidelobe);
disp(TF);
The objective function and other functions needed:
function MaxSidelobe= FindBestPlacingGA(input)
input(1) = deg2rad(input(1));
mic_pos = [0 0.24 0
-0.2078 -0.12 0
0.2078 -0.12 0];
mic_pos = [Array3N(input(1),input(2));mic_pos];
MaxSidelobe= FPSF_Function(mic_pos,500,0:1:80);
end
function mic_pos = Array3N(theta,rho)
theta3N = [theta+pi/2;theta+pi*7/6;theta+pi*11/6];
mic_pos = zeros(3,3);
mic_pos(:,3) = 0;
[mic_pos(:,1),mic_pos(:,2)] = pol2cart(theta3N,rho);
end
function MSL= FPSF_Function(mic_pos,f,El)
Num_mic = size(mic_pos,1);
Az = -180:1: 180;
c = 343;
k0 = [0 0 -1];
numAz = length(Az);
numEl = length(El);
K = zeros(3, numAz, numEl);
for i = 1:numAz
for j = 1:numEl
az_rad = deg2rad(Az(i));
el_rad = deg2rad(El(j));
x = cos(az_rad) * sin(el_rad);
y = sin(az_rad) * sin(el_rad);
z = cos(el_rad);
K(:, i, j) = [x; y; z];
end
end
W = zeros(numAz,numEl);
for p = 1:numAz
for q = 1:numEl
for n = 1:Num_mic
W(p,q) = exp(-1i*dot(K(:,p,q)'-k0,mic_pos(n,:))*2*pi*f/c) + W(p,q);
end
end
end
W = W/Num_mic;
Y = 10*log10((abs(W)).^2);
local_max = imregionalmax(Y);
max_values = Y(local_max);
Mainlobe = max(max_values(:));
sidelobes = max_values(max_values~=Mainlobe);
MSL = Mainlobe - max(sidelobes(:));
end
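One observation worth adding here (an assumption about the failure mode, not part of the original post): if a candidate placement makes every regional maximum equal, sidelobes becomes empty, and max(sidelobes(:)) then returns an empty array, so the objective is no longer scalar, which is exactly what the solver reports. A defensive guard would look like:

% Hedged sketch: keep the objective scalar even when no sidelobes are found.
if isempty(sidelobes)
    MSL = -Inf;          % penalty value; pick one appropriate for your setup
else
    MSL = Mainlobe - max(sidelobes(:));
end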
optimization live editor task, error, scalar value MATLAB Answers — New Questions
How to import a Project for the Web export into Project Pro (desktop)
Hi all, so I’m working with someone who is using Project for the Web. This will only allow them to export to Excel. I’m trying to take this and import it into my desktop Project Pro, but can’t seem to do it. Here are the steps I’m following:
New from Excel workbook
Import Wizard selections
New map
As a new project
Tasks & Import includes headers
Source worksheet name auto-populates with the only tab value in the Excel file, Project tasks
Here’s the issue… Under “From: Excel Field”, I’m not able to select anything at all. The dropdown is empty. Clicking Add All does nothing. It’s like it’s not recognizing the headers.
I did make sure to remove the Excel export’s Project Information that ends up at the top of the tab, so that the field names are in the header row.
Anyone able to help, or have another idea? The goal is for this smaller plan to become part of my larger plan, so that this vendor manages their own plan, but right now I’m just trying to convert theirs before doing that.
TIA!
New Outlook – Viewing Distribution List membership on contact record
In the old version of Outlook, you could pull up a contact’s profile via the address book or within Outlook and see the distribution lists of which they are a member. I cannot find that in the new Outlook. If I pull up a distribution list, I can see the members, but I can’t start with a contact and see their distribution lists.
Intune Multi-User setup
Hi,
I need some assistance with creating an Intune PC that utilises a “multi-user setup”, whereby a user can come along and use the PC, but when they log out it wipes, ready for someone else to use.
We normally set up our standard (user-assigned) laptops using the following:
set-executionpolicy bypass
install-script get-windowsautopilotinfo
Get-WindowsAutoPilotInfo.ps1 -online
This brings us to the OOBE page, where the user can then sign in. However, for the multi-user PCs we want to set up, we want no user to have to sign in during setup.
I’ve tried setting up a Windows 10/11 configuration policy, which I’ll attach, and then executing the following instead of the above:
set-executionpolicy bypass
install-script get-windowsautopilotinfo
Get-WindowsAutoPilotInfo.ps1 -AddToGroup Intune_MDM_HotDesk -online
However, the above still brings me to the ‘Welcome (Company)’ screen and asks for a user to sign in. Again, I want to avoid the sign-in so that it goes straight through to the desktop experience.
Can anyone help?
Enhancing vulnerability prioritization with asset context and EPSS – Now in Public Preview.
Vulnerability prioritization is a critical component of an effective Vulnerability Risk Management (VRM) program.
It involves identifying and ranking security weaknesses in an organization’s systems based on their potential impact and exploitability.
Given the vast number of potential vulnerabilities, it is impossible to address all of them at once. Effective prioritization ensures that the most critical vulnerabilities are addressed first, maximizing security efforts.
This approach is crucial for defending against cyberattacks, as it helps allocate resources effectively, reduce the attack surface, and protect sensitive data more efficiently.
We are excited to announce the addition of three crucial factors to our prioritization process in Microsoft Defender Vulnerability Management, aimed at improving accuracy and efficiency. These factors include:
Information about critical assets (defined in Microsoft Security Exposure Management)
Information about internet-facing devices
Exploit Prediction Scoring System (EPSS) score
In this article, you can learn more about each of these enhancements, how they contribute to a more robust vulnerability prioritization process, and how you can use them.
Critical devices
In Microsoft Security Exposure Management (preview), you can define and manage resources as critical assets.
Identifying critical assets helps ensure that the most important assets in your organization are protected against the risk of data breaches and operational disruptions. Critical asset identification contributes to availability and business continuity. Exposure Management provides an out-of-the-box catalog of predefined critical asset classifications and the ability to create your own custom definitions, in addition to the capability to manually tag devices as critical to your organization. Learn more about critical asset management in this deep dive blog.
Now in preview, you can prioritize security recommendations and remediation steps to focus on critical assets first.
A new column displaying the sum of critical assets for each recommendation has been added to the security recommendations page, as shown in figure 1.
Figure 1. New column in the recommendations page that displays the number of critical devices that are correlated to each recommendation (all criticality levels).
Additionally, in the exposed device lists (found throughout the Microsoft Defender portal), you can view device criticality, as shown in figure 2.
Figure 2. Exposed devices with their criticality level in the recommendation object.
You can also use the critical devices filter to display only recommendations that involve critical assets, as shown in figure 3.
Figure 3. Capability to filter and display only recommendations that involve critical assets.
The sum of critical assets (in any criticality level) for each recommendation is now consumable through the recommendations API.
This is the first factor we are incorporating from Exposure Management, and we plan to expand this feature to include more context from the enterprise graph for prioritization enhancements. This will enable a more comprehensive understanding and management of security risks, ensuring that critical areas are addressed with the highest priority.
Internet facing devices
As threat actors continuously scan the web for exposed devices to exploit, Microsoft Defender for Endpoint automatically identifies and flags onboarded, exposed, internet-facing devices in the Microsoft Defender portal. This critical information enhances visibility into your organization’s external attack surface and provides insights into asset exploitability. Devices that are successfully connected via TCP or are identified as host reachable through UDP are flagged as internet-facing in the portal. Learn more about devices flagged as internet-facing.
The internet-facing device tag is now integrated into Defender Vulnerability Management experiences. This allows you to filter and see only weaknesses or security recommendations that impact internet-facing devices. The tag is displayed in the tags column, as shown in figure 4, for all relevant devices in the exposed device lists found throughout the Microsoft Defender portal.
Figure 4. Internet-facing tag on the CVE object and on the relevant device.
Exploit Prediction Scoring System (EPSS)
The Exploit Prediction Scoring System (EPSS) is a data-driven effort for estimating the likelihood (probability) that a software vulnerability will be exploited in the wild. EPSS uses current threat information from CVE and real-world exploit data. The EPSS model produces for each CVE a probability score between 0 and 1 (0 and 100%). The higher the score, the greater the probability that a vulnerability will be exploited. Learn more about EPSS.
In the Microsoft Defender portal, you can see the EPSS score for each weakness, as shown in figure 5.
Figure 5. Screenshot showing EPSS score.
When the EPSS is greater than 0.9, the bug tip is highlighted to reflect the urgency of mitigation, as shown in figure 6.
Figure 6. On the weaknesses page: the bug tip is highlighted for this CVE as EPSS > 0.9.
EPSS is designed to help you enrich your knowledge of weaknesses, understand exploit probability, and enable you to prioritize accordingly. The EPSS score is also consumable through the Vulnerability API.
Note that if the EPSS score is smaller than 0.001, it’s considered to be 0.
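As an illustration of consuming the score programmatically, here is a hedged sketch against the Defender for Endpoint vulnerabilities API; the epssScore property name is an assumption based on this announcement, so check the API reference for the exact schema:

import requests

# Token acquisition (an Entra ID app with the appropriate Defender API
# permissions) is out of scope for this sketch.
token = "<access token for https://api.securitycenter.microsoft.com>"

resp = requests.get(
    "https://api.securitycenter.microsoft.com/api/vulnerabilities",
    headers={"Authorization": "Bearer " + token},
)
resp.raise_for_status()

for vuln in resp.json()["value"]:
    score = vuln.get("epssScore")        # assumed property name
    if score is not None and score > 0.9:
        print(vuln["id"], score)         # urgent per the guidance above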
Try the new capabilities
Incorporating asset context and EPSS into Defender Vulnerability Management marks a significant advancement in our vulnerability prioritization capabilities. These new features—critical asset identification, internet-facing device tagging, and EPSS scoring—provide a more accurate and efficient approach to managing security risks.
By leveraging these tools, you can better protect your organization’s most valuable assets, reduce their attack surface, and stay ahead of potential threats. We invite you to explore these new capabilities and see how they can help with prioritization and enhance your security posture.
For more information, see the following articles:
What’s new in Microsoft Defender Vulnerability Management
What is Microsoft Security Exposure Management?
Device inventory
Overview of management and APIs
Microsoft Tech Community – Latest Blogs
“Microsoft 365 Backup – Veeam” 🎙 – The Intrazone podcast
By default, Microsoft 365 handles high availability and disaster recovery. It also covers some point-in-time restore scenarios at the end-user and site level. However, when you need additional backup and restore options for your data, Microsoft 365 Backup is your in-place solution. It offers lightning-fast restorability, ensuring business continuity. And our partner ecosystem extends our core offering so you can design backup that works for your business.
On this episode, guest co-host Brad Gussin (Principal PM Manager on the SharePoint team) and I provide an overview of the core Microsoft offering. You’ll then hear Brad interview our valued partner, Karinne Bessette (Technologist at Veeam), about the integration of Microsoft 365 Backup Storage into their Veeam Data Cloud for Microsoft 365 solution.
OK, this episode is all backed up and resiliently ready to restore your backup and recovery knowledge.
The Intrazone, episode 113:
Subscribe to The Intrazone podcast + show links and more below.
BONUS | Discover more about Microsoft 365 Backup Storage and how it gets embedded in Veeam’s Microsoft 365 solutions directly from Brad Gussin (Microsoft) and Mike Resseler (Veeam), including a live demo of Microsoft 365 Backup Storage embedded into Veeam Data Cloud – session video from the recent VeeamON 2024 event, “The Future of M365 Backup: Veeam + Microsoft Backup Storage.” Watch now:
The magic behind the offering is that the data stays within the customer’s Microsoft 365 Trust Boundary, still with multiple physically disparate copies to maintain high availability and disaster recovery durability in the service. We currently support OneDrive, SharePoint, and Exchange Online, and plan to add additional Microsoft 365 services in the future. Customers can adopt the solution either by using our application in the Microsoft 365 admin center or by purchasing a third-party application that integrates with the Microsoft 365 Backup Storage platform. Get the ability to restore quickly and regain business continuity today.
Links to important on-demand recordings and articles mentioned in this episode:
Hosts, guests, and related links and information
Karinne Bessette (Veeam) | LinkedIn | @Veeam [guest]
Brad Gussin | LinkedIn [guest co-host]
SharePoint | Facebook | @SharePoint | SharePoint community blog | Feedback
Mark Kashman | @mkashman [co-host]
Related videos, common admin articles and sites
“Microsoft Announces General Availability of Microsoft 365 Backup and Microsoft 365 Backup Storage” by Zach Rosenfield [July 31, 2024]
Veeam’s press release: Veeam Brings Data Resilience to Over 21 Million Microsoft 365 Users with New Microsoft 365 Backup Storage Capabilities for Veeam Data Cloud
Veeam’s blog post: Microsoft 365 Backup Storage with Veeam Game-Changing Integration
Upcoming Veeam webinar (Aug 20, 2024, 1pm EDT): NEW Capabilities of Veeam with Microsoft 365 Backup Storage
Learn more about Veeam’s Microsoft 365 Backup solution
Learn more about Microsoft 365 Backup (adoption.microsoft.com)
“Learn more about Microsoft 365 Backup” (short YouTube explainer video)
Watch “The Ins and Outs of Microsoft 365 Backup & Archive“
Microsoft Docs – The home for Microsoft documentation for end users, developers, and IT professionals.
Microsoft Tech Community Home
Stay on top of Office 365 changes
Listen to other Microsoft podcasts
Upcoming Events
TechCon365 – DC | Washington DC | Aug. 12-16, 2024
CollabDays Hamburg | August 31, 2024 – Hamburg, Germany
Microsoft Power Platform Conference | September 18-20 – Las Vegas, NV, USA
CollabDays Portugal Porto 2024 (previously CollabDays Lisbon) | Sept. 21 | Venue: Instituto Superior de Engenharia do Porto
CollabDays New England | October 18-19, 2024 – Burlington, Massachusetts, USA
TechCon365 – Dallas | Nov. 11-15, 2024 | Dallas, TX, USA
Microsoft Ignite (+ more info) | Nov 18-22, 2024, “Save the date,” Chicago, IL
ESPC | European SharePoint Conference | Dec 2-5, 2024 | Stockholm, Sweden
+ always review and share the CommunityDays.org website to find your next event.
Subscribe today!
Thanks for listening! If you like what you hear, we’d love for you to Subscribe, Rate and Review on iTunes or wherever you get your podcasts.
Be sure to visit our show page to hear all episodes, access the show notes, and get bonus content. And stay connected to the SharePoint community blog and where we’ll share more information per episode, guest insights, and take any questions or suggestions from our listeners and SharePoint users via email at TheIntrazone@microsoft.com.
Get The Intrazone anywhere and everywhere
Listen to other Microsoft podcasts at aka.ms/microsoft/podcasts.
Microsoft Tech Community – Latest Blogs
General Availability: Maintenance window support for Azure SQL Database Hyperscale named replica.
We are excited to announce that Hyperscale named replicas now support configuring a specific maintenance window. You can now choose from predefined time slots for maintenance and set up alerts to be notified of upcoming maintenance events.
Why it matters: With named replicas, you can enhance your modern application architectures by scaling out read workloads to up to 30 read replicas. This reduces the load on the primary replica, allowing it to focus on write operations. Imagine multiple reader applications connecting to specific named replicas, each with its own unique business hours. This setup requires a tailored maintenance schedule, which minimizes downtime during crucial local business hours and prevents widespread disruptions.
Overview
By default, impactful updates to Azure SQL Database resources happen during the period 5 PM to 8 AM local time. Local time is determined by the location of the Azure region that hosts the resource and may observe daylight saving time in accordance with the local time zone definition. You can adjust the window for maintenance updates to a time suitable for your workload by choosing from two non-default maintenance window slots:
Weekday window: 10:00 PM to 6:00 AM local time, Monday – Thursday
Weekend window: 10:00 PM to 6:00 AM local time, Friday – Sunday
Once the maintenance window selection is made, all planned maintenance events will only occur during the window of your choice.
Quick Start Guide
The maintenance window can be configured for existing named replicas through the Azure portal, CLI, PowerShell, or REST API.
NOTE: Setting a maintenance window during the creation of a new named replica is not supported.
Detailed information is available at Configure maintenance window. For convenience, here’s a quick start to configuring the maintenance window for a named replica.
Azure Portal:
Here’s a screenshot of an existing Hyperscale named replica in East US2 region, with the available maintenance windows shown in the dropdown list. Select the desired maintenance window from the drop-down.
Azure PowerShell
You can use the script below to get a list of supported maintenance configurations for the eastus2 region:
$location = "eastus2"
$configurations = Get-AzMaintenancePublicConfiguration
$configurations | ?{ $_.Location -eq $location -and $_.MaintenanceScope -eq "SQLDB" }
Script to check current maintenance window for your database.
$resourceGroupName = "YourResourceGroupName"
$serverName = "YourServerName"
$databaseName = "YourDatabaseName"
(Get-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $serverName -DatabaseName $databaseName).MaintenanceConfigurationId
Script to change the maintenance configuration of your database.
Set-AzSqlDatabase -ResourceGroupName $resourceGroupName -ServerName $serverName -DatabaseName $databaseName -MaintenanceConfigurationId "SQL_EastUS2_DB_2"
Azure CLI
Script to get a list of supported maintenance configurations for the eastus2 region:
location="eastus2"
resourceGroupName="your_resource_group_name"
serverName="your_server_name"
databaseName="your_db_name"
az maintenance public-configuration list --query "[?location=='$location'&&contains(maintenanceScope,'SQLDB')]"
Script to check current maintenance window for your database.
az sql db show --name $databaseName --resource-group $resourceGroupName --server $serverName --query "maintenanceConfigurationId"
Script to change the maintenance configuration of your database
# Select a different maintenance window
maintenanceConfig="SQL_EastUS2_DB_2"
# Update the database
az sql db update --resource-group $resourceGroupName --server $serverName --name $databaseName --maint-config-id $maintenanceConfig
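The REST API mentioned above can make the same change; here is a hedged sketch using az rest, where the subscription ID placeholder and the api-version are assumptions to verify against the current Microsoft.Sql reference:

# Assumed ARM call: PATCH the database with the full maintenance configuration resource ID
az rest --method patch \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/$resourceGroupName/providers/Microsoft.Sql/servers/$serverName/databases/$databaseName?api-version=2021-11-01" \
  --body '{"properties": {"maintenanceConfigurationId": "/subscriptions/<subscription-id>/providers/Microsoft.Maintenance/publicMaintenanceConfigurations/SQL_EastUS2_DB_2"}}'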
Advance notifications (preview)
You can also configure alerts for Hyperscale named replicas to notify you about upcoming planned maintenance events 24 hours in advance. More information on how to set up advance notifications can be found here.
Conclusion
We hope you will find the maintenance window support for named replicas useful for your Hyperscale databases. We would be happy to answer any questions you may have. Please leave a comment on this blog or email us at sqlhsfeedback AT microsoft DOT com.
References
https://aka.ms/SQLMaintenanceWindow
https://aka.ms/SQLAdvNotification
Configure and manage Hyperscale named replicas
Microsoft Tech Community – Latest Blogs
How to Evaluate & Upgrade Model Versions in the Azure OpenAI Service
Introduction
As an Azure OpenAI customer, you have access to the most advanced artificial intelligence models powered by OpenAI. These models are constantly improving and evolving, which means that you can benefit from the latest innovations and enhancements, including improved speed, improved safety systems, and reduced costs. However, this also means that older model versions will eventually be deprecated and retired.
We notify customers of upcoming retirements well in advance, starting from model launch.
At model launch, we programmatically designate a “not sooner than” retirement date (typically six months to one year out).
We give customers with active deployments at least 60 days’ notice before the retirement of Generally Available (GA) models.
For preview model versions, which should never be used in production applications, we provide at least 30 days’ notice.
You can read about our process, who is notified, and details of upcoming model deprecations and retirements here: Azure OpenAI Service model retirements – Azure OpenAI | Microsoft Learn
Azure AI Studio Evaluations
We understand that upgrading model versions involves a challenging and time-consuming process of evaluation, especially if you have numerous prompts and responses to assess and certify your applications. You likely want to compare the prompt responses across different model versions to see how changes impact your use cases and outcomes.
Azure AI Studio Evaluations can help you evaluate the latest model versions in the Azure OpenAI service. Evaluations support both a code-first and UI-friendly experience, enabling you to compare prompt responses across different model versions and observe differences in quality, accuracy, and consistency. You can also use evaluations to test your prompts and applications with the new model versions at any point in your LLMOps lifecycle, making any necessary adjustments or optimizations.
A Code-First Approach to Evaluation
Azure’s Prompt Flow Evaluations SDK package is a powerful and flexible tool for evaluating responses from your generative AI application. In this blog, we will walk you through the steps of using it to evaluate your own set of prompts across various base models’ responses. The models in this example can be deployed through Azure or as external models deployed through MaaS (Model as a Service) endpoints.
You can learn more about how to use the promptflow-evals SDK package in our how-to documentation.
Getting started with Evaluations
First, install the necessary packages:
pip install promptflow-evals
pip install promptflow-azure
Next, provide your Azure AI Project details so that traces, logs, and evaluation results are pushed into your project to be viewed on the Azure AI Studio Evaluations page:
azure_ai_project = {
    "subscription_id": "00000000000",
    "resource_group_name": "000resourcegroup",
    "project_name": "000000000"
}
Then, depending on which models you’d like to evaluate your prompts against, provide the endpoints you want to use. For simplicity, in our sample, an `env_var` variable is created in the code to maintain targeted model endpoints and their authentication keys. This variable is then used later in our evaluate function as our target to evaluate prompts against:
env_var = {
    "gpt4-0613": {
        "endpoint": "https://ai-***.***.azure.com/openai/deployments/gpt-4/chat/completions?api-version=2023-03-15-preview",
        "key": "***",
    },
    "gpt35-turbo": {
        "endpoint": "https://ai-***.openai.azure.com/openai/deployments/gpt-35-turbo-16k/chat/completions?api-version=2023-03-15-preview",
        "key": "***",
    },
    "mistral7b": {
        "endpoint": "https://mistral-7b-**.ml.azure.com/chat/completions",
        "key": "***",
    },
    "tiny_llama": {
        "endpoint": "https://api-inference.huggingface.co/**/chat/completions",
        "key": "***",
    },
    "phi3_mini_serverless": {
        "endpoint": "https://Phi-3-mini***.ai.azure.com/v1/chat/completions",
        "key": "***",
    },
    "gpt2": {
        "endpoint": "https://api-inference.huggingface.co/**openai/gpt2",
        "key": "***",
    },
}
The following code creates a configuration for the Azure OpenAI model, which acts as an LLM Judge for our built-in Relevance and Coherence evaluators. This configuration is passed as a model config to these evaluators:
from promptflow.core import AzureOpenAIModelConfiguration

configuration = AzureOpenAIModelConfiguration(
    azure_endpoint="https://ai-***.openai.azure.com",
    api_key="",
    api_version="",
    azure_deployment="",
)
The Prompt Flow Evaluations SDK supports a wide variety of built-in quality and safety evaluators (see the full list of supported evaluators in Built-in Evaluators) and provides the flexibility to define your own code-based or prompt-based custom evaluators.
For our example, we will just use the built-in Content Safety (a composite evaluator for measuring harmful content in model responses), Relevance, and Coherence evaluators:
from promptflow.evals.evaluators import (
    ContentSafetyEvaluator, RelevanceEvaluator, CoherenceEvaluator,
    GroundednessEvaluator, FluencyEvaluator, SimilarityEvaluator
)

content_safety_evaluator = ContentSafetyEvaluator(project_scope=azure_ai_project)
relevance_evaluator = RelevanceEvaluator(model_config=configuration)
coherence_evaluator = CoherenceEvaluator(model_config=configuration)
groundedness_evaluator = GroundednessEvaluator(model_config=configuration)
fluency_evaluator = FluencyEvaluator(model_config=configuration)
similarity_evaluator = SimilarityEvaluator(model_config=configuration)
Using the Evaluate API
Now let’s say we want to bring a list of prompts to test across different model endpoints, using the evaluators we initialized in the previous step.
The Prompt Flow Evaluations SDK provides an Evaluate API, allowing you to evaluate model-generated responses against the provided prompts. This Evaluate API accepts a data file containing one or many prompts per line. Each prompt contains a Question, Context, and Ground Truth for evaluators to use. It also accepts an Application Target class (defined in app_target.py) whose responses are evaluated against each model you’re interested in testing. We will discuss this further in a later section.
The following code runs the Evaluate API and uses the Content Safety, Relevance, and Coherence evaluators. It provides a list of model types referenced in the Application Target called ModelEndpoints, defined in app_target.py. Here are the parameters required by the Evaluate API:
Data (Prompts): Prompts, questions, contexts, and ground truths are provided in a data file in JSON Lines format (data.jsonl); a sample line is sketched after this list.
Application Target: The name of the Python class that can route the calls to specific model endpoints using the model’s name in conditional logic.
Model Name: An identifier of the model so that custom code in the App Target class can identify the model type and call the respective LLM model using the endpoint URL and auth key.
Evaluators: A list of evaluators is provided to evaluate the given prompts (questions) as input and output (answers) from LLM models.
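As an illustration (not the actual sample file), a line in data.jsonl might look like the following, with field names matching the evaluator configuration used below:

{"question": "What is the capital of France?", "context": "France is a country in Western Europe. Its capital is Paris.", "ground_truth": "Paris"}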
The following code runs the Evaluate API for each provided model type in a loop and logs the evaluation results into your Azure AI Studio project:
from app_target import ModelEndpoints
import pathlib
import random
from promptflow.evals.evaluate import evaluate

models = ["gpt4-0613", "gpt35-turbo", "mistral7b", "phi3_mini_serverless"]
path = str(pathlib.Path(pathlib.Path.cwd())) + "/data.jsonl"

for model in models:
    randomNum = random.randint(1111, 9999)
    results = evaluate(
        azure_ai_project=azure_ai_project,
        evaluation_name="Eval-Run-" + str(randomNum) + "-" + model.title(),
        data=path,
        target=ModelEndpoints(env_var, model),
        evaluators={
            "content_safety": content_safety_evaluator,
            "coherence": coherence_evaluator,
            "relevance": relevance_evaluator,
            "groundedness": groundedness_evaluator,
            "fluency": fluency_evaluator,
            "similarity": similarity_evaluator,
        },
        evaluator_config={
            "content_safety": {
                "question": "${data.question}",
                "answer": "${target.answer}"
            },
            "coherence": {
                "answer": "${target.answer}",
                "question": "${data.question}"
            },
            "relevance": {
                "answer": "${target.answer}",
                "context": "${data.context}",
                "question": "${data.question}"
            },
            "groundedness": {
                "answer": "${target.answer}",
                "context": "${data.context}",
                "question": "${data.question}"
            },
            "fluency": {
                "answer": "${target.answer}",
                "context": "${data.context}",
                "question": "${data.question}"
            },
            "similarity": {
                "answer": "${target.answer}",
                "context": "${data.context}",
                "question": "${data.question}"
            }
        }
    )
The file app_target.py is used as the Application Target in which individual Python functions call specified model endpoints. In this file, the `__init__` function of the `ModelEndpoints` class stores a list of model endpoints and keys in the variable env. The model type is also provided so that the specific model can be called:
import requests
from typing_extensions import Self
from typing import TypedDict
from promptflow.tracing import trace


# Response type returned by each endpoint call (defined here so the
# snippet is self-contained).
class Response(TypedDict):
    question: str
    answer: str


class ModelEndpoints:
    def __init__(self: Self, env: dict, model_type: str) -> None:
        self.env = env
        self.model_type = model_type
The `__call__` function of the `ModelEndpoints` class routes the calls to a specific model endpoint by model type using conditional logic:
    def __call__(self: Self, question: str) -> Response:
        if self.model_type == "gpt4-0613":
            output = self.call_gpt4_endpoint(question)
        elif self.model_type == "gpt35-turbo":
            output = self.call_gpt35_turbo_endpoint(question)
        elif self.model_type == "mistral7b":
            output = self.call_mistral_endpoint(question)
        elif self.model_type == "tiny_llama":
            output = self.call_tiny_llama_endpoint(question)
        elif self.model_type == "phi3_mini_serverless":
            output = self.call_phi3_mini_serverless_endpoint(question)
        elif self.model_type == "gpt2":
            output = self.call_gpt2_endpoint(question)
        else:
            output = self.call_default_endpoint(question)
        return output
The following code handles the POST call to the model endpoint. It captures the response and parses it accordingly to retrieve the answer from the LLM. A few of the sample functions are provided below:
    def query(self: Self, endpoint: str, headers: dict, payload: dict) -> dict:
        response = requests.post(url=endpoint, headers=headers, json=payload)
        return response.json()

    def call_gpt4_endpoint(self: Self, question: str) -> Response:
        endpoint = self.env["gpt4-0613"]["endpoint"]
        key = self.env["gpt4-0613"]["key"]
        headers = {
            "Content-Type": "application/json",
            "api-key": key
        }
        payload = {
            "messages": [{"role": "user", "content": question}],
            "max_tokens": 500,
        }
        output = self.query(endpoint=endpoint, headers=headers, payload=payload)
        answer = output["choices"][0]["message"]["content"]
        return {"question": question, "answer": answer}

    def call_gpt35_turbo_endpoint(self: Self, question: str) -> Response:
        endpoint = self.env["gpt35-turbo"]["endpoint"]
        key = self.env["gpt35-turbo"]["key"]
        headers = {"Content-Type": "application/json", "api-key": key}
        payload = {"messages": [{"role": "user", "content": question}], "max_tokens": 500}
        output = self.query(endpoint=endpoint, headers=headers, payload=payload)
        answer = output["choices"][0]["message"]["content"]
        return {"question": question, "answer": answer}

    def call_mistral_endpoint(self: Self, question: str) -> Response:
        endpoint = self.env["mistral7b"]["endpoint"]
        key = self.env["mistral7b"]["key"]
        headers = {
            "Content-Type": "application/json",
            "Authorization": ("Bearer " + key)
        }
        payload = {
            "messages": [{"content": question, "role": "user"}],
            "max_tokens": 50
        }
        output = self.query(endpoint=endpoint, headers=headers, payload=payload)
        answer = output["choices"][0]["message"]["content"]
        return {"question": question, "answer": answer}
You can view the full sample notebook here.
Compare your evaluation results in Azure AI Studio
Once you run your evaluation in the SDK and log your results to your project, you can compare your results across different model evaluations in Azure AI Studio. Inside your project, you can use the left hand navigation menu under the “Tools” section to get to your Evaluation runs.
By default, all your model evaluation runs show up here if you’ve logged the results to your project from the SDK. To compare evaluations directly, click “Switch to dashboard view”, located above the list of evaluations:
Then select which evaluations you want to visualize in the dashboard view to compare:
In addition to comparing overall and row-level outputs and metrics, you can open each evaluation run directly to see the overall distribution of metrics in a chart view for both quality and safety evaluators, which you can switch between by selecting each tab above the charts.
Read more on how to view results in Azure AI Studio here.
How to upgrade your deployment
Luckily, once you’ve run your evaluations and decided to upgrade to the latest model version, setting your deployments to auto-upgrade to the default is relatively simple.
When a new model version is set as the default in the service, your deployments will automatically upgrade to that version. You can read more about this process here.
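As a rough sketch, this behavior is controlled by the deployment’s version upgrade option. The example below uses the azure-mgmt-cognitiveservices package; the `version_upgrade_option` field and its value should be verified against the documentation linked above, and the resource names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

client = CognitiveServicesManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",  # placeholder
)

# Fetch the existing deployment so we only change the upgrade policy.
deployment = client.deployments.get(
    resource_group_name="<resource-group>",  # placeholder
    account_name="<aoai-account>",           # placeholder
    deployment_name="<deployment-name>",     # placeholder
)

# Assumed property/value: upgrade whenever a new default version is published.
deployment.properties.version_upgrade_option = "OnceNewDefaultVersionAvailable"

client.deployments.begin_create_or_update(
    resource_group_name="<resource-group>",
    account_name="<aoai-account>",
    deployment_name="<deployment-name>",
    deployment=deployment,
).result()
```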
Conclusion
Through this article, we’ve walked through not only how to upgrade your deployments to the latest generative AI model versions, but also how to use our suite of Azure AI evaluation tools to evaluate which model versions best meet your needs.
Once you’ve decided on the right model version for your solution, upgrading to the latest is a matter of a few simple clicks.
As always, we constantly strive to improve our services. If you have any feedback or questions, please feel free to speak with our support team, or leave product suggestions or feedback in our Azure Feedback Forum, tagging the suggestion with Azure OpenAI.
Microsoft Tech Community – Latest Blogs – Read More
Why is the suspension displacement so low?
Hello everyone,
I designed an active suspension system, but there is something I don’t understand. While the bump in my road profile is 0.07, the suspension displacement I see is around 10^-7. Of course I expect an improvement, but isn’t this too much? I look forward to your comments.
transferred MATLAB Answers — New Questions
HDL Coder FPGA resources report
Hello,
In the HDL Coder workflow, is there any way to size the FPGA requirements, so that we can precisely select an FPGA able to run the model? Or perhaps a way to try several FPGA chips and get a utilization report for each, without physically having the device?
We have neither MATLAB nor HDL Coder yet, but we are considering them. After a lot of research, given that we can generate the code and even program the FPGA from MATLAB, we just wonder how we can select the right FPGA chip. We are considering Xilinx/AMD chips. We have also considered the Speedgoat options and other alternatives like NI VeriStand, but those are overkill for what we are looking for; our model can be simplified to 1 DI, 2 AO and 1 AI.
And another question: does the above idea work only for Simulink models, or can it also work for Simscape Electrical models?
Thanks
hdl coder, fpga size MATLAB Answers — New Questions
Planner – Every time a comment is made in a task, everyone should get a notification
Hi everyone,
I would like some help. I’m trying to automate a flow via Power Automate between Microsoft Planner and Outlook: every time a comment is made on a Planner task, everybody should receive a notification via e-mail. It might be important to mention this plan includes people from the organization and guests (authorized and added to our tenant).
What I’ve already tried:
1) I tried to create an automation starting with Microsoft Planner, but Planner’s trigger options are only: when a task is created, when a task is assigned to me, and when a task is completed. So I couldn’t continue down this path.
2) I tried to automate using the “post a Microsoft Teams message” action with the “a new email arrives at a Groups mailbox” trigger. I was able to configure the trigger by adding the Groups mailbox and also modified the end of the flow. Instead of the standard use, I used the option “post a choice of options as the flow bot for a user”. It worked in the beginning, but then it stopped, and I couldn’t understand why.
Could anyone help me? We are missing important information because stakeholders are not being notified. Thanks!
Read More
SQL Server Migration Assistant: How to customize SQL for an object in MSSQL after 1st conversion
So in the picture below, I’ve run “Convert Schema” for the first time. For the tEntity_d object, it gave an appropriate conversion error (“object not found”).
After that, in the bottom right MSSQL destination window (ambiguously titled “SQL”), I fixed the bad code (see the comment “This incompatible statement has been deleted by me”). But there’s no option to save the fixed code to the MSSQL destination:
If I right click on tEntity_d in the bottom left “SQL Server Metadata Explorer” window, I see a “Synchronize with Database” option. Which database? The destination? When I try and run “Synchronize with Database” it asks if I want to save changes to “MsSQL SQL Editor” (the bottom right “SQL” window?). I answered “yes”.
Then I get a “Database” (destination?) “Not Found” error. What does this mean?
Thanks in advance
Ben
Read More