Month: September 2024
I keep getting the error “Error: File: builtin.m Line: 1 Column: 24 Invalid text character…”. How do I fix this?
Hi,
The full error is:
"Error: File: builtin.m Line: 1 Column: 24
Invalid text character. Check for unsupported symbol, invisible character, or pasting of non-ASCII characters."
I get it whenever I try to do anything, including closing MATLAB. The file name changes slightly depending on what I do: for example, when I try to open a new file, I get the following error:
"Error in matlab.unittest.internal.ui.toolstrip.getFileInfoForToolstrip (line 8)
isClassBasedTest = false;",
preceded by the above text with "false.m" as the file name.
I think the problem started when I tried to set the path? I got the following error:
"Unable to run P-code file. The file header is damaged.
Error in pathtool (line 13)
isDesktop = desktop('-inuse');"
But I'm not sure. Any help that you can give me would be appreciated!
How to accelerate Copilot for Microsoft 365 adoption across every department and job role?
Read the Copilot for Microsoft 365: The Ultimate Skilling Guide
Designed for business leaders, IT pros and end users – this eBook covers detailed insights on how to unlock AI-powered productivity across every department and job role in your organization.
It also deep dives into the art and science of prompting with expert-recommended best practices, tips, and a curated collection of 300+ prompts to accelerate adoption.
Azure AI Assistants with Logic Apps
Introduction to AI Automation with Azure OpenAI Assistants
Intro
Welcome to the future of automation! In the world of Azure, AI assistants are becoming your trusty sidekicks, ready to tackle the repetitive tasks that once consumed your valuable time. But what if we could make these assistants even smarter? In this post, we’ll dive into the exciting realm of integrating Azure AI assistants with Logic Apps – Microsoft’s powerful workflow automation tool. Get ready to discover how this dynamic duo can transform your workflows, freeing you up to focus on the big picture and truly innovative work.
Azure OpenAI Assistants (preview)
Azure OpenAI Assistants (Preview) allows you to create AI assistants tailored to your needs through custom instructions, augmented by advanced tools like the code interpreter and custom functions. To accelerate and simplify the creation of intelligent applications, we can now call Logic Apps workflows through function calling in Azure OpenAI Assistants. The Assistants playground lists all the workflows in your subscription that are eligible for function calling. Here are the requirements for these workflows:
Schema: The workflows you want to use for function calling should have a JSON schema describing the inputs and expected outputs. With Logic Apps you can provide the schema in the trigger, and it is automatically imported as a function definition.
Consumption Logic Apps: Currently, only Consumption workflows are supported.
Request trigger: Function calling requires a REST-based API. Logic Apps workflows with a request trigger provide a REST endpoint, so only workflows with a request trigger are supported for function calling.
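To make the Request-trigger requirement concrete, here is a minimal sketch (in Python, standard library only) of how a client could POST JSON to a Logic App's Request-trigger endpoint. The callback URL and payload fields below are hypothetical; the real URL, including its SAS signature, comes from the Azure portal.

```python
import json
import urllib.request

def build_trigger_request(url: str, payload: dict) -> urllib.request.Request:
    """Build a POST request carrying a JSON payload for a Logic App Request trigger."""
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical callback URL; a real one carries a SAS signature in the query string.
req = build_trigger_request(
    "https://prod-00.swedencentral.logic.azure.com/workflows/abc/triggers/manual/paths/invoke",
    {"url": "https://example.wordpress.com", "post": "1"},
)
# Sending it would be: urllib.request.urlopen(req)
```

The function-calling machinery does essentially this on the Assistant's behalf once the workflow is registered as a function.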
AI Automation
So apart from the Assistants API, which we will explore in another post, we know that we can integrate Azure Logic Apps workflows! Isn't that amazing? The road to AI automation is now open, and we are at its genesis, so let's explore it. We need an Azure subscription and:
Azure OpenAI in a supported region. This demo is on Sweden Central.
A Logic Apps Consumption plan.
We will work in Azure OpenAI Studio and utilize the Playground. Our model deployment is GPT-4o.
The Assistants Playground offers the ability to create and save our Assistants, so we can start working, return later, open the Assistant, and continue. We can find the System Message option and the three tools that enhance the Assistants: Code Interpreter, Function Calling (including Logic Apps), and file upload. The following table describes the configuration elements of our Assistants:
Assistant name: Your deployment name that is associated with a specific model.
Instructions: Instructions are similar to system messages; this is where you give the model guidance about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality, tell it what it should and shouldn't answer, and tell it how to format responses. You can also provide examples of the steps it should take when answering.
Deployment: This is where you set which model deployment to use with your assistant.
Functions: Create custom function definitions for the models to formulate API calls and structure data outputs based on your specifications.
Code interpreter: Code interpreter provides access to a sandboxed Python environment that can be used to allow the model to test and execute code.
Files: You can upload up to 20 files, with a max file size of 512 MB, to use with tools. You can upload up to 10,000 files using AI Studio.
The Studio provides two sample functions (Get Weather and Get Stock Price) to give an idea of the JSON schema required for function calling. It is important to provide a clear system message that makes the Assistant efficient and productive, with careful consideration, since the longer the message, the more tokens are consumed.
Challenge #1 – Summarize WordPress Blog Posts
How about giving the Assistant a prompt with a URL, instructing it to summarize a WordPress blog post? We use WordPress because it exposes a unified API, so we only need to change the URL. We could be stricter and narrow the scope to a specific URL, but let's see the flexibility of Logic Apps in a workflow.
We should start with the Logic App. We will generate the JSON schema directly from the Trigger which must be an HTTP request.
{
  "name": "__ALA__lgkapp002", // Remove this for the Logic App Trigger
  "description": "Fetch the latest post from a WordPress website, summarize it, and return the summary.",
  "parameters": {
    "type": "object",
    "properties": {
      "url": {
        "type": "string",
        "description": "The base URL of the WordPress site"
      },
      "post": {
        "type": "string",
        "description": "The page number"
      }
    },
    "required": [
      "url",
      "post"
    ]
  }
}
In the Designer this looks like this:
As you can see, the schema is the same, excluding the name, which is needed only in the OpenAI Assistants. We will see this detail later on. Let's continue with the call to WordPress, an HTTP REST API call:
And finally, a mandatory Response action, where we tell the Assistant that the call was completed and return some payload, in our case the body of the previous step:
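The HTTP action in that middle step calls the WordPress REST API. As a sketch, and assuming the standard WordPress core route wp/v2/posts (the exact query parameters here are an assumption, not taken from the original workflow), the URL it would hit can be built like this:

```python
def wordpress_posts_url(base_url: str, page: str) -> str:
    """Build the WordPress REST API URL for fetching one post from a given page."""
    # wp/v2/posts is the core WordPress REST route; per_page=1 keeps just one post.
    return f"{base_url.rstrip('/')}/wp-json/wp/v2/posts?page={page}&per_page=1"

print(wordpress_posts_url("https://example.wordpress.com/", "1"))
```

Because every WordPress site exposes the same route, only the base URL from the function-call arguments changes between sites.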
Now it is time to open our Azure OpenAI Studio and create a new Assistant. Remember the prerequisites we discussed earlier!
From the Assistants menu create a [+New] Assistant, give it a meaningful name, select the deployment, and add a System Message. For our case it could be something like: "You are a helpful Assistant that summarizes the WordPress blog posts the users request, using Functions. You can utilize code interpreter in a sandboxed environment for advanced analysis and tasks if needed". The code interpreter may be overkill here, but we mention it to show its use! Remember to save the Assistant. Now, in Functions, do not select Logic Apps; instead stay in the custom box and add the schema we presented earlier. The Assistant will understand that the Logic App named in the schema must be called, i.e. ["name": "__ALA__lgkapp002"]. The Logic App is declared with two underscores as prefix and two underscores as suffix, with ALA in between, followed by the name of the Logic App.
Let’s give our Assistant a Prompt and see what happens:
The Assistant responded pretty solidly with a meaningful summary of the post we asked for! Not bad at all for a Preview service.
Challenge #2 – Create Azure Virtual Machine based on preferences
For this task we have activated a system-assigned managed identity on the Logic App, and pre-provisioned a Virtual Network with a subnet. The Logic App must reside in the same subscription as our Azure OpenAI resource.
This is a more advanced request, but it all translates to Logic Apps capabilities. Can we do it fast enough so the Assistant won't time out? Yes we can, by using the latest Azure Resource Manager API, which is indeed lightning fast! The process follows the same pattern: Request – Actions – Response. The request must include enough input for the Logic App to carry out the tasks, and the schema should include a "name" field that tells the Assistant which Logic App to look up:
{
  "name": "__ALA__assistkp02", // Remove this for the Logic App Trigger
  "description": "Create an Azure VM based on the user input",
  "parameters": {
    "type": "object",
    "properties": {
      "name": {
        "type": "string",
        "description": "The name of the VM"
      },
      "location": {
        "type": "string",
        "description": "The region of the VM"
      },
      "size": {
        "type": "string",
        "description": "The size of the VM"
      },
      "os": {
        "type": "string",
        "description": "The OS of the VM"
      }
    },
    "required": [
      "name",
      "location",
      "size",
      "os"
    ]
  }
}
And here is the actual screenshot from the trigger; observe the absence of the "name" field:
Now that we have a number of options, this method allows us to keep track of everything, including the user's inputs such as VM name, VM size, and VM OS. Of course this can be expanded, since we use a default resource group, VNET, and subnet, but those are also configurable! So let's store the input into variables: we initialize five variables for the name, the size, and the location (preset for reduced complexity, since we don't create a new VNET), and we break down the OS. Say the user selects Windows 10. The API expects an offer and a SKU, so from "Windows 10" we derive an offer variable, and likewise an OS variable holding the expected SKU:
if(equals(triggerBody()?['os'], 'Windows 10'), 'Windows-10', if(equals(triggerBody()?['os'], 'Windows 11'), 'Windows-11', 'default-offer'))
if(equals(triggerBody()?['os'], 'Windows 10'), 'win10-22h2-pro-g2', if(equals(triggerBody()?['os'], 'Windows 11'), 'win11-22h2-pro', 'default-sku'))
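The two nested if() expressions implement a simple lookup from the user's OS choice to an image offer/SKU pair. The same logic, sketched in Python for clarity (the offer and SKU strings are the ones used in the expressions above):

```python
# Mapping from the user's OS choice to an (offer, sku) pair, mirroring the
# nested if() expressions in the Logic App workflow.
OS_IMAGE_MAP = {
    "Windows 10": ("Windows-10", "win10-22h2-pro-g2"),
    "Windows 11": ("Windows-11", "win11-22h2-pro"),
}

def os_to_image(os_name: str) -> tuple:
    """Return the (offer, sku) for a given OS name, with a default fallback."""
    return OS_IMAGE_MAP.get(os_name, ("default-offer", "default-sku"))
```

A dictionary lookup like this is also easier to extend to more operating systems than a growing chain of nested if() calls.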
As you can see, this is narrowed to the available Windows desktop choices, but we can expand the Logic App to cover most well-known operating systems.
After the variables, all we have to do is create a Public IP (optional), a network interface, and finally the VM. This is the most efficient flow I could build, so we won't get complaints from the API and it completes very fast, around 3 seconds! The API calls are quite straightforward and everything is documented by Microsoft. Let's see an example for the Public IP:
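The screenshot is not reproduced here, but the shape of such an ARM call can be sketched as follows. The api-version and body fields are assumptions based on the public Azure Resource Manager REST reference, not values taken from the original workflow:

```python
def public_ip_put_request(subscription: str, rg: str, name: str, location: str):
    """Build the URL and body for an ARM PUT that creates a basic Public IP address."""
    url = (
        "https://management.azure.com"
        f"/subscriptions/{subscription}/resourceGroups/{rg}"
        f"/providers/Microsoft.Network/publicIPAddresses/{name}"
        "?api-version=2023-05-01"  # assumed api-version; check the ARM reference
    )
    body = {
        "location": location,
        "sku": {"name": "Standard"},
        "properties": {"publicIPAllocationMethod": "Static"},
    }
    return url, body
```

In the Logic App this maps onto an HTTP action with method PUT, authenticated via the system-assigned managed identity we enabled earlier.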
And the Create VM action, highlighting the storage profile / OS image setup:
Finally we need the Response, which can be whatever we like. I enrich the Assistant's response with an additional "Get Virtual Machine" action, which lets us include the VM properties in the response body:
Let’s make our request now, through the Assistants playground in Azure OpenAI Studio. Our prompt is quite clear: “Create a new VM with size=Standard_D4s_v3, location=swedencentral, os=Windows 11, name=mynewvm02”. Even if we don’t add the parameters the Assistant will ask for them as we have set in the System Message.
Pay attention to the limitations as well. When we ask about the public IP, the Assistant does not know it, yet it informs us with a specific message that makes sense and is relevant to the whole operation. If we look at the time it took, we will be amazed:
The total time from the user request to the Assistant's response is around 10 seconds. With a limit of 10 minutes for function-calling execution, we could build a whole infrastructure using just our prompts.
Conclusion
In conclusion, this experiment highlights the powerful synergy between Azure AI Assistant’s Function Calling capability and the automation potential of Logic Apps. By successfully tackling two distinct challenges, we’ve demonstrated how this combination can streamline workflows, boost efficiency, and unlock new possibilities for integrating intelligent decision-making into your business processes. Whether you’re automating customer support interactions, managing data pipelines, or optimizing resource allocation, the integration of AI assistants and Logic Apps opens doors to a more intelligent and responsive future. We encourage you to explore these tools further and discover how they can revolutionize your own automation journey.
References:
Getting started with Azure OpenAI Assistants (Preview)
Call Azure Logic Apps as functions using Azure OpenAI Assistants
Azure OpenAI Assistants function calling
Azure OpenAI Service models
What is Azure Logic Apps?
Azure Resource Manager – REST Operations
Inventory Sign Out Quantity Issue
Hello, I have something I am trying to solve and have been searching all over.
I have a sheet made to help keep track of stuff taken from the warehouse. We have a scanner that pulls from a sheet that has prices, and that all works fine. The issue I am having is getting the quantity determined based on what is scanned. I was able to get it to fill in "1" if it pulls a proper part number from the list. I can't find a way to tell it to merge the current quantity value and the one under it IF they meet two conditions: first, that the part number is the same, and second, that the equipment number is the same.
For example, as in this screenshot: when you scan an item twice it creates two rows. I need it to check the two conditions, same "Barcode scan" and same "job number", so it can merge the quantities as 1+1.
But it also needs to know if the value of the two merged is indeed merged so it can clear the second barcode scan so it doesn’t repeat endlessly.
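This is a spreadsheet question, but the two-condition merge rule being asked for can be sketched in Python to make the intended logic concrete (the column layout here is hypothetical): rows sharing the same barcode and job number collapse into one row whose quantity is the sum, so the duplicate row disappears rather than repeating endlessly.

```python
def merge_scans(rows):
    """Collapse scan rows that share (barcode, job_number), summing their quantities."""
    merged = {}
    for barcode, job_number, qty in rows:
        key = (barcode, job_number)  # both conditions must match to merge
        merged[key] = merged.get(key, 0) + qty
    # Rebuild rows; first-occurrence order is preserved (Python 3.7+ dicts).
    return [(b, j, q) for (b, j), q in merged.items()]
```

In Excel terms this corresponds to summing quantities over the pair of key columns (e.g. with SUMIFS) and keeping only the first row for each pair.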
Important changes to the Windows enrollment experience coming soon
Windows updates are essential for keeping your devices secure and up to date with the latest security, performance, and reliability improvements. One of the top customer requests we receive is to enable Windows updates during provisioning in the out-of-box experience (OOBE), so that devices are fully patched and ready to use as soon as they are enrolled with mobile device management (MDM).
In the coming weeks, the Windows MDM enrollment experience will be updated to automatically enable quality updates during OOBE. Quality updates are monthly updates that provide security and reliability fixes, as well as enhancements to existing features. These updates are critical for the performance and security of your devices, and we want to make sure they’re delivered as soon as possible. Please note that not every monthly quality update will be made available through the OOBE. Microsoft will determine the availability of these updates based on the value of the update and how it relates to a device setup situation.
What’s changing
With the upcoming October Windows update, all Windows 11, version 22H2 and higher, devices that are enrolled with an MDM, e.g. Microsoft Intune, will automatically download and install quality updates during OOBE. This will apply to all MDM-enrolled devices, regardless of whether they’re pre-registered with Windows Autopilot or not. The updates will be applied before the user reaches the desktop, ensuring that the device is fully patched before logging in.
The new experience will look like this:
After the device connects to the internet and checks for updates, if there are available quality updates found, the device displays a message on the updates page stating that updates are available and being installed.
The device then downloads and installs the quality updates in the background, while showing installation progress.
Once the updates are installed, the device restarts and continues to the desktop. The user then signs in to the device and the device completes enrollment.
Please note that this change only applies to quality updates. Feature updates, which are major updates that introduce new functionality, and driver updates, which provide hardware-specific fixes or enhancements, will not be applied during OOBE but will be managed by your MDM according to your policies.
Impacts and what this means for you
While we believe that this change will improve the Windows enrollment experience and provide more security and reliability for your devices, we also want to make you aware of some potential impacts and what you need to do to prepare.
Additional time in OOBE
Quality update installation during OOBE adds some additional time to the device setup process, depending on when the device was most recently updated, internet speed, and device performance. We recommend notifying your vendors and customers of this additional time, and planning accordingly for your device deployment scenarios.
Organizations using temporary passwords
With the additional time for setup, if you're using a Temporary Access Pass (TAP), the passcode may expire before the user signs in to the desktop. To avoid this, we recommend extending the validity period of temporary passwords used during enrollment.
Summary
There may be instances where the update is not initiated if the Windows Update for Business (WUfB) policies that block or delay updates are applied to the device before reaching the New Device Update Page (NDUP). This is particularly possible if app installations significantly delay the Enrollment Status Page (ESP).
At this time, there’s no option to control or disable quality updates during OOBE. As mentioned earlier in this blog, we’re exploring when all monthly quality updates can be available and manageable during OOBE to provide the best overall experience.
We hope that this change will improve your Windows Autopilot experience and provide more security and reliability for your devices. If you have any feedback or questions, please let us know in the comments or reach out on X @IntuneSuppTeam.
Microsoft Tech Community – Latest Blogs –Read More
Introducing Reporting and Entra ID Authentication for Microsoft Playwright Testing
Microsoft Playwright Testing is a managed service built for running Playwright tests easily at scale. As we aim to improve the developer experience, our interactions with users have made clear the need for simpler, more efficient troubleshooting. Today, we’re excited to introduce a new web-hosted reporting dashboard to help speed up troubleshooting and make it easier for developers to identify and resolve issues. To further enhance security, we’re also implementing Microsoft Entra ID as the default authentication method, providing a more secure and seamless workflow.
Read on to learn more about what’s now possible with Microsoft Playwright Testing.
Reporting Dashboard
As development teams scale and iterate rapidly, maintaining high quality becomes more critical than ever. Slow issue resolution impacts the entire development process. With our new reporting feature, anyone on your team can quickly access detailed test results from a CI/CD run, complete with rich artifacts like logs, screenshots, and traces for efficient troubleshooting.
The reporting feature streamlines your workflow by surfacing the tests that need your attention. The test run view is filtered to failed and flaky tests so that you can start troubleshooting instantly. You can click through each test to find all the information you need to troubleshoot.
Screen capture of troubleshooting in the Playwright dashboard
Troubleshoot easily using rich artifacts
All test logs and artifacts, including screenshots, videos, and traces are securely stored in a centralized location. They can be accessed through a unified dashboard with configurable permissions.
The Trace Viewer is a powerful tool hosted directly in the dashboard. It allows you to visually step through your test execution, or use the timeline to hover over steps and reveal the page state before and after each action. Detailed logs, DOM snapshots, network activity, errors, and console output are available at each test step for precise troubleshooting.
Screenshot of trace viewer hosted in the Playwright dashboard
Seamless integration with CI pipelines
Test results in the dashboard capture essential CI pipeline details such as commit information, author, and branch, with one-click access to the CI pipeline that ran the tests. This enables you to easily investigate code changes related to the test result.
For GitHub Actions users, summarized reports are displayed directly in the job summary section, providing a clear overview of test results and direct links to the Playwright dashboard for in-depth analysis.
Screenshot of GitHub Actions job summary
Securely authenticate using Microsoft Entra ID
We are also excited to add Microsoft Entra ID support to achieve a more secure default authentication method for Playwright Testing service. Access tokens, though convenient, pose inherent risks such as potential leaks, frequent rotations, and accidental exposure in code. Microsoft Entra ID mitigates these risks by securely authenticating clients with Azure when running tests on cloud-hosted browsers and publishing test reports and artifacts, streamlining workflows and simplifying access control.
Although we recommend using Microsoft Entra ID authentication, access token authentication will still be supported, ensuring flexibility for existing setups and easing the transition to this more secure approach.
Get started with Playwright Testing service
Getting started is easy—simply install the service package by running this command:
npm init @azure/microsoft-playwright-testing
This will provide you with the configuration file required to run your tests against the service and publish test results. You don’t need to modify your test code. Use the newly created Playwright service configuration file to run the tests. The package also facilitates authentication using Microsoft Entra ID and is compatible with Playwright version 1.47 and above.
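For reference, the generated service config is typically a thin wrapper around your existing Playwright config. A minimal sketch of what it can look like, assuming the getServiceConfig helper exported by the @azure/microsoft-playwright-testing package (treat the file name and exact shape as illustrative, not as the definitive generated output):

```typescript
// playwright.service.config.ts -- sketch of the generated service config.
// getServiceConfig wires the run up to the service's cloud-hosted browsers
// and reporting; your existing playwright.config.ts is reused unchanged.
import { getServiceConfig } from "@azure/microsoft-playwright-testing";
import { defineConfig } from "@playwright/test";
import config from "./playwright.config";

export default defineConfig(config, getServiceConfig(config));
```

You then point Playwright at this file when running, e.g. `npx playwright test --config=playwright.service.config.ts`.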
Next, you can explore our flexible consumption-based pricing, where you pay only for what you use.
Share your feedback
Your feedback is invaluable to us. Please share your feedback and help us shape the future of Microsoft Playwright Testing.
Learn more about the Microsoft Playwright Testing service
Learn more about using the Playwright Testing service for your web application testing.
Explore the features and benefits that Microsoft Playwright Testing offers.
Learn about our flexible pricing.
Use the pricing calculator to determine your costs based on your business needs.
Grab Your Board and Catch a Wave… Copilot Wave 2 That Is
Happy Monday and what a way to kick off the week! I just started back this morning after a 10-week sabbatical (THANK YOU MICROSOFT!) and was greeted with the kickoff of Microsoft 365 Copilot Wave 2, presented by Microsoft’s Satya Nadella and Jared Spataro. The streaming Wave 2 kickoff centered on three main areas of focus: 1) Copilot Pages, 2) Copilot in Microsoft 365 apps, and 3) Copilot agents. If you missed the session, no worries. I have grabbed a link to the recording, the follow-up Copilot blog post, some individual deep-dive videos, as well as some additional content to help you on your Copilot Wave 2 journey.
Watch the recording of today’s announcements on LinkedIn by clicking here.
Check out the blog post “Microsoft 365 Copilot Wave 2: Pages, Python in Excel, and agents”
Watch videos on:
Copilot Studio Agent Builder
Copilot Pages
Prioritize my Inbox in Outlook
Python in Excel
Narrative Builder in PowerPoint
Get powerful Microsoft 365 Copilot adoption resources available to help your organization on its Copilot journey
Microsoft 365 Copilot home page
Thanks for visiting – Michael Gannotti LinkedIn
Simscape Multibody reinforcement learning: impossible to run examples
I was trying to run some examples of training on Simscape using reinforcement learning, but I’m not able to run them. They look to be by far too demanding for my machine. Does anybody share my issues? simscape, reinforced learning MATLAB Answers — New Questions
MATLAB stalls after running script
I have a pretty large script that analyzes a somewhat large data file (~5 GB). My script works fine when I first run it, but when I go to run a code block or even just try to get the output from a single simple variable in the Command Window AFTER I have loaded everything into my workspace, MATLAB will stall for a minute or even more before starting to run whatever command I gave it. I’ve been monitoring my PC resources and it doesn’t seem like I am running out of RAM or anything (working with 64 GB). I even went through and cleared many larger variables that were not needed for later parts of the script, and the problem persists. I do not receive any errors; it is just very slow to do simple things. Once it starts executing the command, it runs at the expected speed (I’ve verified with some manual progress bars I coded in).
The data that I load is from a single .MAT file which has a structure in it with all of my data. I’ve also run this script on 3 other PCs and had the same issue. structures MATLAB Answers — New Questions
How to change units in Bode Diagram?
I want to change the units of frequency and magnitude in the Bode diagram. How can I do this? change, bode, diagram, units, properties, programmatically, command, line, figure, setoptions MATLAB Answers — New Questions
How to Change Line Color on Mouse Click Without Disabling Pan and Zoom in MATLAB?
I’m working on a MATLAB plot where I want to achieve the following functionality:
Change the color of a line when it’s clicked.
Enable panning by dragging while clicking off the line.
Enable zooming using the scroll wheel.
Here’s the simple code I’m using:
% Plotting a simple line
h = plot([0,1],[0,1],'LineWidth',3);
% Setting the ButtonDownFcn to change line color (placeholder code)
h.ButtonDownFcn = @(~,~) disp(h);
In this example, clicking on the line displays its handle h in the Command Window, which is a placeholder for my actual code to change the line color.
The Problem:
Assigning a ButtonDownFcn to the line object seems to override MATLAB’s built-in pan and zoom functionalities. After setting the ButtonDownFcn, I’m unable to pan by clicking and dragging off the line, and the scroll wheel zoom no longer works. It appears that the custom callback interferes with the default interactive behaviors, even when I’m not interacting directly with the line.
My Questions:
Why does setting the ButtonDownFcn on the line object disable panning and zooming, even when interacting off the line?
Is there a way to have both the custom click behavior (changing the line color when clicked) and retain the default pan and zoom functionalities when interacting elsewhere on the plot? callback, plot MATLAB Answers — New Questions
It’s Nearly 2025 and Meeting Channel invites still don’t work properly
With channel meetings, all members get a meeting invite regardless of whether the organizer invites them or not.
Why even offer the ability to invite individuals in the first place?
It’s been so long, I’m starting to think this was by design and MS has no intention of fixing it.
Hidden Symbol in Word; Cannot Find and Replace
When copying and pasting from web pages or from Google email, I often encounter this weird symbol, embedded in the document and only visible with the Show/Hide function. It creates an extra space in documents. The problem is, I cannot do a “Find/Replace” to remove it from Word docs. Can someone tell me what this character is referred to as in Word and how to “Find/Replace” it? Thanks.
Update a sharepoint Excel file with the contents of multiple Excels in another folder
For context, we receive a monthly Excel report that is automatically uploaded to our SharePoint.
At the moment we have someone manually copy and paste the content from these newly uploaded files into a “master worksheet” that contains all the reports’ data in a single file. I want to know if there is a way that we can automate the process of updating this Excel file?
The tabs and columns on all the Excel files are exactly the same.
Re: Notes
How do I recover notes that were on my iPhone prior to today? I deleted my account to register it again today, and my notes were gone when I added my email back to my phone.
Update: Cost-effective genomics analysis with Sentieon on Azure
This Blog was Co-Authored by Don Freed – Sr. Bioinformatics Scientist, Brendan Gallagher – Head of Business Development at Sentieon, Inc.
In our previous blog, we discussed benchmarking the performance of Sentieon’s DNAseq and DNAscope pipelines on Azure instances using v202112.05 of the software. Since the publication of those results, there have been significant updates to the Sentieon software. As a result, we have updated the benchmarking to use Sentieon version 202308.01. We break down the runtime and cost of the pipelines on a wide range of currently available instances. These benchmarks use publicly available datasets, and the pipeline is available on GitHub.
Additionally, we have worked with Sentieon to develop a Terraform template for deployment of the license server.
Running Sentieon on Azure
The pipelines and scripts needed for setup used in this benchmarking are provided on GitHub.
Instance Setup
The script at misc/instance_setup.sh performs initial setup of the instance and download/installation of software packages used in the benchmark.
Input datasets
In these benchmarks, as before, we use the GIAB HG002 sample sequenced on multiple sequencing platforms. Input datasets for the benchmark are recorded in config/config.yaml, with the exception of the Element dataset, which you will have to download on your own.
We recommend downloading all the files and placing them in Azure Blob Storage. You can use AzCopy to transfer the required files to your own storage account using a shared access signature with “Write” access. We then recommend updating the configs to use a shared access signature for each file. The pipeline will automatically download input files.
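As a sketch of that AzCopy step (the storage account, container, and SAS token below are placeholders you must substitute; the command is echoed rather than executed so it can be reviewed first):

```shell
# Stage one of the GIAB inputs into your own Blob container with AzCopy,
# authorizing the destination with a SAS token that grants Write access.
SRC="https://giab.s3.amazonaws.com/release/references/GRCh38/GCA_000001405.15_GRCh38_no_alt_analysis_set.fasta.gz"
# <account>, <container>, and <sas-token> are placeholders for your own values.
DST="https://<account>.blob.core.windows.net/<container>/GCA_000001405.15_GRCh38_no_alt_analysis_set.fasta.gz?<sas-token>"

# Echoed for review; remove 'echo' to perform the actual transfer.
echo azcopy copy "$SRC" "$DST"
```

Once transferred, the corresponding `https://<account>.blob.core.windows.net/...?<sas-token>` URL is what goes into config/config.yaml.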
Input FASTQ files were obtained as previously outlined; we have added the new ONT dataset below:
ONT HPRC
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_1_Dorado_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_2_Dorado_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_3_Dorado_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_4_Dorado_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_5_Dorado_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_6_Dorado_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_7_Dorado_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_8_Dorado_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_9_Dorado_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_10_Dordo_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_11_Dorado_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
The input files vary in their coverage, so the datasets with FASTQ input were down-sampled to approximately 93 billion bases (~30x coverage) prior to processing with the Sentieon secondary analysis pipelines. The Ultima CRAM file was not down-sampled and is at 40x coverage as recommended by Ultima Genomics. The ONT duplex sample was not down-sampled and is at approximately 30x coverage.
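The ~30x figure follows directly from the base count: coverage is simply total sequenced bases divided by genome length. A quick sketch, assuming a ~3.1 Gb human genome:

```python
# Coverage = total sequenced bases / genome length.
total_bases = 93e9        # down-sampling target from the text (93 billion bases)
genome_length = 3.1e9     # approximate length of the human genome (hg38)
coverage = total_bases / genome_length
print(f"~{coverage:.0f}x coverage")  # -> ~30x coverage
```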
The data were processed using the hg38 reference genome. The reference genome at https://giab.s3.amazonaws.com/release/references/GRCh38/GCA_000001405.15_GRCh38_no_alt_analysis_set.fasta.gz was used for files with input in the FASTQ format. The reference genome at https://broad-references.s3.amazonaws.com/hg38/v0/Homo_sapiens_assembly38.fasta was used with the Ultima data in CRAM format, as this dataset was already aligned to this reference genome.
Running benchmarks on Azure
The script at misc/run_benchmarks.sh was used to run the benchmarks. It orchestrates the localization of the input datasets, references, and model files, and the execution of Snakemake workflows on the machine. The workflow will down-sample the input data for consistency across the Sentieon analysis workflows and will calculate variant calling accuracy against the Genome in a Bottle (GIAB) v4.2.1 truth set. For the ARM benchmarking we did not run ONT and PacBio data, as minimap2 is not supported by Sentieon on that architecture in version 202308.01. Support for minimap2 on ARM was added in version 202308.03 of the Sentieon software.
Improved Benchmarking with HBv3
To test the improvement of the software, we retested on the HBv3 series of machines, which we previously recommended. These machines are optimized for applications driven by memory bandwidth, such as fluid dynamics, finite element analysis, and reservoir simulation, and are a good fit for Sentieon’s analysis pipelines. Figure 1 presents the runtime and Spot compute cost of running Sentieon’s analysis pipelines for germline variant calling across multiple sequencing technologies on a Standard_HB120rs_v3 instance in US East at the time of publication.
Figure 1: Runtime and Spot compute cost of Sentieon DNAseq and DNAscope pipelines on Standard_HB120rs_v3.
Using the Standard_HB120rs_v3, we analyzed 30x Illumina NovaSeq and HiSeq X samples from FASTQ to VCF using the DNAseq and DNAscope pipelines. The DNAseq pipeline took around 28 minutes at a cost of $0.17. Sentieon’s DNAscope pipeline has been sped up and now finishes about 10 minutes faster, in around 18 minutes at a cost of $0.11, about 6 cents less; see Table 1.
The Ultima UG100 dataset is already aligned to the reference genome, so the pipeline performed variant calling without alignment. The DNAscope pipeline finished in 18 minutes for a Spot cost of $0.10.
Sentieon’s DNAscope LongRead pipeline for PacBio HiFi data is more computationally intensive, as it includes multiple passes of variant calling along with read-backed phasing. The DNAscope LongRead pipeline finished in 41 minutes with a Spot cost of $0.25. We added ONT data in this round of tests; like the PacBio data, the ONT pipeline is more computationally involved. The DNAscope LongRead pipeline finished in 88 minutes with a Spot cost of $0.53 on the ONT long reads.
The Element Biosciences AVITI system is supported by a customized Sentieon DNAscope pipeline. Sentieon’s DNAscope pipeline for Element Biosciences finished in 21 minutes with a Spot cost of $0.13.
All run times and costs can be found in Table 1.
Sample           | Pipeline | Alignment (min) | Preprocessing (min) | Variant Calling (min) | Total Runtime (min) | On Demand ($) | Spot ($)
Element Aviti    | DNAscope | 11.05           | 2.30                | 7.39                  | 20.74               | 1.24¹         | 0.12¹
Illumina HiSeq X | DNAseq   | 21.09           | 2.97                | 4.11                  | 28.18               | 1.69¹         | 0.17¹
Illumina HiSeq X | DNAscope | 9.47            | 1.40                | 7.71                  | 18.57               | 1.11¹         | 0.11¹
Illumina NovaSeq | DNAseq   | 21.53           | 2.63                | 4.43                  | 28.59               | 1.72¹         | 0.17¹
Illumina NovaSeq | DNAscope | 9.74            | 1.39                | 7.78                  | 18.92               | 1.14¹         | 0.11¹
ONT Duplex       | DNAscope | 32.91           | N/A                 | 55.37                 | 88.28               | 5.30¹         | 0.53¹
PacBio HiFi      | DNAscope | 11.49           | N/A                 | 29.75                 | 41.24               | 2.47¹         | 0.25¹
Ultima UG100     | DNAscope | N/A             | N/A                 | 17.87                 | 17.87               | 1.07¹         | 0.11¹
Table 1: Runtime and On Demand and Spot compute cost of Sentieon DNAseq and DNAscope pipelines on Standard_HB120rs_v3. Alignment includes alignment with Sentieon BWA-MEM for short-read data and alignment with Sentieon minimap2 for PacBio HiFi and ONT Duplex data. Preprocessing includes duplicate marking, base-quality score recalibration, and merging of multiple aligned files into a single file. Variant calling includes variant calling or variant candidate identification along with variant genotyping and filtering. Variant calling for PacBio HiFi data is implemented as a multi-stage pipeline. All runs were in the eastus region.
¹ Pricing is accurate at the time of publication.
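Since every run in Table 1 used the same Standard_HB120rs_v3 instance, each row’s Spot cost should be roughly the hourly Spot rate multiplied by the runtime. Back-computing the implied rate from a few rows is a quick sanity check on the table (values taken from Table 1):

```python
# Implied hourly Spot rate = spot_cost / (runtime_min / 60).
# All rows ran on the same instance type, so the rates should roughly agree.
rows = {
    "HiSeq X DNAseq": (28.18, 0.17),
    "HiSeq X DNAscope": (18.57, 0.11),
    "ONT Duplex DNAscope": (88.28, 0.53),
    "PacBio HiFi DNAscope": (41.24, 0.25),
}
implied = {name: cost / (minutes / 60) for name, (minutes, cost) in rows.items()}
for name, rate in implied.items():
    print(f"{name}: ~${rate:.2f}/hour")
```

All four rows imply a rate of roughly $0.35–0.37/hour, consistent with a single Spot price applied across runtimes.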
Let’s compare the improvements between v202112.05 and v202308.01 of the software results based on the provided information:
1. DNAseq Pipeline Performance:
– v202112.05: Took around 30 minutes with a Spot cost of $0.18.
– v202308.01: Took around 28 minutes with a Spot cost of $0.17.
– Improvement: In v202308.01, the runtime decreased by 2 minutes and the cost decreased by $0.01.
2. DNAscope Pipeline Performance:
– v202112.05: Took around 32 minutes with a cost of $0.19.
– v202308.01: Improved to 19 minutes with a cost of $0.11.
– Improvement: In v202308.01, the runtime decreased significantly to 19 minutes, and the cost decreased by $0.07.
3. DNAscope LongRead Pipeline Performance (PacBio HiFi Data):
– v202112.05: Finished in 72 minutes with a Spot cost of $0.42.
– v202308.01: Improved to 41 minutes with a Spot cost of $0.25.
– Improvement: In v202308.01, the runtime decreased significantly to 41 minutes, and the cost decreased by $0.17.
4. Element Biosciences AVITI System Performance:
– v202112.05: Finished in 31 minutes with a Spot cost of $0.18.
– v202308.01: Improved to 20 minutes with a Spot cost of $0.12.
– Improvement: In v202308.01, the runtime decreased slightly to 20 minutes, and the cost decreased by $0.06.
Overall, in v202308.01, significant improvements were observed in the runtime and cost efficiency of the DNAscope pipeline, whereas minor fluctuations were noted in other pipeline performances. It’s also important to note that v202308.01 introduced support for ONT data in the DNAscope LongRead pipeline.
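The percentage gains implied by these numbers are easy to work out; a quick sketch using the runtimes quoted above:

```python
# Runtime improvement from Sentieon v202112.05 to v202308.01,
# using the minute figures quoted above as (old_runtime, new_runtime).
benchmarks = {
    "DNAseq": (30, 28),
    "DNAscope": (32, 19),
    "DNAscope LongRead (HiFi)": (72, 41),
    "DNAscope (Element AVITI)": (31, 20),
}
for name, (old, new) in benchmarks.items():
    pct = 100 * (old - new) / old
    print(f"{name}: {old} -> {new} min ({pct:.0f}% faster)")
```

This puts DNAscope at roughly a 41% runtime reduction and the LongRead pipeline at roughly 43%, versus about 7% for DNAseq.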
Sentieon benchmark across multiple instance families and architectures
The Sentieon pipelines and software can scale to smaller or larger instances depending on the data as well as instance availability. To provide an accurate representation of performance across various architectures, we again benchmarked the Sentieon DNAseq and DNAscope pipelines with the Illumina NovaSeq dataset on ARM and x86 architectures. The runtime and On Demand and Spot compute costs are shown in Figures 2 and 3, respectively. With On Demand VMs, you pay for compute capacity by the second, with no commitments or upfront payments, while with Spot VMs, you pay for unused compute capacity at a discount.
Figure 2: Runtime and Dedicated and Spot compute cost of Sentieon DNAseq pipeline across various Azure machine types using Illumina NovaSeq dataset sorted by overall runtime. Larger instances provide lower runtime, while cost is generally consistent within a family but does differ between architectures.
Figure 3: Runtime and Dedicated and Spot compute cost of Sentieon DNAscope pipeline across various Azure machine types using Illumina NovaSeq dataset sorted by overall runtime. Larger instances provide lower runtime, while cost is generally consistent within a family but does differ between architectures.
For the fastest turnaround, the Sentieon DNAseq pipeline can process the Illumina 30x NovaSeq dataset in 28 minutes on a Standard_HB120rs_v3, with a Dedicated cost of $1.72 or a Spot cost of $0.11; see Figure 2. As another cost-effective option, DNAseq can be run on the Standard_D96ads_v5 instance with an On-Demand cost of $3.38, a Spot cost of $0.34, and a turnaround time of under 40 minutes; see Figure 2. The DNAscope pipeline runs on the Standard_D96ads_v5 instance with an On-Demand cost of $2.55, a Spot cost of $0.26, and a turnaround time of 31 minutes; see Figure 3. Note that for the Standard_F48s_v2, an additional external disk was used to accommodate all the test data for the analysis but was not included in the overall cost.
Let’s compare the performance and cost efficiency between version v202308.01 and v202112.05:
1. DNAseq Pipeline Performance:
– v202112.05: Processed Illumina 30x NovaSeq dataset in 30 minutes on a Standard_HB120rs_v3 with a Spot cost of $0.18.
– v202308.01: Processes the dataset in 28 minutes on a Standard_HB120rs_v3 with a Spot cost of $0.11. Alternatively, it can be processed on a Standard_D96ads_v5 instance in under 40 minutes with a Spot cost of $0.34.
– Improvement: The turnaround time on the Standard_HB120rs_v3 decreased slightly to 28 minutes, with a decrease in Spot cost of $0.07. Additionally, a new option is available on the Standard_D96ads_v5 instance with a slightly longer turnaround time of under 40 minutes but at a higher Spot cost of $0.34 compared to $0.11.
2. DNAscope Pipeline Performance:
– v202112.05: Turnaround time of under 50 minutes with a Spot cost of $0.39.
– v202308.01: Turnaround time of 31 minutes on a Standard_D96ads_v5 instance with an On-Demand cost of $2.55 and a Spot cost of $0.26.
– Improvement: In v202308.01, the turnaround time decreased to 31 minutes, with a Spot cost of $0.26, offering improved performance and cost efficiency compared to the previous version.
3. Comparison Against ARM CPUs:
– v202112.05: ARM runtime was within 10-20 minutes of X86 equivalent for Intel and AMD. Spot price of $0.33 for DNAscope and $0.30 for DNAseq pipeline.
– v202308.01: ARM runtime was within 10-20 minutes of X86 equivalent for Intel and AMD. No significant difference in cost between architectures.
– Improvement: No significant difference in cost between the architectures is noted in v202308.01, whereas in v202112.05, there was a significant difference in cost for AMD architecture compared to Intel.
Overall, in v202308.01, the DNAseq pipeline on the Standard_HB120rs_v3 shows a slight decrease in turnaround time and cost, while the DNAscope pipeline on the Standard_D96ads_v5 instance demonstrates improved performance and cost efficiency compared to the previous version. Additionally, there is no significant difference in cost between the ARM and x86 architectures in v202308.01, unlike in v202112.05. We would also note that the ordering of the machine types is slightly different, but without significant changes.
We were also able to run a comparison against ARM CPUs. For a direct comparison we used the equivalent 32 vCPU machines; the largest ARM instance available is 64 vCPU, compared to 96 vCPU on x86 (Figures 2 and 3). Table 2 shows that ARM runtimes were within 10-20 minutes of the equivalent x86 (Intel and AMD) runtimes, and Dedicated cost was comparable across the board for both the DNAscope and DNAseq pipelines. This time, however, there was no significant difference in cost between the architectures.
VM Size | Architecture | Pipeline | Total Runtime (min) | On Demand ($)¹ | Spot ($)¹
D32ds_v5 | x86 (Intel) | DNAscope | 64.51 | 1.94 | 0.19
D32ads_v5 | x86 (AMD) | DNAscope | 76.95 | 2.11 | 0.21
D32pds_v5 | ARM | DNAscope | 82.00 | 1.98 | 0.20
D32ds_v5 | x86 (Intel) | DNAseq | 121.51 | 3.66 | 0.37
D32ads_v5 | x86 (AMD) | DNAseq | 115.12 | 3.16 | 0.32
D32pds_v5 | ARM | DNAseq | 123.72 | 2.98 | 0.30
Table 2: Runtime, Dedicated, and Spot compute cost of the Sentieon DNAseq and DNAscope pipelines across 32 vCPU architectures. All runs were in the eastus region.
¹ Pricing is accurate at the time of publication.
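As a quick sanity check on Table 2, the Spot discount relative to On-Demand can be computed directly from the listed prices (the values below are copied from the table; the script itself is just an illustrative calculation, not part of the Sentieon tooling):

```python
# Spot vs. On-Demand cost from Table 2 (eastus pricing at publication time).
runs = [
    ("D32ds_v5",  "DNAscope", 1.94, 0.19),
    ("D32ads_v5", "DNAscope", 2.11, 0.21),
    ("D32pds_v5", "DNAscope", 1.98, 0.20),
    ("D32ds_v5",  "DNAseq",   3.66, 0.37),
    ("D32ads_v5", "DNAseq",   3.16, 0.32),
    ("D32pds_v5", "DNAseq",   2.98, 0.30),
]
for vm, pipeline, on_demand, spot in runs:
    discount = 1 - spot / on_demand
    print(f"{vm:10s} {pipeline:9s} Spot saves {discount:.0%}")
```

Across every machine type in the table, Spot pricing works out to roughly a 90% discount over On-Demand.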
These results highlight the ability of the Sentieon software to scale up large instances for faster turnaround and down to smaller instances as needed. We only included a subset of potential compute, based on optimized compute-to-price ratios. However, the Sentieon tools can also be used with other machine families, based on availability in a given region.
Conclusion
Sentieon’s updated DNAseq and DNAscope pipelines are highly scalable and can be used on a variety of machine types. The software can scale up to 120 vCPU Standard_HB120rs_v3 instances for turnaround times of 28 minutes, or down to Standard_D32pds_v5 instances for better Spot pricing of $0.30 per run.
If you can get Standard_HB120rs_v3 in your preferred region, it is the cheapest per run. If it is not available, all other Spot pricing options remain attractive, with Standard_D32ds_v5 and Standard_D96ds_v5 offering the best cost advantage. If you are optimizing for turnaround time, we recommend any of the 96 vCPU options. Sentieon’s FASTQ to VCF pipelines can process Illumina 30x whole genomes for less than $3.60 on On-Demand machines, or for $0.33 on Spot machines, in under 120 minutes. Standard_D32ds_v5 processes the DNAseq pipeline for $3.66 On-Demand or $0.37 Spot in about 121 minutes. On Spot machines, Sentieon DNAseq can process 30x genomes from FASTQ to VCF for less than $1.50 across the variety of machine types that we tested.
Overall, the new version of the software has decreased cost and, in some cases, decreased turnaround time, with increased performance and range of datasets it can analyze.
Readers should note that all costs represent hardware costs and don’t represent software licensing costs.
To get started with the Sentieon software on Azure, please reach out to info@sentieon.com or visit the Sentieon website at www.sentieon.com.
Microsoft Tech Community – Latest Blogs –Read More
Key Architectural Differences Between AWS and Azure Explained
Introduction
In today’s fast-moving digital world, cloud platforms are the foundation of everything from small startups to global enterprises. Choosing the right one can make all the difference when it comes to scalability, security, and driving innovation. With over 94% of companies relying on cloud services, expanding from AWS to Microsoft Azure unlocks a host of new possibilities.
Azure not only provides robust tools and services to optimize your infrastructure, but it also puts you at the forefront of AI advancements. From integrated AI services like Azure OpenAI to sophisticated machine learning models, Azure empowers businesses to transform how they build, deploy, and scale intelligent applications.
This guide explores the key differences between AWS and Azure—covering network architecture, availability zones, security, and more—helping you make informed decisions to future-proof your cloud strategy and stay ahead in an AI-driven world.
1. Network Architecture: AWS VPC vs. Azure VNET
AWS Virtual Private Cloud (VPC)
In AWS, the Virtual Private Cloud (VPC) is the backbone of your network architecture. It lets you build isolated environments where you control every aspect of your networking. The subnets in a VPC must be clearly designated as either public or private, ensuring a firm boundary between internet-facing resources and internal systems. Here’s how AWS VPC handles traffic and segmentation:
AWS VPC Network Segmentation
Key Components:
Public Subnet: Hosts internet-facing resources, such as web servers, which handle incoming HTTP traffic through an Internet Gateway (IGW).
Private Subnet: Hosts internal resources like databases that don’t have direct internet access.
Internet Gateway (IGW): The bridge that provides internet access for public subnets.
VPC Endpoint Gateway: Allows secure, private access to AWS services like S3 and DynamoDB without needing an internet connection.
NAT Gateway: Enables outbound internet traffic from private subnets.
Security Groups and Network ACLs: Provide both stateful and stateless traffic filtering to control inbound and outbound traffic.
Architectural Characteristics:
Explicit Segmentation: Subnets are clearly marked as public or private, making it easy to manage resource placement.
Manual Configuration: Setting up Internet Gateway (IGW), NAT Gateway, and route tables requires hands-on configuration.
Availability Zones (AZs): Resources are often spread across multiple AZs to ensure high availability and fault tolerance.
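The explicit public/private split described above starts with carving the VPC's address space into per-AZ subnets. A minimal sketch of that planning step using Python's standard ipaddress module (the CIDR ranges and subnet names are illustrative, not tied to any real deployment):

```python
import ipaddress

# Hypothetical VPC address space, split into four equal /24 subnets:
# one public and one private subnet in each of two Availability Zones.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))[:4]

plan = {
    "public-az1":  subnets[0],
    "private-az1": subnets[1],
    "public-az2":  subnets[2],
    "private-az2": subnets[3],
}
for name, cidr in plan.items():
    print(f"{name:12s} {cidr} ({cidr.num_addresses} addresses)")
```

In AWS, each of these subnets would then get a route table pointing at the IGW (public) or NAT Gateway (private); in Azure, as described below, the subnets themselves stay neutral and NSGs plus public IP assignment do the distinguishing.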
Azure Virtual Network (VNet)
Azure Virtual Network (VNet) provides similar network isolation as AWS, but with a stronger focus on managed services and simplifying network segmentation. It’s designed to reduce the complexity of manual configuration and make networking more efficient.
Azure VNET Network Segmentation
Key Components:
Public Subnet: Hosts resources that have direct internet access through assigned public IP addresses.
Private Subnet: Holds internal resources and securely connects to Azure services using Private Endpoints through Private Link.
Network Security Groups (NSGs): Control traffic to and from both public and private subnets, ensuring your resources are properly shielded.
Azure NAT Gateway: Offers outbound internet connectivity for resources that don’t have public IPs.
Service Endpoints and Private Links: Enable secure, private access to Azure services without needing to expose your resources to the internet.
Architectural Characteristics:
Streamlined Internet Access: Public IP addresses can be directly assigned to resources, bypassing the need for an Internet Gateway (IGW).
Azure NAT Gateway: Offers outbound connectivity for private subnets without public IPs. The setup is simpler compared to AWS’s NAT Gateway, reducing the need for intricate routing configurations.
Integrated Services: Azure emphasizes managed services like Private Link, which simplify complex networking tasks, reducing the need for hands-on management.
Abstraction: Less manual configuration of routing and network appliances, making it easier for organizations to manage.
Key Architectural Differences:
Internet Connectivity:
AWS: Requires an Internet Gateway (IGW) for public subnet internet access.
Azure: Public IPs are directly assigned; no IGW equivalent is needed, and Azure NAT Gateway abstracts much of the internet connectivity configuration.
Subnet Designation:
AWS: Subnets must be explicitly marked as public or private.
Azure: Subnets are neutral; traffic control is handled by NSGs and public IP assignment.
Network Segmentation:
AWS: Provides granular control using Security Groups and NACLs.
Azure: Simplifies this with NSGs and Application Security Groups (ASGs), offering easier management of security rules.
2. Availability Zones and Redundancy
AWS Availability Zones
In AWS, regions are divided into multiple Availability Zones (AZs) to ensure high availability and fault tolerance. Resources can be deployed across these AZs, but it’s not automatic—you need to explicitly distribute them for redundancy, which often involves manual setup.
Multi-AZ architecture ensures redundancy and fault tolerance.
Architectural Approach:
Manual Distribution: Resources must be manually deployed across AZs to achieve redundancy.
Load Balancing: AWS uses Elastic Load Balancers to distribute traffic across multiple AZs for high availability.
High Availability Configurations: For services like RDS, configuring multi-AZ deployments requires additional setup to ensure proper redundancy and failover.
Azure Availability Zones
Azure also provides Availability Zones but takes a different approach by offering automatic zone-redundancy for many services. This abstraction reduces the complexity of managing high availability, especially for managed services. However, it’s important to remember that certain IaaS services, like Azure VMs, still require explicit configuration for redundancy across AZs. Additionally, geo-redundancy (multi-region failover) isn’t automatic for every service and must be configured for mission-critical workloads.
Azure abstracts zone management for many services, making them zone-redundant by default without manual configuration.
Architectural Approach:
Automatic Redundancy: Many managed services, like Azure SQL Database, come with built-in zone redundancy by default, saving you the hassle of manual configuration.
Managed Services: Azure abstracts most of the complexity by automatically handling replication and failover for services like Azure SQL Database.
Zone-Aware Services: Not all services in Azure require explicit AZ configurations, making it easier to achieve high availability without manual effort.
Key Architectural Differences:
Resource Deployment:
AWS: Requires manual placement across AZs for redundancy.
Azure: Many services are inherently zone-redundant, though not all services are automatically redundant.
Operational Overhead:
AWS: Achieving high availability often requires more manual configuration.
Azure: Reduces complexity with built-in redundancy for managed services, such as Azure SQL Database, allowing for easier scaling and high availability without additional setup.
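The value of spreading workloads across zones, whether manually on AWS or automatically on Azure, can be illustrated with a back-of-the-envelope availability calculation. This is a simplified model assuming independent zone failures, and the per-zone availability figure is an assumption chosen for illustration:

```python
# If a single zone is available with probability p, and zones fail
# independently, at least one of n replicas is up with probability
# 1 - (1 - p) ** n.
def multi_zone_availability(p: float, zones: int) -> float:
    return 1 - (1 - p) ** zones

p = 0.999  # assumed single-zone availability (illustrative only)
for n in (1, 2, 3):
    print(f"{n} zone(s): {multi_zone_availability(p, n):.9f}")
```

Each additional zone multiplies the unavailability by another factor of (1 - p), which is why three-zone redundancy is the common default for managed services.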
3. Security Models: AWS vs. Azure Controls
AWS Security Controls
In AWS, security is managed with a combination of Security Groups (SGs) and Network ACLs (NACLs). Security Groups operate at the instance level, while NACLs control traffic at the subnet level, offering multiple layers of security.
AWS uses SGs for instance-level security and NACLs for subnet-level control.
Key Points:
Security Groups: Manage inbound and outbound traffic by attaching to instances. Since they are stateful, they automatically allow return traffic without the need for additional rules.
Network ACLs: Control traffic at the subnet level and are stateless, meaning both inbound and outbound rules must be defined.
Architectural Implications:
Layered Security: By combining SGs for instance-level control and NACLs for subnet-level control, AWS provides a granular approach to managing traffic.
Complexity: The trade-off is complexity, as you need to manage both SGs and NACLs separately, which can add overhead when configuring security across large deployments.
Azure Security Controls
Azure takes a more streamlined approach to security with Network Security Groups (NSGs) and Application Security Groups (ASGs), making it easier to manage security policies across your infrastructure. Unlike AWS, Azure simplifies the process by combining functionality, reducing the need to manage multiple layers.
Azure simplifies security management through NSGs and ASGs, integrating directly with VMs or network interfaces
Key Points:
NSGs: Control inbound and outbound traffic at both the VM and subnet levels, similar to AWS SGs. Like AWS SGs, NSGs are stateful and automatically allow return traffic.
Flexible Application: NSGs can be applied to subnets, individual VMs, or network interfaces.
ASGs: Offer centralized security rules for logical groupings of VMs, making it easier to manage policies for specific sets of resources.
Dynamic Security Policies: Security rules can reference ASGs, reducing the need to manually update IP addresses whenever new instances are added.
Architectural Implications:
Simplified Management: With NSGs handling both instance-level and subnet-level security, Azure eliminates the need for a separate layer like NACLs, streamlining your security setup.
Efficient Policy Application: ASGs make it easier to apply consistent security policies across groups of VMs without needing to reconfigure individual resources.
Key Architectural Differences:
Security Layers:
AWS: Uses both SGs (stateful) and NACLs (stateless) for security, which can lead to more granular control but requires more effort.
Azure: Primarily uses NSGs (stateful), simplifying the model by not needing an additional layer like NACLs.
Resource Grouping:
AWS: Lacks a direct equivalent to ASGs, though you can use EC2 tagging for dynamic grouping in some cases.
Azure: ASGs allow for more efficient security management by applying centralized policies to logical groupings of VMs.
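The stateful behavior that AWS Security Groups and Azure NSGs share can be sketched as a toy rule evaluator: an allowed outbound connection implicitly permits its return traffic, with no explicit inbound rule. The addresses and rules below are made up for illustration; real SG/NSG evaluation involves priorities, protocols, and more match fields:

```python
# Toy model of a stateful firewall (AWS SG / Azure NSG style).
allowed_outbound = {("10.0.1.5", "203.0.113.9", 443)}  # (src, dst, port)
connection_table = set()  # tracked connections

def send(src: str, dst: str, port: int) -> bool:
    """Outbound packet: allowed only if an outbound rule matches."""
    if (src, dst, port) in allowed_outbound:
        connection_table.add((src, dst, port))
        return True
    return False

def receive(src: str, dst: str, port: int) -> bool:
    """Inbound packet: allowed if it is return traffic for a tracked
    connection, even though no inbound rule exists -- this is 'stateful'."""
    return (dst, src, port) in connection_table

send("10.0.1.5", "203.0.113.9", 443)
print(receive("203.0.113.9", "10.0.1.5", 443))   # True: tracked return traffic
print(receive("198.51.100.7", "10.0.1.5", 443))  # False: unsolicited inbound
```

A stateless NACL, by contrast, would evaluate that return packet against explicit inbound rules with no connection table, which is exactly the extra layer Azure's model omits.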
4. Managed Services: Levels of Automation
AWS Managed Services
AWS offers powerful managed services, but achieving high availability and scaling often requires manual setup. For example, if you want to configure RDS Multi-AZ deployments, you’ll need to manually set up replication across Availability Zones to ensure redundancy.
AWS services provide a high level of control but require more configuration for high availability.
Key Services:
RDS Multi-AZ: Requires manual configuration to enable replication across AZs for high availability.
EC2 Auto Scaling: Involves setting up scaling rules to automatically adjust resources based on demand.
Elastic Load Balancer (ELB): Distributes incoming traffic across AZs but requires additional setup.
Architectural Characteristics:
Customization: AWS gives you full control over configurations, allowing you to tailor setups to your needs.
Operational Responsibility: With more control comes more responsibility—there’s a greater need for hands-on management to ensure high availability and scaling.
Azure Managed Services
Azure takes a different approach by emphasizing automation and built-in redundancy in its managed services. Services like Azure SQL Database and Cosmos DB come with high availability baked in, so you spend less time configuring infrastructure and more time focusing on your core business. However, even though Azure automates much of the infrastructure management, careful planning for failover is still essential, particularly for mission-critical workloads.
Azure services are more abstracted, automating key operational tasks like scaling and availability across zones.
Key Services:
Azure SQL Database: Automatically manages replication, backups, zone redundancy, and scaling without manual intervention.
Azure App Service: Provides a fully managed PaaS solution for web applications, with built-in autoscaling and minimal configuration required.
Azure Cosmos DB: Delivers global replication with automatic scaling, making it easy to build globally distributed applications.
Architectural Characteristics:
Built-In High Availability: Services are designed with resilience in mind, ensuring high availability without additional configuration.
Reduced Operational Overhead: By automating critical tasks like redundancy and scaling, Azure reduces the need for manual maintenance, allowing you to focus on innovation instead of infrastructure management.
Key Architectural Differences:
Control vs. Convenience:
AWS: Offers more control but requires manual configurations to achieve redundancy and scaling, especially across AZs.
Azure: Automates much of the redundancy and scaling, particularly for managed services, with minimal user intervention required.
5. Storage Resiliency and Data Replication
AWS Storage Options
AWS offers a range of storage tiers, each designed for different durability and cost requirements. For instance, S3 Standard replicates data across multiple facilities in a region, providing high durability by default, while S3 One Zone-IA offers a more cost-effective option by storing data in a single Availability Zone (AZ), though this comes with lower durability.
Key Characteristics:
S3 Standard: Automatically replicates data across multiple facilities within a region for high durability.
S3 One Zone-IA: Stores data in a single AZ, reducing cost but sacrificing some resiliency.
Architectural Characteristics:
Automatic Replication: By default, S3 provides high durability across multiple AZs, ensuring data redundancy.
Choice of Redundancy: AWS offers a range of storage classes to allow flexibility in cost and durability, letting users balance redundancy with budget.
Azure Storage Options
Azure gives users more granular control over data replication, offering several replication strategies depending on your needs. Whether you require local, zonal, or geo-redundancy, Azure provides storage options that ensure data availability and resilience.
Key Characteristics:
Locally Redundant Storage (LRS): Keeps three copies of your data within a single data center, ensuring protection against local hardware failures.
Zone-Redundant Storage (ZRS): Replicates data synchronously across three AZs for higher availability.
Geo-Redundant Storage (GRS): Replicates data asynchronously to a secondary region, providing protection against regional failures.
Geo-Zone-Redundant Storage (GZRS): Combines ZRS and GRS for maximum resilience by replicating both within and across regions.
Architectural Characteristics:
Customization: Azure provides multiple levels of control over data replication, letting you choose the redundancy model that best suits your business needs.
Disaster Recovery: Azure includes built-in options for cross-regional replication, giving you out-of-the-box disaster recovery capabilities.
Key Architectural Differences:
Replication Control:
AWS: Automatic multi-AZ replication with fewer options for customization.
Azure: Offers a wider range of replication strategies, including local, zonal, and geo-redundancy, for greater flexibility.
Disaster Recovery Planning:
AWS: Cross-region replication requires additional services and setup.
Azure: Provides built-in geo-redundancy options for simpler disaster recovery planning.
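Azure's redundancy tiers map to concrete copy counts and failure domains (three synchronous copies per site, doubled for geo-redundant options, per the Azure Storage documentation). A simplified sketch of which outages each option survives; the `survives` helper is an illustrative model, not an Azure API:

```python
# Simplified summary of Azure Storage redundancy options.
REDUNDANCY = {
    "LRS":  {"copies": 3, "zones": 1, "regions": 1},
    "ZRS":  {"copies": 3, "zones": 3, "regions": 1},
    "GRS":  {"copies": 6, "zones": 1, "regions": 2},
    "GZRS": {"copies": 6, "zones": 3, "regions": 2},
}

def survives(option: str, *, zone_outage: bool, region_outage: bool) -> bool:
    """Rough model: data survives if copies exist outside the failed scope."""
    info = REDUNDANCY[option]
    if region_outage:
        return info["regions"] > 1
    if zone_outage:
        return info["zones"] > 1 or info["regions"] > 1
    return True

print(survives("LRS", zone_outage=True, region_outage=False))   # False
print(survives("GZRS", zone_outage=False, region_outage=True))  # True
```

Picking a tier is then a cost-versus-blast-radius decision: LRS is cheapest, GZRS survives both a zone and a regional outage.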
6. Private Connectivity to Cloud Services
AWS VPC Endpoints
In AWS, VPC Endpoints allow you to connect privately to AWS services without exposing your resources to the internet. However, setting up these endpoints requires manual configuration for each service, making it a more hands-on process.
Types:
Gateway Endpoints: Used for services like S3 and DynamoDB.
Interface Endpoints: Powered by AWS PrivateLink to connect to other AWS services.
Architectural Characteristics:
Manual Setup: Each service you want to connect privately to requires its own endpoint, meaning more manual work.
Service-Specific Endpoints: The type of endpoint you need depends on the service, with different setups for gateway versus interface endpoints.
Azure Private Link and Endpoints
Azure streamlines private connectivity with Private Link and Private Endpoints, offering a more unified approach to accessing both Azure services and your own services securely. This reduces the complexity compared to AWS and makes managing private connections more efficient.
Features:
Private Endpoints: These are network interfaces that allow you to privately and securely connect to a service through Azure Private Link.
Service Integration: Works seamlessly with Azure services and can also be used for your own custom applications, creating a more versatile connection model.
Architectural Characteristics:
Simplified Configuration: With a more unified setup, it’s easier to manage and configure private connections in Azure.
Unified Approach: Azure uses the same method—Private Link—to connect to various services, making the process much more consistent and straightforward compared to AWS.
Key Architectural Differences:
Configuration Complexity:
AWS: Requires different setups depending on the type of service, with separate configurations for gateway and interface endpoints.
Azure: Simplifies this with Private Link, providing a unified approach for connecting to multiple services.
Service Accessibility:
AWS: Each service requires a specific endpoint type, which can lead to more management overhead.
Azure: Private Link offers broader access with fewer configurations, making it more user-friendly.
Conclusion
Understanding the key architectural differences between AWS and Azure is crucial for organizations looking to optimize their cloud strategy. While both platforms provide robust services, their approaches to network architecture, availability zones, security models, managed services, and storage resiliency vary significantly. By understanding these distinctions, businesses can fully leverage Azure’s capabilities while complementing their existing AWS expertise, creating a powerful multi-cloud strategy that boosts operational efficiency.
Key Takeaways:
Network Architecture: AWS offers granular control over network segmentation, but Azure simplifies it with integrated managed services, reducing manual configuration.
Availability Zones: Azure’s managed services come with built-in zone redundancy, while AWS often requires more manual intervention to achieve multi-AZ redundancy.
Public Internet Access: AWS uses an Internet Gateway for public internet access, whereas Azure simplifies this by directly assigning public IPs to resources.
Private Subnet Outbound Traffic: Both platforms use NAT Gateways for outbound traffic, but Azure abstracts the configuration more, making it easier to manage.
Security Models: Azure streamlines security with NSGs and ASGs, offering simpler and more flexible traffic control than AWS’s combination of Security Groups and NACLs.
Managed Services: Azure automates critical tasks like redundancy and scaling, while AWS often requires manual configuration for high availability.
Storage Resiliency: Azure provides more granular replication options, while AWS relies on predefined storage tiers.
Private Endpoints: Azure’s Private Link and Endpoints offer a more seamless and integrated approach to private connectivity compared to AWS’s VPC Endpoints, which require more manual setup.
By adapting to these architectural differences, your organization can unlock Azure’s full potential, complementing your AWS expertise and creating a multi-cloud strategy that enhances availability, operational efficiency, and cost management.
Additional resources:
Azure Architecture Guide for AWS Professionals: For a detailed comparison and further reading on transitioning from AWS to Azure.
Mapping AWS IAM concepts to similar ones in Azure: For a direct mapping of AWS IAM concepts to Azure’s security solutions, read this detailed discussion.
calculate angles for walking robot
I am studying the model of a walking robot from the project https://www.mathworks.com/matlabcentral/fileexchange/64227-matlab-and-simulink-robotics-arena-walking-robot. If I increase the length and width of the legs and body, and also add arms and a head block, the robot takes only one step and falls. As I understand it, the angles need to be recalculated for the changed parameters; how can I do this?
How did the authors of the project calculate the variables "jAngsL", "jAngR", "siminL", "siminR"?
roboticsarena, walkingrobot, bipedalrobot, inversekinematics, simulink MATLAB Answers — New Questions
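I don't have the project's exact scripts, but joint-angle trajectories for walkers like this are typically derived with two-link planar inverse kinematics (law of cosines) applied at each point of a desired foot path. A hedged sketch of that calculation; the function name, leg lengths, and foot target below are placeholders you would replace with your resized robot's values:

```python
import math

def two_link_ik(x: float, y: float, l1: float, l2: float):
    """Planar two-link inverse kinematics with the hip at the origin.

    Returns (hip_angle, knee_angle) in radians that place the ankle
    at (x, y), using the law of cosines (elbow-down solution).
    """
    d2 = x * x + y * y
    if d2 > (l1 + l2) ** 2:
        raise ValueError("target out of reach for these link lengths")
    # Knee angle from the law of cosines.
    cos_knee = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    knee = math.acos(max(-1.0, min(1.0, cos_knee)))
    # Hip angle: direction to the target minus the offset from the bent knee.
    hip = math.atan2(y, x) - math.atan2(l2 * math.sin(knee),
                                        l1 + l2 * math.cos(knee))
    return hip, knee

# Illustrative leg segment lengths; substitute your scaled robot's dimensions.
hip, knee = two_link_ik(x=0.1, y=-0.8, l1=0.5, l2=0.5)
print(math.degrees(hip), math.degrees(knee))
```

Re-solving the IK for every sample point along the foot trajectory with the new link lengths yields updated joint-angle arrays to feed back into the Simulink model, which is the recalculation the resized robot needs.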
App of chatbot restarts if you leave chatbot and go to normal chat options
Hi All,
I have a chatbot deployed as an app in the MS Teams environment, called IT Support. When we are using the chatbot and a colleague pings you, and you respond to the colleague and then go back to the chatbot, the chatbot has restarted, even if you come back within a matter of 10 seconds.
What settings can we change?