Month: August 2024
Simulation with an RL Agent does not save simulation data
I have a reinforcement learning Simulink model environment that I am training in MATLAB 2023a, which I started porting to MATLAB 2024a. The model runs well in 2023a and saves the simulations performed with the sim function. The environment has some signals that I want to save.
In 2023a they get saved in the SimulationInfo object, but that doesn't happen in 2024a. Do I have to activate something additionally in 2024a, or is it a bug?
The images below detail the difference between the two versions. The env is a Simulink environment.
simEpisodes = 1;
simOpts = rlSimulationOptions("MaxSteps",1250,...
"NumSimulations", simEpisodes);
experience = sim(env,agent,simOpts);
save(strcat(results_dir,'/Experience.mat'),"experience")
machine learning, deep learning, artificial intelligence, bug, reinforcement learning MATLAB Answers — New Questions
How to delete a registry key
I would like to know if you have a way to remediate a malicious registry key with Defender XDR?
JSON Header Formatting SharePoint list
I am attempting to add a custom header to my list form. When particular departments are selected from the “Department” choices (Critical Care, Anesthesia, Radiology, Surgery), I would like an alert to display in the header that says “This Department Requires Approval – A Request Will Be Sent to Department Representation.” Can anyone assist with the JSON?
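One possible starting point, offered only as a sketch (it assumes Department is a single-select choice column named Department; the colors and icon are arbitrary), is a conditional banner applied to the form header via the list form's Configure layout option:
{
  "elmType": "div",
  "style": {
    "display": "=if([$Department] == 'Critical Care' || [$Department] == 'Anesthesia' || [$Department] == 'Radiology' || [$Department] == 'Surgery', 'flex', 'none')",
    "padding": "12px",
    "background-color": "#fff4ce",
    "font-weight": "600"
  },
  "children": [
    {
      "elmType": "span",
      "attributes": { "iconName": "Warning" },
      "style": { "margin-right": "8px" }
    },
    {
      "elmType": "span",
      "txtContent": "This Department Requires Approval – A Request Will Be Sent to Department Representation."
    }
  ]
}
The display expression hides the banner unless one of the listed departments is selected; adjust the field name and styling to match your list.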
Macro Error
Hi All,
I generated a macro to first filter the values of a column and remove the blanks, and then sort them from highest to lowest, but it generates the following error. What could it be due to?
The Macro is:
Benchmark Testing puts you on the path to peak API Performance
Benchmark performance testing involves measuring the performance characteristics of an application or system under normal or expected conditions. It’s a recommended practice in any case, but it’s a critical consideration for your APIs since your consumers will depend on consistent performance for their client applications.
Incorporating benchmark testing of your Microsoft Azure API Management services into your software delivery process provides several important benefits:
It establishes a performance baseline as a known, quantifiable starting point against which future results can be compared.
It identifies performance regressions so that you can pinpoint changes or integrations that may be causing performance degradation or hindering scalability — in effect helping you to identify which components might need to be scaled or configured to maintain performance. This allows developers and operational staff to make targeted improvements to enhance the performance of your APIs and avoid accumulating performance-hindering technical debt.
It validates performance requirements so you can be assured that the architecture meets the desired operating performance targets. This can also help you determine a strategy for implementing throttling or a circuit breaker pattern.
It improves user experience by identifying and resolving performance issues early in the development life cycle, before your changes make it into production.
And perhaps most importantly, it gives you the data you need to create the capacity model you’ll need to operate your APIs efficiently across the entire range of design loads. This is a topic for a future post, but the methods described here are a great starting point.
Benchmark vs Load Testing. What’s the difference?
While the approaches and tools involved are nominally very similar, the reasons for doing them differ. Benchmark testing establishes a performance baseline within the normal operational range of conditions, while load testing establishes the upper boundary or point of failure. Benchmark testing establishes a reference point for future iterations, while load testing validates scalability and stress handling. Both are important for ensuring API performance, and you can combine the approaches to suit your needs as long as the goals of each are met.
Below, we’ll describe the principles of designing a repeatable benchmark test and conclude with a full walkthrough and the resources you’ll need to do it yourself.
A model approach
Before we get into a specific example, let’s look at the conceptual steps involved.
Broadly, there are two stages:
Design and Planning: Decide what to measure, and how to measure it. (Steps 1-4 below)
Execution: Run the test, collect results, and use the results to inform future actions or decisions. (Steps 5-8)
The execution stage is repetitive. The first execution result becomes the baseline. From there, the benchmark test can be repeated after any important change to your API workload (resource configuration, backend application code, etc.). Comparing the results of the current and previous test will indicate whether the most recent change moved you closer to your goal or caused a regression. Once the goal is met, you’ll continue the practice with future changes to ensure that the required performance is being maintained.
1. Identify your benchmark metric
Determine the key performance metric that will define your benchmark. Think of it as the key performance indicator (KPI) of your API workload. Some examples include: operation or request duration, failure rate, resource utilization (e.g., memory usage), data transfer speed, and database transaction time. The metric should align with your requirements and objectives, and be a good indicator for the quality of the consumer experience. For API Management, and APIs in general, the easiest and most useful metric is usually response time. For that reason, start with response time as the default choice if your circumstances don’t guide you to choose something else.
The key here is to choose a single metric that you can capture easily and consistently, is an indicator of the kind of performance you are after, and that will allow you to make linear comparisons over time. It’s possible to devise your own composite metric based on an aggregation formula using multiple primitives, if required, in order to derive a single benchmark measurement that works best for you.
Tip: Requests per second (RPS) might be the first metric you think of when you are trying to decide what you should measure. Similar unit-per-second metrics have historically been used to benchmark everything from web servers to GPUs. But in reality, RPS by itself isn’t very useful as a benchmark for APIs. It’s not uncommon to observe a system achieve a “high” RPS while individual consumers are simultaneously experiencing “slow” response times. For this reason, we recommend that you only use RPS as a scenario parameter and choose something else as your benchmark metric.
2. Define the benchmark scenario
The scenario describes input parameters and the simulation. In other words, it describes what is happening in the system while the benchmark metric is being measured. For example, “1000 simulated users, calling the Product Search API, at a rate of 10 searches per minute per user”. The scenario should be as simple as possible while also providing a realistic representation of typical usage and conditions. It should accurately reflect the behavior of the system in terms of user interactions, data payloads, etc. For example, if your API relies on caching to boost performance, don’t use a scenario that results in an unrealistically high cache hit rate.
Tip: For an existing application, choose an API operation that represents an important use case and is frequently used by your API consumers. Also, make sure that the performance of the scenario is relatively deterministic, meaning that you expect the test results to be relatively consistent across repeated runs using the same code and configuration, and the results aren’t likely to be skewed by external or transient conditions. For example, if your API relies on a shared resource (like a database), make sure the external load on that resource isn’t interfering with your benchmark. When in doubt, use multiple test runs and compare the results.
3. Define the test environment
The test environment includes the tool that will run the simulation (JMeter, for example), along with all the resources your API requires. Generally speaking, you should use a dedicated environment that models your production environment as closely as possible, including compute, storage, networking, and downstream dependencies. If you have to use mocks for any of your dependencies, make sure that they are accurately simulating the real dependency (network latency, long running processes, data transfer, etc).
Tip: You want your testing environment to satisfy two conditions:
It makes it easy to set up and execute the test. You don’t want to deter yourself from running tests because the process is tedious or time-consuming.
It is consistent and repeatable across test runs to ensure the observed results can be compared reliably.
Automation helps you achieve both of these things.
4. Determine how you will record your chosen metric
You may need to instrument your code or API Management service with performance monitoring tools or profiling agents (for example, Azure Application Insights). You may also need to consider how you will retrieve and store the results for future analysis.
Tip: Be aware that adding observability and instrumentation can, by itself, adversely impact your performance metric, so the ideal case (if the observability tooling isn’t already part of your production-ready design) would be a data collection method that captures the data at the client (or agent, in the case of Azure Load Testing).
5. Execute the test scenario
Run the defined test scenario against the API while measuring the performance metric.
6. Analyze the results
Analyze the collected performance data to assess how your API performs. If this isn’t your first time running the test, compare the observed performance against previous executions to determine if the API continues to meet the desired performance objectives and what the impact (if any) of your code or configuration changes may be. There are statistical methods that can be applied to aid this analysis, which are extremely useful in automated tests or pull request reviews. These methods are beyond the scope of this post, but it’s a good idea to familiarize yourself with some of the approaches.
For Example: You just added a policy change that decrypts part of the request payload and transforms it into a different format for your backend to consume. You noticed that the time for the operation to complete has increased from 70ms to 110ms. Your benchmark objective is 80ms. Do you revert the change? Do you scale your API management service to compensate? Do you try to optimize your recent changes to see if you can get the results to improve? The bottom line here is that you can use the data to make an informed decision.
7. Report and document
Document the test results, including performance metrics, observations, and any identified issues or recommended actions. This information serves as a reference for future performance testing iterations and as a new benchmark for future comparison.
8. Iterate and refine
Finally, find ways to automate or optimize the process or modify your strategy as necessary to improve its usefulness to your business operations and decision making. In a future article, we’ll talk more about how to operationalize benchmark testing and how to use it as a powerful capacity management tool.
Walkthrough
Let’s make this more realistic with a basic example. For the purposes of this walkthrough, we’ve developed an automated environment setup using Terraform. Find more information about the environment and the source code on GitHub. The environment includes an API Management service, a basic backend (httpbin, hosted in an Azure App Service plan), and an Azure Load Testing resource.
Tip: Use the Terraform templates provided in the repo to deploy all the resources you’ll need to follow along. For operational use, we recommend that you create your own repository using our repo as a template, and then follow the instructions in the README to configure the GitHub workflows for deployment to your Azure subscription. Once configured, the workflow will deploy the infrastructure and then run the load tests for you automatically.
You are free to choose any testing tools that fit your needs, but we recommend Azure Load Testing. It doesn’t require you to install JMeter locally or author your own test scripts. It allows you to define parameters, automatically generates the JMeter script for your test, and manages all the underlying resources required for the test agents. Most importantly, it avoids many of the problems we’d be likely to encounter with client-based tools and gives us the repeatability we need.
Let’s look at how we’ll apply our model approach in the example:
Performance metric: Average response time
Benchmark scenario: Performance will be measured under a consistent request rate of 500 requests per second.
Environment: The sample environment – an App Service Web App that hosts the backend API and an API Management service configured with one scale unit. Both are located in the same region, along with the Azure Load Testing resource. The deployment assets for all resources are included.
Deploy the Azure resources
1. Open Azure Cloud Shell and run the following commands.
2. Clone the Repository
git clone https://github.com/ibersanoMS/api-management-benchmarking-sample.git
cd api-management-benchmarking-sample/src/infra
3. Initialize Terraform
terraform init
4. Plan the Deployment
terraform plan -out=tfplan
5. Apply the Terraform Templates
terraform apply tfplan
Creating and running the tests
Note: The Terraform templates will configure the load tests for you, but if you want to create tests on your own the steps below will walk you through it.
Identify the host url of your App Service backend and your API Management service. If you’re using the sample environment created from the Terraform template, these will be the “backendUrl” and “apiUrl” respectively.
Search for Azure Load Testing in the Azure Portal.
Click Create on the resource provider menu bar.
Once the Load Testing resource is created, navigate to Tests.
Click Create on grid menu bar and then choose Create a URL-based test.
Configure the test with the following parameters for your first case (500B payload). Enter the App Service backend as the host portion of the Test Url, which should be in the form of: https://{your App Service hostname}/bytes/500.
Click Run Test. Once the test completes, you should see results like below:
Now that we have a baseline result for the backend, create and run another identical test, but this time use the API Management API URL as the Test Url (https://{your API Management service hostname}/bytes/500).
Finally, we’ll simulate an updated version of the API by increasing the response payload size. Our API now returns more data than the previous version, so we’ll be able to measure the impact of that change.
Configure and run a new test. We’re still using the API Management host url, with a url path that returns 1500 bytes instead of 500 bytes: (https://{your API Management service hostname}/bytes/1500).
Once the test completes, you should see results like below:
Looking at the Results
In our first benchmark, we were establishing a performance baseline of the backend application, which returns a 500-byte payload. We tested the backend in isolation (meaning the load test client was sending requests directly to the backend API endpoint, without API Management) so that we could measure how it performs on its own. This isn’t always necessary, or even practical, but it can provide really useful insights. Below are the results from three different runs of that first test:
First result set
Throughput (RPS)    Average Response Time (ms)
444                 21
431                 14
447                 15
Next, we ran the same benchmark test using the API Management endpoint so requests were being proxied through API Management to the backend application. This scenario is an “end-to-end” or “system” test that is representative of how our API would be deployed in production. The results help us measure any latency or change in performance added by API Management and the Azure network. As we can see, the results are similar. This indicates that the net effect of API Management on the system performance at this design load is zero or very close to zero.
Second result set
Throughput (RPS)    Average Response Time (ms)
443                 15
441                 14
436                 10
Finally, we ran a benchmark on a new “release” of our backend application. The new version of the API now returns a larger 1,500 byte payload, and we can see from the results that response times have increased significantly.
Third result set
Throughput (RPS)    Average Response Time (ms)
361                 600
370                 518
367                 585
Assuming these results don’t meet our performance objectives, we now know that remediation steps will need to be taken before the new release of our API can be deployed to production. For example, we might consider adding output caching, scaling the App Service or API Management service, or looking for ways to optimize the payload returned from the application code. In any case, we now have the tools to test any remediation approach (using the same structured, quantitative approach above) so that we can be sure that the new API version meets its performance objective before it’s released.
Related resources to explore
Performance tuning a distributed application
Autoscaling
Automate an existing load test with CI/CD
Add caching to improve performance in Azure API Management
Troubleshooting client response timeouts and errors with API Management
Microsoft Tech Community – Latest Blogs –Read More
App designer TabGroup colours
I have been creating an app in app designer, and I cannot see any way to change the grey border of a tabgroup where there are no tabs. It is very ugly and I would rather this be transparent – does anyone know a fix for this? Or any clever way of using HTML/CSS to make this possible? Thanks in advance.
See picture below:
app designer, tabgroup MATLAB Answers — New Questions
Error in boxchart (invalid parameter/value pair arguments)
I am trying to use Boxchart but am getting an error even when using the carbig dataset and functions as listed in the help files. Eventually I need to get boxcharts for ANOVA results. This is what I have. (The help file for anovan calls "Model_Year" as "mfg date" but I think this is the equivalent set of data)
aov = anovan(MPG,{org when},'model',2,'varnames',{'Origin','Model_Year'})
boxchart(aov,["Origin"])
legend
The ANOVA seems to run just fine, but when it gets to the Boxchart I get this error. Any ideas? I’m using version R2023b.
Error using matlab.graphics.chart.primitive.BoxChart
Invalid parameter/value pair arguments.
Error in boxchart (line 186)
H(idx) = matlab.graphics.chart.primitive.BoxChart('Parent', cax,...
Error in ANOVA_trial_file (line 2)
boxchart(p,["Origin"])
boxchart, anova MATLAB Answers — New Questions
How Windows determines that a connection is a domain network
Hello,
When a Windows client boots, how does it determine that a connection is a domain network? Does it have to connect with the PDC domain controller, or can it be any domain controller in the domain? I’m asking because I’m troubleshooting an issue with clients recognizing the network as a domain network and I’m getting inconsistent results. Any information or guidance will be helpful.
Thanks,
Filtering data excluding empty values
Hi All,
I have a table where a column calculates a ranking value, but when another column has no data, the formula suppresses the value so that rankings only appear as names are entered.
The formula for the ranking column is: =IF(B6="";"";IFERROR(IF(H6=0;0;(H6/I6)*100);""))
The problem is that when I filter from highest to lowest, all the cells without a value ("") are sorted before the highest values.
Does anyone know how to solve this?
Thanks,
Francisco
URP configuration is blocked by Mesh
I cannot find any documentation about configuring URP for the Mesh project. Can you help me?
I have noticed that multiple URP assets are located in the “Library\PackageCache\com.microsoft.mesh.toolkit@5.2409.245\mesh.toolkit.uploader\Assets\URP” folder. The problem is that most of the options in the URP asset files are grayed out. Is there a way to enable them?
I have created my own URP configuration files and set the project to use them, but then I get an error when I try to run or build the scene.
The game object ‘Global Volume’ uses the component ‘UnityEngine.Rendering.Volume’ in Assembly ‘Unity.RenderPipelines.Core.Runtime’. This component is not supported by Mesh runtime.
I want to enable some post-processes for the PC, which should not be a problem.
Modifying Quality levels or setting URP assets to my own resets them to default settings during the Build via the Mesh environments uploader. Is it intentional that you blocked the URP configuration?
Google workspace migration – Automatic doesn’t download the JSON while creating the migration endpoint
I recently came across an issue where the Google Workspace migration wasn’t downloading the JSON file. After a lot of struggle, I figured out that Google by default blocks service account key creation. To resolve this, follow the steps below:
1) Log in to https://console.cloud.google.com/iam-admin and select the root org
2) Add the Organization Policy Administrator role
3) Click on Organization policies and then search for “Disable service account key creation” and set the enforcement to OFF.
4) Now you should be able to download the JSON.
Synchronizing Time on a Forest Root PDC housed within an Entra VM
Hey everyone! Allan Sandoval from the Directory Services team here.
We’ve all experienced a lot of changes since the rise of cloud computing and virtualization, and our time synchronization technology (Windows Time) is no exception.
Today, I want to shed light on the VMICTimeProvider and its impact on virtual machines (VMs) within an Active Directory Domain Services (AD DS) environment. If your domain members and/or Domain Controllers (DCs) are virtualized, this article is for you.
Typically, in an on-premises deployment, you use your Forest Root Primary Domain Controller (PDC) as a time source for all your domain client machines and for other DCs. This PDC syncs its time with your configured external time source, and this setup remains effective for many of our customers. But what happens if your DCs are virtualized?
The VMICTimeProvider allows VMs to synchronize their time with their host. While this can be useful for some organizations, you might prefer to have your machines synchronized from the same time source and maintain a traditional AD DS time hierarchy (which is the Microsoft recommendation). If this traditional hierarchy resonates with you, then your clients should sync their time with their closest DC and those DCs should synchronize their own time with the PDC.
Azure Virtual Desktops (AVD) sync with their host by default. You can query the currently configured time provider on any Windows machine with either of these two commands:
w32tm /query /source
w32tm /query /status
Both commands return the currently configured time source provider. If you see “VM IC Time Synchronization Provider” in the output, then that machine is using the VMICTimeProvider.
If this is the case for you, and you want to keep the traditional ADDS Time hierarchy, you should disable the VMICTimeProvider on your desired VMs. To do this, modify the following registry value, which will effectively disable the VMICTimeProvider as a time source on these:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider
DWORD: Enabled
Value: 0
The default value for this DWORD is 1.
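If you prefer to script the change, a rough one-line equivalent (a sketch, assuming an elevated command prompt) is:
rem Disable the VMIC time provider (0 = disabled, 1 = enabled/default)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider" /v Enabled /t REG_DWORD /d 0 /f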
For this change to apply, you must restart the W32Time service. You can do that with the following commands in an elevated Command Prompt:
net stop w32time
net start w32time
You should be working with your previously configured ADDS Windows Time settings afterwards.
But it doesn’t stop there. It’s not uncommon to find DCs being virtualized nowadays. In this case, you would also want to disable the VMICTimeProvider on those DCs, so they get their time from the Forest Root PDC. The Forest Root PDC must also be configured in this way, so it too gets its time externally from your configured time source rather than syncing with its own host. The same registry key specified above can be modified on the DCs to make that change.
What about physical machines?
You can also deploy this registry value on bare-metal machines for an added benefit: it prevents the recurring informational Event ID 158 from the Time-Service source from being logged in the System event log.
We have recommended this approach for customers who have faced issues with their Windows VMs’ time jumping around in their on-premises domains. This occurred even though they had their time settings configured through a well-configured GPO (Group Policy Object). The virtualized machines were not following the GPO settings because they were picking up their time from their host instead. This is expected behavior, and here is a reference:
“w32time would prefer the time provider in the following order of priority: stratum level, root delay, root dispersion, time offset. In most cases, w32time on an Azure VM would prefer host time due to evaluation it would do to compare both time sources.”
Time sync for Windows VMs in Azure – Azure Virtual Machines | Microsoft Learn
We recommend disabling the VMICTimeProvider if you plan to leverage an existing on-premises AD DS time hierarchy. This way your machines will avoid syncing their time with the host and will follow your designed time hierarchy instead, keeping time across your domain both reliable and accurate.
References:
Time sync for Windows VMs in Azure – Azure Virtual Machines | Microsoft Learn
Microsoft Tech Community – Latest Blogs –Read More
FinOps for AKS: A Guide to Cost Optimization
In the ever-evolving landscape of cloud computing, managing costs effectively while maintaining optimal performance is a challenge many organizations face. Azure Kubernetes Service (AKS) offers a powerful platform for container orchestration, but without proper financial operations (FinOps) practices, costs can quickly spiral out of control.
Here are some recommendations for implementing FinOps on AKS to ensure sustainable growth and cost optimization:
The Azure Cost Analysis add-on is a powerful tool that provides detailed insights into resource consumption and the costs associated with your AKS cluster. With it, you can quickly identify where the biggest expenses are occurring and make informed decisions about how to reduce them. For example, you can adjust the size and number of cluster nodes, choose more economical VM SKUs, or identify and eliminate underutilized resources. Cost analysis helps ensure you are getting the most out of your Azure investment while keeping costs under control.
Currently, Kubernetes has no native support for scale-to-zero; it will keep the configured number of replicas idle even when the workload is not in use. This is where KEDA (Kubernetes Event-Driven Autoscaling) comes in, a solution that enables event-driven scaling. With KEDA, you can configure automatic scaling of your containers based on actual event demand, ensuring that you use resources only when needed. This not only improves operational efficiency but also reduces costs, since you pay only for what you use.
GA: Kubernetes Event-driven Autoscaling (KEDA) Add-on for AKS
In addition, KEDA supports a wide range of event sources, making it a versatile tool for a variety of applications.
You can find more details about KEDA and its pros and cons here.
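As a purely illustrative sketch (the Deployment name, queue name, and TriggerAuthentication below are hypothetical), a KEDA ScaledObject that lets a queue worker scale down to zero when idle might look like this:
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-worker-scaler          # hypothetical name
spec:
  scaleTargetRef:
    name: orders-worker               # hypothetical Deployment in the same namespace
  minReplicaCount: 0                  # allow scale-to-zero when there is no work
  maxReplicaCount: 10
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: orders             # hypothetical queue
        messageCount: "20"            # target messages per replica
      authenticationRef:
        name: servicebus-auth         # hypothetical TriggerAuthentication holding the connection
KEDA then creates and manages the corresponding HPA for you, activating the workload only when events arrive.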
The private preview of AKS Backup is another tool to consider. It offers a way to protect your AKS environment, ensuring that in the event of a disaster your workloads can be restored, avoiding financial losses due to downtime.
Private preview: Azure Kubernetes Service (AKS) Backup
FinOps as part of the culture, with automated processes
The first article in this FinOps series covered this in detail, but here are some of the important points about culture and process:
Implement governance policies: Establish governance policies to ensure that resources are provisioned according to best practices and aligned with the company’s financial objectives.
Automate processes: Automate resource allocation, autoscaling, and other operational tasks to improve efficiency and reduce human error.
Review and improve continuously: Conduct periodic reviews of costs and usage, adjust optimization strategies as needed, and maintain a continuous-improvement mindset.
Microsoft Azure offers resources and tools to help organizations optimize their cloud spending. The Cloud Adoption Framework and the Well-Architected Framework establish recommended practices, define a cloud strategy, and provide an end-to-end framework for managing the cloud environment effectively. Tools such as Azure Advisor, cost management policies, and management groups provide recommendations on resource usage, identify spending across different resource groups, and let you define policies to restrict resource usage or to require, for example, that resources be tagged with a specific cost center.
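For illustration only (a minimal sketch, not a complete policy definition; the tag name costCenter is an assumption), the rule portion of an Azure Policy that denies the creation of resources missing a cost-center tag could look like this:
{
  "mode": "Indexed",
  "policyRule": {
    "if": {
      "field": "tags['costCenter']",
      "exists": "false"
    },
    "then": {
      "effect": "deny"
    }
  }
}
In practice you might prefer an audit or modify effect instead of deny, depending on how strictly you want to enforce tagging.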
In addition, the FinOps toolkit offers many assets to accelerate your company’s FinOps processes.
You may not need to run your AKS (Azure Kubernetes Service) workloads continuously. For example, you might have a development cluster that you only use during business hours. This means there are times when your cluster may be idle, running only the system components. You can shrink the cluster by scaling all User node pools down to 0:
Start and stop an Azure Kubernetes Service (AKS) node pool
Even with all User node pools shut down, your cluster will still incur costs because of the system node pools. To further optimize costs during these periods, you can turn off or stop your cluster. This action stops the control plane and the agent nodes, allowing you to save on all compute costs while still keeping all objects, except standalone pods:
Stop and start an Azure Kubernetes Service (AKS) cluster
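For reference, a rough Azure CLI sketch of both options (the resource group, cluster, and node pool names are placeholders):
# Scale a User node pool down to zero nodes
az aks nodepool scale --resource-group rg-dev --cluster-name aks-dev --name userpool --node-count 0
# Stop the whole cluster (control plane and agent nodes) outside business hours
az aks stop --resource-group rg-dev --name aks-dev
# Start it again when needed
az aks start --resource-group rg-dev --name aks-dev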
With the introduction of ARM64 node pool support in AKS, organizations can now create ARM64 Ubuntu agent nodes and mix Intel and ARM architecture nodes within a cluster. These ARM VMs are designed to run dynamic, scalable workloads efficiently, offering up to 50% better price-performance than comparable x86-based VMs for scale-out workloads. This is particularly beneficial for open-source database servers, cloud-native applications, game servers, and more.
For compute-intensive workloads such as graphics rendering, large model training, and inference, consider using VMs optimized for compute, memory, storage, or graphics processing units (GPUs). GPU VM sizes are specialized VMs available with single, multiple, or fractional GPUs, best suited for GPU-enabled Linux node pools on AKS.
It is important to note that compute costs vary across regions. When selecting a cheaper region to run workloads, be aware of the potential impact of latency and data transfer costs.
Optimize costs in Azure Kubernetes Service (AKS)
Implementing FinOps recommended practices on AKS is crucial for cost optimization and sustainable growth. It is not a quick or simple job, but combining the recommendations above will help you strike a balance between performance and cost.
This is the third article in the FinOps series. You can access the previous articles at the links below:
(1) FinOps é SÓ o primeiro passo … que venha DevSecFinOps
(2) Democratizando FinOps com FOCUS
Microsoft Tech Community – Latest Blogs –Read More
Error: Failure in initial user-supplied nonlinear constraint function evaluation.
Hello everyone!
I’m encountering an issue with my MATLAB code and could use some assistance. Specifically, I’m getting the following error message:
[sol, fval, exitflag, output] = solve(prob);
Caused by:
Failure in initial user-supplied nonlinear constraint function evaluation.
It seems to be related to the evaluation of the nonlinear constraint function. I’ve attached the relevant part of my code below. I’m particularly confused because I linearised all my constraint terms, so I wasn’t expecting any non-linearity.
Could you please help me identify the issue and suggest any potential fixes?
Thank you!
best Regards!
MyProblem()
nlp, optimization, error MATLAB Answers — New Questions
Avast detected a virus threat (IDP.ALEXA.54) in my own standalone application
Hello everyone.
Recently, I have compiled my own standalone application using Matlab Compiler – let’s say it is called JHApp.exe. Then, I tried to test the functions of my JHApp.exe file occurring in the for_redistribution_files_only folder (i.e., without installation). During each very first run (with each new compiled version), a security alert is reported by Avast. I understand the antivirus is suspicious of that new executable file and I know this is quite common – this is not the major problem.
However, after several minutes of using JHApp.exe (the app makes many calculations and can create .xls, .html and .m files), I received a new, more specific alert, something like IDP.ALEXA.54 detected, and Avast moved JHApp.exe to quarantine.
Given that I have spent a lot of time coding the app and I want to share my app in a scientific community, it is very important for me that it is trustworthy and safe. May it happen that a harmful code, e.g., from an infected PC, is accidentally and unintentionally distributed together with a Matlab standalone application?
It is quite strange for me to imagine that – my PC does not seem to be infected (according to Avast), and I don’t think some harmful code can easily attack Matlab and hide in a standalone application.
Please, what do you think about that?
Thank you very much for your answers.
Best regards, Jakub Haifler
virus threat, standalone application, compiler, security, alert MATLAB Answers — New Questions
How do I identify the MEX function that caused MATLAB to crash?
MATLAB crashed when I tried running my code, and I believe the cause of the crash was a MEX function, due to the following message at the bottom of the MATLAB crash log:
This error was detected while a MEX-file was running. If the MEX-file
is not an official MathWorks function, please examine its source code
for errors. Please consult the External Interfaces Guide for information
on debugging MEX-files.
How can I determine from the crash log which MEX function caused the crash?
mex, mexfunction, mexfunctionadapter MATLAB Answers — New Questions
SQL Monitoring
Hi,
I am looking for a way to monitor several SQL servers, including several SQL AAG clusters. I have SCOM available to me but haven’t found it very useful thus far. I want to monitor things like whether the AAG failed over or which node is running as primary. I want to see if databases are healthy and synced up or if a node is not functioning like I would expect. I would also like to monitor disk space and CU level if I could. I wanted to see what other people are using that they like. I am envisioning a dashboard of sorts, but I am open. I would also prefer to find a free solution if I can or use what I have available. Thanks in advance!
Windows App update for Remote Desktop on iOS and macOS
In an upcoming update for Remote Desktop on iOS and macOS, the client will have a new name: Windows App! Along with this new name, you will find an updated user interface and additional functionality. If you are currently using Remote Desktop on iOS or macOS, there will be no disruption to business continuity. For more information, check out Windows App general availability coming soon.
A preview of the Windows App is now available in the Remote Desktop beta release, available from the App Center. To access the preview, follow the steps outlined in Get started with Windows App to connect to devices and apps. If you have general questions, you can ask community experts in Microsoft Q&A. If you have feedback on the Windows App preview, please contact us at iOSWindowsAppBeta@microsoft.com or macOSWindowsAppBeta@microsoft.com.
Outlook search not working as expected
Using classic Outlook 2021 on the desktop app, the search bar is not working. When I click to activate it, the cursor appears for a moment, then deactivates immediately and will not allow me to enter any text.
If I am typing while clicking, I can get a few characters to come in, and then it will not “boot” me from the search bar. This is a poor workaround.
Enhanced presenter and attendee experience with the expanded gallery view in Teams
Happy Monday, Microsoft 365 Insiders!
New enhancement in Microsoft Teams! We’re excited to share that you can now use new expanded gallery view options for the minimized meeting window. The existing single-tile meeting view for attendees, which shows the active speaker, has been enhanced with the option to expand to a gallery view which shows up to 4 meeting participants and a Me Video tile.
Whether you’re an attendee or a presenter, these enhancements are designed to make your meetings more productive. Check out our latest blog: Enhanced presenter and attendee experience with the expanded gallery view in Teams
Thanks!
Perry Sjogren
Microsoft 365 Insider Community Manager
Become a Microsoft 365 Insider and gain exclusive access to new features and help shape the future of Microsoft 365. Join Now: Windows | Mac | iOS | Android