Month: June 2024
Simulink migration from ROS to ROS 2
Hi all,
I have a Simulink project that generated code for ROS.
Due to a migration to ROS 2, this code is no longer adequate, and I would like to obtain the ROS 2 generated code instead.
Any pointers to where I can find an answer would be more than welcome.
Thanks in advance.
simulink, ros, ros2 MATLAB Answers — New Questions
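If the model was built with ROS Toolbox, a possible starting point is switching the code generation target to ROS 2 in the model configuration. The sketch below assumes a model named myModel; the exact hardware-board string should be verified in your MATLAB release.

```matlab
% Sketch (assumes ROS Toolbox and a hypothetical model "myModel"):
% retarget the generated code from ROS to ROS 2 via the hardware board
% setting, then rebuild the node.
mdl = 'myModel';
open_system(mdl);
set_param(mdl, 'HardwareBoard', 'Robot Operating System 2 (ROS 2)');
slbuild(mdl);   % regenerate the node code, now for ROS 2
```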
How to call a MATLAB script from LabVIEW using the LabVIEW "Call MATLAB Function" function
mycirclescript.m contains the following:
r = 3
A = pi * r^2
The LabVIEW program is simply the following:
"Open MATLAB Session" with the path to the release name ("R2019b") in a string constant,
followed by the "Call MATLAB Function" function with the .m file path selected with a control,
followed by "Close MATLAB Session".
All three functions have "Session Out" connected to "Session In" and "Error Out" connected to "Error In", with an indicator for Error Out on the Close MATLAB Session function.
This is the simplest possible script, and I don't see MATLAB open; the only thing returned is the error below:
Source: E:LABVIEWPROGRAMSmycirclescript.m
Function Name: mycirclescript
MATLAB call returned the following error: Output argument "n" (and maybe others) not assigned during call to "niifm.RunFunction>getOutputArgumentsCount".
labview MATLAB Answers — New Questions
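One hedged guess at the cause: the LabVIEW "Call MATLAB Function" node expects a MATLAB function with declared outputs, not a plain script. Rewriting the script as a function (hypothetical file mycircle.m) may resolve the "Output argument ... not assigned" error:

```matlab
% Sketch: mycirclescript.m rewritten as a function with an explicit
% output, which is what the "Call MATLAB Function" node can introspect.
% File name "mycircle.m" is a placeholder.
function A = mycircle()
    r = 3;           % radius
    A = pi * r^2;    % area of the circle, returned to LabVIEW
end
```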
Why do we use ifft instead of fft when looking at the spectrum of a sech pulse (uu) in this code?
In the code below, temp = fftshift(ifft(uu)) is used to view the sech pulse in the spectrum. However, since ifft is a function that transforms from the frequency domain to the time domain, wouldn't temp = fftshift(fft(uu)) be correct? Why does the book do it this way?
% SSFM code for solving the normalized NLS equation
%% By G. P. Agrawal for the 6th edition of NLFO book
fiblen = 5;                               % fiber length (in units of L_D)
beta2 = -1;                               % sign of GVD parameter beta_2
N = 1;                                    % soliton order
%---set simulation parameters
nt = 1024; Tmax = 32;                     % FFT points and window size
step_num = round(20*fiblen*N^2);          % No. of z steps
deltaz = fiblen/step_num;                 % step size in z
dtau = (2*Tmax)/nt;                       % step size in tau
tau = (-nt/2:nt/2-1)*dtau;                % time array
omega = fftshift(-nt/2:nt/2-1)*(pi/Tmax); % omega array
uu = sech(tau);                           % sech pulse shape (can be modified)
%---Plot input pulse shape and spectrum
temp = fftshift(ifft(uu));                % Fourier transform
spect = abs(temp).^2;                     % input spectrum
spect = spect./max(spect);                % normalize
freq = fftshift(omega)/(2*pi);            % freq. array
subplot(2,1,1);
plot(tau, abs(uu).^2, '--k'); hold on; axis([-5 5 0 inf]);
xlabel('Normalized Time'); ylabel('Normalized Power');
subplot(2,1,2);
plot(freq, spect, '--k'); hold on; axis([-.5 .5 0 inf]);
xlabel('Normalized Frequency'); ylabel('Spectral Power');
nls, fiber simulation MATLAB Answers — New Questions
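One way to see why the book's choice is harmless for this plot: nonlinear fiber optics texts define the Fourier transform with the opposite sign convention from MATLAB's fft, so ifft plays the role of the forward transform there. For a real, even pulse such as sech, the normalized spectrum is the same either way, as this sketch suggests:

```matlab
% Sketch: for a real, even pulse, ifft(u) = conj(fft(u))/nt, so the
% magnitude spectra differ only by a constant scale. After normalizing
% to the peak, fft and ifft give identical spectra; the convention only
% matters for the sign of the phase during propagation.
nt = 1024; Tmax = 32;
tau = (-nt/2:nt/2-1)*(2*Tmax/nt);
uu = sech(tau);
s1 = abs(fftshift(ifft(uu))).^2; s1 = s1/max(s1);
s2 = abs(fftshift(fft(uu))).^2;  s2 = s2/max(s2);
max(abs(s1 - s2))   % essentially zero (roundoff) for this pulse
```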
Help in varchar column calculation
Hi, how can I calculate and get the calculation result of the Calc4 column? Currently this is varchar, but I want to compute calc1+calc2+calc3 in Calc4.
create table Calculation (Calc1 numeric(10,2), Calc2 numeric(10,2), Calc3 numeric(10,2), Calc4 varchar(40))
insert into Calculation values (6, 7, 8, 'calc1+calc2+calc3')
Desired result:
select * from Calculation
Calc1  Calc2  Calc3  Calc4
6      7      8      22
Read More
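A sketch of one possible approach in T-SQL, assuming the stored expression is always the fixed string 'calc1+calc2+calc3' (evaluating arbitrary per-row expressions would require dynamic SQL such as sp_executesql):

```sql
-- Compute the sum from the numeric columns rather than evaluating the
-- varchar expression; cast back to varchar(40) to match Calc4's type.
select Calc1, Calc2, Calc3,
       cast(Calc1 + Calc2 + Calc3 as varchar(40)) as Calc4
from Calculation;

-- An updatable alternative, under the same assumption:
update Calculation
set Calc4 = cast(Calc1 + Calc2 + Calc3 as varchar(40));
```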
Index in position 1 is invalid. Array indices must be positive integers or logical values.
I am trying to take raw data from a zip file. The raw data includes the x, y, z coordinates of each pixel in addition to the grayscale value (a measure of brightness/darkness of each pixel, with lower values indicating a darker pixel). The end result should be a clear X-ray. Here is what I have so far, but when I get to the for loop, I get the error "Index in position 1 is invalid. Array indices must be positive integers or logical values."
How do I fix this?
image_data = readmatrix('Chest_Xray_Raw_Data.txt');
x = image_data(:, 1);
y = image_data(:, 2);
z = image_data(:, 3);
grayscale_value = image_data(:, 4);
max_x = max(x);
max_y = max(y);
max_z = max(z);
max_grayscale = max(grayscale_value);
[max_y, max_x]
image_matrix = zeros(uint8(max_y), uint8(max_x));
for i = 1:length(grayscale_value)
    image_matrix(y(i), x(i)) = grayscale_value(i);
end
index, greyscale, readmatrix MATLAB Answers — New Questions
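A sketch of one way the error can be avoided, assuming the file's columns really are x, y, z, grayscale. The usual cause of this error is x or y containing zeros, negatives, or non-integer values; the uint8 casts are also risky, since they saturate at 255 and would truncate larger image sizes.

```matlab
% Sketch of a possible fix (file name and column layout assumed from the
% question): round the coordinates, shift them so the smallest index is
% 1, and fill the image in one vectorized assignment.
image_data = readmatrix('Chest_Xray_Raw_Data.txt');
x = round(image_data(:, 1));
y = round(image_data(:, 2));
g = image_data(:, 4);
x = x - min(x) + 1;                 % ensure positive integer subscripts
y = y - min(y) + 1;
image_matrix = zeros(max(y), max(x));
image_matrix(sub2ind(size(image_matrix), y, x)) = g;
imshow(image_matrix / max(g));      % display as normalized grayscale
```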
Variable appears to change size on every loop iteration
Hi everybody,
I have a problem running this code. It is a linear-segment price power-generation economic-dispatch MATLAB code.
On line 17 (and some other lines) this message appears:
Variable F appears to change size on every loop iteration
I don't know how to solve that!
loop iteration MATLAB Answers — New Questions
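Without the code itself, a general sketch: the message is a Code Analyzer warning about an array growing inside a loop, and preallocating F to its final size removes it (the segment count and cost expression below are placeholders):

```matlab
% Sketch: preallocate F instead of growing it one element per iteration.
% nseg and the cost formula are hypothetical stand-ins for the dispatch
% code's actual segment count and segment-cost calculation.
nseg = 10;                 % hypothetical number of linear segments
F = zeros(1, nseg);        % preallocation silences the warning
for k = 1:nseg
    F(k) = 2*k + 1;        % placeholder for the segment cost
end
```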
Datetime with variable format
I have an output file that has timestamps in the form 'yyyy-MM-dd HH:mm:ss.S' or 'yyyy-MM-dd HH:mm:ss'. I would like a clean way to convert them from date strings into a datetime array.
Example code:
a = {'2016-02-09 10:28:00'; '2016-02-09 10:28:01.5'}
out = datetime(a, 'InputFormat','yyyy-MM-dd HH:mm:ss.S', 'InputFormat','yyyy-MM-dd HH:mm:ss', 'Format','yyyy-MM-dd HH:mm:ss.S');
Actual output:
out =
2016-02-09 10:28:00.0
NaT
Desired output:
out =
2016-02-09 10:28:00.0
2016-02-09 10:28:01.5
datetime MATLAB Answers — New Questions
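datetime accepts only one InputFormat per call (a repeated name-value pair is not merged; the later one wins), so one clean approach is a two-pass parse that fills the NaT entries with the second format:

```matlab
% Sketch: parse with the fractional-seconds format first, then re-parse
% only the entries that failed (NaT) with the plain-seconds format.
a = {'2016-02-09 10:28:00'; '2016-02-09 10:28:01.5'};
out = datetime(a, 'InputFormat','yyyy-MM-dd HH:mm:ss.S', ...
                  'Format','yyyy-MM-dd HH:mm:ss.S');
idx = isnat(out);                         % timestamps without fractions
out(idx) = datetime(a(idx), 'InputFormat','yyyy-MM-dd HH:mm:ss');
out
```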
How can I change an xlsx file to a mat file?
I have a long column in the form of a .xlsx file and I am trying to convert it to a .mat file in order to make a signal. What is the best way to go about this?
excel, xlsx, mat, upload, xlsread, matrix MATLAB Answers — New Questions
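A minimal sketch, assuming a single numeric column and hypothetical file names:

```matlab
% Sketch: read the column from the spreadsheet and save it to a MAT-file.
% "mydata.xlsx" and "mysignal.mat" are placeholder names.
signal = readmatrix('mydata.xlsx');   % numeric column from Excel
save('mysignal.mat', 'signal');       % write the variable to a MAT-file
% Later, load('mysignal.mat') restores the variable "signal" for use
% as a signal (e.g., with a time vector or a timetable).
```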
Index list – show the last person who edited each page
Hi folks,
I’m looking for guidance on a SharePoint list that indexes a group of pages (A-Z safety topics). I’d like the list to also have a column that identifies the last person to edit each page, and a column that identifies the last person to comment on each page.
Comments in each page are turned on.
In list settings you have modified by, and this only cites the person to modify the line item, not the page that the line points to.
Any help would be appreciated :folded_hands:
Read More
More > Workflow and Details missing from right click on SharePoint List
Howdy all, for one of my client’s SharePoint, I am unable to access the Workflow or details pane for all of their lists.
I utilise Nintex 365 workflows for this client so it’s completely blocking me from accessing their workflows and supporting them. Any ideas to bring this back? Note: this does still show up on some other of their sites.
This one is from the other site:
Side note: Really hating the new list experience.
Read More
Week of June 20, 2024: Azure Updates
Public Preview: Upgrade Policies for Virtual Machine Scale Sets with Flexible Orchestration
Status: In Preview
The upgrade policy of a Virtual Machine Scale Set determines how virtual machines can be brought up to date with the latest scale set model. Before today, upgrade policies were available for Virtual Machine Scale Sets with Uniform Orchestration. Now the same upgrade policies available for Uniform Orchestration are available for Virtual Machine Scale Sets with Flexible Orchestration.
The upgrade policies available for Virtual Machine Scale Sets are Automatic, Manual and Rolling. Additionally, if using a Rolling upgrade policy, you can choose to enable MaxSurge to create new instances with the updated scale set model to replace virtual machines using the old model.
Automatic upgrade policy
With an automatic upgrade policy, the scale set makes no guarantees about the order of virtual machines being brought down. The scale set might take down all virtual machines at the same time to perform upgrades.
Automatic upgrade policy is best suited for DevTest scenarios where you aren’t concerned about the uptime of your instances while making changes to configurations and settings.
Manual upgrade policy
With a manual upgrade policy, you choose when to update the scale set instances. Nothing happens automatically to the existing virtual machines when the scale set model changes. New instances added to the scale set use the most up-to-date model available.
Manual upgrade policy is best suited for workloads where you require more control over when and how instances are updated.
Rolling Upgrade Policy + MaxSurge
With a rolling upgrade policy, the scale set performs updates in batches. You also get more control over the upgrades with settings like batch size, max healthy percentage, prioritizing unhealthy instances and enabling upgrades across availability zones. Additionally, you can enable MaxSurge which will create new virtual machines to replace virtual machines running in the old model. Using MaxSurge ensures your scale set does not see any reduced capacity during an upgrade.
Rolling upgrade policy is best suited for production workloads that require a set number of instances to always be available. A rolling upgrade is the safest way to bring instances to the latest model without compromising availability and uptime.
Try the upgrade policies for Virtual Machine Scale sets today.
Products:
Virtual Machine Scale Sets
Virtual Machines
________________________________________________________________________________________________________________________________
Generally Available: az command invoke in AKS
Status: Now Available
AKS run command allows users to remotely invoke commands in an AKS cluster through the AKS API. For example, this feature introduces a new API that supports executing just-in-time commands from a remote laptop for a private cluster. This can greatly assist with quick just-in-time access to a private cluster when the client is not on the cluster private network, while still retaining and enforcing full RBAC controls and private API server.
Example:
az aks command invoke --resource-group myResourceGroup --name myAKSCluster --command "kubectl get nodes"
Products:
AKS
________________________________________________________________________________________________________________________________
Generally Available: OS Security Patch channel for Linux in AKS
Status: Now Available
OS security patch channel for Linux, part of NodeOSUpgrade feature, is now generally available.
OS security patches are AKS-tested, fully managed, and applied with safe deployment practices. AKS regularly updates the node’s virtual hard disk (VHD) with patches from the image maintainer labeled “security only.”
The channel honors maintenance windows and limits disruption by applying live patching wherever necessary.
Products:
AKS
________________________________________________________________________________________________________________________________
Available: Announcing kube-egress-gateway for Kubernetes
Status: Now Available
kube-egress-gateway is an open-source project that offers a scalable and cost-efficient solution for configuring fixed source IPs for Kubernetes pod egress traffic on Azure. The kube-egress-gateway components run within Kubernetes clusters—whether managed (Azure Kubernetes Service, AKS) or unmanaged—and use one or more dedicated Kubernetes nodes as pod egress gateways, routing pod outbound traffic through a WireGuard tunnel.
Compared to existing methods, such as creating dedicated Kubernetes nodes with a NAT gateway or assigning instance-level public IP addresses and scheduling only specific pods on these nodes, kube-egress-gateway is more cost-efficient. It allows pods requiring different egress IPs to share the same gateway and be scheduled on any regular worker node.
Products:
AKS
________________________________________________________________________________________________________________________________
Public Preview: Azure Container Apps available in Azure Government Cloud Virginia
Status: In Preview
Azure Container Apps, a managed serverless container service, is now available in Azure Government Cloud. Azure Container Apps offers an ideal platform for application developers who want to run apps and microservices in containers without managing infrastructure. Azure Container Apps is built on a foundation of powerful open-source technology including Kubernetes, KEDA, Dapr, and Envoy.
To learn more about Azure Container Apps, please see the getting started guide on Microsoft Learn.
Products:
Azure Container Apps
________________________________________________________________________________________________________________________________
Public Preview: Cluster operation status for AKS
Status: In Preview
Cluster operation status for AKS is now in public preview.
With this feature, you can get a snapshot of progress for long-running operations such as upgrade, scale, create, and more.
Products:
AKS
________________________________________________________________________________________________________________________________
Public Preview: Geo-Replication for Azure Service Bus Premium
Status: In Preview
We are excited to announce the public preview of the new Geo-Replication feature for Azure Service Bus in the premium tier. This feature ensures that the metadata and data of a namespace are continuously replicated from a primary region to a secondary region. Moreover, this feature allows promoting a secondary region at any time. The Geo-Replication feature is the latest option to insulate Azure Service Bus applications against outages and disasters.
The Geo-Replication feature implements metadata and data replication in a primary-secondary replication model. It works with a single namespace, and at a given time there’s only one primary region, which serves both producers and consumers. There is a single hostname used to connect to the namespace, which always points to the current primary region.
After promoting a secondary region, the hostname points to the new primary region, and the old primary region is demoted to secondary region. After the new secondary has been re-initialized, it is possible to promote this region again to primary at any moment.
Products:
Azure Service Bus
________________________________________________________________________________________________________________________________
Generally Available: Azure SQL updates for late June 2024
Status: Now Available
In late June 2024, the following updates and enhancements were made to Azure SQL:
Prepare for planned maintenance events on your Azure SQL Managed Instance resources by enabling advance notifications.
Enable zone redundancy for Azure SQL Database Hyperscale’s named replicas to enhance protection against extensive failures, such as datacenter disasters, without altering application logic.
Products:
Azure SQL DB
Azure SQL MI
________________________________________________________________________________________________________________________________
Generally Available: IOPS scaling for Azure Database for PostgreSQL – Flexible Server
Status: Now Available
We are excited to announce the general availability of IOPS scaling for Azure Database for PostgreSQL – Flexible Server. This feature empowers you to dynamically scale your IOPS based on your workload needs. Ensure optimal performance during high-demand operations like migrations or data loads and scale down to save costs when demand decreases. With IOPS scaling, you can fine-tune your database’s performance and manage costs more effectively without over-provisioning resources. Experience seamless and efficient database management with the flexibility to adjust IOPS as required. Start using IOPS scaling today to enhance your database’s performance and efficiency. Visit the Azure portal to get started. Learn more.
Products:
Azure DB for PostgreSQL
________________________________________________________________________________________________________________________________
Generally Available: New Azure Advisor recommendations for Azure Database for PostgreSQL – Flexible Server
Status: Now Available
New Azure Advisor recommendations have been created for Azure Database for PostgreSQL – Flexible Server and existing recommendations have been improved to provide more actionable guidance.
Azure Advisor is a cloud assistant that analyzes your configuration and usage telemetry to make personalized recommendations to help improve performance, reliability, security, and cost effectiveness. You can find these recommendations in the Advisor dashboard section of the Azure Portal.
New Azure Database for PostgreSQL – Flexible Server recommendations include checks for long running and orphaned prepared transactions, crossing the transaction wraparound limit, and exceeding the recommended bloat ratio.
Products:
Azure DB for PostgreSQL
________________________________________________________________________________________________________________________________
Public Preview: Extension version sync for Azure Database for PostgreSQL – Flexible Server
Status: In Preview
We are excited to announce the public preview of extension version sync for Azure Database for PostgreSQL – Flexible Server. This feature allows you to seamlessly update your PostgreSQL extensions to the latest versions with a simple command, ensuring your system remains secure and up to date.
By using `ALTER EXTENSION <extension-name> UPDATE`, you can automatically upgrade to the most stable and secure versions available, enhancing both security and operational stability with minimal effort. This streamlined process prevents unauthorized changes and potential vulnerabilities, making it easier for you to manage your database extensions efficiently.
Products:
Azure DB for PostgreSQL
________________________________________________________________________________________________________________________________
Public Preview: Online migration in migration service Azure Database for PostgreSQL
Status: In Preview
We’re excited to announce the launch of our latest migration service in Azure Database for PostgreSQL feature: online migration, now available in public preview. This feature empowers you to migrate your PostgreSQL databases to Azure seamlessly and with minimal downtime. You’ll benefit from a streamlined migration process that ensures your data is transferred securely and efficiently, allowing you to take advantage of Azure’s scalability, performance, and security features without interrupting your business operations.
By leveraging this new feature, you can confidently move your workloads to a more robust and reliable cloud environment, positioning your business for future growth. Start your migration journey today and experience the advantages of Azure Database for PostgreSQL with our migration service.
Products:
Azure DB for PostgreSQL
________________________________________________________________________________________________________________________________
Public Preview: Redis 7.2 on Azure Cache for Redis Enterprise
Status: In Preview
Enterprise and Enterprise Flash tier caches now support Redis 7.2 in preview. This latest version of Redis offers over a dozen new commands and performance enhancements over Redis 6.0, the previous version offered on Azure Cache for Redis. New features include expanded geospatial functionality, sharded pub/sub, and support for the RESP3 protocol.
Products:
Azure Cache for Redis
________________________________________________________________________________________________________________________________
Public Preview: Windows Server 2025 now available
Status: In Preview
Windows Server 2025 delivers advanced security, new Azure hybrid features, a high-performance platform for your existing apps and AI workloads, and a modernized Windows Server experience. With this new release you will see investments in:
A rich set of security innovations including new capabilities in Active Directory, Server Message Block (SMB) improvements including SMB over QUIC, and security updates with fewer reboots.
Improved hybrid capabilities like Software-defined network (SDN) multisite features allowing native L2 and L3 connectivity for workloads in multiple locations, flexible hybrid and multicloud management tools, and easier onboarding to Azure Arc.
New features for AI, performance, and scale such as GPU partitioning across virtual machines, vastly improved Hyper-V performance and scalability, and easy upgrades through Windows Update to name a few.
Products:
Windows Server
________________________________________________________________________________________________________________________________
Generally Available: Azure Log Alerts support for Azure Data Explorer
Status: Now Available
Azure Monitor Alerts allow you to monitor your Azure and application telemetry to quickly identify issues affecting your service. More specifically, Azure Monitor log alert rules allow you to set periodic queries on your log telemetry to identify potential issues and get notifications or trigger actions.
Until now, log alert rules have supported running queries on Log Analytics and Application Insights data. We are now introducing support for running queries also on Azure Data Explorer (ADX) tables, and even joining data between those data sources in a single query.
In addition, as part of this newly added support, log alert rules now support managed identities for Azure resources – allowing you to see and control the exact permissions of your log alert rule.
Learn More:
Create a new log alert rule accessing ADX
Write a query accessing ADX data from Log Analytics using the adx() pattern
Managed identities for Azure resources
Products:
Azure Alerts
________________________________________________________________________________________________________________________________
Generally Available: Spain Central region added to Azure HDInsight
Status: Now Available
HDInsight is now generally available in Spain Central. Azure HDInsight is a managed, full-spectrum, open-source analytics service in the cloud for enterprises. You can use open-source frameworks such as Hadoop, Apache Spark, Apache Hive, LLAP, Apache Kafka, and more.
________________________________________________________________________________________________________________________________
Public Preview: Force detach zone redundant data disks during zone outage
Status: In Preview
We are excited to announce the public preview of support to force detach ZRS data disks from a VM in a zone impacted by a failure. Customers can now detach the ZRS data disks and attach them to another VM, decreasing the RTO.
Zone-redundant storage (ZRS) synchronously replicates your Azure managed disk across three Azure availability zones within the region, providing 99.9999999999% (12 9's) of durability over a given year. The ZRS option for Azure managed disks is supported on Premium SSDs and Standard SSDs.
Reference
Products:
Azure Virtual Machines
Managed Disks
Microsoft Tech Community – Latest Blogs –Read More
Custom formatting model advisor report in HTML
When generating a Model Advisor report, the Model Advisor toolstrip makes it possible to add a PDF or Word template for your report. Is there a method for doing this for the HTML files? Or can I simply insert a section of code? I am just modifying the title portion of the report. matlab report generator, model advisor MATLAB Answers — New Questions
Using dlarray with betarnd/randg
I am writing a custom layer with the DL toolbox and a part of the forward pass of this layer is making draws from a beta distribution where the b parameter is to be optimised as part of the network training. However, I seem to be having difficulty using betarnd (and by extension randg) with a dlarray valued parameter.
Consider the following, which works as expected.
>> betarnd(1, 0.1)
ans =
0.2678
However, if I instead do the following, then it does not work.
>> b = dlarray(0.1)
b =
1×1 dlarray
0.1000
>> betarnd(1, b)
Error using randg
SHAPE must be a full real double or single array.
Error in betarnd (line 34)
g2 = randg(b,sizeOut); % could be Infs or NaNs
Is it not possible to use such functions with parameters to be optimised via automatic differentiation (hence dlarray)?
Many thanks
deep learning, statistics, matlab, neural networks, random number generator MATLAB Answers — New Questions
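Not part of the original thread, but as a sketch of one possible workaround for the question above: betarnd and randg require plain numeric inputs, so the dlarray value can be extracted first (which detaches the draw from automatic differentiation), or the draw can be reparameterized using only dlarray-supported operations:

```matlab
% Sketch only (not an official fix). betarnd/randg need a plain numeric
% array, so extract the value first -- note this detaches b from the
% autodiff trace, so no gradient flows through the draw:
b = dlarray(0.1);
r = betarnd(1, extractdata(b));      % works, but no gradient w.r.t. b

% To keep the draw differentiable in b, reparameterize. For Beta(1,b) the
% inverse CDF is x = 1 - (1-u).^(1./b) with u ~ Uniform(0,1), which uses
% only dlarray-supported operations:
u = rand(size(b));
rDiff = 1 - (1 - u).^(1./b);         % dlarray; gradients flow to b
```

The inverse-CDF trick only has this closed form because the first shape parameter is 1; for a general Beta(a,b) a different reparameterization (or a score-function estimator) would be needed.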
F3 Users – Open / Edit files on File Server / Mapped Drives
Hi All
I hope you are well.
Anyway, we have a lot of F3 users and have deployed a Custom OMA-URI policy that sets Office apps to open in Edge.
So far so good, however, this seems to only work for local files.
Our F3 users also need to be able to Open / Edit files stored on a File Server / Mapped Drive in Office Web Apps.
Is this possible? And can it be pushed out via Intune?
Info greatly appreciated.
Stuart
Deleted Items folder in archive
Hello,
Please i need your help on this issue.
We are unable to delete items from the Deleted Items folder in the archive of our Tech Support mailbox.
It deletes them, but after a refresh they show right back up.
SharePoint REST API v2 delta nextlink
I’m using the SharePoint REST API v2 with a driveItem delta query. When I add an expand OData query option, the first response (200 items) returns the expanded information, but no response retrieved via the returned “@odata.nextLink” includes it.
Query:
/drives/{driveId}/root/delta?expand=publication,sharepointIds
also tried:
/drives/{driveId}/root/delta?$expand=publication,sharepointIds
The same query in Graph will return the values correctly.
New Blog | Update on the Deprecation of Admin Audit Log Cmdlets
We wanted to provide you with an important update to the deprecation schedule for the two Admin Audit Log cmdlets, as part of our ongoing commitment to improve security and compliance capabilities within our services. The two Admin Audit Log cmdlets are:
Search-AdminAuditLog
New-AdminAuditLog
As communicated in a previous blog post, the deprecation of Admin Audit Log (AAL) and Mailbox Audit Log (MAL) cmdlets was initially planned to occur simultaneously on April 30th, 2024. However, to ensure a smooth transition and to accommodate the feedback from our community, we have revised the deprecation timeline.
We would like to inform you that the Admin Audit Log cmdlets will now be deprecated separately from the Mailbox Audit Log cmdlets, with the final date set for September 15, 2024.
This change allows for a more phased approach, giving you additional time to adapt your processes to the new Unified Audit Log (UAL) cmdlets, which offer enhanced functionality and a more unified experience.
What This Means for You
The Admin Audit Log cmdlets will be deprecated on September 15, 2024.
The Mailbox Audit Log cmdlets will have a separate deprecation date, which will be announced early next year.
We encourage customers to begin transitioning to the Unified Audit Log (UAL) cmdlet, i.e., Search-UnifiedAuditLog, as soon as possible. Alternatively, you can explore using the Audit Search Graph API, which is currently in Public Preview and is expected to become Generally Available by early July 2024.
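As a hedged sketch of the transition (the dates, record type, and result size below are placeholder values, not from this announcement):

```powershell
# Old cmdlet (deprecated September 15, 2024):
Search-AdminAuditLog -StartDate 06/01/2024 -EndDate 06/15/2024

# New approach: query the Unified Audit Log for admin activity instead.
Search-UnifiedAuditLog -StartDate 06/01/2024 -EndDate 06/15/2024 `
    -RecordType ExchangeAdmin -ResultSize 1000
```

Both cmdlets run in an Exchange Online PowerShell session; filter parameters should be adjusted to your own auditing scope.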
By Angélique Conde
Read the full post here: Update on the Deprecation of Admin Audit Log Cmdlets
Windows Admin center V2 Authentication issue
Windows Admin Center V2 “Modernized” v2.0.01 will not operate in Windows Authentication mode. If you run the installer in Remote Express mode, it is set to Forms Authentication. This mode will not allow a login with an Active Directory administrator from the domain the V2 gateway is joined to, only the local machine administrator.
If you rerun the setup in custom mode and attempt to select Kerberos / Windows Authentication, the gateway will not run. If you use the PowerShell admin script to run the command “Set-WACLoginMode -Mode WindowsAuthentication” on a functioning gateway in Forms Authentication mode, the gateway will crash and not run.
Both methods of setting integrated authentication produce the .NET runtime error below. Running the indicated netsh command produces no change in behavior and no change in subsequent .NET runtime errors. Running a netsh show command clearly displays that the indicated urlacl exists. The gateway server OS is Windows Server 2022 Standard with Desktop Experience. This is not installed side by side (SxS) with the V1 gateway. The V1 gateway operates without issue on this server when installed.
Application: WindowsAdminCenter.exe
CoreCLR Version: 6.0.2523.51912
.NET Version: 6.0.25
Description: The process was terminated due to an unhandled exception.
Exception Info: Microsoft.AspNetCore.Server.HttpSys.HttpSysException (5): The prefix ‘https://+:6600/’ is not registered. Please run the following command as Administrator to register this prefix:
netsh http add urlacl url=https://+:6600/ user=DOMAINHIDDENWAC$
See “Preregister URL prefixes on the server” on https://go.microsoft.com/fwlink/?linkid=2127065 for more information.
at Microsoft.AspNetCore.Server.HttpSys.UrlGroup.RegisterPrefix(String uriPrefix, Int32 contextId)
at Microsoft.AspNetCore.Server.HttpSys.UrlPrefixCollection.RegisterAllPrefixes(UrlGroup urlGroup)
at Microsoft.AspNetCore.Server.HttpSys.HttpSysListener.Start()
at Microsoft.AspNetCore.Server.HttpSys.MessagePump.StartAsync[TContext](IHttpApplication`1 application, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Hosting.GenericWebHostService.StartAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.Internal.Host.StartAsync(CancellationToken cancellationToken)
at Microsoft.WindowsAdminCenter.Core.HostingRuntime.StartAsync(CancellationToken cancellationToken)
at Microsoft.WindowsAdminCenter.Executable.WindowsService.OnStart(String[] args)
at System.Threading.Tasks.Task.<>c.<ThrowAsync>b__128_1(Object state)
at System.Threading.QueueUserWorkItemCallbackDefaultContext.Execute()
at System.Threading.ThreadPoolWorkQueue.Dispatch()
at System.Threading.PortableThreadPool.WorkerThread.WorkerThreadStart()
at System.Threading.Thread.StartCallback()
New Blog | How to break the token theft cyber-attack chain
By Alex Weinert
We’ve written a lot about how attackers try to break passwords. The solution to password attacks—still the most common attack vector for compromising identities—is to turn on multifactor authentication (MFA).
But as more customers do the right thing with MFA, actors are going beyond password-only attacks. So, we’re going to publish a series of articles on how to defeat more advanced attacks, starting with token theft. In this article, we’ll start with some basics on how tokens work, describe a token theft attack, and then explain what you can do to prevent and mitigate token theft now.
Tokens 101
Before we get too deep into the token theft conversation, let’s quickly review the mechanics of tokens.
A token is an authentication artifact that grants you access to resources. You get a token by signing into an identity provider (IDP), such as Microsoft Entra ID, using a set of credentials. The IDP responds to a successful sign-in by issuing a token that describes who you are and what you have permission to do. When you want to access an application or service (we’ll just say app from here), you get permission to talk to that resource by presenting a token that’s correctly signed by an issuer it trusts. The software on the client device you’re using takes care of all token handling behind the scenes.
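To make "a token that describes who you are and what you have permission to do" concrete, here is an illustrative Python sketch (not from the original post) that decodes the claims of a toy, unsigned JWT. Real tokens from Microsoft Entra ID are signed and must be validated, never merely decoded; the subject and scope values below are made up:

```python
import base64
import json


def b64url_decode(segment: str) -> bytes:
    # JWT segments use URL-safe base64 without padding; restore padding first.
    padding = "=" * (-len(segment) % 4)
    return base64.urlsafe_b64decode(segment + padding)


def jwt_claims(token: str) -> dict:
    # A JWT is three dot-separated segments: header.payload.signature.
    header, payload, _signature = token.split(".")
    return json.loads(b64url_decode(payload))


# Build a toy token for demonstration (header {"alg":"none"}, empty signature).
payload = base64.urlsafe_b64encode(
    json.dumps({"sub": "user@contoso.com", "scp": "Mail.Read"}).encode()
).rstrip(b"=").decode()
toy_token = f"eyJhbGciOiJub25lIn0.{payload}."

claims = jwt_claims(toy_token)
print(claims["sub"], claims["scp"])  # the "who" and the "what" of the token
```

This is exactly why token theft is dangerous: whoever presents the token gets whatever the claims grant, which is what the mitigations below aim to prevent.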
Read the full post here: How to break the token theft cyber-attack chain
Process Monitor v4.01