Category: News
Make figure without white border
I have a piece of code that plots a surface, and I feel it is ready to be included in the report. The only thing is that the figure has a quite large white border, which substantially increases its size. I would like the unnecessary white space reduced to a minimum.
I did try
set(gca,'LooseInset',get(gca,'TightInset'))
But this cuts off my z-axis label. Is there an easy way to achieve this? With LaTeX, we can simply use the standalone document class with PGFPlots/TikZ, but I don't think there's a simple keyword or option to do this in MATLAB. I'm using R2016b.
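One workaround sketch (not an official option, and the padding value is an assumption to tune per figure) is to add a small margin on top of TightInset instead of using it exactly, so the z-axis label survives:

```matlab
% Sketch: pad TightInset slightly rather than using it exactly, so
% the z-axis label is not clipped. The 0.02 padding (in normalized
% units) is an assumed value; tune it for your figure.
surf(peaks);
zlabel('z');
ax = gca;
pad = 0.02;                                      % assumed padding
set(ax, 'LooseInset', get(ax, 'TightInset') + pad);
```

On newer releases (R2020a+), exportgraphics crops tightly by default, but that does not help on R2016b.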
figure MATLAB Answers — New Questions
Why do I get a Microsoft Visual C++ Redistributable error 1935 when installing MATLAB on Windows?
I get the following error when installing MATLAB on Windows:
ERROR: Error 1935. An error occurred during the installation of assembly
'Microsoft.VC80.ATL,type="win32",version="8.0.50727.762",publicKeyToken="1fc8b3b9a1e18e3b",processorArchitecture="amd64"'.
Please refer to Help and Support for more information.
msvc, compiler MATLAB Answers — New Questions
Does MATLAB support quadruple precision – 128-bit floating point arithmetics?
I need more precision than offered by the standard double data type in MATLAB. I am looking for quadruple precision in MATLAB.
128-bit, quadruple_precision, float, double MATLAB Answers — New Questions
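One commonly suggested route for this question, assuming the Symbolic Math Toolbox is available, is variable-precision arithmetic with vpa: at 34 digits it gives roughly quadruple-precision accuracy, though it is not the IEEE binary128 hardware format and is much slower than double. A minimal sketch:

```matlab
% Sketch: ~34 significant digits via the Symbolic Math Toolbox.
% Not hardware binary128; every operation goes through symbolic code.
digits(34);
x = vpa(1)/vpa(3);          % 1/3 held to 34 significant digits
residual = abs(3*x - 1);    % far smaller than double's eps (~2.2e-16)
```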
Why do I get “Code generation information file does not exist” as the “Rebuild Reason” when building my model?
When I successfully build our model I get "Code generation information file does not exist" as "Rebuild Reason".
Build Summary
Top model targets built:
Model | Action | Rebuild Reason
====================================================================
modelName | Code generated | Code generation information file does not exist.
1 of 1 models built (0 models already up to date)
Why is this and how can I remedy it?
MATLAB Answers — New Questions
Understanding Site Hierarchy and Permissions
If you have ever been confused with site collections and hierarchies for permission levels then this is a great reference page.
https://learn.microsoft.com/en-us/sharepoint/understanding-permission-levels#BKMK_Site_Dependent
Read More
Recovering publishing account
My company has an app in Microsoft Teams, but the person who created it is no longer with the company. We’ve lost access to the account that owns the administrative rights to the app. Any suggestions on recovering access to that account?
Error Code 0x8000x4005 HELP?
I'm simply trying to delete the files of some software I no longer want on my laptop, and here we are again with another issue. I've tried YouTube and googling it, of course, but then I start seeing comments saying "I had to end up resetting my whole PC" or "I have more issues with my PC now than I did before". I need help, thanks.
Partition rows by id and DateColumn or DateColumn + 1
Hello all. I’m trying to partition out data by patient stay and select the stay with the greatest discharge date. The only thing I’m struggling with is, multiple stays where the admit date is the same or 1 day later have to be treated as the same, and again I take the one with the latest discharge date.
SQL Server 2016
create table #HospitalStays
(
PatientId int
,AdmitDate date
,DischargeDate date
,DaysToFollowUp int
)
GO
insert into #HospitalStays
Values
(123456, '2024-08-01', '2024-08-01', 14)
,(123456, '2024-08-02', '2024-08-05', 30)
,(123456, '2024-08-07', '2024-08-08', 30)
select
patientId
,AdmitDate
,DischargeDate
,DaysToFollowUp
,row_number() over (partition by patientId, AdmitDate order by DischargeDate desc, DaysToFollowUp) as rownum
from
#HospitalStays
So in the above, how do I make the partition treat the admit date as either admitDate or dateadd(dd, 1, admitDate)?
I’m doing it with a cursor, but I have a feeling my architect is not going to want a cursor in the proc.
Much appreciated!
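One cursor-free approach is a gaps-and-islands pattern: flag each row that starts a new stay, take a running sum of the flags as a group id, then keep the row with the latest discharge per group. This is an untested sketch against the temp table above; the one-day rule compares against the previous admit date, following the question's wording, and may need to compare against the previous discharge date instead:

```sql
-- Sketch (untested): gaps-and-islands grouping of hospital stays.
WITH flagged AS (
    SELECT *,
           CASE WHEN DATEDIFF(dd,
                              LAG(AdmitDate) OVER (PARTITION BY PatientId
                                                   ORDER BY AdmitDate),
                              AdmitDate) <= 1
                THEN 0 ELSE 1 END AS isNewStay  -- first row gets 1 (LAG is NULL)
    FROM #HospitalStays
),
grouped AS (
    SELECT *,
           SUM(isNewStay) OVER (PARTITION BY PatientId
                                ORDER BY AdmitDate
                                ROWS UNBOUNDED PRECEDING) AS stayGroup
    FROM flagged
)
SELECT PatientId, AdmitDate, DischargeDate, DaysToFollowUp
FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY PatientId, stayGroup
                              ORDER BY DischargeDate DESC,
                                       DaysToFollowUp) AS rn
    FROM grouped
) s
WHERE rn = 1;
```

With the sample data, the first two rows fall into one group (admit dates one day apart) and the 2024-08-07 stay starts a new one, so the 2024-08-05 and 2024-08-08 discharges survive.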
RDS on prem future
I know the immediate answer is it will stick around but with RemoteApps specifically, we are recognising:
1) All new development seems to be on Windows/AVD Remote Desktop apps.
2) Traditional Remote Apps do not support WHfB with Kerberos trust whereas every other application does.
I am guessing it is relying on NTLM behind the scenes and heavily relies on storing credentials locally. It isn’t future proof and definitely seems a security risk keeping this 2009 product in production.
Has anyone managed to get RemoteApps working on Entra Joined devices with WHfB? And will we see the new apps support traditional RDS in the future?
Just feels very disjointed at the mo.
Why do I get the error message “#error Must define one of RT, NRT, MATLAB_MEX_FILE, SL_INTERNAL, or FIPXT_SHARED_MODULE” when building a model with an S-function?
I have a Simulink R2018b model which contains an S-function. The S-function is a MEX S-Function. When I try to build the model, I get the following error message:
#error Must define one of RT, NRT, MATLAB_MEX_FILE, SL_INTERNAL, or FIPXT_SHARED_MODULE
I am using Visual Studio 2017 as a compiler.
external, ide, s-function, error, inlining, tlc, non-lined MATLAB Answers — New Questions
Microsoft’s Copilot: A Frustrating Flop in AI-Powered Productivity
Microsoft’s Copilot was supposed to be the game-changer in productivity, but it’s quickly proving to be a massive disappointment. The idea was simple: integrate AI directly into Word, Excel, PowerPoint, and other Office tools to make our lives easier. But when it comes to actually performing specific functions, Copilot falls flat.
Here’s the problem: when you ask Copilot to alter a document, modify an Excel file, or adjust a PowerPoint presentation, it’s practically useless. Instead of performing the tasks as requested, it often leaves you hanging with vague suggestions or instructions. Users don’t want to be told how to perform a task—they want it done. This is what an AI assistant should do: execute commands efficiently, not just offer advice.
What makes this even more frustrating is that other AI tools, like ChatGPT, can handle these tasks effortlessly. When you ask ChatGPT to perform a specific function, it does so without hesitation. It’s able to understand the request and deliver exactly what’s needed. But Copilot? It struggles with the basics, and that’s unacceptable, especially from a company like Microsoft.
It’s frankly embarrassing that Microsoft can’t get this right. The whole point of integrating AI into these tools was to streamline workflows and boost productivity. But if Copilot can’t even manage simple tasks like formatting a document or adjusting a spreadsheet, then what’s the point? Users don’t need another tool that tells them how to do something—they need one that does it for them.
Microsoft, you’ve missed the mark with Copilot. It’s not just a minor inconvenience; it’s a serious flaw that undermines the value of your Office suite. When other AI tools can easily accomplish what Copilot can’t, it’s time to reevaluate. Users expect more, and frankly, they deserve more for their investment.
What’s been your experience with Copilot? Is anyone else finding it as frustrating as I am? Let’s talk about it.
App using node-fetch as agent
A few days ago, I was looking into a user’s sign in logs. I noticed an application called Augmentation Loop with the user agent as node-fetch/1.0 (+https://github.com/bitinn/node-fetch). Looking into the Augmentation Loop, it is part of apps included in Conditional Access Office 365 app suite. (https://learn.microsoft.com/en-us/entra/identity/conditional-access/reference-office-365-application-contents)
According to this site (https://petri.com/microsoft-revamps-outlook-one-outlook-vision/), it is a way of coordinating all the various types of data and services consumed by Outlook.
From what I can see, Augmentation Loop sign ins are always in between Microsoft Office sign ins:
I tried referencing the app ID (4354e225-50c9-4423-9ece-2d5afd904870) to the Azure app ID list (https://learn.microsoft.com/en-us/microsoft-365-app-certification/azure/azure-apps), however, it is not there.
I also tried searching through all applications in the Azure admin portal, and it is not there either. Google searches don't return anything either.
May someone please explain what application or service is using the node-fetch agent?
Introducing the MDTI Premium Data Connector for Sentinel
The MDTI and Unified Security Operations Platform teams are excited to introduce an MDTI premium data connector available in the Unified Security Operations Platform and standalone Microsoft Sentinel experiences. This connector enables customers with an MDTI premium license and API license to apply the powerful raw and finished threat intelligence in MDTI, including high-fidelity indicators of compromise (IoCs), across their security operations to detect and respond to the latest threats.
Microsoft researchers, with the backing of interdisciplinary teams of thousands of experts spread across 77 countries, continually add new analysis of threat activity observed across more than 78 trillion threat signals to MDTI, including powerful indicators drawn directly from threat infrastructure. In Sentinel, this intelligence enables enhanced threat detection, enrichment of incidents for rapid triage, and the ability to launch investigations that proactively surface external threat infrastructure before it can be used in campaigns.
This blog will highlight the exciting use cases for the MDTI premium data connector, including enhanced enrichment, threat detection, and hunting that customers can tap into when enabling both the standard and premium MDTI data connectors. It will also cover how customers can easily get started with this out-of-the-box connector.
Dynamic Incident Enrichment
The MDTI data connector can help analysts respond to threats at scale by automatically enriching incidents with MDTI premium threat intelligence, evaluating indicators in an incident with dynamic reputation data (everything Microsoft knows about a piece of online infrastructure) to mark its severity and automatically triage it accordingly. Comments are added to the incident outlining the reputation details with links to further information about associated threat actors, tools, and vulnerabilities.
Threat Detection
With a flip of the switch, the MDTI premium data connector immediately enables detections for threats, including activity from the more than 300 named threat actor groups tracked by Microsoft. When enabled in Microsoft Sentinel, this connector takes URLs, domains, and IPs from a customer environment via log data and checks them against a dynamic list of known bad IOCs from MDTI. When a match occurs, an incident is automatically created, and the data is written to the Microsoft Sentinel TI blade. By enabling this rule, Microsoft Sentinel users know they have detections in place for threats known to Microsoft.
External Threat Hunting
Customers can pivot off the IoCs to investigate further and boost their understanding of the threat with MDTI’s repository of raw and finished intelligence. Finished intelligence, or written intelligence and analysis, includes articles, activity snapshots, and Intel Profiles about actors, tooling, and vulnerabilities. It provides crucial context and vital information such as targeting information, TTPs (tactics, techniques, and procedures), and additional IoCs.
Customers can also explore advanced internet data sets created by a massive collection network that maps threat infrastructure across the internet every day, locating relationships between entities on the web and malicious infrastructure, tooling, and backdoors outside the network at incredible scale. Below is an example of how to effectively detect and hunt for indicators of compromise (IoCs) associated with threat actors using Sentinel with the MDTI premium connector enabled.
Begin by following these steps:
Filter IoCs by MDTI Source – set the source filter to “Premium Microsoft Defender Threat Intelligence” within the Sentinel TI Blade
Tags enable filtering on IoCs by specific threat actors. For example, `ActivityGroup:AQUA BLIZZARD`
Next, customers can leverage the enriched data from the MDTI feed in their Log Analytics workspace using KQL queries to hunt. They can also create custom analytic rules:
Users can also create an Analytics Rule to better align with their hunting workflow:
For the sake of this example, our detection rule is very simple. However, customers can enhance rules with their own detection logic:
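As an illustration only (the table names, column names, and SourceSystem value below are assumptions drawn from common Microsoft Sentinel schemas, not from this post), a simple domain-match hunting query might look like:

```kusto
// Sketch: match DNS query names against MDTI domain indicators.
// Table/column names are assumptions from common Sentinel schemas.
ThreatIntelligenceIndicator
| where SourceSystem has "Premium Microsoft Defender Threat Intelligence"
| where isnotempty(DomainName)
| join kind=innerunique (
      DnsEvents
      | where isnotempty(Name)
  ) on $left.DomainName == $right.Name
| project TimeGenerated, DomainName, ClientIP, Description
```

The same query body could serve as the rule query of a scheduled analytics rule, with the customer's own severity and frequency settings.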
Customers can then extend their investigation and gather more intelligence on the threat actor in the Unified Security Operations Platform MDTI experience by taking the indicator value and performing a search in the global search feature:
Customers can click on the intel profiles directly to learn more about the actor and access additional IoCs compiled by Microsoft’s threat research teams:
Getting started with MDTI Connector
To install/access the UX for the Premium MDTI data connector, users will need to install the Threat Intelligence (Preview) Solution:
Sign up here to participate. We will enable this private preview in the customer environment three (3) business days after submission.
Three business days after the previous step, customers should navigate to the Threat Intelligence (Preview) Solution and select Create
Customers should then select the subscription, resource group, and workspace name for which they wish to add this solution.
Select Review + create
Select Create
After selecting Create, customers will be navigated to the page with the deployment of the solution. Please allow a couple minutes for the deployment to be completed.
Then, use this feature flag, https://aka.ms/MDTIPremiumFeedPrPFeatureFlag, to log in again to Microsoft Sentinel.
After installing the preview solution and adding the feature flag to the URL – users will be able to access the Premium Microsoft Defender for Threat Intelligence Data Connector. Below is a screenshot showing what the Data Connector page in Sentinel should look like:
Connecting the Data Connector
Navigate to the Data Connectors blade in Sentinel:
Select the Premium Microsoft Defender Threat Intelligence (Preview) Connector:
Select Open connector page:
Select Connect to connect the data connector (note, if already connected, the disconnect button will allow customers to disconnect the data connector):
After connecting the data connector, customers should navigate to the Threat Intelligence Blade in their Sentinel Workspace, and soon premium indicators will be added.
Conclusion
Microsoft delivers leading threat intelligence built on visibility across the global threat landscape, made possible by protecting Azure and other large cloud environments, managing billions of endpoints and emails, and maintaining a continuously updated graph of the internet. By processing an astonishing 78 trillion security signals daily, Microsoft can deliver threat intelligence in MDTI that provides an all-encompassing view of attack vectors across various platforms, ensuring Sentinel customers have comprehensive threat detection and remediation.
If you are interested in learning more about MDTI and how it can help you unmask and neutralize modern adversaries and cyberthreats such as ransomware, and to explore the features and benefits of MDTI please visit the MDTI product web page.
Also, be sure to contact our sales team to request a demo or a quote. Learn how you can begin using MDTI with the purchase of just one Copilot for Security SCU here.
Microsoft Tech Community – Latest Blogs – Read More
Simulate sine wave with timestep different than overall model timestep
Hello All,
I needed your help in understanding of where the mistake is and wanted to know how can I implement the following:
I have a Simulink model running at a certain fixed time step. Inside the model there is a sine wave function connected to a digital clock, creating a sine wave with the sample rate set to -1 (inherit), meaning it will use t = the Simulink model time step. Instead of -1, I would like to use a time step that is faster than the Simulink time step. I tried using different values, but I am getting an error: "Digital Clock has an invalid sample time. Only constant (inf) or inherited (-1) sample times are allowed in the asynchronous subsystem".
Can you please suggest me what other options can I try?
Appreciate all your help and guidance.
#simulink MATLAB Answers — New Questions
getting statistics from within a mask within an image
We have an image that represents data that only makes sense when it is analyzed in numerical format.
Specifically, the data needs to be analyzed as a function of radius as shown below
I'm interested specifically in the max and min values within each of the defined areas with respect to the center point.
I've been looking at a few examples online, and it seems this should work when you pull the data from a mask.
However, the issue is that I seem to be getting values that are not realistic.
How to get pixel value inside a circle – MATLAB Answers – MATLAB Central (mathworks.com)
how to draw circle in an image? – MATLAB Answers – MATLAB Central (mathworks.com)
this is what I am doing
clear
img = double(imread('img121.jpg')); % no filtration
img = -(0.0316*img) + 8.3; % we did this as we can't calibrate the film; we scan the same film over and over and it changes by 80 pixels
img = imrotate(img, 90);
img = imgaussfilt(img, 1.5);
figure, imagesc(img)
axis image
height2 = 3.6;
caxis([0 height2])
colorbar
title(' ')
impixelinfo
% make sure the image doesn't disappear if we plot something else
hold on
% below looks like what we want:
% https://www.mathworks.com/matlabcentral/answers/1931825-how-to-get-pixel-value-inside-a-circle
% define points (in matrix coordinates)
% 3"
cpx = 2050;
cpy = 2020;
inchlist = [12,10.5,9,7.5,6,4.5];
% draw lines on heel axis
for n = 1:size(inchlist,2)
    inch = inchlist(n)/4;
    hcirc = drawcircle('Center',[cpx,cpy],'Radius',inch*590,'StripeColor','red');
    mask1 = hcirc.createMask;
    maxval = max(img(mask1));
    minval = min(img(mask1));
    uniformity = maxval/minval
    % p1 = [cpy-100,cpx+inch*590];
end
Even after getting this max and min value, I will need to remove 10 to get rid of noise. Extra credit if you can point me to a solution for that too.
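One possible reason the values look unrealistic is that each circular mask contains all smaller circles, so the statistics overlap. A sketch using per-ring (annulus) masks instead, reusing cpx, cpy, inchlist, and img from the script above; trimming 10 extreme pixels from each end is my assumption about the "remove 10" noise request:

```matlab
% Sketch: per-ring (annulus) max/min via logical masks, instead of
% nested full circles whose statistics overlap. Assumes cpx, cpy,
% inchlist, and img already exist as in the question's script.
[cols, rows] = meshgrid(1:size(img,2), 1:size(img,1));
radii = sort(inchlist/4 * 590);                 % pixel radii, ascending
prevMask = false(size(img));
for n = 1:numel(radii)
    mask = (rows - cpy).^2 + (cols - cpx).^2 <= radii(n)^2;
    ring = mask & ~prevMask;                    % annulus between radii
    vals = sort(img(ring));
    vals = vals(11:end-10);                     % drop 10 from each end (assumed noise rule)
    uniformity = max(vals)/min(vals)            % per-ring statistic
    prevMask = mask;
end
```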
thank youWe have an image that represents data that only makes sense when it is analyzed in numerical format.
Specifically, the data needs to be analyzed as a function of radius as shown below
Im interested specifically in hte max and min values within each of the defined areas with respect to the center point.
Ive been looking at a few examples online and it seems this should work when you pull the data from a mask.
However, the issue is that I seem to be getting values that are not realistic.
How to get pixel value inside a circle – MATLAB Answers – MATLAB Central (mathworks.com)
how to draw circle in an image? – MATLAB Answers – MATLAB Central (mathworks.com)
this is what I am doing
clear
img = double(imread(‘img121.jpg’));; %no filtration
img = -(0.0316*img) +8.3; % we did this as we cant calibeate the film, we scan the same film over and over and it changes by 80pixels
img = imrotate(img, 90);
img = imgaussfilt(img ,1.5);
figure, imagesc(img )
axis image
height2 = 3.6
caxis([0 height2])
colorbar
title(‘ ‘)
impixelinfo
%# make sure the image doesn’t disappear if we plot something else
hold on
%https://www.mathworks.com/matlabcentral/answers/1931825-how-to-get-pixel-value-inside-a-circle
%below looks like what we want
%https://www.mathworks.com/matlabcentral/answers/1931825-how-to-get-pixel-value-inside-a-circle
%# define points (in matrix coordinates)
%3"
cpx = 2050;
cpy = 2020;
inchlist = [12,10.5,9,7.5,6,4.5];
%draw lines on heel axis
for n=1:size(inchlist,2)
inch= inchlist(n)/4;
hcirc = drawcircle(‘Center’,[2050,2020],’Radius’,inch*590,’StripeColor’,’red’);
mask1 = hcirc.createMask;
maxval = (max(img(mask1)));
minval = (min(img(mask1)));
uniformity = maxval/minval
% p1 = [cpy-100,cpx+inch*590];
end
Even after getting this max and min value, I will need to remove 10 to get rid of noise. Extra credit if you can point me to a soluton for that too.
thank you We have an image that represents data that only makes sense when it is analyzed in numerical format.
Specifically, the data needs to be analyzed as a function of radius as shown below
Im interested specifically in hte max and min values within each of the defined areas with respect to the center point.
Ive been looking at a few examples online and it seems this should work when you pull the data from a mask.
However, the issue is that I seem to be getting values that are not realistic.
How to get pixel value inside a circle – MATLAB Answers – MATLAB Central (mathworks.com)
how to draw circle in an image? – MATLAB Answers – MATLAB Central (mathworks.com)
this is what I am doing
clear
img = double(imread(‘img121.jpg’));; %no filtration
img = -(0.0316*img) +8.3; % we did this as we cant calibeate the film, we scan the same film over and over and it changes by 80pixels
img = imrotate(img, 90);
img = imgaussfilt(img ,1.5);
figure, imagesc(img )
axis image
height2 = 3.6
caxis([0 height2])
colorbar
title(‘ ‘)
impixelinfo
%# make sure the image doesn't disappear if we plot something else
hold on
%below looks like what we want
%https://www.mathworks.com/matlabcentral/answers/1931825-how-to-get-pixel-value-inside-a-circle
%# define points (in matrix coordinates)
%3"
cpx = 2050;
cpy = 2020;
inchlist = [12,10.5,9,7.5,6,4.5];
%draw lines on heel axis
for n = 1:numel(inchlist)
inch = inchlist(n)/4;
hcirc = drawcircle('Center',[cpx,cpy],'Radius',inch*590,'StripeColor','red');
mask1 = hcirc.createMask;
maxval = max(img(mask1));
minval = min(img(mask1));
uniformity = maxval/minval
% p1 = [cpy-100,cpx+inch*590];
end
Even after getting this max and min value, I will need to remove 10 to get rid of noise. Extra credit if you can point me to a solution for that too.
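One possible way to deal with the noise, rather than subtracting a fixed offset, is to replace the raw min/max inside the mask with robust percentiles so that a handful of noisy pixels cannot dominate the statistic. This is a minimal sketch, assuming the Statistics and Machine Learning Toolbox's prctile is available and reusing img and mask1 from the code above; the 0.5/99.5 cut-offs are illustrative and should be tuned to your film scans:

```
vals = img(mask1);             % pixel values inside the circular mask
maxval = prctile(vals, 99.5);  % robust "max": ignores isolated hot noise pixels
minval = prctile(vals, 0.5);   % robust "min": ignores isolated dark noise pixels
uniformity = maxval/minval
```

If prctile is not available, sorting vals and indexing at round(0.005*numel(vals)) and round(0.995*numel(vals)) gives the same effect with base MATLAB only.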
thank you mask, statistics MATLAB Answers — New Questions
How to classify a folder of images?
I have a folder containing a set of images, including images of three denominations of currency. I need MATLAB code to determine the number of images of each denomination according to its features and colors. classify a folder of images MATLAB Answers — New Questions
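One simple color-based approach is to loop over the folder with an imageDatastore and bucket each image by its dominant hue. This is only a sketch under loud assumptions: the folder name 'currencyFolder' is hypothetical, the three denominations are assumed to have visibly distinct dominant colors, and the hue thresholds below are placeholders that must be tuned to the actual notes:

```
% Count images of three currency denominations by dominant color (sketch).
imds = imageDatastore('currencyFolder', 'FileExtensions', {'.jpg','.png'});
counts = zeros(1,3);                    % one counter per denomination
while hasdata(imds)
    img = read(imds);
    hsv = rgb2hsv(im2double(img));
    h = median(hsv(:,:,1), 'all');      % dominant hue of the note
    if h < 0.15                         % e.g. reddish notes (threshold is a guess)
        counts(1) = counts(1) + 1;
    elseif h < 0.45                     % e.g. greenish notes
        counts(2) = counts(2) + 1;
    else                                % e.g. bluish notes
        counts(3) = counts(3) + 1;
    end
end
disp(counts)
```

For denominations that differ mainly in printed features rather than color, a trained classifier (for example bag-of-features or a small CNN via trainNetwork) would be more reliable than hue thresholds.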
IT admin unable to approve an uploaded custom app to Teams
Hello everyone,
My team and I have developed a Copilot chatbot which we're ready to make available to our superusers (basically a group of testers) initially. Eventually we would like to roll out the app to the rest of our organization.
Here are the steps we followed:
1. We created a new app in Apps: Developer Portal (microsoft.com), pointing to the client ID within the chatbot configuration.
2. We validated and published the app by downloading the app package.
3. We uploaded the app package in Teams and submitted it for approval.
4. Our IT admin received our request, but when approving, nothing really happens; the app remains in a blocked status.
Additionally, he reports that he sees a pending action on the publish side:
We are clueless as to what's missing here, and we would like some guidance on the troubleshooting steps. We would like as few privileges as possible, especially since we are not ready to roll out the app to the whole org yet.
Best
Faten
Copilot a let down
Microsoft’s attempt at integrating Copilot into its Office suite has been nothing short of a letdown. What was touted as the next big thing in productivity tools has turned out to be a frustrating experience for many users. The promise was grand—Copilot was supposed to revolutionize how we work in Word, Excel, PowerPoint, and more, but the reality has been far from it.
Let’s start with the basics. Copilot struggles to execute even the simplest of prompts. Whether you’re trying to format a document in Word, generate data insights in Excel, or create a presentation in PowerPoint, Copilot often fails to deliver. It’s supposed to be an AI-powered assistant, yet it feels more like a sluggish tool that barely gets the job done. For something that’s supposed to save time and enhance productivity, Copilot ends up wasting more time as users grapple with its limitations.
In contrast, tools like ChatGPT are light years ahead. When you ask ChatGPT to help with a task, it understands context, executes commands efficiently, and delivers accurate results. Whether it’s generating text, helping with coding, or providing insights, ChatGPT has proven itself as a reliable assistant that can handle a wide array of tasks.
But Copilot? It can’t even handle a basic document format without hiccups. It’s as if Microsoft has launched a half-baked product, expecting users to tolerate its shortcomings while they work out the kinks. This isn’t the first time we’ve seen a tech giant overpromise and underdeliver, but it’s particularly disappointing coming from Microsoft, a company that has the resources and expertise to do better.
The worst part? Users are paying for this. Copilot isn’t a free add-on—it’s a feature that’s supposed to justify its cost with enhanced productivity. But when it can’t even perform fundamental tasks correctly, it feels more like a waste of money.
Microsoft, if you’re listening, it’s time to get your act together. Copilot needs significant improvements if it’s going to compete in the AI assistant space. Right now, it’s not even in the same league as ChatGPT. Users deserve better for the investment they’ve made.
What are your thoughts? Has anyone had a different experience, or do you agree that Copilot has been a massive disappointment? Let’s discuss.
Gatekeeper: Enforcing security policy on your Kubernetes clusters
Microsoft Defender for Containers secures Kubernetes clusters deployed in Azure, AWS, GCP, or on-premises using sensor data, audit logs and security events, control plane configuration information, and Azure Policy enforcement. In this blog, we’ll take a look at Azure Policy for Kubernetes and explore the Gatekeeper engine that is responsible for policy enforcement on the cluster.
Each Kubernetes environment is architected differently, but Azure Policy is enforced the same way across Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS) in AWS, Google Kubernetes Engine (GKE) in GCP, and on-premises or IaaS. Defender for Containers uses an open-source framework called Gatekeeper to deploy safeguards and enforcements at scale. We’ll get into what Gatekeeper is in a moment, but first, let’s orient ourselves with a simplified reference architecture for AKS.
Every Kubernetes environment has two main components, the control plane which provides the core Kubernetes services for orchestration and the nodes which house the infrastructure that runs the applications themselves. In Azure managed clusters, the control plane includes the following components:
An API server named kube-apiserver which exposes the Kubernetes API and acts as the front end for the control plane
A scheduler named kube-scheduler which assigns newly created pods to available nodes based on scheduling criteria such as resource requirements, affinity and anti-affinity, and so on
A controller manager named kube-controller-manager which responds to node health events and other tasks
A key-value store named etcd which backs all cluster data
A cloud controller manager, logically named cloud-controller-manager, that links the cluster into Azure (this is the primary difference between Kubernetes on-premises and any cloud-managed Kubernetes)
We look to the API server when we need to enforce and validate a policy. For example, let’s say we want to set limits on container CPU and memory usage. This is a good idea to protect against resource exhaustion attacks, and it’s a generally good practice to set resource limits on cloud compute anyways. This configuration comes from the container spec – lines 53-54 in this example YAML template:
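The template screenshot is not reproduced here; as a hedged illustration (the pod and image names are hypothetical, not the exact template from the post), the relevant part of a container spec looks like the following, where the resources.limits block is what the policy inspects:

```
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod              # hypothetical name
spec:
  containers:
  - name: demo
    image: nginx:1.25
    resources:
      limits: {}              # no cpu/memory limits specified -- this is the
                              # condition Defender for Cloud flags
```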
In this case, I didn’t specify any limit on CPU or memory usage for this container. Defender for Cloud will flag this as a recommendation that we can delegate, remediate, automate via a Logic App, or deny outright:
It’s not hard to imagine how Defender for Cloud can identify affected containers – it’s simply looking for quota values populated in the container spec. But Defender for Cloud is also giving us the option to enforce this recommendation by denying the deployment of any container with no specified resource limit. How does this work? To answer this, we need to dive into Gatekeeper.
Defender for Containers enforces Azure Policy through an add-on called Azure Policy for Kubernetes. This is deployed as an Arc-enabled Kubernetes extension in AWS, GCP, and on-premises environments and as a native AKS add-on in Azure. The add-on is powered by a Gatekeeper pod deployed into a single node in the cluster.
Gatekeeper is a widely deployed solution that allows us to decouple policy decisions from the Kubernetes API server. Our built-in and custom benchmark policies are translated into “CustomResourceDefinition” (CRD) policies that are executed by Gatekeeper’s policy engine. Kubernetes includes admission controllers that can view and/or modify authenticated, authorized requests to create, modify, and delete objects in the Kubernetes environment. There are dozens of admission controllers in the Kubernetes API server, but there are two that we specifically rely on for Gatekeeper enforcement. First, the MutatingAdmissionWebhook is a controller that calls mutating webhooks – in serial, one after another – to read and modify the pending request. Second, the ValidatingAdmissionWebhook controller goes into action during the final validation phase of the operation and calls validating webhooks in parallel to inspect the request. A validating webhook can reject the request which will deny creation, modification, or deletion of the resource. Because the validating controller is invoked after all object modifications are complete, we use validating admission webhooks to guarantee that we are inspecting the final state of an object.
Gatekeeper has several components called “operations” that can be deployed into one monolithic pod or as multiple individual pods in a service-oriented architecture. The Azure Policy add-on deploys Gatekeeper’s operations individually in three pods:
The audit process, which evaluates and reports policy violations on existing resources (this should always be run as a singleton pod to avoid contentions and prevent overburdening the API server)
The validating webhook, and
The mutating webhook.
You can see these pods in your cluster by filtering on the ‘gatekeeper.sh/system’ label:
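A hedged example of that filter (the -A flag searches all namespaces; the exact namespace layout can vary by environment):

```
# List the Gatekeeper pods deployed by the Azure Policy add-on
kubectl get pods -A -l gatekeeper.sh/system=yes
```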
Here we can see one gatekeeper-audit pod and two gatekeeper-controller pods. Note that the two webhook pods are not distinguished by function – we’ll encounter this later on when we view logs from the mutating admission controller. Running these operations in different pods allows for horizontal scaling on the webhooks and enables operational resilience among the three components.
In our earlier example, we wanted to deny the creation of any container that doesn’t have CPU and/or memory usage limits defined in its container spec. Defender for Containers will use Gatekeeper’s validating admission webhook to reject any misconfigured requests at the API server. But what if we wanted to take some other action – for instance, if we were rolling out a new policy and wanted to audit compliance rather than directly move into enforcement? Or what if we want to exempt certain namespaces or labels from a policy rule? For this, we will need to explore parameters and effects.
First, let’s find our policy definition in the Azure portal by navigating to Microsoft Defender for Cloud > Environment settings and opening the Security Policies in the settings for our Azure subscription. Our built-in policy definitions come from the default Microsoft Cloud Security Benchmark which contains 240 recommendations covering all Defender for Cloud workload protections. Filtering on a keyword will surface our policy definition:
Click the ellipses at the right of the definition to view the context menu. Select “Manage effect and parameters” to open a configuration panel with several options:
First, let’s talk about the policy effects. Sorted by their order of evaluation from first to last, we have:
Disabled – this will prevent rule evaluation throughout this subscription.
Deny – this will block creation of a new resource that fails the policy. (Note that it will not remove existing resources that have already been deployed.)
Audit – this will generate an alert but not block resource creation. Audit is evaluated after Deny to prevent double-logging of an undesired resource.
What about the additional parameters? Our policy rule allows us to set parameters such as the maximum allowed memory and CPU values, as well as exclude namespaces from monitoring, select labels for monitoring, and exclude images from all container policy inspection. This configuration block is critical for managing exemptions, such as containers that should be allowed to run as root or similar scenarios. Several Kubernetes namespaces (kube-system, gatekeeper-system, azure-arc, and others) are commonly excluded from these policy definitions by default.
If we inspect the policy itself, we will see its execution logic. Of particular interest is the “templateInfo” section in lines 178-181:
This invokes the URI for the CustomResourceDefinition (CRD), a YAML file that describes the schema of the constraint and specifies the actual constraint logic in the Rego declarative language. In our example, the CRD is located at
You might have noticed that our Azure policy effects of “audit” and “deny” map directly to the validating admission webhook, which can check resource create/modify/delete requests against our policy configuration. What about the other Gatekeeper component, the mutating admission webhook? Instead of simply rejecting creation of a container that is missing a resource usage quota, we could dynamically edit the API request to set our own limit and allow the container to spawn. Let’s check out another built-in Azure policy definition to see this one in action.
First, let’s take a look at the policy reference list from the AKS documentation. Search or scroll down to find a policy named “[Preview]: Sets Kubernetes cluster containers CPU limits to default values in case not present.” The documentation includes links to the Azure portal (login required) and the JSON source code for the definition in the Azure-Policy GitHub, currently at version 1.2.0-preview as of the date of this blog post. Let’s click into the Azure portal where we can view the policy definition and assign it to our Kubernetes cluster. Notice our available effects – instead of “Audit” and “Deny”, we now have “Mutate”:
The linked CRD (line 64) is a short one, assigning a limit of “500m” if not present:
(Direct link: https://store.policy.core.windows.net/kubernetes/mutate-resource-cpu-limits/v1/mutation.yaml)
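As a sketch of what such an Assign mutation looks like (field values here are illustrative; see the linked mutation.yaml for the actual definition used by the policy):

```
apiVersion: mutations.gatekeeper.sh/v1
kind: Assign
metadata:
  name: container-cpu-limit-default    # hypothetical name
spec:
  applyTo:
  - groups: [""]
    kinds: ["Pod"]
    versions: ["v1"]
  location: "spec.containers[name:*].resources.limits.cpu"
  parameters:
    assign:
      value: "500m"                    # default applied when no limit is set
    pathTests:
    - subPath: "spec.containers[name:*].resources.limits.cpu"
      condition: MustNotExist          # only mutate if the limit is absent
```

The pathTests condition is what makes the mutation additive: containers that already declare a CPU limit are left untouched.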
We can assign the policy to the tenant, subscription, or resource group(s) in our environment, set exclusions, and optionally configure resource selectors and overrides to customize the rollout of this policy. Once deployed, we will need to wait for up to 15 minutes for the Azure Policy add-on to pull changes to policy assignments. Once the new assignment is updated, the add-on will add the appropriate constraint template and constraints to the policy engine. On the same fifteen-minute timer, the add-on will execute a full scan of the cluster using the Audit operation.
Let’s connect to our Kubernetes cluster and run some commands to validate our new mutate-effect policy. First, we’ll need to set up kubeconfig by setting subscription context and saving credentials for our cluster. Follow the instructions in the documentation and check by running ‘kubectl cluster-info’ to validate that the shell is connected correctly:
View constraint templates downloaded by the Azure Policy add-on using ‘kubectl get assign’:
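The two steps above can be sketched as follows (the resource group and cluster names are hypothetical):

```
# Fetch cluster credentials into kubeconfig, confirm connectivity,
# then list the Assign mutation resources installed by the add-on
az aks get-credentials --resource-group myRG --name myCluster
kubectl cluster-info
kubectl get assign
```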
Now let’s spawn a container that will violate this policy to view the mutation in action. You can use any YAML template or the single-image application wizard in the Azure console. If you use the wizard, be sure to zero out the default limits in Application Details.
Since we’re using a mutation effect, the mutating admission webhook in Gatekeeper should insert default values for CPU and memory when it’s called by the admission controller before passing the object creation request back to the API server. The container should deploy without any interference from a Deny effect policy because the request was modified prior to the validating admission webhook being called. Sure enough, our deployment is successful!
Now let’s check the logs for the gatekeeper pod to view audit and mutation events. Note that the two gatekeeper-controller webhook pods are not differentiated in the console – check both pod names to find the one that is executing mutate actions in your cluster.
We can see the mutate event at the end of the log:
Copied in text form, it reads as follows.
{"level":"info","ts":1723829551.9305975,"logger":"mutation","msg":"Mutation applied","process":"mutation","Mutation Id":"a4155642-5417-48c9-a15a-e31040807e66","event_type":"mutation_applied","resource_group":"","resource_kind":"Pod","resource_api_version":"v1","resource_namespace":"default-1723829546418","resource_name":"web-dvwa-nolimit-8c9f967d4-","resource_source_type":"Original","resource_labels":{"app":"web-dvwa-nolimit","pod-template-hash":"8c9f967d4"},"iteration_0":"Assign//azurepolicy-k8sazurev1resourcelimitscpu-f81c1c050a0fb6b965bc:1"}
We can validate that our new container has a limit applied by inspecting the pod YAML:
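One way to do that from the shell (selecting by the app label seen in the mutation log above, rather than guessing the generated pod name):

```
# Inspect the pod spec and show the limits block the mutation inserted
kubectl get pod -l app=web-dvwa-nolimit -o yaml | grep -B1 -A3 "limits"
```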
There it is – the mutation applied a CPU limit before passing the request back to the API server, and the resource was created successfully!
For more reading on Gatekeeper and Azure Policy for Kubernetes, check out these resources:
https://learn.microsoft.com/en-us/azure/governance/policy/concepts/policy-for-kubernetes
https://learn.microsoft.com/en-us/azure/aks/use-azure-policy
https://learn.microsoft.com/en-us/azure/aks/policy-reference
https://open-policy-agent.github.io/gatekeeper/website/docs/
https://github.com/open-policy-agent/opa
Microsoft Tech Community – Latest Blogs
how to modify code for distributed delay
I have a code, which gives a solution of a delay logistic equation with discrete delay.
tau = 1;
tspan = [0 20];
y0 = 0.5;
sol = dde23(@ddefunc, tau, y0, tspan);
% Plot the solution
figure;
plot(sol.x, sol.y, 'LineWidth', 2);
xlabel('Time (days)');
ylabel('Population');
legend('y');
% Define the delay differential equation
function g = ddefunc(t, y, Z)
r = 1.5;
y_tau = Z;
g = r * y * (1 - y_tau);
end
Now I want to modify my code for distributed delay as attached below.
Can someone guide me how to deal with distributed delay?
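Since the distributed-delay formulation itself is in an attachment not included above, here is a hedged sketch of one standard approach: when the delay kernel is a gamma (Erlang) density, the "linear chain trick" converts the distributed-delay logistic equation into a system of ODEs that ode45 can solve directly. All parameter values below are illustrative, not taken from the attachment:

```
% Linear chain trick: y'(t) = r*y*(1 - z_p), where the chain variables satisfy
% z_1' = a*(y - z_1), z_k' = a*(z_{k-1} - z_k). Then z_p approximates the
% gamma-distributed delayed value of y, with mean delay p/a.
r = 1.5; a = 2; p = 2;             % illustrative; mean delay p/a = 1
rhs = @(t,u) [ r*u(1)*(1 - u(end));
               a*(u(1) - u(2));
               a*(u(2) - u(3)) ];  % p = 2 chain variables
u0 = [0.5; 0.5; 0.5];              % start the chain at the initial population
[t, u] = ode45(rhs, [0 20], u0);
figure; plot(t, u(:,1), 'LineWidth', 2);
xlabel('Time (days)'); ylabel('Population');
```

For a general (non-gamma) kernel you would instead discretize the convolution integral inside a dde23/ddesd right-hand side, but the linear chain trick is usually the simplest starting point.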
tau = 1;
tspan = [0 20];
y0 = 0.5;
sol = dde23(@ddefunc, tau, y0, tspan);
% Plot the solution
figure;
plot(sol.x, sol.y, ‘LineWidth’, 2);
xlabel(‘Time (days)’);
ylabel(‘Population’);
legend(‘y’);
% Define the delay differential equation
function g = ddefunc(t, y, Z)
r = 1.5;
y_tau = Z;
g = r * y * (1 – y_tau);
end
Now I want to modify my code for distributed delay as attached below.
Can someone guide me how to deal with distributed delay I have a code, which gives a solution of a delay logistic equation with discrete delay.
tau = 1;
tspan = [0 20];
y0 = 0.5;
sol = dde23(@ddefunc, tau, y0, tspan);
% Plot the solution
figure;
plot(sol.x, sol.y, ‘LineWidth’, 2);
xlabel(‘Time (days)’);
ylabel(‘Population’);
legend(‘y’);
% Define the delay differential equation
function g = ddefunc(t, y, Z)
r = 1.5;
y_tau = Z;
g = r * y * (1 – y_tau);
end
Now I want to modify my code for distributed delay as attached below.
distributed delay, delay differential equations, solve MATLAB Answers — New Questions