Month: February 2026
Using Simulink with two unrelated clocks (non-integer multiples of each other)
I am receiving the following error in Simulink, using Model Composer:
Illegal period. Ensure that this block’s sample period is an integer multiple of the Simulink system period as configured in the Settings tab of the Vitis Model Composer Hub block.
Error occurred during "Block Configuration".
F1 = 16.0362 MHz
F2 = 1.023 MHz
I need to simulate with two different clocks: F1 and F2.
The ratio of F1 to F2 is 15.6757.
There is no GCD between F1 and F2.
I am all out of ideas.
I would like to generate a signal on the 1.023 MHz clock and sample it on the 16.0362 MHz clock.
clocking, simulink MATLAB Answers — New Questions
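For what it’s worth, a common divisor does exist once both frequencies are written as integer hertz: 16.0362 MHz and 1.023 MHz share a GCD of 600 Hz, so their least common multiple gives a base rate of which both clocks are integer multiples. A minimal sketch of that arithmetic (plain Python, purely to illustrate how a candidate Simulink system period could be derived; the variable names are illustrative):

```python
from math import gcd

# The two clock frequencies, expressed in integer hertz
f1 = 16_036_200   # 16.0362 MHz
f2 = 1_023_000    # 1.023 MHz

g = gcd(f1, f2)               # common divisor of the two rates, in Hz
base_rate = f1 // g * f2      # least common multiple of the two rates
base_period = 1 / base_rate   # candidate system period, in seconds

# Both sample periods are integer multiples of the base period:
n1 = base_rate // f1          # base steps per F1 sample
n2 = base_rate // f2          # base steps per F2 sample
print(g, base_rate, n1, n2)   # 600 27341721000 1705 26727
```

Here the GCD is 600 Hz and the base rate is about 27.34 GHz (a base period of roughly 36.6 ps), with F1 ticking every 1,705 base steps and F2 every 26,727. Such a fine base period is usually the practical obstacle with near-irrational clock ratios; rounding one frequency slightly can make the ratio rational with a much coarser base period.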
MatLab C++ Shared Dll library initialization problem
I am using Matlab 2013a and Visual Studio 2013. I am trying to use MatLab compiled dll from a C++ console application. My simple Matlab test dll and console application is compiled for 64bit machines (I made sure they are 64bit via dumpbin). I also have correct version of MCR installed on my machine (even if it’s not necessary).
I setup my compiler using "mbuild -setup" command.
I compile my .m file using this command line: "mcc -v -W cpplib:mylib -T link:lib myFunc" successfully.
In my console application, I include these libraries: mylib.lib, mclmcrrt.lib, mclmcr.lib, libmx.lib, libmat.lib
and use the libraries in this path "C:Program FilesMATLABR2013aexternlibwin64microsoft"
When I debug my program, it successfully initializes MCR using this call: "mclInitializeApplication(NULL, 0)" but when I call
"mylibInitialize()" in order to initialize my library, program crashes. It doesn’t even throw an axception therefore I cannot handle it using
try/catch block. It gives unhandled exception and access vialoation error message.
These are sample debug output messages I got;
First-chance exception at 0x00007FFA22761F08 in MatlabTest.exe: Microsoft C++ exception: xercesc_2_7::NoSuchElementException at memory location 0x000000E263EF4E48.
First-chance exception at 0x00007FFA22761F08 in MatlabTest.exe: Microsoft C++ exception: xercesc_2_7::SchemaDateTimeException at memory location 0x000000E263EF4EB8.
First-chance exception at 0x00007FFA22761F08 in MatlabTest.exe: Microsoft C++ exception: xsd_binder::MalformedDocumentError at memory location 0x000000E263EF4E40.
First-chance exception at 0x00007FFA22761F08 in MatlabTest.exe: Microsoft C++ exception: xercesc_2_7::NoSuchElementException at memory location 0x000000E263EF4E48.
First-chance exception at 0x00007FFA22761F08 in MatlabTest.exe: Microsoft C++ exception: boost::thread_interrupted at memory location 0x000000E2643FF630.
First-chance exception at 0x00007FFA22761F08 in MatlabTest.exe: Microsoft C++ exception: xsd_binder::MalformedDocumentError at memory location 0x000000E263EFA640.
First-chance exception at 0x00007FFA22761F08 in MatlabTest.exe: Microsoft C++ exception: boost::thread_interrupted at memory location 0x000000E2642FFCD0.
First-chance exception at 0x00007FFA22761F08 in MatlabTest.exe: Microsoft C++ exception: std::runtime_error at memory location 0x000000E264EFF530.
First-chance exception at 0x00007FFA22761F08 in MatlabTest.exe: Microsoft C++ exception: CryptoPP::AES_PHM_Decryption::InvalidCiphertextOrKey at memory location 0x000000E264EFB0F0.
First-chance exception at 0x00007FFA22761F08 in MatlabTest.exe: Microsoft C++ exception: tfFailedException at memory location 0x000000E264EF4C10.
First-chance exception at 0x00007FFA22761F08 in MatlabTest.exe: Microsoft C++ exception: varflowFailedException at memory location 0x000000E264EF9410.
The thread 0x3550 has exited with code 0 (0x0).
‘MatlabTest.exe’ (Win32): Loaded ‘C:\Program Files\MATLAB\R2013a\bin\win64\hgbuiltins.dll’. Cannot find or open the PDB file.
First-chance exception at 0x0000000000B46E60 (m_interpreter.dll) in MatlabTest.exe: 0xC0000005: Access violation reading location 0x0000000064EF3B90.
Unhandled exception at 0x0000000000B46E60 (m_interpreter.dll) in MatlabTest.exe: 0xC0000005: Access violation reading location 0x0000000064EF3B90.
The program ‘[12952] MatlabTest.exe’ has exited with code 0 (0x0).
I installed the MCR on another machine and tried to run this console application. I got the same result.
I also tried Loren’s Vigenere example:
http://blogs.mathworks.com/loren/2011/02/03/creating-c-shared-libraries-and-dlls/#respond
It didn’t work either.
I couldn’t find any helpful answer on the MATLAB community or Stack Overflow.
Do you have any idea why this is happening?
Thanks
matlab compiler, mcc, c++, dll, crash MATLAB Answers — New Questions
Microsoft Previews userConfiguration Graph API
UserConfiguration API Manages Exchange Folder Associated Items
On January 28, 2026, Microsoft launched the preview (beta) of the UserConfiguration Graph API. A userConfiguration object is a Folder Associated Item (FAI), a hidden MAPI item stored in Exchange Server and Exchange Online folders. FAIs have been in Exchange for many years. In the Exchange protocol documentation, FAIs are defined as:
A collection of Message objects that are stored in a Folder object and are typically hidden from view by email applications. An FAI Message object is used to store a variety of settings and auxiliary data, including forms, views, calendar options, favorites, and category lists.
In this context, userConfiguration objects are used to store user settings for applications like the Calendar and OWA. This usage leads to the name persistent application settings, which is how the Exchange Web Services (EWS) documentation refers to FAIs.
You can access FAIs in mailboxes using utilities like MFCMAPI. Figure 1 shows an example of an FAI from the associated items table for the Calendar folder.

Eradication of EWS Requires Access to FAIs
Why is a Graph API for FAIs important? The answer lies in Microsoft’s campaign to eradicate EWS from Exchange Online by October 2026. Applications that use EWS to store their settings in mailboxes need a replacement API to continue reading and updating settings in FAIs.
A user mailbox setting Graph API already exists to get and update settings like work hours and timezone used by Outlook. However, the API offers incomplete coverage of all settings and is limited to Microsoft applications. The API is not much use to ISVs with applications that depend on FAIs to hold application settings. This is the target audience for the userConfiguration API.
Example – Retrieving FAI Settings Stored as Serialized XML
Access to configuration settings items requires the MailboxConfigItem.Read permission. Calendar work hours are stored as serialized XML. When retrieved, the Graph API returns the settings as a Base64-encoded string, which must be decoded. For example:
$Uri = ("https://graph.microsoft.com/beta/users/{0}/mailFolders/Calendar/userConfigurations/WorkHours" -f $UserId)
[array]$Data = Invoke-MgGraphRequest -URI $Uri -Method Get -OutputType PSObject
[string]$Base64Convert = [Text.Encoding]::Utf8.GetString([Convert]::FromBase64String($Data.xmldata))
The same result can be gained by running the Get-MgBetaUserMailFolderUserConfiguration cmdlet from the Microsoft Graph PowerShell SDK (I used version 2.34).
$Data = Get-MgBetaUserMailFolderUserConfiguration -MailFolderId Calendar -UserConfigurationId WorkHours -UserId $UserId
To extract the work hours settings, the information stored in the decoded XML must be parsed. Here’s some code to extract and populate a PowerShell object with the settings:
# Parse calendar settings from the XML data
[xml]$doc = $Base64Convert
$ns = New-Object System.Xml.XmlNamespaceManager($doc.NameTable)
$ns.AddNamespace('w','WorkingHours.xsd')
$root = $doc.SelectSingleNode('//w:WorkHoursVersion1',$ns)
$tz = $root.SelectSingleNode('w:TimeZone',$ns)
$tzName = $tz.SelectSingleNode('w:Name',$ns).InnerText
$tzBias = [int]$tz.SelectSingleNode('w:Bias',$ns).InnerText
$std = $tz.SelectSingleNode('w:Standard',$ns)
$stdBias = [int]$std.SelectSingleNode('w:Bias',$ns).InnerText
$stdChange = $std.SelectSingleNode('w:ChangeDate',$ns)
$stdChangeDate = $stdChange.SelectSingleNode('w:Date',$ns).InnerText
$stdChangeTime = $stdChange.SelectSingleNode('w:Time',$ns).InnerText
$stdChangeDayOfWeek = $stdChange.SelectSingleNode('w:DayOfWeek',$ns).InnerText
$dst = $tz.SelectSingleNode('w:DaylightSavings',$ns)
$dstBias = [int]$dst.SelectSingleNode('w:Bias',$ns).InnerText
$dstChange = $dst.SelectSingleNode('w:ChangeDate',$ns)
$dstChangeDate = $dstChange.SelectSingleNode('w:Date',$ns).InnerText
$dstChangeTime = $dstChange.SelectSingleNode('w:Time',$ns).InnerText
$dstChangeDayOfWeek = $dstChange.SelectSingleNode('w:DayOfWeek',$ns).InnerText
$timeslot = $root.SelectSingleNode('w:TimeSlot',$ns)
$start = $timeslot.SelectSingleNode('w:Start',$ns).InnerText
$end = $timeslot.SelectSingleNode('w:End',$ns).InnerText
$workdays = $root.SelectSingleNode('w:WorkDays',$ns).InnerText
$CalendarOptions = [PSCustomObject]@{
    TimeZoneName = $tzName
    TimeZoneBiasMinutes = $tzBias
    StandardBiasMinutes = $stdBias
    StandardChangeDate = $stdChangeDate
    StandardChangeTime = $stdChangeTime
    StandardChangeDayOfWeek = $stdChangeDayOfWeek
    DaylightBiasMinutes = $dstBias
    DaylightChangeDate = $dstChangeDate
    DaylightChangeTime = $dstChangeTime
    DaylightChangeDayOfWeek = $dstChangeDayOfWeek
    WorkStart = $start
    WorkEnd = $end
    WorkDays = $workdays
}
The output object looks something like this:
TimeZoneName : GMT Standard Time
TimeZoneBiasMinutes : 0
StandardBiasMinutes : 0
StandardChangeDate : 00/10/05
StandardChangeTime : 02:00:00
StandardChangeDayOfWeek : 0
DaylightBiasMinutes : -60
DaylightChangeDate : 00/03/05
DaylightChangeTime : 01:00:00
DaylightChangeDayOfWeek : 0
WorkStart : 09:00:00
WorkEnd : 18:00:00
WorkDays : Monday Tuesday Wednesday Thursday Friday
Example – Retrieving FAI Settings from Structured Data
An FAI that uses structured data to store its information uses key-value pairs. Calendar settings are a good example of the type. To retrieve the settings, run the request and extract the structuredData property:
$Uri = ("https://graph.microsoft.com/beta/users/{0}/mailFolders/Calendar/userConfigurations/Calendar" -f $UserId)
[array]$Data = Invoke-MgGraphRequest -URI $Uri -OutputType PsObject | Select-Object -ExpandProperty structuredData
Now it’s a matter of processing each key-value pair to extract the name of the setting and its value:
ForEach ($Setting in $Data) {
[string]$SettingName = $Setting.keyEntry.values
[string]$SettingValue = $Setting.valueEntry.values
Write-Host ("{0} : {1}" -f $SettingName, $SettingValue)
}
The output should be something like this:
AutomateProcessing : 1
MinimumDurationInMinutes : 0
piReminderUpgradeTime : 215747402
AllBookInPolicy : True
AllRequestOutOfPolicy : False
piAutoDeleteReceipts : False
AllowConflicts : False
AddAdditionalResponse : False
piShowWorkHourOnly : 1
AllowMultipleResources : True
piGroupCalendarShowDirectReports : True
ScheduleOnlyDuringWorkHours : False
EnforceSchedulingHorizon : True
piShowFreeItems : 0
AllRequestInPolicy : False
piRemindDefault : 15
calAssistNoiseReduction : True
piAutoProcess : True
ConflictPercentageAllowed : 0
AllowDistributionGroup : True
piGroupCalendarShowCoworkers : True
AllowRecurringMeetings : True
OLPrefsVersion : 1
piGroupCalendarShowMyDepartment : True
RemoveForwardedMeetingNotifications : True
EnforceCapacity : False
MaximumConflictInstances : 0
EnforceAdjacencyAsOverlap : False
MaximumDurationInMinutes : 1440
BookingWindowInDays : 180
No Immediate Use for Microsoft 365 Tenants
I doubt very much if a Microsoft 365 tenant administrator will find much use for the userConfiguration Graph API. However, it is a step forward in the process to eradicate EWS from Exchange Online, so it’s very welcome from that perspective, and it’s always nice to know what goes on behind the scenes…
So much change, all the time. It’s a challenge to stay abreast of all the updates Microsoft makes across the Microsoft 365 ecosystem. Subscribe to the Office 365 for IT Pros eBook to receive insights updated monthly into what happens within Microsoft 365, why it happens, and what new features and capabilities mean for your tenant.
Updates in two of our core priorities
Satya Nadella, Chairman and CEO, posted the below message to employees on Viva Engage this morning.
I am excited to share a couple updates in two of our core priorities: security and quality. Hayete Gallot is rejoining Microsoft as Executive Vice President, Security, reporting to me. I’ve also asked Charlie Bell to take on a new role focused on engineering quality, reporting to me.
Charlie and I have been planning this transition for some time, given his desire to move from being an org leader to being an IC engineer. And I love how energized he is to practice this craft here day in and day out!
Hayete joins us from Google where she was President, Customer Experience for Google Cloud. Before that, she spent more than 15 years at Microsoft with senior leadership roles across engineering and sales, playing multiple critical roles in building two of our biggest franchises – Windows and Office, leading our commercial solution areas’ go-to-market efforts. And she was instrumental in the design and implementation of our Security Solution Area. She brings an ethos that combines product building with value realization for customers, which is critical right now.
As we shared during our quarterly earnings last week, we have great momentum in security, including progress with Security Copilot agents, strong Purview adoption, and continued customer growth, and we will build on this.
We have a deep bench of talent and leaders across our security business, and this team will now report to Hayete. Additionally, Ales Holecek will take on a new role as Chief Architect for Security, reporting to Hayete. Ales has spent years leading architecture and development across some of our most important platforms and will help bring that same sensibility to security and its connections back to our existing scale businesses and the Agent Platform.
As we shared yesterday, we have a new operating rhythm with commercial cohorts, and Hayete and her team will now be accountable for our security product rhythms as part of this process.
Charlie built our Security, Compliance, Identity, and Management organization and helped rally the company behind the Secure Future Initiative. And we’re fortunate to have his continued focus and leadership on another one of our top priorities. With our Quality Excellence Initiative, we have increased accountability and accelerated progress against our engineering objectives to ensure we always deliver durable, high quality-experiences at global scale. And Charlie will partner closely with Scott Guthrie and Mala Anand on this work.
I’m excited to welcome Hayete back to Microsoft to advance this mission critical work, and grateful to Charlie for all he has done for our security business and what he will continue to do for the company.
Satya
The post Updates in two of our core priorities appeared first on The Official Microsoft Blog.
How to run a Windows-generated .slxp file on macOS?
I received a Simulink .slxp file (Protected Model) from an external source. It was originally generated on a Windows 64-bit (win64) system.
Is there any way to run or use this file on a macOS environment? Any advice would be appreciated.
simulink MATLAB Answers — New Questions
Using for loop with if condition
Respected Team,
I am looking for a solution for generating a data load as text.
My problem, put simply: we have a 14-floor building with 20 rooms on each floor. I have to write code so that once a person enters room 101 (1st floor, room 1), the corresponding values on the remaining floors (e.g., 201, 301, …, 1401) should be zero; and if the person enters room 415 (4th floor, room 15), the corresponding values in 115, 215, 315, 515, …, 1415 should be zero.
for loop, if condition MATLAB Answers — New Questions
Superimposing a MATLAB figure onto a Mathematica figure: not vectorized
A vectorized PDF plot is generated from a MATLAB figure. I import this MATLAB PDF figure into Mathematica and superimpose it onto a Mathematica figure. Finally, in Mathematica, I export the combined figure as a vectorized PDF plot. It turns out the MATLAB part of the figure is not a vectorized plot. There is no such issue when no MATLAB figure is involved. I’m using Mathematica 11 and 14.3 on Windows. How can I resolve this? Example with my code for MATLAB and Mathematica:
[X,Y] = meshgrid(-5:.5:5);
Z = Y.*sin(X) - X.*cos(Y);
s = surf(X,Y,Z,'FaceAlpha',0.5)
exportgraphics(gcf,'vectorfig.pdf','ContentType','vector')
Show[Plot[x, {x, 0, 1}],
Epilog -> {Inset[Import["vectorfig.pdf"][[1]], {0.5, 0.5}, Center, {Automatic, 0.4} ]}]
mathematica, vectorplot MATLAB Answers — New Questions
How can I extract the numerical array from a hypercube object?
Hi! I’ve been using the hyperspectral imaging library for the Image Processing Toolbox for some years now. In the beginning, when a variable of class hypercube was created, it stored the numerical array (i.e., the spectral image) in a struct-like way that you could access via dot indexing, for instance:
spectral_image = cube.DataCube;
However, this no longer seems to work. Instead, if I try this, I get a string rather than a numerical array. I’ve checked, and there is a function called gather which is supposed to be for this. I have also tried:
spectral_image = gather(cube);
But it is also not working. I get the following error, even though my spectral image fits more than comfortably in my RAM, so I am sure that is not the problem.
Dot indexing is not supported for variables of this type.
Error in hypercube/gather (line 1398)
if isa(obj.BlockedDataCube.Adapter,'images.blocked.InMemory')
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The problem is that I have spectral images stored in files as hypercube objects, saved some time ago with MATLAB R2023 or even earlier, but now I cannot directly access these numerical arrays, which previously fit perfectly in my RAM.
Any clue about how I could fix this?
Thanks in advance.Hi! I’ve been using the hyperspectral library for the image processing toolbox for some years now. In the beggining, when a variable of class hypercube was created, it was storing the nomerical array (i.e. the spectral image) in a struct way that you could acces via dot indexing, for instance:
spectral_image = cube.DataCube;
However this seems not to be working anymore. Instead if I try this, I get a string instead of a numerical array. I’ve cheked and there is a funcion called gather which is supposed to be for this. I have also tried:
spectral_image = gather(cube);
But it is also not working. I get the following error, but my spectral image fits more than enough in my RAM, so I am sure this is not a problem.
Dot indexing is not supported for variables of this type.
Error in hypercube/gather (line 1398)
if isa(obj.BlockedDataCube.Adapter,’images.blocked.InMemory’)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The problem is that I have spectral images stored in files in hypercube class that I saved some time ago with Matlab R2023 version or even earlier, but now I can not access directly to this numerical arrays that previously were perfectly fitting in my RAM.
Any clue about how could I fix this?
Thanks in advance. hyperspectral image library, image processing toolbox, hypercube object, hypercube MATLAB Answers — New Questions
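As a first diagnostic step, it may help to inspect what the DataCube property actually returns in the newer release. The snippet below is an untested sketch that assumes the string result points at the original data file, which the hypercube function can re-import:

```matlab
% Untested diagnostic sketch: find out what DataCube now holds.
c = cube.DataCube;
class(c)                    % numeric array, string, or adapter object?
if ischar(c) || isstring(c)
    % Assumption: the string is a path to the source data file, so
    % re-importing it yields a fresh in-memory hypercube.
    cube = hypercube(c);
    spectral_image = cube.DataCube;
end
```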
addRule throws “Do not use a rule keyword as a variable name.”
I get the error on line 44 of my code: fis = addRule(fis,all); I've tried changing the variable name (all) to other names, and renaming the fis. I've removed spaces from my variable names or renamed them completely.
% Create a Fuzzy Inference System (FIS) with the name "IFR"
fis = mamfis('Name', 'IFR');
% Add a variable "input" to the FIS
fis = addInput(fis, [0 200], 'Name', 'MAP_mm');
% Add membership functions for variable "input"
fis = addMF(fis, "MAP_mm", "trapmf", [0 0 55 75], 'Name', "Low");
fis = addMF(fis, "MAP_mm", "trapmf", [55 75 100 120], 'Name', "Normal");
fis = addMF(fis, "MAP_mm", "trapmf", [100 120 200 200], 'Name', "High");
% HOU trapezoidal
fis = addInput(fis, [0 200], 'Name', 'HOU_ml-hr');
% HOU Low trapezoidal
fis = addMF(fis, "HOU_ml-hr", "trapmf", [0 0 30 40], 'Name', "Low");
% HOU Normal trapezoidal
fis = addMF(fis, "HOU_ml-hr", "trapmf", [30 40 100 125], 'Name', "Normal");
% HOU High trapezoidal
fis = addMF(fis, "HOU_ml-hr", "trapmf", [100 125 200 200], 'Name', "High");
% Add output variable to the FIS
fis = addOutput(fis, [0 2000], 'Name', 'IFR_ml-hr');
% Add membership functions for output variable
fis = addMF(fis, "IFR_ml-hr", "trapmf", [0 0 60 100], 'Name', "Low");
fis = addMF(fis, "IFR_ml-hr", "trapmf", [0 100 200 400], 'Name', "Maintain");
fis = addMF(fis, "IFR_ml-hr", "trapmf", [200 400 600 800], 'Name', "Moderate");
fis = addMF(fis, "IFR_ml-hr", "trapmf", [600 800 1000 1500], 'Name', "High");
fis = addMF(fis, "IFR_ml-hr", "trapmf", [1000 1500 2000 2000], 'Name', "Very_High");
% Create Rules for the FIS
r1 = "If HOU_ml-hr == Low & MAP_mm == Low => IFR_ml-hr == Very_High";
r2 = "If HOU_ml-hr == Normal & MAP_mm == Low => IFR_ml-hr == High";
r3 = "If HOU_ml-hr == High & MAP_mm == Low => IFR_ml-hr == Moderate";
r4 = "If HOU_ml-hr == Low & MAP_mm == Normal => IFR_ml-hr == Moderate";
r5 = "If HOU_ml-hr == Low & MAP_mm == High => IFR_ml-hr == Low";
r6 = "If HOU_ml-hr == Normal & MAP_mm == Normal => IFR_ml-hr == Maintain";
r7 = "If HOU_ml-hr == Normal & MAP_mm == High => IFR_ml-hr == Low";
r8 = "If HOU_ml-hr == High & MAP_mm == Normal => IFR_ml-hr == Maintain";
r9 = "If HOU_ml-hr == High & MAP_mm == High => IFR_ml-hr == Low";
all = [r1 r2 r3 r4 r5 r6 r7 r8 r9];
% Add rules and enable rule viewing/debugging
fis = addRule(fis,all);
% Evaluate the FIS for inputs [MAP HOU]
sampleInput = [110 120; 60 25; 30 150; 180 90];
out = evalfis(fis, sampleInput);
% Plot membership functions
subplot(3,1,1)
plotmf(fis, 'input', 1)
subplot(3,1,2)
plotmf(fis, 'input', 2)
subplot(3,1,3)
plotmf(fis, 'output', 3)
addrule MATLAB Answers — New Questions
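One hypothesis worth testing (an assumption, not verified against this exact error): the hyphen in names such as HOU_ml-hr is not a valid identifier character inside rule text, so the rule parser misreads the tokens. A reduced sketch with hyphen-free names, which also avoids shadowing the built-in all function:

```matlab
% Minimal rule-parsing check with underscore-only variable names.
fis = mamfis('Name', 'IFR');
fis = addInput(fis, [0 200], 'Name', 'MAP_mm');
fis = addMF(fis, "MAP_mm", "trapmf", [0 0 55 75], 'Name', "Low");
fis = addInput(fis, [0 200], 'Name', 'HOU_ml_hr');     % underscore, not hyphen
fis = addMF(fis, "HOU_ml_hr", "trapmf", [0 0 30 40], 'Name', "Low");
fis = addOutput(fis, [0 2000], 'Name', 'IFR_ml_hr');
fis = addMF(fis, "IFR_ml_hr", "trapmf", [1000 1500 2000 2000], 'Name', "Very_High");
ruleList = "If HOU_ml_hr == Low & MAP_mm == Low => IFR_ml_hr == Very_High";
fis = addRule(fis, ruleList);                          % ruleList, not all
```

If this reduced system accepts the rule, the hyphenated names in the full script are the likely culprit.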
PAYG Services Like Purview DSI Can Rack Up Large Charges
DSI AI Processing and Compute Units Lead to Large Bills
Reading MVP Dino Caputo’s cautionary tale about Microsoft Security Copilot reminded me about a similar experience I had with the preview version of Purview Data Security Investigations (DSI). Dino reports how easy it is for a tenant to consume Security Compute Units (SCUs) to run up a bill of thousands of dollars. He makes the excellent point that enabling SCU consumption without the “right data sources, telemetry, and cost controls can result in significant unexpected spend with little or no usable output.”
I wish I had known this when I set out to test the preview version of DSI (reported here). As an experienced eDiscovery practitioner, I am comfortable with the user interface and the initial processing flow used to create an investigation and find items of interest for that investigation from Microsoft 365 data sources like mailboxes and sites.
DSI operates on a pay-as-you-go (PAYG) basis and charges for the Azure storage used to hold the items found by investigations and the AI processing to reason over and analyze the items to generate trends and insights that might be valuable to investigators. Paying for storage is straightforward – the more an investigation finds, the more Azure will charge to store the items. It’s good encouragement to refine searches to focus in on what’s really needed for an investigation instead of what might be needed.
Compute Units Drive DSI Costs
The problems arise with the compute units required for AI processing. In my mind, I didn't use any compute units because I didn't bring a DSI case through to vector processing and deep content analysis (I didn't feel that I had good enough test data for this purpose). But what I didn't realize is that DSI pre-provisioned compute units and charged for those units even if they were not used (this doesn't happen in the GA version). All of which led to a very unpleasant shock when I reviewed the charges applied to my Azure subscription some days later and discovered that I had incurred over a thousand dollars of charges for compute units (Figure 1).

I complained bitterly about the unfairness of charging for compute units that were never used. Fortunately, Microsoft took swift action to fix the problem. Indeed, the DSI engineering team had already figured out that charging customers for unused resources was not a good way to encourage use of their solution, and the generally available version of DSI does not pre-provision compute units when investigations start. You'll certainly pay for compute units if you use AI to process data found by an investigation, but only from the point where you start that processing.
The Case for More Transparent DSI Charging
Microsoft publishes reasonable guidelines to help investigators understand how to manage DSI costs. However, I have my doubts about how many investigators or tenant administrators will read the guidance. I guess that the temptation to plunge in and see what happens when AI processes investigation case data is just too high. In summary, there’s still too high a probability that DSI can generate unexpected heavy costs. Those costs might be absorbed without question on a corporate credit card, but that’s no reason to allow the situation to persist.
I had a chance to talk to Christopher Fiessinger, who is one of the presenters at the DSI Ask Me Anything event on Thursday, February 5 at 10AM PST, and suggested that the DSI user interface should be a lot more proactive about driving cost awareness. For example, when an investigator performs a search and reviews the search results, the UX should display the costs for processing the set of results found by the search. Each DSI case should display the total incurred cost of the investigative processing to date together with an indication of how much it will cost over the next month.
The necessary information to highlight charges is available and DSI includes a usage dashboard to show administrators details of costs incurred by investigations, but it would be more apparent if the information appeared on a per-case basis. Although Azure includes a cost management module (in preview) for PowerShell, getting charging data out isn’t as easy as you might think.
More PAYG, More Charges to Understand
Microsoft cannot stuff everything into Microsoft 365 E5. Solutions like DSI are of interest to a relatively small number of the overall Microsoft 365 base, and it makes sense that the solutions are available on a PAYG basis for those that need them. As with anything in life, it also makes sense for people to understand likely charges before doing something. After all, unexpected charges are a recipe for unhappiness.
Learn how to use Purview eDiscovery and other solutions and to exploit the data available to Microsoft 365 tenant administrators through the Office 365 for IT Pros eBook. We love figuring out how things work.
different in Poisson distribution test
I have a problem when I test for a Poisson distribution. I have the data: 0, 1, 2, 0, 3, 2, 1, 2, 1, 0, 1, 2, 3, 2, 1, 1, 0, 1, 2, 3, 1, 0, 1, 1, 2, 1, 2, 1, 0, 1, 2, 1, 3, 2, 0, 1, 1, 2, 1, 1. I want to test whether the data follow a Poisson distribution. I use three approaches:
>> dl=[0, 1, 2, 0, 3, 2, 1, 2, 1, 0, 1, 2, 3, 2, 1, 1, 0, 1, 2, 3, 1, 0, 1, 1, 2, 1, 2, 1, 0, 1, 2, 1, 3, 2, 0, 1, 1, 2, 1, 1];
>> x=unique(dl);
>> ts=histcounts(dl);
>> [h,p]=chi2gof(dl,'CDF',makedist('Poisson',mean(dl)))
the results are: h = 0 p = 0.9810
2. >> n=length(dl);
>> tslt=n*pdf(makedist('Poisson',mean(dl)),x);
>> [h,p]=chi2gof(x,'Ctrs', x, 'Frequency', ts, 'Expected',tslt, 'NParams', 1)
the results are: h = 0 p = 0.1019
3. >> [h,p]=kstest(dl','CDF', makedist('Poisson',mean(dl)))
h = 1 p = 7.8989e-08
The three approaches return three different results. Please help me explain why. poisson distribution, chi2gof, kstest MATLAB Answers — New Questions
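A few hedged observations that may explain the disagreement: the first chi2gof call lets MATLAB choose the bins and degrees of freedom, the second imposes manual bins and expected counts, and kstest is designed for continuous distributions, so applying it to discrete count data is generally invalid. Comparing observed and expected counts directly shows how much the binning matters:

```matlab
% Compare observed counts for the values 0..3 with Poisson expectations.
dl = [0 1 2 0 3 2 1 2 1 0 1 2 3 2 1 1 0 1 2 3 1 0 1 1 2 1 2 1 0 1 2 1 3 2 0 1 1 2 1 1];
lambda = mean(dl);                                % fitted Poisson rate
observed = histcounts(dl, -0.5:1:3.5);            % counts per value 0..3
expected = numel(dl) * poisspdf(0:3, lambda);     % expected counts
disp([observed; expected])
```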
Odd and even numbers
Hi! I'm new to MATLAB, so I need a little help to get started. How do I write a program that can distinguish odd from even numbers? I know that I need a loop or a conditional (if or while), but I would love some suggestions on how to solve this problem. Thanks 🙂 MATLAB Answers — New Questions
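Since this is a common beginner exercise, here is a minimal sketch using the mod function, which returns the remainder after division (an even number has remainder 0 when divided by 2):

```matlab
% Classify each element of a vector as odd or even using mod.
numbers = [3 8 15 22 7];
for n = numbers
    if mod(n, 2) == 0
        fprintf('%d is even\n', n);
    else
        fprintf('%d is odd\n', n);
    end
end
```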
power spectral density of nonstationary data
I am hoping those with seasoned signal-processing experience can assist me with two questions. The ultimate goal is to obtain a power spectral density of very nonstationary optical data sampled at 1 kHz (attached). There is a huge DC offset that persists even after detrending or subtracting the mean. For such nonstationary data, pwelch seems most appropriate, but a rather odd-looking series of diminishing, regularly spaced peaks appears across the frequency domain. Is the windowing wrong? Is this the wrong analysis to do?
First, here is the pwelch based code I ran:
x = time_series - mean(time_series);
fs = 1000;
[pxx_smooth, f] = pwelch(x, hann(length(x)/2), [], fs);
figure;
plot(f, 10*log10(pxx_smooth), 'LineWidth', 1.5);
If you run the code on the attached time series, you get no distinct peaks and a huge offset with a very narrow frequency domain. This makes no sense.
Second, despite supposedly not correct to do since this is non-stationary data, I ran an FFT:
fs = 1000;
%x = detrend(time_series);
x = time_series-mean(time_series);
figure;
L = length(x);
plot(fs/L*(-L/2:L/2-1),abs(fftshift(x)),"LineWidth",2)
With an FFT you get more defined peaks over a much larger frequency range (see attached jpg), but I am confused by the asymmetry when using fftshift, where the result should be symmetric about x = 0. Why the asymmetry?
I am convinced I am doing something fundamentally wrong with the processing or windowing and that the odd-looking output is not inherently due to the data itself. Any assistance in how to correctly obtain a PSD for this data, and what the most valid approach is for nonstationary data of this nature, would be greatly appreciated. psd MATLAB Answers — New Questions
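Two hedged observations about the posted code (assuming time_series is a real-valued vector): the sample rate was passed in pwelch's fourth (nfft) slot rather than the fifth, so the frequency axis comes back in normalized units, and the FFT plot applied fftshift to the raw time signal instead of its transform, which is why it looks asymmetric. A sketch of both corrections:

```matlab
fs = 1000;
x = detrend(time_series);                         % assumes a real vector
% pwelch signature: pwelch(x, window, noverlap, nfft, fs) - fs is fifth.
[pxx, f] = pwelch(x, hann(floor(numel(x)/8)), [], [], fs);
figure; plot(f, 10*log10(pxx), 'LineWidth', 1.5)

% Transform first, then shift; fftshift(x) alone just reorders samples.
L = numel(x);
X = fftshift(fft(x));
figure; plot(fs/L*(-L/2:L/2-1), abs(X), 'LineWidth', 2)
```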
I can't select the "sensor_msgs/Image" message type in the ROS2 Subscribe block
I am trying to read a ROS image message in Simulink.
The message type is "sensor_msgs/Image".
But I can't select "sensor_msgs/Image" as the message type in the ROS2 Subscribe block;
however, it can be selected in the ROS Subscribe block. ros2, simulink MATLAB Answers — New Questions
Microsoft Unified Tenant Configuration Management
UTCM Controls and Manages Microsoft 365 Tenant Configurations
In late January, Microsoft launched the preview of Unified Tenant Configuration Management (UTCM), a solution to track and manage configuration settings for a set of Microsoft 365 workloads. In effect, UTCM is a Microsoft-engineered version of Microsoft Desired State Configuration (DSC) using much the same principles but different technology. The current UTCM implementation is through a set of Microsoft Graph APIs accessed via the beta endpoint. Because the public interface of UTCM is the Graph, interactions can be via any interface that supports that Graph. Here, I use V2.34 of the Microsoft Graph PowerShell SDK.
The target for UTCM is configuration management for Microsoft 365 and associated workloads. Currently, the workloads supported by UTCM are Exchange Online, Entra, Purview, Teams, Intune, and Defender. Unsurprisingly given the weak state of Graph support for SharePoint Online management, control over SharePoint Online and OneDrive for Business configurations is not yet supported. It will be interesting to see how the solution develops in terms of scope, if Microsoft delivers a UX for UTCM (perhaps in the Purview portal), and licensing.
I don’t intend to go into UTCM in depth and don’t pretend to cover every detail here. Instead, this is a UTCM introduction for Microsoft 365 tenant administrators based on what I discovered when I set up UTCM for a tenant. To go further, you’ll need to spend some time reviewing the documentation and figuring out what resources need to be monitored in your tenant.
UTCM Basics
UTCM is built around a set of objects accessed through Graph APIs:
Resource types: UTCM supports over 300 workload resource types to monitor. A resource type is something like an Entra administrative unit (microsoft.entra.administrativeUnit), so monitoring that resource might mean checking for the creation of new administrative units and deletions or changes to existing administrative units. The full set of monitor resource types and their properties are defined in the monitor schema.
Snapshots are UTCM captures of defined parts (monitored entities) of the current tenant configuration. A snapshot is a point-in-time baseline that can be used to compute configuration drift, or the changes that have occurred in monitored entities since UTCM took the snapshot.
Configuration Monitor jobs run every six hours to compare monitored resources and snapshots (you cannot change this frequency now, but Microsoft says that you’ll be able to do so in the future).
Each time a monitor job runs, it creates a configuration drift record if the job detects any changes based on the monitor definition. UTCM keeps the drift records until an administrator reviews the change to figure out what to do (“resolution”). For example, resolved changes might be captured in a new baseline snapshot.
Drift records are kept for 30 days after resolution and then permanently deleted. Up to 30 monitor jobs can be active in a tenant. Collectively, UTCM can keep track of up to 800 resources daily (because monitor jobs run every six hours, each resource consumes four of this quota).
UTCM Configuration
Tenant configuration uses an Entra enterprise app to access information to monitor within a tenant. For now, each tenant must configure the UTCM app. When UTCM is generally available, it’s likely that this step will be automated. For now, to configure the app, you need to create a service principal and assign permissions. To create the service principal, run the New-MgServicePrincipal cmdlet:
New-MgServicePrincipal -AppId '03b07b79-c5bc-4b5e-9bfa-13acf4a99998'

DisplayName                             Id                                   AppId
-----------                             --                                   -----
Unified Tenant Configuration Management 64226bcd-ed40-4701-b92c-41e445ee2ac6 03b07b79-c5bc-4b5e-9bfa-13acf4a99998
Interestingly, the service principal created for the UTCM app holds the set of application permissions required for monitoring. Out of the box, the documentation uses Policy.Read.All and User.ReadWrite.All as examples. I don’t know why the full-blown User.ReadWrite.All permission is used instead of the more restricted User.ReadBasic.All. It doesn’t seem like an application that manages policy information for Microsoft 365 workloads should need to read the full set of properties for user accounts or update user account properties. Perhaps this is the kind of permission overreach by the UTCM developers that happens regularly in other apps. In any case, here’s how to add the required permissions:
[array]$AppPermissions = @('User.ReadWrite.All', 'Policy.Read.All')
# Get Graph app details
$GraphApp = Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'"
# Define the target
$TargetId = (Get-MgServicePrincipal -filter "displayname eq 'Unified Tenant Configuration Management'").id
# Loop through each permission and assign it to the target
ForEach ($Permission in $AppPermissions) {
$Role = $GraphApp.AppRoles | Where-Object {$_.Value -eq $Permission}
$AppRoleAssignment = @{}
$AppRoleAssignment.Add("PrincipalId",$TargetId)
$AppRoleAssignment.Add("ResourceId",$GraphApp.Id)
$AppRoleAssignment.Add("AppRoleId", $Role.Id)
# Assign Graph permission
Try {
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $TargetId -BodyParameter $AppRoleAssignment -ErrorAction Stop
} Catch {
Write-Host ("Unable to assign {0} permission to app" -f $Permission)
}
}
After creating the service principal and assigning the permissions, the result should be an enterprise app with the two configured application permissions as shown in Figure 1.

A critical point is that you must also assign the required permissions and/or roles to the UTCM service principal to allow UTCM to access information when its monitor jobs run. For example, to monitor Exchange Online resources, the service principal must be granted either the Global Reader (to read) or Exchange administrator (read-write) role. In addition, the UTCM app must hold the Exchange.ManageAsApp permission for the Office 365 Exchange Online app (see the code in this article).
Permissions to Work with UTCM
A new ConfigurationMonitoring.ReadWrite.All permission is required to work with the UTCM Graph APIs. With that permission, a signed in session can run Graph requests to interact with UTCM.
The documentation says that “any privileged role” is also required for interactive delegated access and points to the page listing Entra built-in roles. What I think Microsoft means is that the administrative role required by the signed-in user during an interactive session depends on what data is being accessed. Some trial and error might be needed to find exactly what’s required.
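For example, here is a sketch of starting an interactive Microsoft Graph PowerShell SDK session with the new permission (the scope name comes from the documentation; workload roles must still be assigned separately as described above):

```powershell
# Request the UTCM permission for this delegated session. A consent prompt
# appears the first time; an administrator must grant the scope.
Connect-MgGraph -Scopes 'ConfigurationMonitoring.ReadWrite.All'
# Confirm the scope is present in the session context.
Get-MgContext | Select-Object -ExpandProperty Scopes
```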
UTCM Snapshots
There’s no doubt that a Microsoft 365 tenant manages many different settings across workloads. The number and different types of settings makes it almost impossible for humans to be sure that no one makes a change to an important setting, like adjusting a retention policy. Let’s go through the steps of creating a simple monitor to check that no one messes with the Exchange Online CAS mailbox policy for enterprise mailboxes. First, create a snapshot of the current CAS mailbox plan:
$Uri = "https://graph.microsoft.com/beta/admin/configurationManagement/configurationSnapshots/createSnapshot"
$RequestBody = @"
{
"displayName": "CAS Mailbox Plan snapshot",
"description": "This is a snapshot created via the Graph API for CAS Mailbox Plans",
"resources": [
"microsoft.exchange.casMailboxPlan"
]
}
"@
$NewSnapShot = Invoke-MgGraphRequest -Uri $Uri -Method Post -Body $RequestBody
$NewSnapshot
Name Value
---- -----
@odata.context https://graph.microsoft.com/beta/$metadata#microsoft.graph.configurationSnapshotJob
completedDateTime 01/01/0001 00:00:00
resourceLocation
id 88349df9-52f7-4d47-91af-bd944e7af5a5
createdDateTime 30/01/2026 11:32:48
displayName CAS Mailbox Plan snapshot
createdBy {[user, System.Collections.Hashtable], [application, System.Collections.Hashtable]}
status notStarted
description This is a snapshot created via the Graph API for CAS Mailbox Plans
resources {microsoft.exchange.casMailboxPlan}
tenantId 22e90715-3da6-4a78-9ec6-b3282389492b
Creating a Configuration Monitor
To create a configuration monitor, create a request body containing details of the resource for UTCM to monitor and the properties to monitor. The request is then posted to the configurationMonitors endpoint:
$Uri = "https://graph.microsoft.com/beta/admin/configurationManagement/configurationMonitors"
$RequestBody = @"
{
"displayName": "Exchange Online monitor",
"description": "This is an Exchange Online UTCM monitor created via the Graph API. The monitor covers the CAS Mailbox Plan resource",
"baseline":
{
"displayName": "Exchange Online Monitor 1",
"description": "Exchange Online Monitor for CAS mailbox plan",
"resources": [
{
"displayName": "casMailboxPlan",
"resourceType": "microsoft.exchange.casMailboxPlan",
"properties": {
"Identity": "ExchangeOnlineEnterprise-3ba414b9-c3ea-4a9e-bad0-59e39e1173de",
"DisplayName": "ExchangeOnlineEnterprise-3ba414b9-c3ea-4a9e-bad0-59e39e1173de",
"ImapEnabled": false,
"OwaMailboxPolicy": "OwaMailboxPolicy-Default",
"PopEnabled": false,
"ActiveSyncEnabled": true,
"Ensure":"Present"
}
}
]
}
"@
$NewMonitor = Invoke-MgGraphRequest -Uri $Uri -Method POST -Body $RequestBody
The documentation includes an example showing how to monitor three Exchange Online resources (shared mailbox, mail contact, and accepted domain). In the case of Exchange, most of the monitorable properties are those that can be set using well-known cmdlets like Set-Mailbox and Set-CasMailbox. However, something like license assignments for shared mailboxes cannot be monitored.
To check the newly created monitor, fetch its details using the configurationMonitors API by filtering with the display name of the monitor (captured when created in the $NewMonitor variable):
$Uri = ("https://graph.microsoft.com/beta/admin/configurationManagement/configurationMonitors/?`$filter=displayName eq '{0}'" -f $NewMonitor.displayName)
Invoke-MgGraphRequest -Uri $Uri -Method Get
Find all Monitors
$uri = "https://graph.microsoft.com/beta/admin/configurationManagement/configurationMonitors/"
$Data = Invoke-MgGraphRequest -Uri $Uri -Method Get -OutputType PsObject | Select-Object -ExpandProperty Value
id : 04d76e5c-b565-41fb-9fe3-8a89d3de364b
displayName : Exchange Online monitor
description : This is an Exchange Online UTCM monitor created via the Graph API. The monitor covers the
CAS Mailbox Plan resource
tenantId : 22e90715-3da6-4a78-9ec6-b3282389492b
status : active
monitorRunFrequencyInHours : 6
mode : monitorOnly
createdDateTime : 30/01/2026 11:22:46
lastModifiedDateTime : 30/01/2026 11:22:46
runAsUTCMServicePrincipal : True
inactivationReason :
createdBy : @{user=; application=}
runningOnBehalfOf : @{user=; application=}
lastModifiedBy : @{user=; application=}
parameters :
Checking Monitoring Jobs
The monitor status is active, so it’s in the queue for UTCM to run on its fixed six-hourly schedule. After at least six hours have elapsed, we can check the monitoring results to see if any job reports drift against the baseline snapshot. Here’s how to select the latest job record that reports some drift:
$Uri = ("https://graph.microsoft.com/beta/admin/configurationManagement/configurationMonitoringResults/?`$filter=MonitorId eq '{0}' and driftsCount gt 0" -f $NewMonitor.Id)
[array]$MonitorJobs = Invoke-MgGraphRequest -Uri $Uri -Method Get -OutputType PsObject | Select-Object -ExpandProperty Value | Sort-Object {$_.runCompletionDateTime -as [datetime]} -Descending
$MonitorJob = $MonitorJobs[0]
id : a62eb467-165b-4e04-adfd-840e428b951d
monitorId : 04d76e5c-b565-41fb-9fe3-8a89d3de364b
tenantId : 22e90715-3da6-4a78-9ec6-b3282389492b
runInitiationDateTime : 02/02/2026 12:00:55
runCompletionDateTime : 02/02/2026 12:01:05
runStatus : successful
driftsCount : 1
driftsFixed : 0
runType : monitor
After confirming that some drift in the monitored properties exists, retrieve the drift records found by the configuration monitor by querying the configurationDrifts endpoint:
$Uri = ("https://graph.microsoft.com/beta/admin/configurationManagement/configurationDrifts/?`$filter=monitorId eq '{0}'" -f $MonitorJob.MonitorId)
[array]$DriftRecords = Invoke-MgGraphRequest -Uri $uri -Method GET -OutputType PSObject | Select-Object -ExpandProperty Value
$DriftRecords[0]
id : c8ff1df8-0f15-4738-b4c0-70db6a02a602
monitorId : 04d76e5c-b565-41fb-9fe3-8a89d3de364b
tenantId : 22e90715-3da6-4a78-9ec6-b3282389492b
resourceType : microsoft.exchange.casMailboxPlan
baselineResourceDisplayName : casMailboxPlan
firstReportedDateTime : 30/01/2026 12:01:05
status : active
resourceInstanceIdentifier : @{Identity=ExchangeOnlineEnterprise-3ba414b9-c3ea-4a9e-bad0-59e39e1173de}
driftedProperties : {@{propertyName=PopEnabled; currentValue=True; desiredValue=False},@{propertyName=DisplayName; currentValue=ExchangeOnlineEnterprise; desiredValue=ExchangeOnlineEnterprise-3ba414b9-c3ea-4a9e-bad0-59e39e1173de},
@{propertyName=ImapEnabled; currentValue=True; desiredValue=False}}
The information about the changes made to monitored properties is in the DriftedProperties property, so here’s what we see. Someone has changed the value for the PopEnabled and ImapEnabled settings from False to True:
$DriftRecords[0].DriftedProperties

propertyName currentValue             desiredValue
------------ ------------             ------------
PopEnabled   True                     False
DisplayName  ExchangeOnlineEnterprise ExchangeOnlineEnterprise-3ba414b9-c3ea-4a9e-bad0-59e39e1173de
ImapEnabled  True                     False
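When a monitor covers multiple resources, it can be handy to flatten every drifted property into a single report. This sketch uses only the objects already retrieved above ($DriftRecords) and the property names visible in the output:

```powershell
# Build one row per drifted property across all drift records
$Report = foreach ($Drift in $DriftRecords) {
    foreach ($Property in $Drift.driftedProperties) {
        [PSCustomObject]@{
            Resource     = $Drift.baselineResourceDisplayName
            Property     = $Property.propertyName
            CurrentValue = $Property.currentValue
            DesiredValue = $Property.desiredValue
            FirstSeen    = $Drift.firstReportedDateTime
        }
    }
}
$Report | Sort-Object Resource, Property | Format-Table -AutoSize
```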
Notice that there’s no mention of who made the change. To find that information, you’ll probably have to review audit records around the time when the change happened. You can narrow the search down to a six-hour window: the change must have occurred between the first monitoring job to report the drift and the job that ran before it.
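As a sketch of that audit search, assuming an Exchange Online session (Connect-ExchangeOnline) and that the drifted settings were changed with Set-CASMailboxPlan (adjust the -Operations value for other resources), something like this narrows the hunt to the six-hour window before the drift was first reported:

```powershell
# Hypothetical audit search for the six-hour window before the drift appeared
$End   = Get-Date $DriftRecords[0].firstReportedDateTime
$Start = $End.AddHours(-6)
[array]$AuditRecords = Search-UnifiedAuditLog -StartDate $Start -EndDate $End `
    -Operations 'Set-CASMailboxPlan' -ResultSize 100 -Formatted
$AuditRecords | Select-Object CreationDate, UserIds, Operations
```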
There’s no API available to resolve a drift. At this point, the appropriate method appears to be for administrators to decide whether the change made to a configuration is acceptable. If it is, they should create a new snapshot, which removes all drift records and monitoring results because those objects no longer apply. If the change is unacceptable, administrators can reverse it. Once the configuration matches the baseline again, UTCM stops creating drift records.
In the future, Microsoft will deliver a more sophisticated mechanism to resolve configuration drifts, but that’s what we have for now.
A Welcome Start
UTCM is an incomplete solution, but there’s enough structure and fundamentals in place to imagine the potential of the software. I think UTCM will be popular with Microsoft 365 tenants, subject of course to licensing and cost. Those wanting to move from DSC should read the Microsoft365DSC blog, where the topic is already being discussed.
If you want to look outside Microsoft, there are ISV products available, like CoreView Configuration Manager for Microsoft 365. The nice thing about the arrival of UTCM is that anyone working in this space needs to raise their game, and that’s goodness.
Make sure that you’re not surprised about changes that appear inside Microsoft 365 applications by subscribing to the Office 365 for IT Pros eBook. Our monthly updates make sure that our subscribers stay informed.
How to install symbolic toolbox if I have a student version of matlab
Hi! I have a student version of Matlab, and I need to use symbolic toolbox, but I don’t know how to go about installing it. I know that it was included, but I am not sure how to get to it. Very basic step by step instructions would be appreciated. symbolic, toolbox, installation, download MATLAB Answers — New Questions
The uploaded file was not imported because it is missing timestamps.
Have uploaded previously with same file format without any problems – most recent upload (2 Feb 26) fails with message
The uploaded file was not imported because it is missing timestamps.
It seems to conform to the template, first few lines attached (which fail to load) but I have 5 months data to upload in same format. Have tried with all field names and ,,,,,,,, appended, but same result
created_at,field1,field2,field3,field4
2025-09-01T23:55:37+00:00,6.365,-5.895,0,304.96
2025-09-02T23:55:38+00:00,6.355,-5.898,11.51,316.47
2025-09-03T23:55:41+00:00,6.34,-5.903,6.81,323.28
upload, timestamp MATLAB Answers — New Questions
How do I choose the poles and zeros in order to calculate my TF from input & output data?
Hello :), I have a set of data exported from my simulator, which consists of Thrust getting in the system (T_in) and some Thrust output after some conversions (T_out) for two different types of propellers, HTU and DEP. I use iddata to package the output, input and sample data.
Then with that data I use tfest to estimate the continuous-time transfer function sys.
My problem is that I don’t know how to define the number of poles (np) and zeros (nz). Any suggestions?
Thanks in advance!
% Bode Diagram analysis
T_in_HTU = load("T_in_HTU.mat");
T_in_DEP = load("T_in_DEP.mat");
T_out_HTU = load("T_out_HTU.mat");
T_out_DEP = load("T_out_DEP.mat");
T_in_HTU = T_in_HTU.T_in_HTU;
T_in_DEP = T_in_DEP.T_in_DEP;
T_out_HTU = T_out_HTU.T_out_HTU;
T_out_DEP = T_out_DEP.T_out_DEP;
data_HTU = iddata(T_in_HTU,T_out_HTU,1/500);
data_DEP = iddata(T_in_DEP,T_out_DEP,1/500);
sys_HTU = tfest(data_HTU,np,nz,'Ts',1/500);
sys_DEP = tfest(data_DEP,np,nz,'Ts',1/500);
% Generate Bode plots for the identified transfer functions
figure;
bode(sys_HTU);
grid on;
title('Bode Diagram for HTU System');
figure;
bode(sys_DEP);
grid on;
title('Bode Diagram for DEP System');
figure;
bode(sys_HTU,sys_DEP);
grid on;
title('Bode Diagram for DEP System');
tfest, transfer function, zeros and poles MATLAB Answers — New Questions
PWM on PE5 / PE6 causes XCP error on NUCLEO-H753ZI in Simulink (STM32 support package)
Hi everyone,
I am migrating a Simulink model from NUCLEO-F767ZI to NUCLEO-H753ZI using the Simulink Support Package for STM32.
Most peripherals work correctly on the H753ZI:
>I2C
>UDP / Ethernet
>PWM on other pins
However, I am facing an issue specifically with PWM outputs on pins PE_5 and PE_6.
>>Setup details:
Board: NUCLEO-H753ZI
MCU: STM32H753 (single-core Cortex-M7)
Toolchain: Simulink + STM32 support package
PWM pins: PE_5 and PE_6
These pins are mapped to TIM15
PWM configured as PWM Generation CH1 (not CH1N)
>>Observed behavior:
When PWM blocks for PE_5 and PE_6 are disabled, the model builds, downloads, and runs normally.
When PWM on PE_5 and/or PE_6 is enabled, the model fails to run:
An XCP error is reported immediately
The application does not start execution on the target
PWM on other pins works correctly.
The same Simulink model worked without issues on the F767ZI.
This appears to be specific to the STM32H7.
Any insights or recommendations would be greatly appreciated.
Thanks in advance!
stm32h7, simulink MATLAB Answers — New Questions
February 2026 Update for Office 365 for IT Pros
Update #128 Available for Download

The Office 365 for IT Pros eBook team is delighted to announce the availability of the February 2026 update for Office 365 for IT Pros (2026 edition). This is monthly update #128. An update (#20.2) has already been issued for the Automating Microsoft 365 with PowerShell eBook, which is available both separately and as part of the Office 365 for IT Pros eBook bundle.
Current subscribers can download the updated PDF and EPUB files through their Gumroad.com account or using the link in the receipt emailed after purchase. The link always accesses the latest book files. Further details of how to access book updates are available in our FAQ. Details of the changes in update #128 are in our change log.
The Copilot Conundrum
Last month we discussed some of the options we’re considering when it comes to covering Microsoft 365 Copilot, Agent 365, and the various types of agents that Microsoft has put such a bet on. In Microsoft’s FY26 Q2 results, we learned that Microsoft 365 Copilot has just 15 million paid seats, or approximately 3.33% of the “over 450 million” paid Microsoft 365 commercial seats.
At list prices, those 15 million seats represent $5.4 billion revenues, but given the usual discounts available to the large enterprises (like Accenture) that are the early adopters for Microsoft 365 Copilot, it’s very unlikely that the actual income comes close to the list price. Either way, Copilot income is a long way from offsetting Microsoft’s capital expenditure on datacenters to host AI services.
Anyway, we’ll continue to cover the management of Copilot within Microsoft 365 tenants at a level that makes sense to us. We won’t go overboard, but we won’t leave our readers without the knowledge to deploy and maintain Copilot.
Unified Tenant Configuration Management
Another recent development is the preview of Unified Tenant Configuration Management (UTCM). This is a solution similar to Desired State Configuration (DSC) for Windows, built with different technology, that manages many (but not all) of the essential configuration settings found in Microsoft 365 tenants, like Entra ID, Exchange Online, Defender, and Teams.
As a preview, UTCM is an incomplete solution. It consists of a set of Microsoft Graph APIs, has no UX, and some functionality is missing. UTCM also doesn’t deal with SharePoint Online and OneDrive for Business at all, an important gap that’s partially explained by the lack of Graph support in these workloads.
I think UTCM will be popular with Microsoft 365 enterprise tenants and with Cloud Service Providers that manage multiple smaller tenants. There are thousands of settings spread across the Microsoft 365 workloads, and keeping track of changes made to the more critical settings is very difficult. UTCM takes on this challenge by monitoring changes against baseline snapshots to report configuration drift (variations against the baseline). It will be interesting to see how this solution evolves, and it’s definitely something that we will keep a close eye on over the coming months.
On to Update #129
The nice thing about working on a book that’s republished monthly is that we can always respond to the changing world of technology. We might need some time to think things through and make up our minds about the worth of a new development, but we’ll incorporate important news and information in a monthly update when we consider it appropriate.
On we go to update #129, due on March 1. Microsoft doesn’t stop issuing updates for Microsoft 365, and we don’t stop analyzing, questioning, and reporting on what they do.