Month: June 2025
Undefined function handle error when loading a neural network that contains a function layer
Hi everyone,
I created a neural network that uses a functionLayer that I defined by:
functionLayer(@Feature_wise_LM,Name="FiLM_64",Formattable=1,NumInputs=2,NumOutputs=1)
However, when I save the trained neural network with:
save("./trained_networks/FiLM", "trained_network");
and reload it in the same script with:
trained_network = load("./trained_networks/FiLM.mat");
trained_network = trained_network.trained_network;
I get the error:
Warning: While loading an object of class ‘dlnetwork’:
Error using nnet.internal.cnn.layer.GraphExecutor/propagate (line 354)
Execution failed during layer(s) ‘FiLM_64’.
Error in deep.internal.network.ExecutableNetwork/configureForInputsAndForwardOnLayer (line 347)
propagate(this, fcn, Xs, outputLayerIdx, outputLayerPortIdx);
Error in deep.internal.network.EditableNetwork/convertToDlnetwork (line 101)
[executableNetwork, layerOutputSizes] = configureForInputsAndForwardOnLayer(…
Error in dlnetwork.loadobj (line 741)
net = convertToDlnetwork(privateNet, exampleInputs, initializeNetworkWeights);
Caused by:
Undefined function handle.
Error in nnet.cnn.layer.FunctionLayer/predict (line 61)
[varargout{1:layer.NumOutputs}] = layer.PredictFcn(varargin{:});
I tried saving the function in a dedicated file "Feature_wise_LM.m", but it didn't work.
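For reference, this is roughly what the function-file approach looks like; the function body below is only a placeholder, since the actual contents of Feature_wise_LM are not shown in the question:
% Feature_wise_LM.m -- placeholder body; the real implementation is not shown here.
% What matters is that this file is on the MATLAB path both when the network is
% saved and when it is loaded, because the functionLayer stores only the handle
% @Feature_wise_LM, not the function's code.
function Z = Feature_wise_LM(X, conditioning)
    Z = X + conditioning; % placeholder operation only
end
% The layer is then created with a handle to the named function:
layer = functionLayer(@Feature_wise_LM, Name="FiLM_64", Formattable=1, NumInputs=2, NumOutputs=1);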
deep learning, neural networks, function, matlab function MATLAB Answers — New Questions
How to process in real-time data from a SoC Device using SoC Blockset?
Dear all,
I am using SoC Blockset for a simple receiver design on AMD Zynq Ultrascale+ ZCU111 evaluation board.
Now that my design is ready, built, and deployed, I have an external mode model open and I am wondering how I can send data from the ZCU111 board's processor to the host computer and operate on it in real time.
I have already tried using the Rate Transition block, UDP Write block, and IO Data Sink block in the external mode model to save the data to the host; what I received were files, photos of which are attached.
These are binary files whose content can be loaded into the workspace using the fopen() and fread() functions. This works, but it would be better if I could receive the data and work with it on the development PC in real time (e.g., using a different model or some script). Is there a way? Am I using the correct logic for the external mode model (sending the data to the development PC using UDP)?
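For context, this is the kind of snippet used to load one of those captured binary files into the workspace; the file name and the int16 data type here are only assumptions, the actual type depends on what the IO Data Sink block wrote:
% Hypothetical example of reading one captured file back into MATLAB
fid = fopen('dataFile.bin', 'r');      % 'dataFile.bin' is an assumed name
rawData = fread(fid, Inf, 'int16');    % read all samples, assumed 16-bit integers
fclose(fid);
plot(rawData)                          % quick sanity check of the capture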
Thank you!
udp, soc blockset, fpga, zcu111, external mode, .csv MATLAB Answers — New Questions
Copilot Agent Governance Product Launched by ISV
Microsoft Leaves Gaps in Technologies for ISVs to Fill – Like Agent Governance
Every time Microsoft makes a big move, ISVs seek to take advantage with a new product. It’s the way of the world. Microsoft creates technology and ISVs fill the holes left in that technology. In some respects, the cloud is a difficult place for ISVs. There’s less to tweak than in an on-premises environment, and although the Graph APIs have extended their coverage to more areas of Microsoft 365 over the last few years, significant gaps still exist for major workloads like Exchange Online and SharePoint Online.
But a new technology creates a new opportunity because everything starts from scratch. Microsoft’s big move into artificial intelligence with Copilot hasn’t created too many opportunities because Copilot depends on a massive infrastructure operated by Microsoft that’s inaccessible except through applications like BizChat. Agents are different. They’re objects that need to be managed. They consume resources that need to be paid for. They represent potential security and compliance problems that require mitigation. In short, agents represent a chance for ISVs to build products to solve customer problems as Microsoft heads full tilt to its agentic future.
Building an Infrastructure for Agent Governance
To be fair to Microsoft, they’ve started to build an infrastructure for agent management. Apart from a whitepaper about managing and governing agents, the first concrete sign is the introduction of agent objects in Entra ID. Microsoft is thinking about how agents can work together, and how that communication can be controlled and monitored. That’s all great stuff and it will deliver benefits in the future, but the immediate risk is the fear that agents might run amok inside Microsoft 365 tenants.
Microsoft reports that there are 56 million monthly active users of Power Platform, or 13% of the 430 million paid Microsoft 365 seats. That’s a lot of citizen developers who could create agents using tools like Copilot Studio. Unless tenant administrators disable ad-hoc email subscriptions for the tenant, developers could be building agents without anyone’s knowledge.
Don’t get me wrong. I see great advantages in agent technology and have even built agents myself, notably a very useful agent to interact with the Office 365 for IT Pros eBook. One thing that we’ve learned over the last 30 years is that when users are allowed to create, they will. And they’ll create objects without thought, and those objects will eventually need to be cleaned up, as Microsoft discovered when the mass of SharePoint Online sites created for Teams became a real problem for Microsoft 365 Copilot deployments. Incorporating solid management and governance from the start is of great benefit for new technologies.
Rencore Steps Up with Copilot Agent Governance
All of which brings me to Rencore’s announcement of two new modules for their governance product to deal with Copilot and agent governance and Power Platform governance (Figure 1). Matthias Einig, Rencore’s CEO, has been forceful about the need to take control of these areas and it’s good to see that he’s investing in product development to help Microsoft 365 tenants take control before agents get any chance to become a problem.

I have not used the Rencore product and do not endorse it. I just think that it’s great to see an ISV move into this area with purpose and intent. It seems like Rencore aims to address some major pain points, like shadow IT, the cost of running Copilot agents, over-sharing, and “agent sprawl.” All good stuff.
I’m sure other ISVs will enter this space (and there might be some active in the area already that I don’t know of). This will be an interesting area to track as ISVs seek new ways to mitigate the potential risks posed by agents.
No Time to Relax
A product from one ISV does not mean that we can all relax and conclude that agent management is done. It’s not. The continuing huge investment by Microsoft in this space means that agent capabilities will improve and grow over time. Each improvement and new feature has the potential to affect governance and compliance strategies. Don’t let your guard down and make sure that your tenant has agents under control. And keep them that way.
Support the work of the Office 365 for IT Pros team by subscribing to the Office 365 for IT Pros eBook. Your support pays for the time we need to track, analyze, and document the changing world of Microsoft 365 and Office 365.
How to simulate a rotor inter-turn fault in a synchronous generator using a MATLAB program
I need a program format for simulating a rotor inter-turn fault in a synchronous generator.
rotor inter-turn fault simulation. MATLAB Answers — New Questions
Plot browser in R2025a?
In R2025a I can no longer find the "Plot Browser" menu. I used the hide/show options for lines a lot.
Has this functionality disappeared?
The Property Inspector is not really helpful if many lines are present in the plot.
plotedit MATLAB Answers — New Questions
Accessing simParameters.Carrier inside MATLAB function block in Simulink
I am trying to implement a PDSCH transceiver in Simulink using Simulink blocks and MATLAB Function blocks.
I have already implemented this in MATLAB code and now want to implement it in Simulink, but I am running into problems using simParameters.Carrier in a Simulink MATLAB Function block.
I used the variable carrier = simParameters.Carrier in my MATLAB code. Now I want to access it when I perform the OFDM modulation; I am using a MATLAB Function block in which I write:
function txWaveform = OFDMModulator(pdschGrid, carrier)
txWaveform = nrOFDMModulate(carrier, pdschGrid);
end
But I don't know how to access this carrier variable (which holds the complete configuration) inside the MATLAB Function block.
Any suggestions on how I can make the data stored in this struct variable accessible inside the Simulink MATLAB Function block? Thank you.
#ofdm, #simulink, #modelworkspace, #workspace, #ofdmmodulation, data import MATLAB Answers — New Questions
install_unix_legacy not found during installation on MacOS
I'm trying to package the MATLAB R2025a macOS (Intel processor) version for silent install. I ran the downloader and got the files using the download-without-installing option; however, when I ran the install script within the downloaded files, I got the following error:
/bin/maca64/install_unix_legacy: cannot execute: No such file or directory
When I checked the downloaded files, install_unix_legacy is indeed missing. Is there any special process for installing this version?
installation, mac, matlab MATLAB Answers — New Questions
Why do my Simulink PV module simulation characteristics appear this way?
I have tried to simulate a PV module in Simulink for a change in radiation and I am unable to get the characteristics. The P-V and I-V characteristics I am expecting are shown here: http://ecee.colorado.edu/~ecen2060/materials/simulink/PV/PV_module_model.pdf. I used the Repeating Sequence block for my voltage source and the Repeating Sequence Stair block for my insolation. I get incomplete characteristics such as the figures below.
<</matlabcentral/answers/uploaded_files/192/P-V%20and%20I-V%20characteristicspage-0.jpg>>
What could be the problem? I have changed the time values and output values for the repeating sequence source, and the simulation time, but without success. Does anyone have an explanation?
pv module, simulink, simulation. MATLAB Answers — New Questions
looking for strictly recurrent and fast moving median implementation
I am looking for any suitable trick to efficiently compute a moving-window (length w) median. I know the movmedian function, of course, but I need a strictly recurrent native MATLAB function that works sample by sample.
My naive solution, which is equivalent to the
output_median = movmedian(input_x,[w-1,0])
is as follows:
rng('default')
% number of samples
N = 25;
% moving median windows length
w = 5;
% init history buffer and median
x_hist = rand;
med_new = x_hist;
% init input x vector
input_x = zeros(1,N);
input_x(1) = x_hist;
% init output median vector of length N
output_median = zeros(1,N);
output_median(1) = med_new;
for i = 2:N
x_new = rand;
[med_new,x_hist] = moving_median(x_hist,x_new,w);
input_x(i) = x_new;
output_median(i) = med_new;
end
where function moving_median is here:
function [med_new,x_hist] = moving_median(x_hist,x_new,w)
% Old length of history
wo = length(x_hist);
% Update history
x_hist = [x_hist(max(1,wo-w+2):wo),x_new]; % Grow history until size w, then append new x and remove oldest x
med_new = median(x_hist);
end
Any idea how to make this algorithm more efficient (faster) and still strictly recurrent?
The target use case should work with window lengths:
w ~ 1e3 - 1e4 (!!!)
Additional notes:
fast moving-median computation is usually based on advanced data structures such as heaps or queues
some sort-structure information could be stored in these structures and reused at the next sample step to significantly speed up the median computation
a similar approach is used in the movmedian function, but that function is not directly applicable to a running data stream
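For illustration, a minimal sketch of the sorted-buffer idea mentioned in the notes above: keep the window in sorted order and update it incrementally (remove the oldest sample, insert the new one), so the median is read directly from the middle of the buffer instead of re-sorting. This is only a sketch of the data-structure trick, not an optimized heap implementation, and each step is still O(w) because of the array insertion/removal:
function [med_new, win_raw, win_sorted] = moving_median_sorted(win_raw, win_sorted, x_new, w)
% Recurrent moving median using a sorted window buffer.
% win_raw    - samples in arrival order (oldest first), length <= w
% win_sorted - the same samples kept in ascending order
% Remove the oldest sample once the window is full
if numel(win_raw) == w
    x_old = win_raw(1);
    win_raw(1) = [];
    k = find(win_sorted == x_old, 1, 'first');   % locate and drop the oldest sample
    win_sorted(k) = [];
end
% Insert the new sample, keeping both buffers consistent
win_raw(end+1) = x_new;
pos = find(win_sorted > x_new, 1, 'first');      % a binary search would be faster here
if isempty(pos)
    win_sorted(end+1) = x_new;
else
    win_sorted = [win_sorted(1:pos-1), x_new, win_sorted(pos:end)];
end
% Median is read directly from the middle of the sorted buffer
n = numel(win_sorted);
if mod(n,2) == 1
    med_new = win_sorted((n+1)/2);
else
    med_new = (win_sorted(n/2) + win_sorted(n/2+1)) / 2;
end
end
For w ~ 1e3 - 1e4, a pair of heaps (max-heap for the lower half, min-heap for the upper half) with lazy deletion would bring the per-sample cost down to O(log w), at the price of more bookkeeping.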
moving, median MATLAB Answers — New Questions
Inclusive AI-Based Solution from Universitas Indonesia Students Wins Microsoft's AI for Accessibility Hackathon 2025
The "The Leporidaes" team together with representatives from Universitas Indonesia, Microsoft, and Suarise.
Microsoft and Universitas Indonesia (UI) have just concluded the Hackathon AI for Accessibility (AI4A) 2025, an annual competition that invites young innovators across Southeast Asia to create solutions based on Microsoft artificial intelligence (AI) to solve real-world challenges faced by people with disabilities, spanning daily life, education, communication, and employment. Now in its sixth year, the competition was organized in partnership with the Faculty of Engineering of Universitas Indonesia. After a rigorous judging process, 10 of the 46 participating teams advanced to the grand final.
The jury, consisting of Rahma Utami, S.Ds., M.A. (Accessibility Director, Suarise), F. Astha Ekadiyanto (Lecturer, Department of Computer Engineering and Electrical Engineering, Faculty of Engineering UI), and Edhot Purwoko, S.T., M.T.I. (Senior Technology Specialist, Microsoft), named team "The Leporidaes" the overall winner, earning exclusive support ranging from intensive training with Microsoft experts, LinkedIn Premium subscriptions, and Azure for Students access to follow-up mentoring to develop their solution on Microsoft Azure.
Team "The Leporidaes", made up of students from UI's Faculty of Engineering and Faculty of Computer Science, won this year's top prize with their solution, NeuroBuddy. They built an early neurodivergence screening tool in the form of an AI-based children's game featuring a rabbit mascot that invites children to interact. The child's interactions during play are then evaluated to detect early signs of dyslexia, ASD, or ADHD, promoting inclusion and bridging the gap between technology, disability, and stigma. Under the hood, NeuroBuddy integrates a range of services from Azure Cognitive Services.
A view of the NeuroBuddy solution
Microsoft believes accessibility is key to realizing its mission: to empower every person and every organization in the world to achieve more. The program is part of the company's US$25 million global commitment, and through collaboration with the disability community, academia, and developers, Microsoft seeks to extend the benefits of AI to support more inclusive daily life, communication, education, and work.
"Many technology innovations, including AI, began as efforts to address accessibility challenges; closed captions, for example, are now used widely. This is proof that innovation born from care for accessibility ultimately benefits everyone: because every individual is unique, technology must be able to adapt inclusively to meet those diverse needs. This hackathon is a space to realize that mission, with the support of Microsoft services committed to inclusivity," said Dharma Simorangkir, President Director of Microsoft Indonesia.
This spirit aligns with Universitas Indonesia's commitment to building an inclusive educational environment. UI has consistently introduced initiatives ranging from establishing Disability Student Services Units in several faculties, such as the Faculty of Public Health and the Faculty of Psychology, to providing learning support services and an inclusive admissions process.
"Universitas Indonesia has many talented young innovators ready to create technology solutions that support inclusivity. We believe that innovations born from events like the AI for Accessibility Hackathon with Microsoft can trigger change toward a world that is friendlier and more equitable for everyone, including people with disabilities. Thank you to all participants who devoted their energy, time, and ideas to addressing real challenges in society," said Prof. Kemas Ridwan Kurniawan, S.T., M.Sc., Ph.D., Dean of the Faculty of Engineering, Universitas Indonesia.
Besides team "The Leporidaes" as the overall winner, several other ideas also received awards. For example, team "UINNOVATORS" took second place with a solution called Pintaru. Inspired by the fact that one in five students worldwide has dyslexia, they designed an adaptive digital book that can adjust font size, spacing, and other visual elements to the user's needs. Their solution is powered by Azure OpenAI, Azure Search, and Azure Speech to create a more inclusive learning experience.
###
Building an Indonesian Digital Ecosystem Ready for the AI Era
Read in English here
Indonesia's digital transformation is entering a new chapter as adoption of artificial intelligence (AI) becomes increasingly widespread. To keep up with this progress, infrastructure readiness and talent development must go hand in hand so that the digital ecosystem can grow sustainably. In a live interview with CNBC Indonesia on the Tech A Look program on CNBC Indonesia TV, Dharma Simorangkir, President Director of Microsoft Indonesia, shared his views on Microsoft's role as a long-term partner supporting digital transformation that is inclusive, sustainable, and responsible.
Resilient Infrastructure to Support the Digital Ecosystem
In April 2025, Microsoft officially launched the Indonesia Central cloud region as part of a USD 1.7 billion investment, our largest investment in 30 years of operating in Indonesia, to support innovation and #BerdayakanIndonesia.
Integrated with more than 70 Azure regions and 300+ global datacenters, Indonesia Central offers trusted cloud infrastructure with low-latency connectivity, local data security, and the scalability to support Indonesia's AI ambitions.
This infrastructure enables organizations in Indonesia to run AI and cloud services in real time, both for domestic needs and to build solutions from Indonesia for the global stage.
A Generation of Learners and Innovators in the AI Era
In line with Microsoft's commitment to #BerdayakanIndonesia, Indonesia needs talent able to use technology inclusively and responsibly. Through the elevAIte Indonesia program with Komdigi, we are targeting training for 1 million participants across the public sector, education, MSMEs, and communities in underdeveloped (3T) regions.
This initiative has already produced inspiring stories of AI in use, for example in disaster mitigation in Wonogiri and in climate-resilient agriculture. These stories prove that collaboration between AI technology and human capability can create solutions for future challenges.
Indonesia also now has more than 3.1 million active developers on GitHub, making it the third-largest developer community in Asia Pacific, reflecting a spirit of exploration, collaboration, and the courage not only to use technology but also to create it.
Encouraging Responsible AI Adoption
Amid accelerating digitalization, cybersecurity cannot be treated as an add-on feature. Microsoft applies the principle of privacy and security by design across all of its cloud and AI services.
Every day, Microsoft analyzes more than 78 trillion security signals, supported by 34,000+ security engineers and global initiatives such as the Secure Future Initiative (SFI).
Microsoft actively shares best practices through publications such as Cyber Signals and the Digital Defense Report, and collaborates with the Indonesian government on best practices for data and AI regulation, as reflected in the Microsoft Responsible AI Standard, which encourages organizations to apply responsible AI principles and development broadly.
The Power of Collaboration in the AI Era
To build an inclusive and sustainable digital ecosystem, Microsoft believes a pentahelix approach, involving government, industry, academia, communities, and the media, is key.
"In the past two years, we have upskilled and reskilled more than 700,000 people in Indonesia in digital skills, cybersecurity, and AI through the elevAIte program. Of course, we cannot do this alone; support from Komdigi, institutions, and various communities has been key to its success. Now, with AI services and in-country data residency available, every business and organization can #InnovAIteinIndonesia."
Watch the recording of my full interview with CNBC Indonesia on the Tech A Look program here:
I would like to thank the CNBC Indonesia editorial team for the opportunity to share views on the future of Indonesia's digital ecosystem. I hope this interview becomes part of a broader conversation about how technology, when adopted inclusively and responsibly, can have a positive impact on society at large.
###
Building Indonesia’s Digital Ecosystem Ready for the AI Era
Read in Bahasa Indonesia here.
Indonesia’s digital transformation has entered a new chapter, marked by the rapid and widespread adoption of artificial intelligence (AI). To keep pace with this momentum, infrastructure readiness and talent development must go hand in hand—ensuring that the country’s digital ecosystem can grow inclusively and sustainably.
In an interview on CNBC Indonesia TV’s Tech A Look program, Dharma Simorangkir, President Director of Microsoft Indonesia, shared insights on Microsoft’s role as a long-term partner supporting an inclusive, sustainable, and responsible digital transformation.
Resilient Infrastructure to Support the Digital Ecosystem
In April 2025, Microsoft officially launched the Indonesia Central cloud region as part of a USD 1.7 billion investment—the largest investment we have made during our 30 years of operation in Indonesia to support innovation and #BerdayakanIndonesia.
Integrated with over 70 Azure regions and more than 300 datacenters worldwide, Indonesia Central delivers trusted cloud infrastructure with low-latency connectivity, local data security, and scalability that underpins Indonesia’s AI ambitions.
This infrastructure enables organizations in Indonesia to run AI and cloud services in real-time—not only to serve domestic needs, but to build solutions that scale globally.
A New Generation of Learners and Innovators
Aligned with Microsoft’s commitment to #BerdayakanIndonesia, Indonesia needs talent capable of leveraging technology inclusively and responsibly. Through the elevAIte Indonesia program in collaboration with Komdigi, we aim to train 1 million participants across the public sector, education, MSMEs, and communities in underdeveloped regions (3T areas).
This initiative has already surfaced inspiring stories of AI in action — from disaster mitigation in Wonogiri to climate-resilient agriculture. These stories demonstrate how AI, when paired with human ingenuity, can help solve real-world challenges.
Indonesia is also home to 3.1 million active developers on GitHub, making it the third-largest developer community in Asia Pacific, reflecting a spirit of exploration, collaboration, and the courage to not only use technology but also create it.
Advancing Responsible AI Adoption
In an era of accelerating digitalization, cybersecurity cannot be an afterthought. Microsoft embeds privacy and security by design across all our cloud and AI services.
Each day, Microsoft analyzes more than 78 trillion security signals, powered by 34,000+ security engineers and global initiatives such as the Secure Future Initiative (SFI).
We actively share best practices through reports like Cyber Signals and the Digital Defense Report and collaborate with the Indonesian government to share best practices on data and AI regulation, as reflected in the Microsoft Responsible AI Standard, encouraging organizations to broadly adopt responsible AI principles and development.
The Power of Collaboration in the AI Era
We believe building a resilient digital ecosystem requires a pentahelix approach – bringing together government, industry, academia, communities, and media, we can create an inclusive and robust digital ecosystem.
“In the past two years, we have upskilled and reskilled more than 700,000 people across digital skills, cybersecurity, and AI in Indonesia through the elevAIte program. Of course, this effort cannot be done alone—we rely on the support of Komdigi, institutions, and various communities. Now, with the availability of AI services and the need for data residency domestically, all businesses and organizations have the opportunity to #InnovAIteinIndonesia.”
Watch the full interview with CNBC Indonesia on Tech A Look here:
We thank CNBC Indonesia’s editorial team for the opportunity to share our vision for Indonesia’s digital future. We hope this conversation inspires broader dialogue on how inclusive and responsible technology adoption can positively impact society as a whole.
###
Token Protection Extends to Microsoft Graph PowerShell SDK Sessions
Token Protection, PRTs, Device Binding, and Session Keys
Last year, I discussed how to use a conditional access policy to apply a new session control called token protection. The idea is to protect against token theft by requiring connections to have a token (the Primary Refresh Token, or PRT) that has a “cryptographically secure tie” with the device that the connection originates from. The PRT is “bound” to a device key that’s securely stored in the device’s Trusted Platform Module (TPM). PRTs are supported on Windows 10 or later devices.
The PRT is an “opaque blob” that’s specific to a user account and device. The Entra ID authentication service issues a PRT following a successful connection by a user when the device is registered, joined, or hybrid joined. Entra ID also issues a session key, an encrypted symmetric key to serve as proof of possession when a PRT attempts to obtain tokens for applications. If an attacker attempts to hijack a connection with an access token they’ve stolen, they’ll fail because they don’t have access to the device key.
Why Does This Matter?
As noted in my article last year, it’s possible to create a conditional access policy with a session control requiring token protection. In other words, when a connection attempts to satisfy the conditions of the policy, it must be able to prove that its PRT is bound to the device where the connection originates and the user making the request. This process is managed by a component called Web Account Manager (WAM).
But conditional access policies can only work if everything involved in the connection understands what’s going on. At the time I wrote the last article, limited support existed for token protection. The reason for this article is that interactive Microsoft Graph PowerShell SDK sessions now support token protection (see details about support for token protection by other applications here). This opens the possibility of extending additional protection to administrators and developers who might work on sensitive data through the Graph SDK.
The reason why you might want to do this is revealed in a recent Entra ID change that shows the resources a user can access when they satisfy a conditional access policy to connect. In this case, the connection is to an interactive Graph PowerShell SDK session, and the resources available in that session depend on the delegated permissions held by the Microsoft Graph Command Line Tools service principal. The set of permissions tends to swell over time as administrators grant consent to permissions needed to work with different cmdlets, but as Figure 1 shows, a Graph PowerShell SDK session can have access to many different resources.

Enabling Token Protection for Graph Interactive Sessions
Normally, interactive Graph PowerShell SDK sessions don’t use WAM. To enable WAM for Graph sessions, run the Set-MgGraphOption cmdlet before running Connect-MgGraph. As the documentation says, the cmdlet sets global configuration options, so the configuration setting stays in force for all Microsoft Graph interactive sessions on the workstation until it is reversed.
Set-MgGraphOption -EnableLoginByWAM $true
Connect-MgGraph
If the device isn’t registered or joined, the conditional access policy condition for token protection isn’t satisfied and the sign-in attempt is rejected with a 530084 error code. The cause is obvious if you examine the policy details captured in the sign-in event (Figure 2).

WAM doesn’t affect app-only authentication for the Graph SDK, including Azure Automation runbooks that use modules and cmdlets from the Graph PowerShell SDK.
Token Protection and Elevated PowerShell Sessions
The Web Account Manager option doesn’t work in elevated PowerShell sessions (run as administrator). Attempts to connect fail with the error “InteractiveBrowserCredential authentication failed: User canceled authentication.”
The solution is two-fold. First, revert to normal authentication on the workstation by running the Set-MgGraphOption cmdlet to set EnableLoginByWAM to $false. If you don’t, authentication fails because a protected token isn’t available (Figure 3). The second step is to remove users who need to run Graph cmdlets in elevated PowerShell sessions from the scope of the conditional access policy. This avoids the user running into problems on other workstations.

Token Protection and Microsoft Graph PowerShell SDK Versions
The WAM option also doesn’t work with the latest versions of the Microsoft Graph PowerShell SDK. This is likely due to Microsoft’s decision to remove support for .NET 6 from V2.25 onward. In V2.28 of the SDK, the error when running Connect-MgGraph is:
InteractiveBrowserCredential authentication failed: Could not load type 'Microsoft.Identity.Client.AuthScheme.TokenType' from assembly 'Microsoft.Identity.Client, Version=4.67.2.0, Culture=neutral, PublicKeyToken=0a613f4dd989e8ae'.
While Microsoft gets its act together and decides how to fix the issue, the only option is to remain on V2.25. PCs that have upgraded to the current V2.28 release must downgrade to V2.25.
Token Protection is Just Another Tool
Token protection is not for everyone. Its linkup with conditional access policies is another tool for administrators to consider when figuring out how to secure their tenant. My recommendation is that you test the feature and make a measured decision whether it has any value for your organization. Remember that this is an evolving space and other applications are likely to support token protection over time. Maybe one of those applications will be exactly the one you want to secure.
Support the work of the Office 365 for IT Pros team by subscribing to the Office 365 for IT Pros eBook. Your support pays for the time we need to track, analyze, and document the changing world of Microsoft 365 and Office 365.
Does SoC Builder do build optimizations, can I see the resources mapping and can I change it?
Dear all,
I am using SoC Blockset for a simple design for AMD Zynq Ultrascale+ ZCU111 evaluation board.
If I understand correctly, the SoC Blockset add-on uses HDL Coder to generate code for the FPGA part of the SoC, but this is done through the SoC Builder interface, which offers much less flexibility than HDL Coder. Does SoC Builder perform any FPGA resource optimizations during the build? How can I see the resource mapping? Can I change the mapping manually (for better optimization, for example), or is it not possible to produce a better mapping than the one generated automatically?
Thank you for your answers!
hdl coder, soc builder, soc blockset, mapping, fpga, programmable logic, optimization MATLAB Answers — New Questions
Different output from ucover and musyn on different computers
Hello
I am working on control design using the ucover and musyn functions with a colleague, and we are sharing scripts to generate controllers. We see that the same script does not produce the same controller on our two computers, even though all inputs and MATLAB versions are identical (24.2.0.2923080 (R2024b) Update 6).
Basically, we feed FRD objects into ucover, which outputs an uncertainty model; this is processed and in turn fed into musyn, which outputs the controller.
I’ve found that the output from musyn is different even if I fetch all inputs to musyn from my colleague’s workspace. So the difference is introduced in the function itself, likely related to some optimization or floating-point processing.
I’ve seen some other posts dealing with this, for example this one:
https://se.mathworks.com/matlabcentral/answers/130493-how-come-i-get-different-output-answers-with-the-same-matlab-version-the-same-code-installed-on-two?s_tid=ta_ans_results
We both get the same output from this prompt, so I assume differing BLAS versions are not the problem (both apparently use AVX2):
>> version('-blas')
ans =
'Intel(R) oneAPI Math Kernel Library Version 2024.1-Product Build 20240215 for Intel(R) 64 architecture applications (CNR branch AVX2)'
I’ve also found that, if we restrict MATLAB to 1 CPU by running
maxNumCompThreads(1)
the output from the ucover function becomes identical (at least for our test case). But the output from musyn is still different on our computers, even with identical input. The difference is large enough to be very significant from a control design perspective, so this is a bit frustrating, since we want to be able to reproduce the same controllers in the future, using the script as a recipe (for traceability).
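For reference, a minimal "reproducibility preamble" along these lines can be placed at the top of the shared script (standard MATLAB calls only; it records the environment and forces single-threaded execution, but it does not by itself explain the musyn difference):
% Record the environment so runs on different machines can be compared
matlabRelease = version;               % full MATLAB version string
blasInfo      = version('-blas');      % BLAS library in use
lapackInfo    = version('-lapack');    % LAPACK library in use
nThreadsPrev  = maxNumCompThreads(1);  % force single-threaded computation
rng('default');                        % fix the random seed, in case any step uses randomness
disp({matlabRelease; blasInfo; lapackInfo});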
Any tips are greatly appreciated, thanks!
ucover, musyn, avx2 MATLAB Answers — New Questions
MATLAB Classification Learner App
"When choosing a Holdout validation method before training any of the available models so that you can compare all the models in your session using the same validation technique", does the Classification Learner train all models on the same training data samples and compute validation accuracy on the same validation data samples, or do the data samples differ each time a new model is trained?"When choosing a Holdout validation method before training any of the available models so that you can compare all the models in your session using the same validation technique", does the Classification Learner train all models on the same training data samples and compute validation accuracy on the same validation data samples, or do the data samples differ each time a new model is trained? "When choosing a Holdout validation method before training any of the available models so that you can compare all the models in your session using the same validation technique", does the Classification Learner train all models on the same training data samples and compute validation accuracy on the same validation data samples, or do the data samples differ each time a new model is trained? classifcation learner MATLAB Answers — New Questions
Help with HSDM model for lithium adsorption: simulated curves do not match experimental data (based on Jiang et al., 2020)
Hello everyone,
I’m trying to replicate the results from the paper by Jiang et al. (2020):
“Application of concentration-dependent HSDM to the lithium adsorption from brine in fixed bed columns”, Separation and Purification Technology 241, 116682.
The paper models lithium adsorption in a fixed bed packed with Li–Al layered double hydroxide resins. It uses a concentration-dependent Homogeneous Surface Diffusion Model (HSDM), accounting for axial dispersion, film diffusion, surface diffusion, and Langmuir equilibrium. I’ve implemented a full MATLAB simulation including radial discretization inside the particles and a coupling with the axial profile.
I’ve followed the theoretical development very closely and used the same parameters as those reported by the authors. However, the breakthrough curves generated by my model still don’t fully match the experimental data shown in the paper (especially at intermediate values of C_out/C_in).
I suspect there may be a mistake in my implementation of either:
the mass balance in the fluid phase,
the boundary condition at the surface of the particle,
or how I define the rate of mass transfer using Kf.
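For reference, this is the standard form of the HSDM balances as I understand them (my notation; the paper's exact formulation, in particular its concentration-dependent surface diffusivity, may differ):

$$\varepsilon \frac{\partial C}{\partial t} = \varepsilon D_{ax}\frac{\partial^2 C}{\partial z^2} - v\frac{\partial C}{\partial z} - (1-\varepsilon)\,\rho_p\frac{\partial \bar{q}}{\partial t}, \qquad \rho_p\frac{\partial \bar{q}}{\partial t} = \frac{3}{R}\,k_f\,(C - C_s)$$

$$\frac{\partial q}{\partial t} = \frac{1}{r^2}\frac{\partial}{\partial r}\!\left(r^2 D_s(q)\,\frac{\partial q}{\partial r}\right), \qquad \rho_p\,D_s(q)\,\frac{\partial q}{\partial r}\bigg|_{r=R} = k_f\,(C - C_s), \qquad q\big|_{r=R} = \frac{q_{max}\,b\,C_s}{1 + b\,C_s}$$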
I’m sharing my complete code below, and I would appreciate any suggestions or corrections you may have.
function hsdm_column_simulation_v2
% Full HSDM model with loading-dependent radial (surface) diffusion (Jiang et al. 2020)
clc; clear; close all
%% ==== SYSTEM PARAMETERS ====
% Column parameters
Q = (15e-6)/60; % Brine flow rate [m3/s]
D = 0.02; % Column diameter [m]
A = pi/4 * D^2; % Cross-sectional area [m2]
v = Q/A; % Superficial or interstitial velocity [m/s]
epsilon = 0.355; % Bed void fraction
mu = 6.493e-3; % Fluid dynamic viscosity [Pa·s]
% Resin properties
rho = 1.3787; % Solid density [g/L] (coherent with q in mg/L)
dp = 0.65/1000; % Particle diameter [m]
Dm = 1.166e-5 / 10000; % Lithium diffusivity [m2/s]
R = 0.000325; % Particle radius [m]
Dax = 0.44*Dm + 0.83*v*dp; % Axial dispersion [m²/s] = 4.2983e-5
%% Langmuir isotherm
qmax = 5.9522; % Langmuir parameter [mg/g]
b = 0.03439; % Langmuir parameter [L/mg]
%% Surface diffusion dependent on loading
Ds0 = 4e-14 ; % Surface diffusion coef. at 0 coverage [m²/s] (original was 3.2258e-14)
k_exp = 0.505; % Dimensionless constant
%% Empirical correlations
Re = (rho * v * dp) / mu;
Sc = mu / (rho * Dm);
Sh = 1.09 + 0.5 * Re^0.5 * Sc^(1/3);
Kf = Sh * Dm / dp; % 6.5386e-5
%% Discretization
L = 0.60; % Column height [m]
Nz = 20; % Axial nodes
Nr = 5; % Radial nodes per particle
dz = L / (Nz - 1);
dr = R / Nr;
%% Operating conditions
cFeedVec = [300, 350, 400]; % mg/L
tf = 36000; % Final time [s] (600 min)
tspan = [0 tf];
colores = ['b','g','r'];
%% Plot
figure; hold on
for j = 1:length(cFeedVec)
cFeed = cFeedVec(j);
% Initial conditions for the bed and the particle
c0 = zeros(Nz,1); % Initial concentration in fluid: C = 0
q0 = zeros(Nz*Nr,1); % Initial loading in particles: q = 0
y0 = [c0; q0];
% Parameter grouping
param = struct('Nz',Nz,'Nr',Nr,'dz',dz,'dr',dr,'R',R,...
'epsilon',epsilon,'rho',rho,'v',v,'Dax',Dax,...
'qmax',qmax,'b',b,'Ds0',Ds0,'k',k_exp,...
'cFeed',cFeed,'Kf',Kf);
%% System solution using ode15s
[T, Y] = ode15s(@(t,y) hsdm_rhs(t,y,param), tspan, y0);
% Plot normalized outlet concentration
C_out = Y(:,Nz);
plot(T/60, C_out / cFeed, colores(j), 'LineWidth', 2, ...
'DisplayName', ['C_{in} = ' num2str(cFeed) ' mg/L']);
end
% %% Evaluation of adsorbed loading to verify the model
% Nz = param.Nz;
% Nr = param.Nr;
% R = param.R;
% dr = param.dr;
% qmax = param.qmax;
% q_final = reshape(Y(end, Nz+1:end), Nr, Nz); % q(r,z) at final time
% q_avg = trapz(linspace(0, R, Nr), q_final .* linspace(0, R, Nr)', 1) * 2 / R^2;
% fprintf('\n----- Saturation Analysis for C_in = %d mg/L -----\n', cFeed);
% fprintf('Global average q : %.4f mg/g\n', mean(q_avg));
% fprintf('Max q in column : %.4f mg/g\n', max(q_avg));
% fprintf('Theoretical qmax : %.4f mg/g\n', qmax);
%% Plot Cout/Cin vs time
xlabel('Time (min)')
ylabel('C_{out} / C_{in}')
title('HSDM model breakthrough curves')
legend('Location','southeast')
set(gca, 'FontName', 'Palatino Linotype') % Axes font
box on
%% Approximate experimental points (read visually from the paper)
t_exp = 0:30:600;
Cexp_300 = [0.00 0.22 0.36 0.48 0.57 0.63 0.69 0.73 0.77 0.80 ...
0.82 0.84 0.85 0.86 0.87 0.88 0.88 0.89 0.89 0.89 0.90];
Cexp_350 = [0.00 0.26 0.40 0.53 0.62 0.69 0.74 0.78 0.81 0.83 ...
0.85 0.86 0.87 0.88 0.88 0.89 0.89 0.90 0.90 0.90 0.91];
Cexp_400 = [0.00 0.29 0.45 0.59 0.68 0.75 0.79 0.83 0.85 0.87 ...
0.88 0.89 0.90 0.90 0.91 0.91 0.91 0.91 0.92 0.92 0.92];
% Overlay experimental points
plot(t_exp, Cexp_400, 'r^', 'MarkerFaceColor', 'r', 'DisplayName', 'Exp 400 mg/L')
plot(t_exp, Cexp_350, 'go', 'MarkerFaceColor', 'g', 'DisplayName', 'Exp 350 mg/L')
plot(t_exp, Cexp_300, 'bs', 'MarkerFaceColor', 'b', 'DisplayName', 'Exp 300 mg/L')
end
%% RHS FUNCTION FOR THE HSDM MODEL
function dydt = hsdm_rhs(~, y, p)
% Extract parameters
Nz = p.Nz; Nr = p.Nr; dz = p.dz; dr = p.dr;
R = p.R; epsilon = p.epsilon; rho = p.rho;
v = p.v; Dax = p.Dax; qmax = p.qmax; b = p.b;
Ds0 = p.Ds0; k_exp = p.k; cFeed = p.cFeed; Kf = p.Kf;
% Split variables
c = y(1:Nz);
q = reshape(y(Nz+1:end), Nr, Nz); % q(r,z)
% Initialize derivatives
dc_dt = zeros(Nz,1);
dq_dt = zeros(Nr,Nz);
%% Axial mass balance (fluid phase)
for i = 2:Nz-1
dcdz = (c(i+1)-c(i-1))/(2*dz);
d2cdz2 = (c(i+1) - 2*c(i) + c(i-1))/dz^2;
qsurf = q(end,i);
Csurf = c(i);
dqR_dt = (3/R) * Kf * (Csurf - qsurf);
dc_dt(i) = Dax * d2cdz2 - v * dcdz - ((1 - epsilon)/epsilon) * rho * 1000 * dqR_dt;
end
%% Column boundary conditions
% Inlet z = 0 - Danckwerts type
qsurf = q(end,1);
Csurf = c(1);
dqR_dt_in = (3 / R) * Kf * (Csurf - qsurf);
dc_dt(1) = Dax * (c(2) - c(1)) / dz^2 - v * (cFeed - c(1)) - ((1 - epsilon)/epsilon) * rho * 1000 * dqR_dt_in;
% Outlet z = L
dc_dt(end) = Dax * (c(end-1) - c(end)) / dz;
%% Radial diffusion inside the particle
for iz = 1:Nz
for ir = 2:Nr-1
rq = (ir-1)*dr;
d2q = (q(ir+1,iz) - 2*q(ir,iz) + q(ir-1,iz)) / dr^2;
Ds = Ds0 * (1 - q(ir,iz)/qmax)^k_exp;
dq_dt(ir,iz) = Ds * (d2q + (2/rq)*(q(ir+1,iz)-q(ir-1,iz))/(2*dr));
end
% Particle center
dq_dt(1,iz) = 0;
% Particle surface
q_eq = (qmax * b * c(iz)) / (1 + b * c(iz));
dq_dt(Nr,iz) = 3 * Kf / R * (q_eq - q(Nr,iz));
end
% Final derivative vector
dydt = [dc_dt; dq_dt(:)];
end
adsorption, hsdm, resin, adsorption column, lithium MATLAB Answers — New Questions
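Editor's note on the points the poster suspects (fluid-phase balance, particle-surface condition, and the Kf term): in the posted hsdm_rhs, the film term Kf*(Csurf - qsurf) subtracts the surface loading (mg/g) from the liquid concentration (mg/L). A conventional HSDM coupling instead uses the liquid concentration cs in equilibrium with the surface loading. The following is a minimal, self-contained sketch of that dimensionally consistent form; c_bulk, qR, and cs are illustrative names and values, and this is a units check rather than a verified correction of the model.
% --- Hedged sketch: dimensionally consistent film-transfer term (illustrative) ---
% Units: Kf [m/s], R [m], concentrations [mg/L] = [g/m^3], loadings [mg/g].
Kf      = 6.5386e-5;  % film mass-transfer coefficient [m/s] (value from the script)
R       = 0.000325;   % particle radius [m]
epsilon = 0.355;      % bed void fraction [-]
qmax    = 5.9522;     % Langmuir capacity [mg/g]
b       = 0.03439;    % Langmuir constant [L/mg]
c_bulk  = 300;        % bulk liquid concentration [mg/L] (illustrative)
qR      = 2.0;        % loading at the particle surface [mg/g] (illustrative)
% Liquid concentration in equilibrium with the surface loading (inverse Langmuir)
cs = qR / (b * max(qmax - qR, realmin));                      % [mg/L]
% Sink term for the fluid-phase balance: (3/R)[1/m] * Kf[m/s] * (c - cs)[mg/L]
sink = ((1 - epsilon)/epsilon) * (3/R) * Kf * (c_bulk - cs);  % [mg/(L*s)]
fprintf('cs = %.2f mg/L, sink term = %.3f mg/(L*s)\n', cs, sink);
In this form both terms in (c_bulk - cs) carry the same units and no rho*1000 factor is needed in the liquid balance; whether that matches the formulation in Jiang et al. is something to confirm against the paper.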
Querying data from enum arrays using C API
I am passing a struct as a block dialog parameter to an S-function. The struct is deeply nested and contains some fields that are enumeration arrays. The enums are defined in MATLAB and inherit from Simulink.IntType. I am able to grab the data fine when the field is a scalar (just one element). However, if the field is an enumeration array, I cannot. For example, I have an enumeration array that in MATLAB shows size = [16×1]. I have a corresponding C array of size 16, with the enumeration type also defined in C. I’m confused as to how to copy the data from the enum array, since I’m getting rows = 1, cols = 1, and numElements = 1 when running the following code.
My code is as follows:
const char *class_name = mxGetClassName(matlabArray); // -> This returns the expected enumeration type name.
mxArray *enumArrayField = mxGetField(parentField, 0, "myEnumInstanceName");
size_t rows = mxGetM(enumArrayField); // -> This returns 1
size_t cols = mxGetN(enumArrayField); // -> This also returns 1
size_t numElements = mxGetNumberOfElements(matlabArray); // -> This also returns 1.
//
void *raw_data = mxGetData(matlabArray); // -> I tried to grab the data this way, but it doesn't seem to work because I cannot iterate over it
size_t element_size = mxGetElementSize(matlabArray); // -> This returns 8 bytes
simulink, matlab, matlab coder MATLAB Answers — New Questions
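Editor's note: one hedged workaround, since the C API reports the enumeration field as 1×1, is to convert the enumeration array to its underlying integers on the MATLAB side before the struct is handed to the S-function, so the MEX code receives a plain int32 array it can size and copy normally. The sketch below assumes the enum class derives from Simulink.IntEnumType (the post says Simulink.IntType) and uses placeholder names (params, myEnumInstanceName, MyEnumType) that are not from the original post.
% --- Hedged sketch: flatten an enum array field to int32 before passing it ---
% MyEnumType, params, and myEnumInstanceName are placeholder names; MyEnumType
% is assumed to be a user-defined enumeration based on Simulink.IntEnumType.
params = struct();
params.nested.myEnumInstanceName = repmat(MyEnumType.SomeMember, 16, 1); % 16x1 enum array
% int32() on an int32-based enumeration array returns the underlying integer
% values, so the field becomes a plain 16x1 int32 array.
params.nested.myEnumInstanceName = int32(params.nested.myEnumInstanceName);
% On the C side, mxGetField / mxGetM / mxGetN / mxGetData should then report
% 16 elements that map directly onto the corresponding C enum values.
If the field must stay an enumeration, calling mexCallMATLAB with "int32" on the field inside the S-function may be another route to obtain a numeric copy, though that has not been verified here.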
Microsoft 365 PowerShell Modules Need Better Testing
Problems with Azure Automation Afflict Microsoft 365 PowerShell Modules
The recent problems with the Microsoft Graph PowerShell SDK are well documented. Suffice to say that the Graph PowerShell SDK hasn’t been very stable since V2.25. V2.26 and V2.27 just didn’t work, and although Microsoft delivered a much-improved update in V2.28 in May 2025, the Graph PowerShell SDK still has problems with Azure Automation.
In the Azure Automation environment, runbooks are configured to use a runtime version of PowerShell. When a runbook starts, Azure Automation loads the dependent modules (which must be a version that matches the runtime version) on the target server where the runbook executes. Currently, Azure Automation supports runtime versions for PowerShell V5.1, V7.1, and V7.2.
A Question of .NET
PowerShell V5.1 is the “classic” version. V7-based PowerShell is “PowerShell Core.” The V7.1 and V7.2 runtimes support .NET 6, while the latest versions of PowerShell use .NET 8. Software engineering groups don’t like supporting what they consider to be outdated software, so a decision was taken to drop support for .NET 6. The net effect was that V7.1 and V7.2 runbooks couldn’t use the Graph PowerShell SDK. The workaround was to use the PowerShell V5.1 runtime or revert to V2.25 of the Graph PowerShell SDK, which still supports .NET 6.
Microsoft says that the solution will come when Azure Automation supports the PowerShell V7.4 runtime. That update was supposed to arrive by June 15, 2025. It’s late, so I cannot confirm whether Graph PowerShell SDK V2.28 works with PowerShell V7.4 runbooks.
The .NET Versioning Problem Strikes Exchange
A week or so ago, a reader complained that the latest version of the Exchange Online management module (now V3.8.0) didn’t run with PowerShell V7.2 runbooks. An earlier comment on the article where the issue was raised had already flagged, as long ago as February 13, 2025, that V3.5 was needed to keep PowerShell V7.2 runbooks working. At the time, apart from finding a relevant Stack Overflow discussion, I didn’t pay too much attention to the problem. I guess I became accustomed to the Exchange module just working while the Graph PowerShell SDK was the problem child of the Microsoft 365 PowerShell modules.
As it turns out, the Exchange Online management module shares the same problem as the Microsoft Graph PowerShell SDK. Engineering decided to remove support for .NET 6 in V3.5.1 of the Exchange module and screwed up Azure Automation V7 runbooks. The release notes for V3.5.1 are brief:
Version 3.5.1
- Bug fixes in Get-EXOMailboxPermission and Get-EXOMailbox.
- The module has been upgraded to run on .NET 8, replacing the previous version based on .NET 6.
- Enhancements in Add-VivaModuleFeaturePolicy.
There’s nothing to alert tenant administrators that the change in supported .NET version will stop runbooks dead in the water. It’s easy to glance over the release notes, conclude that not much has changed, and assume it’s therefore safe to upgrade to the new version. The problem becomes very evident when the Connect-ExchangeOnline cmdlet can’t run and, as a result, every other Exchange cmdlet cannot be found (Figure 1).
[Figure 1: Connect-ExchangeOnline fails to run in a runbook, so the other Exchange cmdlets cannot be found.]
The Need for Solid Azure Automation Support
No one denies that Microsoft must prune old software from their cloud services. It’s hard enough to keep a service running smoothly when it carries unnecessary baggage in the form of old code. But in the cases of both the Microsoft Graph PowerShell SDK and the Exchange Online Management module, it seems like the engineering groups never stopped to ask if the change might impact the ability of scripts to run. Running scripts interactively revealed no issues, but running code in an interactive session on a Windows PC (or even a Mac) is not the same as Azure Automation firing up a headless Linux server and configuring it with the software necessary to execute a runbook.
Ensuring that shipped modules support Azure Automation is a problem that can be solved by incorporating Azure Automation runbooks in the test procedures that must succeed before a new version of a module can be released. What’s more upsetting is the lack of awareness within Microsoft about why customers pay for Azure Automation to run scripts.
When a script moves from running interactively on an administrator workstation to become an Azure Automation runbook, it’s probably because the script is deemed important enough to run in a stable, robust, and secure environment, often on a schedule (the Windows Task Scheduler should not be relied upon to run important scripts). In other words, Azure Automation is an important platform that deserves the respect and solid support of the Microsoft engineers who build PowerShell modules that can run within Azure Automation. That doesn’t seem to be the case today.
Too Much Disruption
Microsoft 365 tenants have suffered far too much disruption with PowerShell modules over the last few years. The retirement of the old Azure AD and MSOL modules was a necessary evil, but Microsoft didn’t handle the situation as well as they should have. Many sins might be forgiven if the Microsoft 365 PowerShell modules were rock solid. They’re not currently. Let’s hope that Microsoft does a better job in their testing and pre-release verification processes for PowerShell modules in the future.
Need some assistance to write and manage PowerShell scripts for Microsoft 365? Get a copy of the Automating Microsoft 365 with PowerShell eBook, available standalone or as part of the Office 365 for IT Pros eBook bundle.