Month: August 2024
Access Teams meeting recording
Hi, I have a question about accessing Teams meeting recordings through the API. I registered an Azure app and got a user token with the permissions OnlineMeetings.ReadWrite and OnlineMeetingRecording.Read.All. I recorded the meeting and saw the recording available for download in the chat channel, but I cannot see any recording info when I fetch the recording through the API.
I have attached a screenshot of the meeting recording and the empty GET response.
Are there any prerequisites for getting the recording and transcript through the API? Thanks
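For reference, a hedged illustration of the call shape in question (the meeting ID is a placeholder; with delegated permissions the signed-in user generally needs to be the meeting organizer, and recordings can take a while after the meeting ends before the API returns them):
GET https://graph.microsoft.com/v1.0/me/onlineMeetings/{meetingId}/recordings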
Last time a specific value appears
Hello everyone
I have a table where I keep track of which supplements my birds have been given. Column A is the date the bird received the supplement. Column B is the type of supplement it got. Column C says how many days there need to be between two uses of this type of supplement. Column D gives the first date this supplement can be used again.
This is a simplified version of my table:
Date of use Type of supplement Time between uses (in days) Date next use
01/04/2024 Supplement A 7 08/04/2024
03/04/2024 Supplement B 1 04/04/2024
04/04/2024 Supplement C 30 04/05/2024
06/04/2024 Supplement A 7 13/04/2024
07/04/2024 Supplement B 1 08/04/2024
09/04/2024 Supplement D 14 23/04/2024
12/04/2024 Supplement C 30 12/05/2024
As you can see, I used supplement A on the first of April 2024 and need to wait 7 days before giving it again, so it can be given again on the eighth of April 2024. But I made a mistake and gave supplement A again on the sixth of April 2024, which is two days too early. Now I want the cell “06/04/2024” to turn red because the supplement was used again too early, so I want to use conditional formatting here. I want to write a formula with which Excel searches for the previous use of the supplement on the current row, takes the value in column D (‘Date next use’) of that previous row, and compares it with the ‘Date of use’ of the current row.
The problem I am having is that I can’t work out how to write the formula that finds ‘the last use of a supplement’. Can anyone help me out, please? (See the sketch below.)
A big thank you in advance
Benjamin Herremans
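A sketch of one possible conditional-formatting rule, offered under assumptions (headers in row 1, data from row 2 down, and an Excel version with MAXIFS, i.e. Excel 2019 or Microsoft 365). Select the dates in column A starting at A2 and use this formula, with a red fill as the format:
=$A2 < MAXIFS(D$1:D1, B$1:B1, $B2)
Because both ranges end one row above the current row, each date is compared only with the latest ‘Date next use’ recorded for the same supplement higher up; MAXIFS returns 0 when the supplement has no earlier use, so a first use is never flagged.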
Send Email to Email in List on Specific Date
Hey everyone! I am new to Power Automate and could use some assistance.
My goal is to automate an email sent to access card holders two weeks prior to the card expiration.
So far, I have a column set up with the correct date for the email to be sent and the recipient’s email, but I am not sure how to automate sending the email on that date.
Any help would be much appreciated.
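One common pattern, sketched under assumptions (the column names SendDate and RecipientEmail are placeholders for your own): a scheduled cloud flow with a daily Recurrence trigger, a ‘Get items’ action (for a SharePoint list) whose Filter Query keeps only the rows whose send date is today, and an ‘Apply to each’ loop containing ‘Send an email (V2)’ addressed to the row’s email column. For a date-only SharePoint column, the Filter Query could look like:
SendDate eq '@{formatDateTime(utcNow(), 'yyyy-MM-dd')}'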
cnn-lstm error
Hello everyone,
I get an error when I use a CNN-LSTM. This is the error:
Error using trainNetwork (line 191)
Invalid training data. The output size (1024) of the last layer does not match the response size (1).
Error in Main_fn (line 266)
[trainedNet,traininfo] = trainNetwork(XTrain,YTrain,layers,options);
Error in Fig12_generator (line 49)
[Rate_DL,Rate_OPT]=Main_fn(L,My_ar,Mz_ar,M_bar,K_DL,Pt,kbeams(rr),Training_Size);
But when I use the CNN only, the code runs without error. I have tried everything I can think of to fix it, including changing the shape of YTrain, but the error remains.
function [Rate_DL,Rate_OPT]=Main_fn(L,My,Mz,M_bar,K_DL,Pt,kbeams,Training_Size)
%% Description:
%
% This is the function called by the main script for plotting Figure 10
% in the original article mentioned below.
%
% version 1.0 (Last edited: 2019-05-10)
%
% The definitions and equations used in this code refer (mostly) to the
% following publication:
%
% Abdelrahman Taha, Muhammad Alrabeiah, and Ahmed Alkhateeb, "Enabling
% Large Intelligent Surfaces with Compressive Sensing and Deep Learning,"
% arXiv e-prints, p. arXiv:1904.10136, Apr 2019.
% [Online]. Available: https://arxiv.org/abs/1904.10136
%
% The DeepMIMO dataset is adopted.
% [Online]. Available: http://deepmimo.net/
%
% License: This code is licensed under a Creative Commons
% Attribution-NonCommercial-ShareAlike 4.0 International License.
% [Online]. Available: https://creativecommons.org/licenses/by-nc-sa/4.0/
% If you in any way use this code for research that results in
% publications, please cite our original article mentioned above.
%% System Model Parameters
params.scenario='O1_28'; % DeepMIMO Dataset scenario: http://deepmimo.net/
params.active_BS=3; % active basestation(/s) in the chosen scenario
D_Lambda = 0.5; % Antenna spacing relative to the wavelength
BW = 100e6; % Bandwidth
Ut_row = 850; % user Ut row number
Ut_element = 90; % user Ut position from the row chosen above
Ur_rows = [1000 1200]; % user Ur rows
Validation_Size = 6200; % Validation dataset Size
K = 512; % number of subcarriers
miniBatchSize = 500; % Size of the minibatch for the Deep Learning
% Note: The axes of the antennas match the axes of the ray-tracing scenario
Mx = 1; % number of LIS reflecting elements across the x axis
M = Mx.*My.*Mz; % Total number of LIS reflecting elements
% Preallocation of output variables
Rate_DL = zeros(1,length(Training_Size));
Rate_OPT = Rate_DL;
LastValidationRMSE = Rate_DL;
%--- Accounting for SNR in achievable rate calculations
%--- Defining noisy channel measurements
Gt=3; % dBi
Gr=3; % dBi
NF=5; % Noise figure at the User equipment
Process_Gain=10; % Channel estimation processing gain
noise_power_dB=-204+10*log10(BW/K)+NF-Process_Gain; % Noise power in dB
SNR=10^(.1*(-noise_power_dB))*(10^(.1*(Gt+Gr+Pt)))^2; % Signal-to-noise ratio
% channel estimation noise
noise_power_bar=10^(.1*(noise_power_dB))/(10^(.1*(Gt+Gr+Pt)));
No_user_pairs = (Ur_rows(2)-Ur_rows(1))*181; % Number of (Ut,Ur) user pairs
RandP_all = randperm(No_user_pairs).'; % Random permutation of the available dataset
%% Starting the code
disp('======================================================================================================================');
disp([' Calculating for M = ' num2str(M)]);
Rand_M_bar_all = randperm(M);
%% Beamforming Codebook
% BF codebook parameters
over_sampling_x=1; % The beamsteering oversampling factor in the x direction
over_sampling_y=1; % The beamsteering oversampling factor in the y direction
over_sampling_z=1; % The beamsteering oversampling factor in the z direction
% Generating the BF codebook
[BF_codebook]=sqrt(Mx*My*Mz)*UPA_codebook_generator(Mx,My,Mz,over_sampling_x,over_sampling_y,over_sampling_z,D_Lambda);
codebook_size=size(BF_codebook,2);
%% DeepMIMO Dataset Generation
disp('-------------------------------------------------------------');
disp([' Calculating for K_DL = ' num2str(K_DL)]);
% ------ Inputs to the DeepMIMO dataset generation code ------------ %
% Note: The axes of the antennas match the axes of the ray-tracing scenario
params.num_ant_x= Mx; % Number of the UPA antenna array on the x-axis
params.num_ant_y= My; % Number of the UPA antenna array on the y-axis
params.num_ant_z= Mz; % Number of the UPA antenna array on the z-axis
params.ant_spacing=D_Lambda; % ratio of the wavelength; for half wavelength enter .5
params.bandwidth= BW*1e-9; % The bandwidth in GHz
params.num_OFDM= K; % Number of OFDM subcarriers
params.OFDM_sampling_factor=1; % The constructed channels will be calculated only at the sampled subcarriers (to reduce the size of the dataset)
params.OFDM_limit=K_DL*1; % Only the first params.OFDM_limit subcarriers will be considered when constructing the channels
params.num_paths=L; % Maximum number of paths to be considered (a value between 1 and 25), e.g., choose 1 if you are only interested in the strongest path
params.saveDataset=0;
disp([' Calculating for L = ' num2str(params.num_paths)]);
% ------------------ DeepMIMO "Ut" Dataset Generation -----------------%
params.active_user_first=Ut_row;
params.active_user_last=Ut_row;
DeepMIMO_dataset=DeepMIMO_generator(params);
Ht = single(DeepMIMO_dataset{1}.user{Ut_element}.channel);
clear DeepMIMO_dataset
% ------------------ DeepMIMO "Ur" Dataset Generation -----------------%
%Validation part for the actual achievable rate perf eval
Validation_Ind = RandP_all(end-Validation_Size+1:end);
[~,VI_sortind] = sort(Validation_Ind);
[~,VI_rev_sortind] = sort(VI_sortind);
%initialization
Ur_rows_step = 100; % access the dataset 100 rows at a time
Ur_rows_grid=Ur_rows(1):Ur_rows_step:Ur_rows(2);
Delta_H_max = single(0);
for pp = 1:1:numel(Ur_rows_grid)-1 % loop for Normalizing H
clear DeepMIMO_dataset
params.active_user_first=Ur_rows_grid(pp);
params.active_user_last=Ur_rows_grid(pp+1)-1;
[DeepMIMO_dataset,params]=DeepMIMO_generator(params);
for u=1:params.num_user
Hr = single(conj(DeepMIMO_dataset{1}.user{u}.channel));
Delta_H = max(max(abs(Ht.*Hr)));
if Delta_H >= Delta_H_max
Delta_H_max = single(Delta_H);
end
end
end
clear Delta_H
disp('=============================================================');
disp([' Calculating for M_bar = ' num2str(M_bar)]);
Rand_M_bar =unique(Rand_M_bar_all(1:M_bar));
Ht_bar = reshape(Ht(Rand_M_bar,:),M_bar*K_DL,1);
DL_input = single(zeros(M_bar*K_DL*2,No_user_pairs));
DL_output = single(zeros(No_user_pairs,codebook_size));
DL_output_un= single(zeros(numel(Validation_Ind),codebook_size));
Delta_H_bar_max = single(0);
count=0;
for pp = 1:1:numel(Ur_rows_grid)-1
clear DeepMIMO_dataset
disp(['Starting received user access ' num2str(pp)]);
params.active_user_first=Ur_rows_grid(pp);
params.active_user_last=Ur_rows_grid(pp+1)-1;
[DeepMIMO_dataset,params]=DeepMIMO_generator(params);
%% Construct Deep Learning inputs
u_step=100;
Htx=repmat(Ht(:,1),1,u_step);
Hrx=zeros(M,u_step);
for u=1:u_step:params.num_user
for uu=1:1:u_step
Hr = single(conj(DeepMIMO_dataset{1}.user{u+uu-1}.channel));
Hr_bar = reshape(Hr(Rand_M_bar,:),M_bar*K_DL,1);
%--- Constructing the sampled channel
n1=sqrt(noise_power_bar/2)*(randn(M_bar*K_DL,1)+1j*randn(M_bar*K_DL,1));
n2=sqrt(noise_power_bar/2)*(randn(M_bar*K_DL,1)+1j*randn(M_bar*K_DL,1));
H_bar = ((Ht_bar+n1).*(Hr_bar+n2));
DL_input(:,u+uu-1+((pp-1)*params.num_user))= reshape([real(H_bar) imag(H_bar)].',[],1);
Delta_H_bar = max(max(abs(H_bar)));
if Delta_H_bar >= Delta_H_bar_max
Delta_H_bar_max = single(Delta_H_bar);
end
Hrx(:,uu)=Hr(:,1);
end
%--- Actual achievable rate for performance evaluation
H = Htx.*Hrx;
H_BF=H.'*BF_codebook;
SNR_sqrt_var = abs(H_BF);
for uu=1:1:u_step
if sum((Validation_Ind == u+uu-1+((pp-1)*params.num_user)))
count=count+1;
DL_output_un(count,:) = single(sum(log2(1+(SNR*((SNR_sqrt_var(uu,:)).^2))),1));
end
end
%--- Label for the sampled channel
R = single(log2(1+(SNR_sqrt_var/Delta_H_max).^2));
% --- DL output normalization
Delta_Out_max = max(R,[],2);
if ~sum(Delta_Out_max == 0)
Rn=diag(1./Delta_Out_max)*R;
end
DL_output(u+((pp-1)*params.num_user):u+((pp-1)*params.num_user)+u_step-1,:) = 1*Rn; %%%%% Normalized %%%%%
end
end
clear u Delta_H_bar R Rn
%-- Sorting back the DL_output_un
DL_output_un = DL_output_un(VI_rev_sortind,:);
%--- DL input normalization
DL_input= 1*(DL_input/Delta_H_bar_max); %%%%% Normalized from -1->1 %%%%%
%% DL Beamforming
% ------------------ Training and Testing Datasets -----------------%
% Reshape for CNN-LSTM
% Assuming each sample is a sequence of features where each feature vector should be treated as a 1D image (sequence length x 1 x 1)
DL_output_reshaped = reshape(DL_output.', size(DL_output,2), 1, 1, size(DL_output,1));
DL_output_reshaped_un = reshape(DL_output_un.', size(DL_output_un,2), 1, 1, size(DL_output_un,1));
DL_input_reshaped = reshape(DL_input, size(DL_input,1), 1, 1, size(DL_input,2));
for dd=1:numel(Training_Size)
disp([' Calculating for Dataset Size = ' num2str(Training_Size(dd))]);
Training_Ind = RandP_all(1:Training_Size(dd));
% Index the reshaped data for training and validation
XTrain = single(DL_input_reshaped(:,1,:,Training_Ind));
YTrain = single(DL_output_reshaped(:,:,1,Training_Ind));
XValidation = single(DL_input_reshaped(:,1,:,Validation_Ind));
YValidation = single(DL_output_reshaped(:,:,1,Validation_Ind));
YValidation_un = single(DL_output_reshaped_un(:,:,1,:));
%% DL Model definition with adjusted pooling and convolution layers
layers = [
imageInputLayer([size(XTrain,1), 1, 1],'Name','input','Normalization','none')
convolution2dLayer(3, 64, 'Padding', 'same', 'Name', 'conv1')
batchNormalizationLayer('Name', 'bn1')
reluLayer('Name', 'relu1')
maxPooling2dLayer([3,1], 'Stride', [3,1], 'Name', 'maxpool1')
convolution2dLayer(3, 128, 'Padding', 'same', 'Name', 'conv2')
batchNormalizationLayer('Name', 'bn2')
reluLayer('Name', 'relu2')
maxPooling2dLayer([3,1], 'Stride', [3,1], 'Name', 'maxpool2')
convolution2dLayer(3, 256, 'Padding', 'same', 'Name', 'conv3')
batchNormalizationLayer('Name', 'bn3')
reluLayer('Name', 'relu3')
maxPooling2dLayer([3,1], 'Stride', [3,1], 'Name', 'maxpool3')
flattenLayer('Name', 'flatten')
lstmLayer(128, 'Name', 'lstm1', 'OutputMode', 'sequence')
lstmLayer(128, 'Name', 'lstm2', 'OutputMode', 'last')
fullyConnectedLayer(512, 'Name', 'fc1')
reluLayer('Name', 'relu4')
dropoutLayer(0.5, 'Name', 'dropout1')
fullyConnectedLayer(1024, 'Name', 'fc2')
reluLayer('Name', 'relu5')
dropoutLayer(0.5, 'Name', 'dropout2')
fullyConnectedLayer(2048, 'Name', 'fc3')
reluLayer('Name', 'relu6')
dropoutLayer(0.5, 'Name', 'dropout3')
fullyConnectedLayer(size(YTrain,3), 'Name', 'fc4')
regressionLayer('Name', 'output')
];
options = trainingOptions('rmsprop', ...
'MiniBatchSize', miniBatchSize, ...
'MaxEpochs', 20, ...
'InitialLearnRate', 1e-3, ...
'LearnRateSchedule', 'piecewise', ...
'LearnRateDropFactor', 0.5, ...
'LearnRateDropPeriod', 10, ...
'L2Regularization', 1e-4, ...
'Shuffle', 'every-epoch', ...
'ValidationData', {XValidation, YValidation}, ...
'ValidationFrequency', 30, ...
'Verbose', 1, ...
'Plots', 'none', ...
'ExecutionEnvironment', 'cpu');
[~,Indmax_OPT]= max(YValidation,[],3);
Indmax_OPT = squeeze(Indmax_OPT); %Upper bound on achievable rates
MaxR_OPT = single(zeros(numel(Indmax_OPT),1));
[trainedNet,traininfo] = trainNetwork(XTrain,YTrain,layers,options);
YPredicted = predict(trainedNet,XValidation);
% --------------------- Achievable Rate --------------------------%
[~,Indmax_DL] = maxk(YPredicted,kbeams,2);
MaxR_DL = single(zeros(size(Indmax_DL,1),1)); %True achievable rates
for b=1:size(Indmax_DL,1)
MaxR_DL(b) = max(squeeze(YValidation_un(1,1,Indmax_DL(b,:),b)));
MaxR_OPT(b) = squeeze(YValidation_un(1,1,Indmax_OPT(b),b));
end
Rate_OPT(dd) = mean(MaxR_OPT);
Rate_DL(dd) = mean(MaxR_DL);
LastValidationRMSE(dd) = traininfo.ValidationRMSE(end);
clear trainedNet traininfo YPredicted
clear layers options Rate_DL_Temp MaxR_DL_Temp Highest_Rate
end
end
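A hedged note on the error above, not a verified fix: trainNetwork accepts regression responses as an N-by-R matrix (one row per observation, one column per response). With the 4-D reshape used here, size(YTrain,3) is 1, so the size of the final fully connected layer and the responses disagree with what the network actually emits. Passing the labels as a plain matrix and sizing 'fc4' from it is one minimal change to try:
YTrain = single(DL_output(Training_Ind,:)); % N x codebook_size
YValidation = single(DL_output(Validation_Ind,:)); % Nval x codebook_size
% ... and in the layer array:
fullyConnectedLayer(size(YTrain,2), 'Name', 'fc4')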
deep learning, cnn, communication MATLAB Answers — New Questions
Mean and Standard Deviation of outputs on a neural network
I am trying to train a Bayesian neural network with 5 inputs and 4 outputs. In the end, I want a mean prediction for all the outputs and an estimate of the standard deviation. When I run the following code, it says that the network must have an output layer. I am wondering what's incorrect. I have followed this example.
numResponses = 4; % y1 y2 y3 y4
featureDimension = 5; % u1 u2 u3 u4 u5 % with feedback imep
% featureDimension = 4; % u1 u2 u3 u4 u5
maxEpochs = 2; % IMPORTANT PARAMETER
miniBatchSize = 512; % IMPORTANT PARAMETER
addpath('C:\Users\vasu3\Documents\MATLAB\Examples\R2024a\nnet\TrainBayesianNeuralNetworkUsingBayesByBackpropExample')
% architecture
Networklayer_h2df = [ ...
sequenceInputLayer(featureDimension)
fullyConnectedLayer(4*numHiddenUnits1)
reluLayer
bayesFullyConnectedLayer(4*numHiddenUnits1,Sigma1=1,Sigma2=0.5)
reluLayer
fullyConnectedLayer(8*numHiddenUnits1)
reluLayer
gruLayer(LSTMStateNum,'OutputMode','sequence',InputWeightsInitializer='he',RecurrentWeightsInitializer='he')
fullyConnectedLayer(8*numHiddenUnits1)
reluLayer
fullyConnectedLayer(4*numHiddenUnits1)
reluLayer
fullyConnectedLayer(numResponses)
bayesFullyConnectedLayer(numResponses,Sigma1=1,Sigma2=0.5)
];
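A hedged observation based only on the error text: trainNetwork requires the layer array to end in an output layer, whereas the Bayes-by-backprop example trains a dlnetwork in a custom loop and therefore defines none. If the plan is to call trainNetwork, appending a regression output layer is one minimal change to try (a sketch, keeping the architecture above unchanged):
Networklayer_h2df = [ ...
    Networklayer_h2df
    regressionLayer('Name','output')];
If instead the custom training loop from the example is reused, the array should stay as it is and be wrapped with dlnetwork(Networklayer_h2df).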
deep learning, machine learning, neural network MATLAB Answers — New Questions
How to make specific bars hatched with a specific color
I have the array y1, which consists of 5 sets, each of 6 elements. For example, the first set is 0.25 1.14 2.20 0.21 1.09 2.16. I need the last three elements in each set to be cross-hatched with a specific color I choose. How can I do that? My code is below.
x=[1,2,3,4,5];
y1=[0.25 1.14 2.20 0.21 1.09 2.16 ; 0.48 2.26 4.40 0.42 2.20 4.34; 0.72 3.38 6.58 0.74 3.27 5.86 ;1.01 4.56 8.82 0.99 4.34 7.65;1.33 5.76 11.04 1.33 5.50 9.61 ]
figure
h1 = bar(y1);
set(h1, {'DisplayName'}, {'\textbf{Proposed framework without AES}','\textbf{Proposed framework with AES-128}','\textbf{Proposed framework with AES-256}','\textbf{Delay-energy-aware without AES}','\textbf{Delay-energy-aware with AES-128}','\textbf{Delay-energy-aware with AES-256}'}')
set(gca,'TickLabelInterpreter','latex', 'LineWidth', 1,'FontSize',12, 'YMinorTick','on');
legend('Location','northwest','Interpreter','latex', 'FontWeight','bold','FontSize',9.5,...
'FontName','Palatino Linotype',...
'Location','best');
xlabel('$\textbf{Number of tasks}$','FontWeight','bold','FontSize',12,...
'FontName','Palatino Linotype','Interpreter','latex');
ylabel('$\textbf{Total delay [S]}$','FontWeight','bold','FontSize',12,...
'FontName','Palatino Linotype','Interpreter','latex');
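Base MATLAB bars have no hatching property, so one commonly suggested route is the File Exchange function hatchfill2 (an assumption: it must be downloaded separately; it is not core MATLAB). A minimal sketch that cross-hatches the last three series in each group with a color of your choice:
hatchColor = [0.6 0 0]; % any RGB triple you like
for k = 4:6 % series 4-6 are the last three bars of every group
    hatchfill2(h1(k), 'cross', 'HatchColor', hatchColor);
end
An alternative without extra files is to distinguish those series with FaceColor or FaceAlpha instead of hatching.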
hatched bars, matlab bars, matlab MATLAB Answers — New Questions
Cisco Secure Endpoint connector Sentinel integration
Has anyone recently added the data connector for Cisco Secure Endpoint (AMP) (using Azure Functions) and successfully started receiving logs? I’ve tried to use the Azure Resource Manager (ARM) Template multiple times; however, I’ve had no success. I have used this method for adding other connectors without any issue. I spoke with Cisco support, and they stated that the instructions Microsoft provided were not correct. Long story short, Cisco support was unable to help get it connected. Any insight would be helpful. Thanks!
Self-Serve License Request Administration
Hello Microsoft Community!
In our Org, using the PowerShell module MSCommerce, we have already set the AllowSelfServicePurchase policy to Disabled for all products, as we do not want our Members to purchase licenses or sign up for trials on their own.
This has not stopped our Members from making requests for licenses.
We’d like to have our Members utilize our established process for requesting licenses vs using SelfServe.
Is it possible to stop this altogether vs only being able to disable the purchasing function?
Here is an example of our policy status and the weekly digest email we just received.
Can’t delete tasks
Using Outlook > Tasks, I can't delete tasks. They all reappear after a minute. After deleting all of them failed, I tried deleting them one by one, with no success!
I found other discussions on the same issue, but no resolutions.
Is there any default password for administrator in Windows Servers?
Hello,
I would like to understand the risk of default account. Is there any default password for the default account in the Windows Servers below?
Windows Server 2012
Windows Server 2016
Windows Server 2019
Windows Server 2022
Thank you!
Multi input convolutional neural network
How can I implement a three-stream convolutional neural network? I tried to use the following File Exchange submission, which implements a two-stream CNN on the digit database.
https://jp.mathworks.com/matlabcentral/fileexchange/74760-image-classification-using-cnn-with-multi-input
But my database is in folder format. The File Exchange code uses digitTrain4DArrayData as the input, where the images are stored in a table together with the corresponding labels, and I don't know how to map my database onto that code.
My database contains 50 subfolders representing 50 classes. Each class contains 6 images; I have to use 4 images for training and 2 images for testing.
Similarly, I have 2 more databases in the same format.
I need to train my CNN with these three databases separately. Finally, I have to concatenate the results as in the image.
Kindly suggest ways to do this task.
Thanks and regards,
Ramasenthil.
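A hedged starting point for the folder-to-datastore mapping, using standard Deep Learning Toolbox calls (the folder name 'database1' is a placeholder):
imds = imageDatastore('database1', 'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames'); % 50 classes taken from the folder names
[imdsTrain, imdsTest] = splitEachLabel(imds, 4, 2); % 4 training and 2 test images per class
Repeating this for the other two databases gives three train/test splits, one per stream; reading the images out of each datastore (for example with readall) would then replace the digitTrain4DArrayData tables used in the File Exchange example.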
multi input cnn, deep learning, cnn, input layer, image input, concatenation, multi stream cnn, deep learning toolbox, data import, data augmentation, cnn training, image database, convolutional neural network, image processing, image classification, table array MATLAB Answers — New Questions
How can I model a nearly complicated robotic system?
I am trying to model a 5-DOF robotic system to evaluate my derived kinematic and geometric modelling, but this robot includes 5 kinematic chains, which makes it somewhat complicated. The fact that I am new to robotic modelling adds to it. I tried to design a CAD model of the robot in SolidWorks and import it into MATLAB Simulink using the
smimport
command. Although I get no errors, my model doesn't work properly. Every time I try to fix the model and run the Simulink model, I face new problems, such as most of the joints not moving or a broken assembly. I also cannot add it to the workspace with the command
importrobot
I get some errors like "Targets or motion inputs are specified for every joint around a kinematic loop". I'll attach a picture of the robot model, its Simulink-generated model, and the Simulink and SolidWorks model files. I'll be so grateful if anyone could help me or give me a hint.
All of the related files are in the attached zip file.
and all of the related files are in a attached zip file matlab, simulink, robotic, modelling, simulation, simscape, rigidboytree, robot MATLAB Answers — New Questions
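A minimal sketch of a common workaround for that importrobot error, assuming the smimport-generated model is named 'robot_5dof' (a placeholder name). importrobot builds a rigidBodyTree, which represents only tree topologies, and the Simscape Multibody diagnostic quoted above typically means every joint around a closed chain has a prescribed motion or target; leaving at least one joint per loop unconstrained in the Simulink model often clears it:
% 'robot_5dof' is a placeholder for the smimport-generated model name.
% Before importing, remove the motion input / joint target from at least
% one joint in each of the 5 closed chains, so every loop contains an
% unconstrained joint.
open_system('robot_5dof');
robot = importrobot('robot_5dof');   % build a rigidBodyTree from the model
showdetails(robot)                   % inspect the imported bodies and joints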
How to calculate the poles of a fractional transfer function
This is my fractional transfer function, and I want to plot its poles. I tried to use the FOMCON toolbox, but it has no pzmap function. How should I do this?
———
clear; clc;
s = fotf('s');
lamda1 = 0.8;
lamda2 = 0.8;
w0 = 300;
alpha1 = 2*w0;
alpha2 = w0^2;
beta1 = 2*w0;
beta2 = w0^2;
k = 60;
r = 1;
h1 = s^(lamda1+1) + alpha1*s^(lamda1) + alpha2;
h2 = s^(lamda2+1) + beta1*s^(lamda2) + beta2;
m1 = h1*h2 - h2*alpha2 - h1*beta2 + beta2*(s^(lamda1+1) + alpha2);
m2 = h1*h2*k + alpha2*h2*s + alpha1*beta2*s^(lamda1+1);
G = k*h1*h2/(r*m1*s + m2);
———
where G is the transfer function.
Tags: pole, fractional, transfer function, pzmap
MATLAB Answers — New Questions
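A minimal sketch of one common workaround, assuming FOMCON's oustapp() is on the path: approximate the fractional-order model by an integer-order LTI system over a frequency band of interest, then apply the standard pzmap. The band limits and approximation order below are assumed values to tune:
% Oustaloup-filter approximation of G over [wb, wh] rad/s, of order N.
wb = 1e-2;  wh = 1e4;  N = 5;            % assumed values; widen or raise as needed
G_int = oustapp(G, wb, wh, N, 'oust');   % integer-order LTI approximation
pzmap(G_int); grid on                    % poles and zeros of the approximation
Note that this plots the poles of the approximation rather than the exact fractional poles; since every exponent in G above is a multiple of 0.2, an alternative is to treat w = s^0.2 as a commensurate variable and find the roots of the resulting integer-order polynomial in w.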
Need help: emails not showing when switched to the old Outlook app
Hi! I need to back up my school email account since I'm graduating, and the school IT suggested exporting it to a .pst file and then importing it into a new alumni account. I looked up tutorials and realized this can only be done through the old version of the Outlook app, but when I switch back, none of the emails show. I tried to follow the backup process anyway, but the .pst file turned out to be only 375 KB. I'm on Windows, and the new Outlook that I've been using works perfectly. Please help.
Read More
Intune policies do not apply via MDM
We are testing Intune policies in the admin center, but they do not apply to the device specified in the security group that was created.
Read More
Unable to enroll in Microsoft Partner Program
Hello,
I want to enroll in the Microsoft Partner Program, but my country defaults to the US while my company is registered in India. I want to provide Microsoft solutions to global audiences.
Please help.
Read More
TSI Partner Community Update | July 2024
Hello Partners,
Get the latest insights in our TSI July Community Update, with tips on kickstarting your FY25 with Copilot for Microsoft 365, Azure, and Business Applications. Attend a CSP bootcamp, learn about the FY25 Nonprofit Community goals, and read the case study on the Children's Hospital of Philadelphia, a Digital Natives Partner Program participant.
Download the TSI July Community Update
OpenAI’s GPT-4o mini Now Available in API with Vision Capabilities on Azure AI
We recently launched OpenAI’s fastest model, GPT-4o mini, in the Azure OpenAI Studio Playground, simultaneously with OpenAI. The response from our customers has been phenomenal. Today, we are excited to bring this powerful model to even more developers by releasing the GPT-4o mini API with vision support for Global and East US Regional Standard Deployments.
From Playground to API: Expanding Accessibility
Launching GPT-4o mini in the Azure OpenAI Studio Playground provided our customers with the opportunity to experiment and innovate with the latest AI technology. Now, by extending its availability to the API with global and regional pricing, we are empowering developers to seamlessly integrate GPT-4o mini into their applications, leveraging its incredible speed and versatility for a wide range of tasks.
Unlocking New Possibilities with Vision and Text Capabilities
With the addition of vision input capabilities, GPT-4o mini expands its versatility and opens new horizons for developers and businesses. This enhancement allows users to process and analyze visual data, extracting valuable insights and generating comprehensive text outputs. Whether it’s interpreting images or processing documents, GPT-4o mini is designed to handle a wide range of tasks and use cases efficiently.
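As a rough illustration of that vision input path (a sketch, not code from the announcement), the following calls a GPT-4o mini deployment through the Azure OpenAI chat completions REST API from MATLAB; the endpoint, deployment name, API version, key variable, and image file are all placeholders:
% Placeholder connection details.
endpoint   = 'https://YOUR-RESOURCE.openai.azure.com';
deployment = 'gpt-4o-mini';                 % your deployment name
apiVersion = '2024-06-01';                  % assumed API version
apiKey     = getenv('AZURE_OPENAI_KEY');
% Read the image and base64-encode it as a data URL.
fid = fopen('photo.jpg', 'r'); bytes = fread(fid, '*uint8'); fclose(fid);
imgB64 = matlab.net.base64encode(bytes);
% One user message carrying both a text part and an image part.
content = {struct('type', 'text', 'text', 'Describe this image.'), ...
           struct('type', 'image_url', 'image_url', ...
                  struct('url', ['data:image/jpeg;base64,' imgB64]))};
msg  = struct('role', 'user', 'content', {content});
body = struct('messages', {{msg}}, 'max_tokens', 300);
% POST to the deployment's chat completions endpoint.
url  = sprintf('%s/openai/deployments/%s/chat/completions?api-version=%s', ...
               endpoint, deployment, apiVersion);
opts = weboptions('MediaType', 'application/json', ...
                  'HeaderFields', {'api-key', apiKey});
resp = webwrite(url, body, opts);
disp(resp.choices(1).message.content)       % the model's text output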
Flexible Pricing: Regional and Global Options
GPT-4o mini is available for Global Standard deployments in all regions and Standard Regional deployments in East US, with more regions coming soon.
Operating costs can vary significantly across regions due to factors such as data center expenses and local costs for renewable energy. Additionally, the strict compliance and residency guarantees offered by Azure require increased infrastructure investment. To provide customers with the best possible price while maintaining high standards, we are introducing price tiers for Regional Standard and Global Standard deployments of GPT-4o mini. Global Standard provides the lowest price with the highest throughput and is the best starting point for customers without data-processing-location requirements. Regional Standard pricing fluctuates with regional operating costs, giving customers a fair and transparent pricing model that meets data residency and compliance requirements. This approach aligns with how services like Azure VMs already offer regional pricing, allowing for flexibility and cost efficiency tailored to specific regional needs.
Model                           Context   Input (per 1,000 tokens)   Output (per 1,000 tokens)
GPT-4o Global Deployment        128K      $0.005                     $0.015
GPT-4o Regional API             128K      $0.005                     $0.015
GPT-4o mini Global Deployment   128K      $0.00015                   $0.0006
GPT-4o mini Regional API        128K      $0.000165                  $0.00066
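At these rates, for example, one million input tokens on a GPT-4o mini Global Standard deployment cost 1,000 × $0.00015 = $0.15, compared with 1,000 × $0.005 = $5.00 on GPT-4o.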
Key Features and Benefits
Enhanced Vision Input: Leverage the power of GPT-4o mini to process images and videos, enabling applications such as visual recognition, scene understanding, and multimedia content analysis.
Comprehensive Text Output: Generate detailed and contextually accurate text outputs from visual inputs, making it easier to create reports, summaries, and detailed analyses.
Cost-Effective Solutions: Benefit from the cost efficiencies of GPT-4o mini, which is significantly cheaper than previous models, allowing you to deliver high-quality applications at a lower cost. For example, GPT-4o mini offers the quality of GPT-4 Turbo at a price lower than GPT-3.5 Turbo. We are also happy to make the model available in both global and regional standard deployments.
Stay Tuned
Stay tuned for more updates and announcements as we continue to enhance the capabilities of GPT-4o mini. We look forward to seeing the incredible innovations you will create with GPT-4o mini with API access on Azure AI.
Resources
Get all the details about GPT-4o mini on Microsoft Learn. New to Azure? Learn more about Azure OpenAI Service and check out our release newsfeed for the latest enhancements.
Microsoft Tech Community – Latest Blogs – Read More
How to Clean the MATLAB Runtime (MCR) Cache?
How can I clean the MATLAB Runtime cache?
MATLAB Answers — New Questions
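A minimal sketch, assuming the default per-user cache location (folders named .mcrCache<version> under the home directory); the exact location varies by platform and Runtime release and can be overridden with the MCR_CACHE_ROOT environment variable, and all deployed applications should be closed before deleting:
% Resolve the cache root: MCR_CACHE_ROOT if set, otherwise the home folder.
cacheRoot = getenv('MCR_CACHE_ROOT');
if isempty(cacheRoot)
    cacheRoot = char(java.lang.System.getProperty('user.home'));
end
% Remove every .mcrCache* folder (e.g. .mcrCache9.13); the Runtime
% re-extracts the application archive the next time it starts.
d = dir(fullfile(cacheRoot, '.mcrCache*'));
for k = 1:numel(d)
    rmdir(fullfile(d(k).folder, d(k).name), 's');
end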
Why do I get the error "Unrecognized function or variable 'ctfroot'" when running "MATLABWebAppServer.exe"?
Why do I get the error "Unrecognized function or variable 'ctfroot'" when running the Development version of MATLAB Web App Server, "MATLABWebAppServer.exe"? A screenshot of the exact error is attached.
Tags: matlabwebappserver, ctfroot, unrecognized, web, app, server
MATLAB Answers — New Questions