Category: News
How to export and download content search results with New-ComplianceSearchAction on Linux?
I need to do a compliance project. I want to export the content search results to a specific location, and then download them locally. I saw one possible solution like this:
1. Create a new content search and start it:
New-ComplianceSearch "your_descriptive_name" -ExchangeLocation all | Start-ComplianceSearch
2. Export the search results:
New-ComplianceSearchAction "your_descriptive_name" -Export -Format FxStream
3. Once the tool finishes exporting the results, you can run the Get-ComplianceSearchAction cmdlet to find the URL required to download the exported data:
Get-ComplianceSearchAction "your_descriptive_name_export" -IncludeCredential | FL
The results include the two pieces of information you need to download the PSTs: the container URL and the SAS token. Together, they form the full download URL.
4. Use AzCopy to download the results from the URL (container URL plus SAS token) obtained in step 3.
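For step 4, a minimal sketch of the AzCopy call, assuming the container URL and SAS token returned by step 3 (placeholders, not real values; AzCopy v10 also runs on Linux):
azcopy copy "<containerURL>?<SAStoken>" "/path/to/local/folder" --recursive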
Please confirm several questions:
1. Is this solution available on a Linux server?
2. If this solution is available, could someone help by sharing sample data for step 3? I don't know the exact structure of the data.
3. Is there a better solution, please?
The link below is my related question: https://techcommunity.microsoft.com/t5/microsoft-365/how-to-use-powershell-cmdlet-to-export-content-search-results/m-p/4204782#M53470
Any help or guidance would be greatly appreciated!
How can I stop the MathWorks Service Host from running on startup?
cpsd output vector differs from plotted result
When using cpsd to calculate the cross power spectral density, I’ve noticed that the plot produced by directly running the command with no output arguments is slightly different from what I get when I run it with an output argument and plot that output myself.
The following code (attached along with example signal file) produces the figure below.
signalx = randn(2e4,1);
signaly = randn(2e4,1);
window = length(signalx)/3;
noverlap = round(0.9*window);
fs = 2074;
[Pxy,F] = cpsd(signalx,signaly,window,noverlap,[],fs);
plot(F./1000,10.*log10(real(Pxy)))
hold on
cpsd(signalx,signaly,window,noverlap,[],fs);
legend('CPSD output','CPSD direct plot')
I'm trying to run cpsd column-wise on a pair of very large matrices. Obviously I can't just plot them one by one, but I don't know how to interpret this difference and I'm not sure if I trust the output argument. Does anyone know what's going on here? Thanks!
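One difference that may be worth ruling out (my assumption about what the no-output call plots, to be checked against the cpsd documentation): the direct call is believed to plot the magnitude of the estimate in decibels, while the code above plots only its real part. Comparing against the magnitude is a one-line check:
plot(F./1000, 10*log10(abs(Pxy)), '--')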
How can I simulate this paper?
I need help simulating this paper for my final project: "Comparison of artifact correction methods for infant EEG applied to extraction of event-related potential signals."
Please help me to simulate this paper.
Thanks!
Generating multiple page content using Report generator!
I am using Report Generator to generate a document in MATLAB. The document is generated by a form-based generation process, where my template has a number of holes at well-defined positions. I do not have any sections, chapters, or paragraphs to append directly into.
The problem is in reusing the template to generate different pages. I tried adding a pageBreak() statement at the end of the page, with that being the last hole in the report, but I receive the following error: "Error using mlreportgen.dom.Document/append: Unable to append to #end# hole". I also tried adding an additional hole in the template after the page break, but it still fills my first holes' content and then throws the same error.
To be more concise, I simply want to reuse the template (consisting of a single page with multiple holes) and loop over it several times for different data, so that I can capture all the single-page outputs in one document.
Note: The above picture illustrates how the page generating function (defined by 'MesaPointAnalys') terminates.
Note: The above picture illustrates how the loop calls the assignment for each page. MesaPointAnalys is a page generating function and handles different data based on the page number.
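One pattern that may help, sketched here under explicit assumptions (the single-page layout is saved in the Word template as a document part template with the hypothetical name 'PagePart' and the same holes; 'MyTemplate.dotx', numPages, and the content-building call are placeholders): create an mlreportgen.dom.DocumentPart from that part template for each data set, fill its holes, and append each filled part to the main document, so the main document's #end# hole is never reached.
import mlreportgen.dom.*
d = Document('MultiPageReport', 'docx', 'MyTemplate.dotx');   % hypothetical file names
open(d);
for pageNo = 1:numPages                        % numPages: however many pages you need
    part = DocumentPart(d, 'PagePart');        % new instance of the page part template
    holeId = moveToNextHole(part);
    while ~strcmp(holeId, '#end#')
        % Placeholder for however you build the content for this hole and page,
        % e.g. from your MesaPointAnalys results:
        append(part, buildHoleContent(holeId, pageNo));
        holeId = moveToNextHole(part);
    end
    append(d, part);                           % add the filled page to the report
    append(d, PageBreak());                    % next copy starts on a new page
end
close(d);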
Bayes Factor functions/packages
I am using the following two-sample tests for non-normal distributions:
chi2gof
kstest2
ranksum
kruskalwallis
and I would like to calculate the Bayes Factor as well.
I found the bayesFactor toolbox, version 1.0.0 (253 KB), by Bart Krekelberg. However, to the best of my understanding, that package has a limited number of implemented tests:
One sample t-test (bf.ttest)
Two sample t-test (bf.ttest2)
N-Way Anova with fixed and random effects, including continuous co-variates (bf.anova)
Regression (bf.regression)
Pearson Correlation (bf.corr)
Binomial Test (bf.binom)
Experimental Design & Power Analysis (bf.designAnalysis)
and I am not sure whether they can be used as additional analyses alongside the four tests listed above that I am employing.
Does anyone know if there are other Matlab functions/packages to calculate the Bayes factor, in relation to the two-sample tests I am currently using (i.e. the chi2gof, the kstest2, the ranksum, and the kruskalwallis)?
Check Out the Latest Updates on Copilot for Microsoft 365 – July 2024!
Hey everyone!
We’ve got some exciting news! The latest blog post on the Copilot for Microsoft 365 tech community is out, and it’s packed with new features and improvements announced in July 2024.
Read the full blog here: What’s New in Copilot – July 2024
Highlights include:
Enhanced user interface for a more intuitive experience.
New integrations with popular Microsoft 365 apps.
Performance improvements for faster and smoother operations.
Feedback-driven updates that reflect what you’ve been asking for.
Don’t miss out on these updates and more! Head over to the blog now and let us know what you think. Your feedback is invaluable in shaping the future of Copilot.
Happy reading and discussing!
Numbered List Style does not start at 1, is corrupted if changed to 1, and doesn’t Square wrap right
I'm experiencing three anomalies with respect to Word Styles. I have a Style called "Numbered List", which is for a numbered list with indent .2, hanging .2, and numbering starting at 1. (However, in the Styles summary list, it says "numbering style 1, 2, 3 … Alignment .45 Indent .7", and I don't see a way to change that.)
1. When I change paragraphs to the "Numbered List" Style, the correct formatting is applied, but the list numbering continues from the previous list instead of starting at 1.
2. If I right-click and "Restart at 1", the numbering is corrected, but the formatting of the first item is corrupted; it appears to change to .45 and .7.
3. If the numbered list square-wraps around text, the indent isn't correct.
Please see this video for more details: Anomaly Video
Thank you in advance,
Gary
Bar plot with a hatched fill pattern
I have a grouped bar plot, bb:
bb = bar(ax, x, y)
where ‘ax’ is the axis handle, x is a 1×7 datetime vector and y is a 5×7 double vector. For each of the seven dates, I get five bars with data.
I then specify the color of the bars:
for i = 1:5
bb(i).FaceColor = colmapLight(i,:);
bb(i).EdgeColor = colmapDark(i,:);
end
In addition to specifying the colors, I want to use a hatched fill pattern, e.g. horizontal lines in the first two bars in each group, and dots in the last three. I tried using the functions mentioned in this post (https://blogs.mathworks.com/pick/2011/07/15/creating-hatched-patches/), but I haven’t managed to make any of them work. I think the hatchfill function (https://se.mathworks.com/matlabcentral/fileexchange/30733-hatchfill) suits my needs best (I want to keep my custom bar colors; plus I don’t need a bitmap copy of the figure, want to keep it as a fig). However, the function works on ‘patch’ objects and I don’t know how to get their handles. The following:
hPatch = findobj(bb, 'Type', 'patch');
returns an empty, 0x0 GraphicsPlaceholder.
Does anyone know a way to solve this? Thanks in advance!
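For what it's worth, the empty result is expected: since R2014b, bar returns Bar objects rather than patch objects, so there are no patches to find. A minimal numeric sketch of one workaround (assumptions: a newer release with the XEndPoints/YEndPoints properties, the File Exchange hatchfill function on the path, and a plain single-series example rather than your datetime grouped case) is to draw explicit patch rectangles over the bars and hatch those:
figure; axDemo = axes; hold(axDemo, 'on')
b = bar(axDemo, 1:3, [2 4 3], 'FaceColor', 'none');   % bars drawn as outlines only
xc = b.XEndPoints; yt = b.YEndPoints; hw = 0.4;        % half of the default BarWidth 0.8
for k = 1:numel(xc)
    p = patch(axDemo, xc(k) + [-hw hw hw -hw], [0 0 yt(k) yt(k)], 'w', ...
        'FaceColor', 'none', 'EdgeColor', 'k');        % explicit patch over bar k
    hatchfill(p);                                      % hatch the patch (File Exchange function)
end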
access teams meeting recording
Hi, I have a question about accessing Teams meeting recordings through the API. I set up an Azure app and obtained a user token with the permissions OnlineMeetings.ReadWrite and OnlineMeetingRecording.Read.All. I recorded the meeting and saw the recording available for download in the chat channel, but I cannot see any recording info when I request the recording through the API.
I have attached a screenshot of the meeting recording and the empty GET response.
Are there any prerequisites for getting the recording and transcript through the API? Thanks.
Last time a specific data appear
Hello everyone
I have a table where I keep track of which supplement my birds have got. Column A is the date the bird received the supplement. Column B is the type of supplement it got. Column C tells how many days there need to be in between two uses of this type of supplement. Column D gives the first date this supplement can be used again.
This is a simplified version of my table:
Date of use | Type of supplement | Time in between uses (in days) | Date next use
01/04/2024 | Supplement A | 7 | 08/04/2024
03/04/2024 | Supplement B | 1 | 04/04/2024
04/04/2024 | Supplement C | 30 | 04/05/2024
06/04/2024 | Supplement A | 7 | 13/04/2024
07/04/2024 | Supplement B | 1 | 08/04/2024
09/04/2024 | Supplement D | 14 | 23/04/2024
12/04/2024 | Supplement C | 30 | 12/05/2024
As you can see, I used Supplement A on the first of April 2024 and need to wait 7 days before I can give it again, so I can give it again on the eighth of April 2024. But, as you see, I made a mistake and gave Supplement A again on the sixth of April 2024, which is two days too early. Now I want the cell "06/04/2024" to turn red because I used the supplement again too early, so I want to use conditional formatting in this case. I want to write a formula so that Excel searches for the previous use of the supplement in this line, takes the value at the intersection of that row and column D ('Date next use'), and compares it with the 'Date of use' of the current row.
Now the problem I am having is that I can’t find out how to write the formula to find ‘the last use of a supplement’. Can anyone help me out please?
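One possible rule, sketched under assumptions (headers in row 1, data starting in row 2, and a version of Excel that has MAXIFS): applied to the 'Date of use' cells from A2 down, it flags a date that falls before the latest 'Date next use' recorded above it for the same supplement:
=$A2 < MAXIFS($D$1:$D1, $B$1:$B1, $B2)
Select A2 and the cells below it, add a new conditional formatting rule of the type 'Use a formula to determine which cells to format', paste the formula, and pick a red fill; the relative row references grow as the rule moves down the column.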
A big thank you in advance
Benjamin Herremans
Send Email to Email in List on Specific Date
Hey everyone! I am new to Power Automate and could use some assistance.
My goal is to automate an email sent to access card holders two weeks prior to the card expiration.
So far, I have a column set up with the date the email should be sent and the recipient's email address, but I am not sure how to automate sending the email on that date.
Any help would be much appreciated.
cnn-lstm error
Hello everyone,
I get an error when I use a CNN-LSTM network. This is the error:
Error using trainNetwork (line 191)
Invalid training data. The output size (1024) of the last layer does not match the response size (1).
Error in Main_fn (line 266)
[trainedNet,traininfo] = trainNetwork(XTrain,YTrain,layers,options);
Error in Fig12_generator (line 49)
[Rate_DL,Rate_OPT]=Main_fn(L,My_ar,Mz_ar,M_bar,K_DL,Pt,kbeams(rr),Training_Size);
When I use a CNN only, the code runs without error. I have tried everything I can think of to fix it, including changing the shape of YTrain, but the error persists.
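Before the full function below, one thing I would check, as a sketch rather than a verified fix (my assumption is that the network is meant to predict all codebook_size rates per sample): for image-style input with a regression output, trainNetwork also accepts the responses as an N-by-R matrix, with R equal to the size of the last fully connected layer. Inside the training loop that would look something like:
% Hypothetical reshaping of the existing labels into an N-by-R response
% matrix (R = codebook_size) instead of the 4-D YTrain array:
YTrain2 = squeeze(DL_output_reshaped(:, 1, 1, Training_Ind)).';  % N x codebook_size
% ... with the final layer sized to match, e.g.
% fullyConnectedLayer(size(YTrain2, 2), 'Name', 'fc4')
% [trainedNet, traininfo] = trainNetwork(XTrain, YTrain2, layers, options);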
function [Rate_DL,Rate_OPT]=Main_fn(L,My,Mz,M_bar,K_DL,Pt,kbeams,Training_Size)
%% Description:
%
% This is the function called by the main script for plotting Figure 10
% in the original article mentioned below.
%
% version 1.0 (Last edited: 2019-05-10)
%
% The definitions and equations used in this code refer (mostly) to the
% following publication:
%
% Abdelrahman Taha, Muhammad Alrabeiah, and Ahmed Alkhateeb, "Enabling
% Large Intelligent Surfaces with Compressive Sensing and Deep Learning,"
% arXiv e-prints, p. arXiv:1904.10136, Apr 2019.
% [Online]. Available: https://arxiv.org/abs/1904.10136
%
% The DeepMIMO dataset is adopted.
% [Online]. Available: http://deepmimo.net/
%
% License: This code is licensed under a Creative Commons
% Attribution-NonCommercial-ShareAlike 4.0 International License.
% [Online]. Available: https://creativecommons.org/licenses/by-nc-sa/4.0/
% If you in any way use this code for research that results in
% publications, please cite our original article mentioned above.
%% System Model Parameters
params.scenario='O1_28'; % DeepMIMO Dataset scenario: http://deepmimo.net/
params.active_BS=3; % active basestation(/s) in the chosen scenario
D_Lambda = 0.5; % Antenna spacing relative to the wavelength
BW = 100e6; % Bandwidth
Ut_row = 850; % user Ut row number
Ut_element = 90; % user Ut position from the row chosen above
Ur_rows = [1000 1200]; % user Ur rows
Validation_Size = 6200; % Validation dataset Size
K = 512; % number of subcarriers
miniBatchSize = 500; % Size of the minibatch for the Deep Learning
% Note: The axes of the antennas match the axes of the ray-tracing scenario
Mx = 1; % number of LIS reflecting elements across the x axis
M = Mx.*My.*Mz; % Total number of LIS reflecting elements
% Preallocation of output variables
Rate_DL = zeros(1,length(Training_Size));
Rate_OPT = Rate_DL;
LastValidationRMSE = Rate_DL;
%— Accounting for SNR in achievable rate calculations
%— Defining noisy channel measurements
Gt=3; % dBi
Gr=3; % dBi
NF=5; % Noise figure at the User equipment
Process_Gain=10; % Channel estimation processing gain
noise_power_dB=-204+10*log10(BW/K)+NF-Process_Gain; % Noise power in dB
SNR=10^(.1*(-noise_power_dB))*(10^(.1*(Gt+Gr+Pt)))^2; % Signal-to-noise ratio
% channel estimation noise
noise_power_bar=10^(.1*(noise_power_dB))/(10^(.1*(Gt+Gr+Pt)));
No_user_pairs = (Ur_rows(2)-Ur_rows(1))*181; % Number of (Ut,Ur) user pairs
RandP_all = randperm(No_user_pairs).'; % Random permutation of the available dataset
%% Starting the code
disp('======================================================================================================================');
disp([' Calculating for M = ' num2str(M)]);
Rand_M_bar_all = randperm(M);
%% Beamforming Codebook
% BF codebook parameters
over_sampling_x=1; % The beamsteering oversampling factor in the x direction
over_sampling_y=1; % The beamsteering oversampling factor in the y direction
over_sampling_z=1; % The beamsteering oversampling factor in the z direction
% Generating the BF codebook
[BF_codebook]=sqrt(Mx*My*Mz)*UPA_codebook_generator(Mx,My,Mz,over_sampling_x,over_sampling_y,over_sampling_z,D_Lambda);
codebook_size=size(BF_codebook,2);
%% DeepMIMO Dataset Generation
disp('-------------------------------------------------------------');
disp([' Calculating for K_DL = ' num2str(K_DL)]);
% —— Inputs to the DeepMIMO dataset generation code ———— %
% Note: The axes of the antennas match the axes of the ray-tracing scenario
params.num_ant_x= Mx; % Number of the UPA antenna array on the x-axis
params.num_ant_y= My; % Number of the UPA antenna array on the y-axis
params.num_ant_z= Mz; % Number of the UPA antenna array on the z-axis
params.ant_spacing=D_Lambda; % ratio of the wavelength; for half wavelength enter .5
params.bandwidth= BW*1e-9; % The bandwidth in GHz
params.num_OFDM= K; % Number of OFDM subcarriers
params.OFDM_sampling_factor=1; % The constructed channels will be calculated only at the sampled subcarriers (to reduce the size of the dataset)
params.OFDM_limit=K_DL*1; % Only the first params.OFDM_limit subcarriers will be considered when constructing the channels
params.num_paths=L; % Maximum number of paths to be considered (a value between 1 and 25), e.g., choose 1 if you are only interested in the strongest path
params.saveDataset=0;
disp([' Calculating for L = ' num2str(params.num_paths)]);
% —————— DeepMIMO "Ut" Dataset Generation —————–%
params.active_user_first=Ut_row;
params.active_user_last=Ut_row;
DeepMIMO_dataset=DeepMIMO_generator(params);
Ht = single(DeepMIMO_dataset{1}.user{Ut_element}.channel);
clear DeepMIMO_dataset
% —————— DeepMIMO "Ur" Dataset Generation —————–%
%Validation part for the actual achievable rate perf eval
Validation_Ind = RandP_all(end-Validation_Size+1:end);
[~,VI_sortind] = sort(Validation_Ind);
[~,VI_rev_sortind] = sort(VI_sortind);
%initialization
Ur_rows_step = 100; % access the dataset 100 rows at a time
Ur_rows_grid=Ur_rows(1):Ur_rows_step:Ur_rows(2);
Delta_H_max = single(0);
for pp = 1:1:numel(Ur_rows_grid)-1 % loop for Normalizing H
clear DeepMIMO_dataset
params.active_user_first=Ur_rows_grid(pp);
params.active_user_last=Ur_rows_grid(pp+1)-1;
[DeepMIMO_dataset,params]=DeepMIMO_generator(params);
for u=1:params.num_user
Hr = single(conj(DeepMIMO_dataset{1}.user{u}.channel));
Delta_H = max(max(abs(Ht.*Hr)));
if Delta_H >= Delta_H_max
Delta_H_max = single(Delta_H);
end
end
end
clear Delta_H
disp('=============================================================');
disp([' Calculating for M_bar = ' num2str(M_bar)]);
Rand_M_bar =unique(Rand_M_bar_all(1:M_bar));
Ht_bar = reshape(Ht(Rand_M_bar,:),M_bar*K_DL,1);
DL_input = single(zeros(M_bar*K_DL*2,No_user_pairs));
DL_output = single(zeros(No_user_pairs,codebook_size));
DL_output_un= single(zeros(numel(Validation_Ind),codebook_size));
Delta_H_bar_max = single(0);
count=0;
for pp = 1:1:numel(Ur_rows_grid)-1
clear DeepMIMO_dataset
disp(['Starting received user access ' num2str(pp)]);
params.active_user_first=Ur_rows_grid(pp);
params.active_user_last=Ur_rows_grid(pp+1)-1;
[DeepMIMO_dataset,params]=DeepMIMO_generator(params);
%% Construct Deep Learning inputs
u_step=100;
Htx=repmat(Ht(:,1),1,u_step);
Hrx=zeros(M,u_step);
for u=1:u_step:params.num_user
for uu=1:1:u_step
Hr = single(conj(DeepMIMO_dataset{1}.user{u+uu-1}.channel));
Hr_bar = reshape(Hr(Rand_M_bar,:),M_bar*K_DL,1);
%— Constructing the sampled channel
n1=sqrt(noise_power_bar/2)*(randn(M_bar*K_DL,1)+1j*randn(M_bar*K_DL,1));
n2=sqrt(noise_power_bar/2)*(randn(M_bar*K_DL,1)+1j*randn(M_bar*K_DL,1));
H_bar = ((Ht_bar+n1).*(Hr_bar+n2));
DL_input(:,u+uu-1+((pp-1)*params.num_user))= reshape([real(H_bar) imag(H_bar)].',[],1);
Delta_H_bar = max(max(abs(H_bar)));
if Delta_H_bar >= Delta_H_bar_max
Delta_H_bar_max = single(Delta_H_bar);
end
Hrx(:,uu)=Hr(:,1);
end
%— Actual achievable rate for performance evaluation
H = Htx.*Hrx;
H_BF=H.'*BF_codebook;
SNR_sqrt_var = abs(H_BF);
for uu=1:1:u_step
if sum((Validation_Ind == u+uu-1+((pp-1)*params.num_user)))
count=count+1;
DL_output_un(count,:) = single(sum(log2(1+(SNR*((SNR_sqrt_var(uu,:)).^2))),1));
end
end
%— Label for the sampled channel
R = single(log2(1+(SNR_sqrt_var/Delta_H_max).^2));
% — DL output normalization
Delta_Out_max = max(R,[],2);
if ~sum(Delta_Out_max == 0)
Rn=diag(1./Delta_Out_max)*R;
end
DL_output(u+((pp-1)*params.num_user):u+((pp-1)*params.num_user)+u_step-1,:) = 1*Rn; %%%%% Normalized %%%%%
end
end
clear u Delta_H_bar R Rn
%– Sorting back the DL_output_un
DL_output_un = DL_output_un(VI_rev_sortind,:);
%— DL input normalization
DL_input= 1*(DL_input/Delta_H_bar_max); %%%%% Normalized from -1->1 %%%%%
%% DL Beamforming
% —————— Training and Testing Datasets —————–%
% Reshape for CNN-LSTM
% Assuming each sample is a sequence of features where each feature vector should be treated as a 1D image (sequence length x 1 x 1)
DL_output_reshaped = reshape(DL_output.', size(DL_output,2), 1, 1, size(DL_output,1));
DL_output_reshaped_un = reshape(DL_output_un.', size(DL_output_un,2), 1, 1, size(DL_output_un,1));
DL_input_reshaped = reshape(DL_input, size(DL_input,1), 1, 1, size(DL_input,2));
for dd=1:numel(Training_Size)
disp([' Calculating for Dataset Size = ' num2str(Training_Size(dd))]);
Training_Ind = RandP_all(1:Training_Size(dd));
% Index the reshaped data for training and validation
XTrain = single(DL_input_reshaped(:,1,:,Training_Ind));
YTrain = single(DL_output_reshaped(:,:,1,Training_Ind));
XValidation = single(DL_input_reshaped(:,1,:,Validation_Ind));
YValidation = single(DL_output_reshaped(:,:,1,Validation_Ind));
YValidation_un = single(DL_output_reshaped_un(:,:,1,:));
%% DL Model definition with adjusted pooling and convolution layers
layers = [
imageInputLayer([size(XTrain,1), 1, 1],'Name','input','Normalization','none')
convolution2dLayer(3, 64, 'Padding', 'same', 'Name', 'conv1')
batchNormalizationLayer('Name', 'bn1')
reluLayer('Name', 'relu1')
maxPooling2dLayer([3,1], 'Stride', [3,1], 'Name', 'maxpool1')
convolution2dLayer(3, 128, 'Padding', 'same', 'Name', 'conv2')
batchNormalizationLayer('Name', 'bn2')
reluLayer('Name', 'relu2')
maxPooling2dLayer([3,1], 'Stride', [3,1], 'Name', 'maxpool2')
convolution2dLayer(3, 256, 'Padding', 'same', 'Name', 'conv3')
batchNormalizationLayer('Name', 'bn3')
reluLayer('Name', 'relu3')
maxPooling2dLayer([3,1], 'Stride', [3,1], 'Name', 'maxpool3')
flattenLayer('Name', 'flatten')
lstmLayer(128, 'Name', 'lstm1', 'OutputMode', 'sequence')
lstmLayer(128, 'Name', 'lstm2', 'OutputMode', 'last')
fullyConnectedLayer(512, 'Name', 'fc1')
reluLayer('Name', 'relu4')
dropoutLayer(0.5, 'Name', 'dropout1')
fullyConnectedLayer(1024, 'Name', 'fc2')
reluLayer('Name', 'relu5')
dropoutLayer(0.5, 'Name', 'dropout2')
fullyConnectedLayer(2048, 'Name', 'fc3')
reluLayer('Name', 'relu6')
dropoutLayer(0.5, 'Name', 'dropout3')
fullyConnectedLayer(size(YTrain,3), 'Name', 'fc4')
regressionLayer('Name', 'output')
];
options = trainingOptions('rmsprop', ...
'MiniBatchSize', miniBatchSize, ...
'MaxEpochs', 20, ...
'InitialLearnRate', 1e-3, ...
'LearnRateSchedule', 'piecewise', ...
'LearnRateDropFactor', 0.5, ...
'LearnRateDropPeriod', 10, ...
'L2Regularization', 1e-4, ...
'Shuffle', 'every-epoch', ...
'ValidationData', {XValidation, YValidation}, ...
'ValidationFrequency', 30, ...
'Verbose', 1, ...
'Plots', 'none', ...
'ExecutionEnvironment', 'cpu');
[~,Indmax_OPT]= max(YValidation,[],3);
Indmax_OPT = squeeze(Indmax_OPT); %Upper bound on achievable rates
MaxR_OPT = single(zeros(numel(Indmax_OPT),1));
[trainedNet,traininfo] = trainNetwork(XTrain,YTrain,layers,options);
YPredicted = predict(trainedNet,XValidation);
% ——————— Achievable Rate ————————–%
[~,Indmax_DL] = maxk(YPredicted,kbeams,2);
MaxR_DL = single(zeros(size(Indmax_DL,1),1)); %True achievable rates
for b=1:size(Indmax_DL,1)
MaxR_DL(b) = max(squeeze(YValidation_un(1,1,Indmax_DL(b,:),b)));
MaxR_OPT(b) = squeeze(YValidation_un(1,1,Indmax_OPT(b),b));
end
Rate_OPT(dd) = mean(MaxR_OPT);
Rate_DL(dd) = mean(MaxR_DL);
LastValidationRMSE(dd) = traininfo.ValidationRMSE(end);
clear trainedNet traininfo YPredicted
clear layers options Rate_DL_Temp MaxR_DL_Temp Highest_Rate
end
end
i have error whene i use cnn-lstm
this is the error
Error using trainNetwork (line 191)
Invalid training data. The output size (1024) of the last layer does not match the response size (1).
Error in Main_fn (line 266)
[trainedNet,traininfo] = trainNetwork(XTrain,YTrain,layers,options);
Error in Fig12_generator (line 49)
[Rate_DL,Rate_OPT]=Main_fn(L,My_ar,Mz_ar,M_bar,K_DL,Pt,kbeams(rr),Training_Size);
but whene use cnn onle the code run without error and i make every possible to fix it but it not work and i change the shape of YTrain but still the error
function [Rate_DL,Rate_OPT]=Main_fn(L,My,Mz,M_bar,K_DL,Pt,kbeams,Training_Size)
%% Description:
%
% This is the function called by the main script for ploting Figure 10
% in the original article mentioned below.
%
% version 1.0 (Last edited: 2019-05-10)
%
% The definitions and equations used in this code refer (mostly) to the
% following publication:
%
% Abdelrahman Taha, Muhammad Alrabeiah, and Ahmed Alkhateeb, "Enabling
% Large Intelligent Surfaces with Compressive Sensing and Deep Learning,"
% arXiv e-prints, p. arXiv:1904.10136, Apr 2019.
% [Online]. Available: https://arxiv.org/abs/1904.10136
%
% The DeepMIMO dataset is adopted.
% [Online]. Available: http://deepmimo.net/
%
% License: This code is licensed under a Creative Commons
% Attribution-NonCommercial-ShareAlike 4.0 International License.
% [Online]. Available: https://creativecommons.org/licenses/by-nc-sa/4.0/
% If you in any way use this code for research that results in
% publications, please cite our original article mentioned above.
%% System Model Parameters
params.scenario=’O1_28′; % DeepMIMO Dataset scenario: http://deepmimo.net/
params.active_BS=3; % active basestation(/s) in the chosen scenario
D_Lambda = 0.5; % Antenna spacing relative to the wavelength
BW = 100e6; % Bandwidth
Ut_row = 850; % user Ut row number
Ut_element = 90; % user Ut position from the row chosen above
Ur_rows = [1000 1200]; % user Ur rows
Validation_Size = 6200; % Validation dataset Size
K = 512; % number of subcarriers
miniBatchSize = 500; % Size of the minibatch for the Deep Learning
% Note: The axes of the antennas match the axes of the ray-tracing scenario
Mx = 1; % number of LIS reflecting elements across the x axis
M = Mx.*My.*Mz; % Total number of LIS reflecting elements
% Preallocation of output variables
Rate_DL = zeros(1,length(Training_Size));
Rate_OPT = Rate_DL;
LastValidationRMSE = Rate_DL;
%— Accounting SNR in ach rate calculations
%— Definning Noisy channel measurements
Gt=3; % dBi
Gr=3; % dBi
NF=5; % Noise figure at the User equipment
Process_Gain=10; % Channel estimation processing gain
noise_power_dB=-204+10*log10(BW/K)+NF-Process_Gain; % Noise power in dB
SNR=10^(.1*(-noise_power_dB))*(10^(.1*(Gt+Gr+Pt)))^2; % Signal-to-noise ratio
% channel estimation noise
noise_power_bar=10^(.1*(noise_power_dB))/(10^(.1*(Gt+Gr+Pt)));
No_user_pairs = (Ur_rows(2)-Ur_rows(1))*181; % Number of (Ut,Ur) user pairs
RandP_all = randperm(No_user_pairs).’; % Random permutation of the available dataset
%% Starting the code
disp(‘======================================================================================================================’);
disp([‘ Calculating for M = ‘ num2str(M)]);
Rand_M_bar_all = randperm(M);
%% Beamforming Codebook
% BF codebook parameters
over_sampling_x=1; % The beamsteering oversampling factor in the x direction
over_sampling_y=1; % The beamsteering oversampling factor in the y direction
over_sampling_z=1; % The beamsteering oversampling factor in the z direction
% Generating the BF codebook
[BF_codebook]=sqrt(Mx*My*Mz)*UPA_codebook_generator(Mx,My,Mz,over_sampling_x,over_sampling_y,over_sampling_z,D_Lambda);
codebook_size=size(BF_codebook,2);
%% DeepMIMO Dataset Generation
disp(‘————————————————————-‘);
disp([‘ Calculating for K_DL = ‘ num2str(K_DL)]);
% —— Inputs to the DeepMIMO dataset generation code ———— %
% Note: The axes of the antennas match the axes of the ray-tracing scenario
params.num_ant_x= Mx; % Number of the UPA antenna array on the x-axis
params.num_ant_y= My; % Number of the UPA antenna array on the y-axis
params.num_ant_z= Mz; % Number of the UPA antenna array on the z-axis
params.ant_spacing=D_Lambda; % ratio of the wavelnegth; for half wavelength enter .5
params.bandwidth= BW*1e-9; % The bandiwdth in GHz
params.num_OFDM= K; % Number of OFDM subcarriers
params.OFDM_sampling_factor=1; % The constructed channels will be calculated only at the sampled subcarriers (to reduce the size of the dataset)
params.OFDM_limit=K_DL*1; % Only the first params.OFDM_limit subcarriers will be considered when constructing the channels
params.num_paths=L; % Maximum number of paths to be considered (a value between 1 and 25), e.g., choose 1 if you are only interested in the strongest path
params.saveDataset=0;
disp([‘ Calculating for L = ‘ num2str(params.num_paths)]);
% —————— DeepMIMO "Ut" Dataset Generation —————–%
params.active_user_first=Ut_row;
params.active_user_last=Ut_row;
DeepMIMO_dataset=DeepMIMO_generator(params);
Ht = single(DeepMIMO_dataset{1}.user{Ut_element}.channel);
clear DeepMIMO_dataset
% —————— DeepMIMO "Ur" Dataset Generation —————–%
%Validation part for the actual achievable rate perf eval
Validation_Ind = RandP_all(end-Validation_Size+1:end);
[~,VI_sortind] = sort(Validation_Ind);
[~,VI_rev_sortind] = sort(VI_sortind);
%initialization
Ur_rows_step = 100; % access the dataset 100 rows at a time
Ur_rows_grid=Ur_rows(1):Ur_rows_step:Ur_rows(2);
Delta_H_max = single(0);
for pp = 1:1:numel(Ur_rows_grid)-1 % loop for Normalizing H
clear DeepMIMO_dataset
params.active_user_first=Ur_rows_grid(pp);
params.active_user_last=Ur_rows_grid(pp+1)-1;
[DeepMIMO_dataset,params]=DeepMIMO_generator(params);
for u=1:params.num_user
Hr = single(conj(DeepMIMO_dataset{1}.user{u}.channel));
Delta_H = max(max(abs(Ht.*Hr)));
if Delta_H >= Delta_H_max
Delta_H_max = single(Delta_H);
end
end
end
clear Delta_H
disp(‘=============================================================’);
disp([‘ Calculating for M_bar = ‘ num2str(M_bar)]);
Rand_M_bar =unique(Rand_M_bar_all(1:M_bar));
Ht_bar = reshape(Ht(Rand_M_bar,:),M_bar*K_DL,1);
DL_input = single(zeros(M_bar*K_DL*2,No_user_pairs));
DL_output = single(zeros(No_user_pairs,codebook_size));
DL_output_un= single(zeros(numel(Validation_Ind),codebook_size));
Delta_H_bar_max = single(0);
count=0;
for pp = 1:1:numel(Ur_rows_grid)-1
clear DeepMIMO_dataset
disp([‘Starting received user access ‘ num2str(pp)]);
params.active_user_first=Ur_rows_grid(pp);
params.active_user_last=Ur_rows_grid(pp+1)-1;
[DeepMIMO_dataset,params]=DeepMIMO_generator(params);
%% Construct Deep Learning inputs
u_step=100;
Htx=repmat(Ht(:,1),1,u_step);
Hrx=zeros(M,u_step);
for u=1:u_step:params.num_user
for uu=1:1:u_step
Hr = single(conj(DeepMIMO_dataset{1}.user{u+uu-1}.channel));
Hr_bar = reshape(Hr(Rand_M_bar,:),M_bar*K_DL,1);
%— Constructing the sampled channel
n1=sqrt(noise_power_bar/2)*(randn(M_bar*K_DL,1)+1j*randn(M_bar*K_DL,1));
n2=sqrt(noise_power_bar/2)*(randn(M_bar*K_DL,1)+1j*randn(M_bar*K_DL,1));
H_bar = ((Ht_bar+n1).*(Hr_bar+n2));
DL_input(:,u+uu-1+((pp-1)*params.num_user))= reshape([real(H_bar) imag(H_bar)].’,[],1);
Delta_H_bar = max(max(abs(H_bar)));
if Delta_H_bar >= Delta_H_bar_max
Delta_H_bar_max = single(Delta_H_bar);
end
Hrx(:,uu)=Hr(:,1);
end
%— Actual achievable rate for performance evaluation
H = Htx.*Hrx;
H_BF=H.’*BF_codebook;
SNR_sqrt_var = abs(H_BF);
for uu=1:1:u_step
if sum((Validation_Ind == u+uu-1+((pp-1)*params.num_user)))
count=count+1;
DL_output_un(count,:) = single(sum(log2(1+(SNR*((SNR_sqrt_var(uu,:)).^2))),1));
end
end
%— Label for the sampled channel
R = single(log2(1+(SNR_sqrt_var/Delta_H_max).^2));
% — DL output normalization
Delta_Out_max = max(R,[],2);
if ~sum(Delta_Out_max == 0)
Rn=diag(1./Delta_Out_max)*R;
end
DL_output(u+((pp-1)*params.num_user):u+((pp-1)*params.num_user)+u_step-1,:) = 1*Rn; %%%%% Normalized %%%%%
end
end
clear u Delta_H_bar R Rn
%– Sorting back the DL_output_un
DL_output_un = DL_output_un(VI_rev_sortind,:);
%— DL input normalization
DL_input= 1*(DL_input/Delta_H_bar_max); %%%%% Normalized from -1->1 %%%%%
%% DL Beamforming
% —————— Training and Testing Datasets —————–%
% Reshape for CNN-LSTM
% Assuming each sample is a sequence of features where each feature vector should be treated as a 1D image (sequence length x 1 x 1)
DL_output_reshaped = reshape(DL_output.’, size(DL_output,2), 1, 1, size(DL_output,1));
DL_output_reshaped_un = reshape(DL_output_un.’, size(DL_output_un,2), 1, 1, size(DL_output_un,1));
DL_input_reshaped = reshape(DL_input, size(DL_input,1), 1, 1, size(DL_input,2));
for dd=1:numel(Training_Size)
disp([‘ Calculating for Dataset Size = ‘ num2str(Training_Size(dd))]);
Training_Ind = RandP_all(1:Training_Size(dd));
% Index the reshaped data for training and validation
XTrain = single(DL_input_reshaped(:,1,:,Training_Ind));
YTrain = single(DL_output_reshaped(:,:,1,Training_Ind));
XValidation = single(DL_input_reshaped(:,1,:,Validation_Ind));
YValidation = single(DL_output_reshaped(:,:,1,Validation_Ind));
YValidation_un = single(DL_output_reshaped_un(:,:,1,:));
%% DL Model definition with adjusted pooling and convolution layers
layers = [
imageInputLayer([size(XTrain,1), 1, 1],’Name’,’input’,’Normalization’,’none’)
convolution2dLayer(3, 64, ‘Padding’, ‘same’, ‘Name’, ‘conv1’)
batchNormalizationLayer(‘Name’, ‘bn1’)
reluLayer(‘Name’, ‘relu1’)
maxPooling2dLayer([3,1], ‘Stride’, [3,1], ‘Name’, ‘maxpool1’)
convolution2dLayer(3, 128, ‘Padding’, ‘same’, ‘Name’, ‘conv2’)
batchNormalizationLayer(‘Name’, ‘bn2’)
reluLayer(‘Name’, ‘relu2’)
maxPooling2dLayer([3,1], ‘Stride’, [3,1], ‘Name’, ‘maxpool2’)
convolution2dLayer(3, 256, ‘Padding’, ‘same’, ‘Name’, ‘conv3’)
batchNormalizationLayer(‘Name’, ‘bn3’)
reluLayer(‘Name’, ‘relu3’)
maxPooling2dLayer([3,1], ‘Stride’, [3,1], ‘Name’, ‘maxpool3’)
flattenLayer(‘Name’, ‘flatten’)
lstmLayer(128, ‘Name’, ‘lstm1’, ‘OutputMode’, ‘sequence’)
lstmLayer(128, ‘Name’, ‘lstm2’, ‘OutputMode’, ‘last’)
fullyConnectedLayer(512, ‘Name’, ‘fc1’)
reluLayer(‘Name’, ‘relu4’)
dropoutLayer(0.5, ‘Name’, ‘dropout1’)
fullyConnectedLayer(1024, ‘Name’, ‘fc2’)
reluLayer(‘Name’, ‘relu5’)
dropoutLayer(0.5, ‘Name’, ‘dropout2’)
fullyConnectedLayer(2048, ‘Name’, ‘fc3’)
reluLayer(‘Name’, ‘relu6’)
dropoutLayer(0.5, ‘Name’, ‘dropout3’)
fullyConnectedLayer(size(YTrain,3), ‘Name’, ‘fc4’)
regressionLayer(‘Name’, ‘output’)
];
options = trainingOptions(‘rmsprop’, …
‘MiniBatchSize’, miniBatchSize, …
‘MaxEpochs’, 20, …
‘InitialLearnRate’, 1e-3, …
‘LearnRateSchedule’, ‘piecewise’, …
‘LearnRateDropFactor’, 0.5, …
‘LearnRateDropPeriod’, 10, …
‘L2Regularization’, 1e-4, …
‘Shuffle’, ‘every-epoch’, …
‘ValidationData’, {XValidation, YValidation}, …
‘ValidationFrequency’, 30, …
‘Verbose’, 1, …
‘Plots’, ‘none’, …
‘ExecutionEnvironment’, ‘cpu’);
[~,Indmax_OPT]= max(YValidation,[],3);
Indmax_OPT = squeeze(Indmax_OPT); %Upper bound on achievable rates
MaxR_OPT = single(zeros(numel(Indmax_OPT),1));
[trainedNet,traininfo] = trainNetwork(XTrain,YTrain,layers,options);
YPredicted = predict(trainedNet,XValidation);
% ——————— Achievable Rate ————————–%
[~,Indmax_DL] = maxk(YPredicted,kbeams,2);
MaxR_DL = single(zeros(size(Indmax_DL,1),1)); %True achievable rates
for b=1:size(Indmax_DL,1)
MaxR_DL(b) = max(squeeze(YValidation_un(1,1,Indmax_DL(b,:),b)));
MaxR_OPT(b) = squeeze(YValidation_un(1,1,Indmax_OPT(b),b));
end
Rate_OPT(dd) = mean(MaxR_OPT);
Rate_DL(dd) = mean(MaxR_DL);
LastValidationRMSE(dd) = traininfo.ValidationRMSE(end);
clear trainedNet traininfo YPredicted
clear layers options Rate_DL_Temp MaxR_DL_Temp Highest_Rate
end
end hello everyone
i have error whene i use cnn-lstm
this is the error
Error using trainNetwork (line 191)
Invalid training data. The output size (1024) of the last layer does not match the response size (1).
Error in Main_fn (line 266)
[trainedNet,traininfo] = trainNetwork(XTrain,YTrain,layers,options);
Error in Fig12_generator (line 49)
[Rate_DL,Rate_OPT]=Main_fn(L,My_ar,Mz_ar,M_bar,K_DL,Pt,kbeams(rr),Training_Size);
but whene use cnn onle the code run without error and i make every possible to fix it but it not work and i change the shape of YTrain but still the error
function [Rate_DL,Rate_OPT]=Main_fn(L,My,Mz,M_bar,K_DL,Pt,kbeams,Training_Size)
%% Description:
%
% This is the function called by the main script for ploting Figure 10
% in the original article mentioned below.
%
% version 1.0 (Last edited: 2019-05-10)
%
% The definitions and equations used in this code refer (mostly) to the
% following publication:
%
% Abdelrahman Taha, Muhammad Alrabeiah, and Ahmed Alkhateeb, "Enabling
% Large Intelligent Surfaces with Compressive Sensing and Deep Learning,"
% arXiv e-prints, p. arXiv:1904.10136, Apr 2019.
% [Online]. Available: https://arxiv.org/abs/1904.10136
%
% The DeepMIMO dataset is adopted.
% [Online]. Available: http://deepmimo.net/
%
% License: This code is licensed under a Creative Commons
% Attribution-NonCommercial-ShareAlike 4.0 International License.
% [Online]. Available: https://creativecommons.org/licenses/by-nc-sa/4.0/
% If you in any way use this code for research that results in
% publications, please cite our original article mentioned above.
%% System Model Parameters
params.scenario=’O1_28′; % DeepMIMO Dataset scenario: http://deepmimo.net/
params.active_BS=3; % active basestation(/s) in the chosen scenario
D_Lambda = 0.5; % Antenna spacing relative to the wavelength
BW = 100e6; % Bandwidth
Ut_row = 850; % user Ut row number
Ut_element = 90; % user Ut position from the row chosen above
Ur_rows = [1000 1200]; % user Ur rows
Validation_Size = 6200; % Validation dataset Size
K = 512; % number of subcarriers
miniBatchSize = 500; % Size of the minibatch for the Deep Learning
% Note: The axes of the antennas match the axes of the ray-tracing scenario
Mx = 1; % number of LIS reflecting elements across the x axis
M = Mx.*My.*Mz; % Total number of LIS reflecting elements
% Preallocation of output variables
Rate_DL = zeros(1,length(Training_Size));
Rate_OPT = Rate_DL;
LastValidationRMSE = Rate_DL;
%— Accounting SNR in ach rate calculations
%— Definning Noisy channel measurements
Gt=3; % dBi
Gr=3; % dBi
NF=5; % Noise figure at the User equipment
Process_Gain=10; % Channel estimation processing gain
noise_power_dB=-204+10*log10(BW/K)+NF-Process_Gain; % Noise power in dB
SNR=10^(.1*(-noise_power_dB))*(10^(.1*(Gt+Gr+Pt)))^2; % Signal-to-noise ratio
% channel estimation noise
noise_power_bar=10^(.1*(noise_power_dB))/(10^(.1*(Gt+Gr+Pt)));
No_user_pairs = (Ur_rows(2)-Ur_rows(1))*181; % Number of (Ut,Ur) user pairs
RandP_all = randperm(No_user_pairs).’; % Random permutation of the available dataset
%% Starting the code
disp(‘======================================================================================================================’);
disp([‘ Calculating for M = ‘ num2str(M)]);
Rand_M_bar_all = randperm(M);
%% Beamforming Codebook
% BF codebook parameters
over_sampling_x=1; % The beamsteering oversampling factor in the x direction
over_sampling_y=1; % The beamsteering oversampling factor in the y direction
over_sampling_z=1; % The beamsteering oversampling factor in the z direction
% Generating the BF codebook
[BF_codebook]=sqrt(Mx*My*Mz)*UPA_codebook_generator(Mx,My,Mz,over_sampling_x,over_sampling_y,over_sampling_z,D_Lambda);
codebook_size=size(BF_codebook,2);
%% DeepMIMO Dataset Generation
disp(‘————————————————————-‘);
disp([‘ Calculating for K_DL = ‘ num2str(K_DL)]);
% —— Inputs to the DeepMIMO dataset generation code ———— %
% Note: The axes of the antennas match the axes of the ray-tracing scenario
params.num_ant_x= Mx; % Number of the UPA antenna array on the x-axis
params.num_ant_y= My; % Number of the UPA antenna array on the y-axis
params.num_ant_z= Mz; % Number of the UPA antenna array on the z-axis
params.ant_spacing=D_Lambda; % ratio of the wavelnegth; for half wavelength enter .5
params.bandwidth= BW*1e-9; % The bandiwdth in GHz
params.num_OFDM= K; % Number of OFDM subcarriers
params.OFDM_sampling_factor=1; % The constructed channels will be calculated only at the sampled subcarriers (to reduce the size of the dataset)
params.OFDM_limit=K_DL*1; % Only the first params.OFDM_limit subcarriers will be considered when constructing the channels
params.num_paths=L; % Maximum number of paths to be considered (a value between 1 and 25), e.g., choose 1 if you are only interested in the strongest path
params.saveDataset=0;
disp([‘ Calculating for L = ‘ num2str(params.num_paths)]);
% —————— DeepMIMO "Ut" Dataset Generation —————–%
params.active_user_first=Ut_row;
params.active_user_last=Ut_row;
DeepMIMO_dataset=DeepMIMO_generator(params);
Ht = single(DeepMIMO_dataset{1}.user{Ut_element}.channel);
clear DeepMIMO_dataset
% —————— DeepMIMO "Ur" Dataset Generation —————–%
%Validation part for the actual achievable rate perf eval
Validation_Ind = RandP_all(end-Validation_Size+1:end);
[~,VI_sortind] = sort(Validation_Ind);
[~,VI_rev_sortind] = sort(VI_sortind);
%initialization
Ur_rows_step = 100; % access the dataset 100 rows at a time
Ur_rows_grid=Ur_rows(1):Ur_rows_step:Ur_rows(2);
Delta_H_max = single(0);
for pp = 1:1:numel(Ur_rows_grid)-1 % loop for Normalizing H
clear DeepMIMO_dataset
params.active_user_first=Ur_rows_grid(pp);
params.active_user_last=Ur_rows_grid(pp+1)-1;
[DeepMIMO_dataset,params]=DeepMIMO_generator(params);
for u=1:params.num_user
Hr = single(conj(DeepMIMO_dataset{1}.user{u}.channel));
Delta_H = max(max(abs(Ht.*Hr)));
if Delta_H >= Delta_H_max
Delta_H_max = single(Delta_H);
end
end
end
clear Delta_H
disp(‘=============================================================’);
disp([‘ Calculating for M_bar = ‘ num2str(M_bar)]);
Rand_M_bar =unique(Rand_M_bar_all(1:M_bar));
Ht_bar = reshape(Ht(Rand_M_bar,:),M_bar*K_DL,1);
DL_input = single(zeros(M_bar*K_DL*2,No_user_pairs));
DL_output = single(zeros(No_user_pairs,codebook_size));
DL_output_un= single(zeros(numel(Validation_Ind),codebook_size));
Delta_H_bar_max = single(0);
count=0;
for pp = 1:1:numel(Ur_rows_grid)-1
clear DeepMIMO_dataset
disp([‘Starting received user access ‘ num2str(pp)]);
params.active_user_first=Ur_rows_grid(pp);
params.active_user_last=Ur_rows_grid(pp+1)-1;
[DeepMIMO_dataset,params]=DeepMIMO_generator(params);
%% Construct Deep Learning inputs
u_step=100;
Htx=repmat(Ht(:,1),1,u_step);
Hrx=zeros(M,u_step);
for u=1:u_step:params.num_user
for uu=1:1:u_step
Hr = single(conj(DeepMIMO_dataset{1}.user{u+uu-1}.channel));
Hr_bar = reshape(Hr(Rand_M_bar,:),M_bar*K_DL,1);
%— Constructing the sampled channel
n1=sqrt(noise_power_bar/2)*(randn(M_bar*K_DL,1)+1j*randn(M_bar*K_DL,1));
n2=sqrt(noise_power_bar/2)*(randn(M_bar*K_DL,1)+1j*randn(M_bar*K_DL,1));
H_bar = ((Ht_bar+n1).*(Hr_bar+n2));
DL_input(:,u+uu-1+((pp-1)*params.num_user))= reshape([real(H_bar) imag(H_bar)].’,[],1);
Delta_H_bar = max(max(abs(H_bar)));
if Delta_H_bar >= Delta_H_bar_max
Delta_H_bar_max = single(Delta_H_bar);
end
Hrx(:,uu)=Hr(:,1);
end
%— Actual achievable rate for performance evaluation
H = Htx.*Hrx;
H_BF=H.’*BF_codebook;
SNR_sqrt_var = abs(H_BF);
for uu=1:1:u_step
if sum((Validation_Ind == u+uu-1+((pp-1)*params.num_user)))
count=count+1;
DL_output_un(count,:) = single(sum(log2(1+(SNR*((SNR_sqrt_var(uu,:)).^2))),1));
end
end
%— Label for the sampled channel
R = single(log2(1+(SNR_sqrt_var/Delta_H_max).^2));
% — DL output normalization
Delta_Out_max = max(R,[],2);
if ~sum(Delta_Out_max == 0)
Rn=diag(1./Delta_Out_max)*R;
end
DL_output(u+((pp-1)*params.num_user):u+((pp-1)*params.num_user)+u_step-1,:) = 1*Rn; %%%%% Normalized %%%%%
end
end
clear u Delta_H_bar R Rn
%-- Sorting back the DL_output_un
DL_output_un = DL_output_un(VI_rev_sortind,:);
%--- DL input normalization
DL_input= 1*(DL_input/Delta_H_bar_max); %%%%% Normalized from -1->1 %%%%%
%% DL Beamforming
% ------------------ Training and Testing Datasets -----------------%
% Reshape for CNN-LSTM
% Assuming each sample is a sequence of features where each feature vector should be treated as a 1D image (sequence length x 1 x 1)
DL_output_reshaped = reshape(DL_output.', size(DL_output,2), 1, 1, size(DL_output,1));
DL_output_reshaped_un = reshape(DL_output_un.', size(DL_output_un,2), 1, 1, size(DL_output_un,1));
DL_input_reshaped = reshape(DL_input, size(DL_input,1), 1, 1, size(DL_input,2));
for dd=1:numel(Training_Size)
disp([' Calculating for Dataset Size = ' num2str(Training_Size(dd))]);
Training_Ind = RandP_all(1:Training_Size(dd));
% Index the reshaped data for training and validation
XTrain = single(DL_input_reshaped(:,1,:,Training_Ind));
YTrain = single(DL_output_reshaped(:,:,1,Training_Ind));
XValidation = single(DL_input_reshaped(:,1,:,Validation_Ind));
YValidation = single(DL_output_reshaped(:,:,1,Validation_Ind));
YValidation_un = single(DL_output_reshaped_un(:,:,1,:));
%% DL Model definition with adjusted pooling and convolution layers
layers = [
    imageInputLayer([size(XTrain,1), 1, 1], 'Name', 'input', 'Normalization', 'none')
    convolution2dLayer(3, 64, 'Padding', 'same', 'Name', 'conv1')
    batchNormalizationLayer('Name', 'bn1')
    reluLayer('Name', 'relu1')
    maxPooling2dLayer([3,1], 'Stride', [3,1], 'Name', 'maxpool1')
    convolution2dLayer(3, 128, 'Padding', 'same', 'Name', 'conv2')
    batchNormalizationLayer('Name', 'bn2')
    reluLayer('Name', 'relu2')
    maxPooling2dLayer([3,1], 'Stride', [3,1], 'Name', 'maxpool2')
    convolution2dLayer(3, 256, 'Padding', 'same', 'Name', 'conv3')
    batchNormalizationLayer('Name', 'bn3')
    reluLayer('Name', 'relu3')
    maxPooling2dLayer([3,1], 'Stride', [3,1], 'Name', 'maxpool3')
    flattenLayer('Name', 'flatten')
    lstmLayer(128, 'Name', 'lstm1', 'OutputMode', 'sequence')
    lstmLayer(128, 'Name', 'lstm2', 'OutputMode', 'last')
    fullyConnectedLayer(512, 'Name', 'fc1')
    reluLayer('Name', 'relu4')
    dropoutLayer(0.5, 'Name', 'dropout1')
    fullyConnectedLayer(1024, 'Name', 'fc2')
    reluLayer('Name', 'relu5')
    dropoutLayer(0.5, 'Name', 'dropout2')
    fullyConnectedLayer(2048, 'Name', 'fc3')
    reluLayer('Name', 'relu6')
    dropoutLayer(0.5, 'Name', 'dropout3')
    fullyConnectedLayer(size(YTrain,3), 'Name', 'fc4')
    regressionLayer('Name', 'output')
    ];
options = trainingOptions('rmsprop', ...
    'MiniBatchSize', miniBatchSize, ...
    'MaxEpochs', 20, ...
    'InitialLearnRate', 1e-3, ...
    'LearnRateSchedule', 'piecewise', ...
    'LearnRateDropFactor', 0.5, ...
    'LearnRateDropPeriod', 10, ...
    'L2Regularization', 1e-4, ...
    'Shuffle', 'every-epoch', ...
    'ValidationData', {XValidation, YValidation}, ...
    'ValidationFrequency', 30, ...
    'Verbose', 1, ...
    'Plots', 'none', ...
    'ExecutionEnvironment', 'cpu');
[~,Indmax_OPT]= max(YValidation,[],3);
Indmax_OPT = squeeze(Indmax_OPT); %Upper bound on achievable rates
MaxR_OPT = single(zeros(numel(Indmax_OPT),1));
[trainedNet,traininfo] = trainNetwork(XTrain,YTrain,layers,options);
YPredicted = predict(trainedNet,XValidation);
% --------------------- Achievable Rate --------------------------%
[~,Indmax_DL] = maxk(YPredicted,kbeams,2);
MaxR_DL = single(zeros(size(Indmax_DL,1),1)); %True achievable rates
for b=1:size(Indmax_DL,1)
MaxR_DL(b) = max(squeeze(YValidation_un(1,1,Indmax_DL(b,:),b)));
MaxR_OPT(b) = squeeze(YValidation_un(1,1,Indmax_OPT(b),b));
end
Rate_OPT(dd) = mean(MaxR_OPT);
Rate_DL(dd) = mean(MaxR_DL);
LastValidationRMSE(dd) = traininfo.ValidationRMSE(end);
clear trainedNet traininfo YPredicted
clear layers options Rate_DL_Temp MaxR_DL_Temp Highest_Rate
end
end
deep learning, cnn, communication MATLAB Answers — New Questions
Mean and Standard Deviation of outputs on a neural network
I am trying to train a Bayesian neural network with 5 inputs and 4 outputs. In the end, I want to have a mean prediction for all the outputs and an estimate of the standard deviation. When I run the following code, it says that the network must have an output layer. I am wondering what is incorrect. I have followed this example.
numResponses = 4; % y1 y2 y3 y4
featureDimension = 5; % u1 u2 u3 u4 u5 % with feedback imep
% featureDimension = 4; % u1 u2 u3 u4 u5
maxEpochs = 2; % IMPORTANT PARAMETER
miniBatchSize = 512; % IMPORTANT PARAMETER
addpath('C:\Users\vasu3\Documents\MATLAB\Examples\R2024a\nnet\TrainBayesianNeuralNetworkUsingBayesByBackpropExample')
% architecture
Networklayer_h2df = [ ...
    sequenceInputLayer(featureDimension)
    fullyConnectedLayer(4*numHiddenUnits1)
    reluLayer
    bayesFullyConnectedLayer(4*numHiddenUnits1,Sigma1=1,Sigma2=0.5)
    reluLayer
    fullyConnectedLayer(8*numHiddenUnits1)
    reluLayer
    gruLayer(LSTMStateNum,'OutputMode','sequence',InputWeightsInitializer='he',RecurrentWeightsInitializer='he')
    fullyConnectedLayer(8*numHiddenUnits1)
    reluLayer
    fullyConnectedLayer(4*numHiddenUnits1)
    reluLayer
    fullyConnectedLayer(numResponses)
    bayesFullyConnectedLayer(numResponses,Sigma1=1,Sigma2=0.5)
    ];
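If the goal is to train this with trainNetwork, a likely cause of that error is that the layer array does not end in an output layer; trainNetwork requires one, and for a 4-response regression problem that is typically a regressionLayer. Below is a minimal sketch of that requirement only, with numHiddenUnits1 as a placeholder value and the Bayes-specific layers omitted. If you instead follow the Bayes-by-backprop example's custom training loop on a dlnetwork, no output layer is used, but then trainNetwork is not the training function.
% Minimal sketch: trainNetwork needs the layer array to end in an output layer
numResponses = 4;        % y1 y2 y3 y4
featureDimension = 5;    % u1 u2 u3 u4 u5
numHiddenUnits1 = 32;    % placeholder value for illustration only
layersSketch = [ ...
    sequenceInputLayer(featureDimension)
    fullyConnectedLayer(4*numHiddenUnits1)
    reluLayer
    fullyConnectedLayer(numResponses)
    regressionLayer];    % <- the output layer trainNetwork is asking for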
deep learning, machine learning, neural network MATLAB Answers — New Questions
How to make a specific bar to be hatched with a specific color
I have the array y1, which consists of 5 sets of 6 elements each. For example, the first set is 0.25 1.14 2.20 0.21 1.09 2.16. I need the bars for the last three elements in each set to be cross-hatched with a specific color that I choose. How can I do that? My code is below.
x=[1,2,3,4,5];
y1=[0.25 1.14 2.20 0.21 1.09 2.16; 0.48 2.26 4.40 0.42 2.20 4.34; 0.72 3.38 6.58 0.74 3.27 5.86; 1.01 4.56 8.82 0.99 4.34 7.65; 1.33 5.76 11.04 1.33 5.50 9.61];
figure
h1 = bar(y1);
set(h1, {'DisplayName'}, {'\textbf{Proposed framework without AES}','\textbf{Proposed framework with AES-128}','\textbf{Proposed framework with AES-256}','\textbf{Delay-energy-aware without AES}','\textbf{Delay-energy-aware with AES-128}','\textbf{Delay-energy-aware with AES-256}'}')
set(gca,'TickLabelInterpreter','latex', 'LineWidth', 1,'FontSize',12, 'YMinorTick','on');
legend('Interpreter','latex', 'FontWeight','bold','FontSize',9.5,...
    'FontName','Palatino Linotype',...
    'Location','best');
xlabel('$\textbf{Number of tasks}$','FontWeight','bold','FontSize',12,...
    'FontName','Palatino Linotype','Interpreter','latex');
ylabel('$\textbf{Total delay [s]}$','FontWeight','bold','FontSize',12,...
    'FontName','Palatino Linotype','Interpreter','latex');
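Base MATLAB bar charts have no built-in hatch pattern, so one common workaround is to overlay diagonal lines on the bars of the chosen series using the Bar objects' XEndPoints and YEndPoints properties (R2019b or newer). The sketch below is only one way to do that, not a polished solution: halfWidth and the stroke spacing are assumed values you would tune for your group spacing, and File Exchange utilities such as hatchfill2 are an alternative if you prefer a true hatch fill.
% Minimal sketch: overlay diagonal "hatch" lines on the last three series
y1 = [0.25 1.14 2.20 0.21 1.09 2.16; 0.48 2.26 4.40 0.42 2.20 4.34; ...
      0.72 3.38 6.58 0.74 3.27 5.86; 1.01 4.56 8.82 0.99 4.34 7.65; ...
      1.33 5.76 11.04 1.33 5.50 9.61];
figure
h1 = bar(y1);
hold on
hatchColor = [0.2 0.2 0.7];   % any RGB triplet you like for the hatch lines
halfWidth  = 0.06;            % assumed half bar width in axis units (tune this)
for s = 4:6                   % last three bars in each group
    xc = h1(s).XEndPoints;    % bar centers (available from R2019b)
    yt = h1(s).YEndPoints;    % bar heights
    for b = 1:numel(xc)
        for f = 0:0.25:0.75   % four diagonal strokes from base to top
            line([xc(b)-halfWidth, xc(b)+halfWidth], ...
                 [yt(b)*f, yt(b)*(f+0.25)], 'Color', hatchColor, 'LineWidth', 1);
        end
    end
end
hold off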
hatched bars, matlab bars, matlab MATLAB Answers — New Questions
Cisco Secure Endpoint connector Sentinel integration
Has anyone recently added the data connector for Cisco Secure Endpoint (AMP) (using Azure Functions) and successfully started receiving logs? I’ve tried to use the Azure Resource Manager (ARM) Template multiple times; however, I’ve had no success. I have used this method for adding other connectors without any issue. I spoke with Cisco support, and they stated that the instructions Microsoft provided were not correct. Long story short, Cisco support was unable to help get it connected. Any insight would be helpful. Thanks!
Self-Service License Request Administration
Hello Microsoft Community!
In our org, using the PowerShell module MSCommerce, we have already set the AllowSelfServicePurchase policy to Disabled for all products, as we do not want our members to purchase licenses or sign up for trials on their own.
This has not stopped our members from submitting requests for licenses.
We'd like our members to use our established process for requesting licenses rather than the self-service request flow.
Is it possible to stop this altogether, rather than only being able to disable the purchasing function?
Here is an example of our policy status and the weekly digest email we just received.
Can’t delete tasks
Using Outlook > Tasks, I can't delete tasks; they all reappear after a minute. After deleting them all at once failed, I tried deleting them one by one, with no success.
I found other discussions on the same issue, but no resolutions.
Is there any default password for administrator in Windows Servers?
Hello,
I would like to understand the risk of the default account. Is there a default password for the default (built-in Administrator) account in the Windows Server versions below?
Windows Server 2012
Windows Server 2016
Windows Server 2019
Windows Server 2022
Thank you!
Multi input convolutional neural network
How can I implement a three-stream convolutional neural network? I tried to use the following File Exchange submission, which implements a two-stream CNN on the digit database.
https://jp.mathworks.com/matlabcentral/fileexchange/74760-image-classification-using-cnn-with-multi-input
However, my database is in folder format. The File Exchange example uses digitTrain4Darray as the input, where the images and their corresponding labels are stored in a table. I don't know how to map my database to the code provided in the File Exchange.
My database contains 50 subfolders representing the 50 classes, and each class contains 6 images. I have to use 4 images per class for training and 2 for testing.
I also have 2 more databases in the same format.
I need to train my CNN with these three databases separately and finally concatenate the results as shown in the image.
Kindly suggest ways to do this task.
Thanks and regards,
Ramasenthil.
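A minimal sketch of how folder-based data like this can be mapped to datastores for a network with multiple image inputs, under the assumption that the three databases share the same class-folder layout; the folder names below are placeholders, and lgraph stands for your own three-input layer graph. The idea is one imageDatastore per database, splitEachLabel to take 4 images per class for training and leave 2 for testing, and combine to pair the three image streams with a label datastore so each read returns the three inputs followed by the response.
% Minimal sketch (placeholder folder names); assumes identical class folders in all three databases
imds1 = imageDatastore('database1', 'IncludeSubfolders', true, 'LabelSource', 'foldernames');
imds2 = imageDatastore('database2', 'IncludeSubfolders', true, 'LabelSource', 'foldernames');
imds3 = imageDatastore('database3', 'IncludeSubfolders', true, 'LabelSource', 'foldernames');
% 4 images per class for training, the remaining 2 per class for testing
[imdsTrain1, imdsTest1] = splitEachLabel(imds1, 4);
[imdsTrain2, imdsTest2] = splitEachLabel(imds2, 4);
[imdsTrain3, imdsTest3] = splitEachLabel(imds3, 4);
% Pair the three image streams with the labels; each read of dsTrain then
% returns the three inputs followed by the response
dsLabels = arrayDatastore(imdsTrain1.Labels);
dsTrain  = combine(imdsTrain1, imdsTrain2, imdsTrain3, dsLabels);
% net = trainNetwork(dsTrain, lgraph, options);   % lgraph: your 3-input layer graph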
multi input cnn, deep learning, cnn, input layer, image input, concatenation, multi stream cnn, deep learning toolbox, data import, data augmentation, cnn training, image database, convolutional neural network, image processing, image classification, table array MATLAB Answers — New Questions