Month: July 2024
What is subplot and how do I use it?
I want to plot two graphs in a single window; how can I do that?
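For illustration, a minimal sketch of what is being asked: subplot(m,n,p) divides one figure window into an m-by-n grid of axes and makes the p-th one current, so two plots can share a single window (the sine/cosine data below is just placeholder data):
x = linspace(0, 2*pi, 200);   % placeholder data for the example
figure
subplot(1,2,1)                % 1-by-2 grid, first (left) axes
plot(x, sin(x))
title('sin(x)')
subplot(1,2,2)                % second (right) axes
plot(x, cos(x))
title('cos(x)')
In recent MATLAB releases, tiledlayout and nexttile offer an alternative way to lay out multiple axes in one figure.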
Example of a PowerShell Script to Create CQ and AA
Hello
I am looking for a website that already has a script to create Call Queues (CQ) and Auto Attendants (AA) from a CSV.
The use case would be:
1. Create CQ
2. Create AA
3. Assign phone number to AA
4. Connect both of them together
5. Create Teams team
6. Connect CQ with Teams
or
1. Do the above with a CQ only, without an AA
2. Do the above with an AA only, without a CQ
Do you have a website that has this?
Regards
JFM_12
Why Is QuickBooks Not Working on Windows and How Can I Fix It?
I’m encountering issues with QuickBooks not working properly on my Windows computer. Whenever I attempt to open the program, it crashes or freezes, preventing me from accessing my financial data. I’ve already tried basic troubleshooting steps like restarting my computer and reinstalling the software, but the problem persists. Can someone provide detailed troubleshooting steps or solutions specific to Windows to help me resolve this issue? Any assistance would be greatly appreciated.
Office 365 apps are closing randomly on macOS
Hi,
Is anyone currently experiencing an issue where all MS Office 365 apps (Outlook, Excel, PowerPoint, OneNote, Word) close all at once, at random, so that you potentially lose work due to the unexpected shutdown?
I recently got a new MacBook Pro with M3 Max running Sonoma 14.5, and after using it for a few hours it seems that, sporadically, all of the O365 apps mentioned above that are open at that moment close all at once, randomly, without any error message, prior notice, or user interaction. ALL other applications except Office work fine; OneDrive and MS Teams also stay open without any issues. After using the MacBook for a few days, it seems it may be more likely to happen when the Mac goes to standby (lid closed) and is woken up afterwards.
Steps I have already tried, in the following order, without any change in behavior (each also including a complete reboot of the machine):
1. Updating Office with the MS AutoUpdater application
2. Simple uninstall of the MS Office applications
3. Reinstallation using a clean new O365 download
4. Manual uninstallation using https://support.microsoft.com/en-us/office/uninstall-office-for-mac-eefa1199-5b58-43af-8a3d-b73dc1a8cae3
5. Clean reinstallation after the complete manual uninstallation
6. After uninstalling again, installing manually while deselecting the MS Defender component included in the O365 installer package (Defender is already installed by default on the Mac)
7. Renaming the MacBook's hostname from XXX-MBP-ABC123DEF to XXXMBPABC123DEF and renaming the SSD from Macintosh HD to MacintoshHD
8. A complete wipe of the MacBook with my IT department and a reinstall / setup of the machine, followed by running AutoUpdater and updating everything to the latest version
None of these steps has worked, and Office keeps shutting down / crashing without any prior notice at random times, most likely after sleep. It is enough to just open some Office applications and leave the Mac alone; on returning you will find all Office applications closed (except Teams and OneDrive, as mentioned above).
Furthermore, I already set the network availability during sleep within my energy saving settings:
Wake for network access from “Only on Power adapter” to “Always”
Does anyone have any further ideas for analysis or a solution?
On my private MBP with M1 Pro and Sonoma 14.5 there are no issues at all.
Thank you!!!
Copilot in Windows 24H2 does not appear in the taskbar
Copilot does not appear in the taskbar or in Settings. I cannot enable it; I have version 2600.1000 and I cannot get it to show up so that I can enable it.
Attachment in SharePoint list won’t open (SharePoint, Outlook, Power Automate)
Hi, I created the flow below. My emails are now transferred to a SharePoint list, and I can see the original mail and attachments within SharePoint as attachments. The problem is that when I try to open them, they are either empty or I get an error message. Can you help me? Thanks
Encountered an "Arrays are not compatible for addition" error when using the modified U-Net for prediction
I am trying to add a ViT module to the U-Net constructed with unet3d in MATLAB R2024a. Everything is normal during training, and I have verified the model's performance after training for a while. The analyzeNetwork function shows no errors, and the size of the connections before and after the position-embedding layer is 65 x 1024 x 1 (SCB), which is the result of serializing the image. When I run prediction, I get the following error:
Error using dlnetwork/predict (line 658)
Execution failed during layer(s) 'Transformer-PositionEmbedding', 'Encoder-Stage-4-Add-1'.
Error in Unet3dTrain (line 288)
predictedLabel = predict(net, image);
Caused by:
Error using matlab.internal.path.cnn.MLFusedNetwork/forwardExampleInputs
Arrays are not compatible for addition
The problem seems to occur when adding the vectors from before and after the position-embedding layer (the two inputs of 'Encoder-Stage-4-Add-1').
There are no issues when I add custom layers that print the input size before and after this layer.
Here is part of the structure of the network.
My native language is not English and I am using translation software, so please forgive any errors. The code follows.
Code:
clc; clear;
rng(1);
% ========== Data loading and dataset creation ==========
% Specify the locations of the image and label files
imageDir = 'X:BaiduDownloadbrats2021ProcessedData';
labelDir = 'X:BaiduDownloadbrats2021ProcessedData'; % the label data is stored in the same location
% Define the class names and the corresponding label IDs
categories = ["background", "necrotic_tumor_core", "peritumoral_edema", "enhancing_tumor"]; % there are 4 classes
labelIDs = [0, 1, 2, 4]; % IDs corresponding to the classes above
% Input volumes are 128 x 128 x 8 with 2 modalities
inputSize = [128 128 8 2]; % the last dimension is the number of modalities
numClasses = 4; % number of classes
% Create the image and label datastores
imds = imageDatastore(imageDir, 'FileExtensions', '.mat', 'ReadFcn', @customReadData);
pxds = pixelLabelDatastore(labelDir, categories, labelIDs, 'FileExtensions', '.mat', 'ReadFcn', @customReadLabels);
% Split into training and validation sets
numFiles = numel(imds.Files);
idx = randperm(numFiles); % shuffle the indices
% numFiles = round(0.001 * numFiles); % use a small subset for quick testing
numTrain = round(0.9 * numFiles); % 90% of the data is used for training
% Split the data using the shuffled indices
trainImds = subset(imds, idx(1:numTrain));
trainPxds = subset(pxds, idx(1:numTrain));
valImds = subset(imds, idx(numTrain+1:end));
valPxds = subset(pxds, idx(numTrain+1:end));
% Combine the images and labels for training and validation
dsTrain = combine(trainImds, trainPxds);
dsVal = combine(valImds, valPxds);
% Helper functions
function labels = customReadLabels(filename)
fileContent = load(filename);
segmentation = fileContent.segmentation(:,:,74:81);
% Assuming the original size is 240 x 240, compute the crop offsets
cropSize = 200;
startCrop = (size(segmentation,1) - cropSize) / 2 + 1;
endCrop = startCrop + cropSize - 1;
% Crop evenly on all sides to 200 x 200
croppedSegmentation = segmentation(startCrop:endCrop, startCrop:endCrop, :);
% Resize the 3-D data to 128 x 128 using nearest-neighbor interpolation
segmentationResized = imresize3(croppedSegmentation, [128, 128, size(croppedSegmentation, 3)], 'Method', 'nearest');
% Create categorical data, making sure the correct class names are used
labels = categorical(segmentationResized, [0, 1, 2, 4], {'background', 'necrotic_tumor_core', 'peritumoral_edema', 'enhancing_tumor'});
end
function data = customReadData(filename)
fileContent = load(filename);
% Extract the selected slices
originalData = squeeze(fileContent.combinedData(:,:,74:81,[1, 3]));
% Compute the crop offsets in the same way
cropSize = 200;
startCrop = (size(originalData,1) - cropSize) / 2 + 1;
endCrop = startCrop + cropSize - 1;
% Crop evenly on all sides to 200 x 200
croppedData = originalData(startCrop:endCrop, startCrop:endCrop, :, :);
% Preallocate a new 4-D array to store the resized data
resizedData = zeros(128, 128, size(croppedData, 3), size(croppedData, 4));
% Loop over each channel
for i = 1:size(croppedData, 4)
    % Resize each channel and rescale it to [0, 1] with mat2gray
    resizedData(:,:,:,i) = imresize3(mat2gray(croppedData(:,:,:,i)), [128, 128, size(croppedData, 3)]);
end
% Return the processed data
data = resizedData;
end
% Create the 3-D U-Net network
net = unet3d(inputSize, numClasses, EncoderDepth=3);
% ========== U-Net modification stage ==========
% Turn the encoder stages into residual blocks
% Operations on Stage-1
% Add a 1x1x1 convolution layer to match the number of channels
adjustConvLayer = convolution3dLayer([1, 1, 1], 64, 'Name', 'Encoder-Stage-1-Conv-Ident-1', 'Padding', 'same');
adjustBnLayer = batchNormalizationLayer('Name', 'Encoder-Stage-1-BN-Ident-1');
addLayer = additionLayer(2, 'Name', 'Encoder-Stage-1-Add-1');
% Add the layers to the graph
net = addLayers(net, adjustConvLayer);
net = addLayers(net, adjustBnLayer);
net = addLayers(net, addLayer);
% Connect the new layers
net = connectLayers(net, 'encoderImageInputLayer', 'Encoder-Stage-1-Conv-Ident-1');
net = connectLayers(net, 'Encoder-Stage-1-Conv-Ident-1', 'Encoder-Stage-1-BN-Ident-1');
net = disconnectLayers(net, 'Encoder-Stage-1-BN-2', 'Encoder-Stage-1-ReLU-2');
net = connectLayers(net, 'Encoder-Stage-1-BN-2', 'Encoder-Stage-1-Add-1/in1');
net = connectLayers(net, 'Encoder-Stage-1-BN-Ident-1', 'Encoder-Stage-1-Add-1/in2');
net = connectLayers(net, 'Encoder-Stage-1-Add-1', 'Encoder-Stage-1-ReLU-2');
% Operations on Stage-2
% Add a 1x1x1 convolution layer to match the number of channels
adjustConvLayer2 = convolution3dLayer([1, 1, 1], 128, 'Name', 'Encoder-Stage-2-Conv-Ident-1', 'Padding', 'same');
adjustBnLayer2 = batchNormalizationLayer('Name', 'Encoder-Stage-2-BN-Ident-1');
addLayer2 = additionLayer(2, 'Name', 'Encoder-Stage-2-Add-1');
% Add the layers to the graph
net = addLayers(net, adjustConvLayer2);
net = addLayers(net, adjustBnLayer2);
net = addLayers(net, addLayer2);
% Connect the new layers
net = connectLayers(net, 'Encoder-Stage-1-MaxPool', 'Encoder-Stage-2-Conv-Ident-1');
net = connectLayers(net, 'Encoder-Stage-2-Conv-Ident-1', 'Encoder-Stage-2-BN-Ident-1');
net = disconnectLayers(net, 'Encoder-Stage-2-BN-2', 'Encoder-Stage-2-ReLU-2');
net = connectLayers(net, 'Encoder-Stage-2-BN-2', 'Encoder-Stage-2-Add-1/in1');
net = connectLayers(net, 'Encoder-Stage-2-BN-Ident-1', 'Encoder-Stage-2-Add-1/in2');
net = connectLayers(net, 'Encoder-Stage-2-Add-1', 'Encoder-Stage-2-ReLU-2');
% Operations on Stage-3
% Add a 1x1x1 convolution layer to match the number of channels
adjustConvLayer3 = convolution3dLayer([1, 1, 1], 256, 'Name', 'Encoder-Stage-3-Conv-Ident-1', 'Padding', 'same');
adjustBnLayer3 = batchNormalizationLayer('Name', 'Encoder-Stage-3-BN-Ident-1');
addLayer3 = additionLayer(2, 'Name', 'Encoder-Stage-3-Add-1');
% Add the layers to the graph
net = addLayers(net, adjustConvLayer3);
net = addLayers(net, adjustBnLayer3);
net = addLayers(net, addLayer3);
% Connect the new layers
net = connectLayers(net, 'Encoder-Stage-2-MaxPool', 'Encoder-Stage-3-Conv-Ident-1');
net = connectLayers(net, 'Encoder-Stage-3-Conv-Ident-1', 'Encoder-Stage-3-BN-Ident-1');
net = disconnectLayers(net, 'Encoder-Stage-3-BN-2', 'Encoder-Stage-3-ReLU-2');
net = connectLayers(net, 'Encoder-Stage-3-BN-2', 'Encoder-Stage-3-Add-1/in1');
net = connectLayers(net, 'Encoder-Stage-3-BN-Ident-1', 'Encoder-Stage-3-Add-1/in2');
net = connectLayers(net, 'Encoder-Stage-3-Add-1', 'Encoder-Stage-3-ReLU-2');
% Replace BatchNormalization with GroupNormalization
% Get the names of all layers in the network
layerNames = {net.Layers.Name};
% Loop over all layer names and find those containing 'BN'
for i = 1:length(layerNames)
    if contains(layerNames{i}, 'BN')
        % Create a new group normalization layer
        gnLayer = groupNormalizationLayer(4, 'Name', layerNames{i});
        % Replace the existing BN layer
        net = replaceLayer(net, layerNames{i}, gnLayer);
    end
end
% Add the Vision Transformer layers
PatchEmbeddingLayer1 = patchEmbeddingLayer([4 4 2], 1024, 'Name', 'Transformer-PatchEmbedding');
EmbeddingConcatenationLayer1 = embeddingConcatenationLayer('Name', 'Transformer-EmbeddingConcatenation');
PositionEmbeddingLayer1 = positionEmbeddingLayer(1024, 1024, 'Name', 'Transformer-PositionEmbedding');
addLayer4 = additionLayer(2, 'Name', 'Encoder-Stage-4-Add-1');
addLayer5 = additionLayer(2, 'Name', 'Encoder-Stage-4-Add-2');
addLayer6 = additionLayer(2, 'Name', 'Encoder-Stage-4-Add-3');
dropoutLayer1 = dropoutLayer(0.1, 'Name', 'Transformer-DropOut-1');
dropoutLayer2 = dropoutLayer(0.1, 'Name', 'Transformer-DropOut-2');
LayerNormalizationLayer1 = layerNormalizationLayer('Name', 'Transformer-LN-1');
LayerNormalizationLayer2 = layerNormalizationLayer('Name', 'Transformer-LN-2');
SelfAttentionLayer = selfAttentionLayer(8, 32, 'Name', 'Transformer-SelfAttention');
FullyConnectedLayer = fullyConnectedLayer(1024, 'Name', 'Transformer-fc');
ReshapeLayer = reshapeLayer('Transformer-reshape');
index1dLayer = indexing1dLayer('Name', 'Transformer-index1d');
% printShapeLayer1 = functionLayer(@printShape, ...
% 'Name', 'printShapeLayer1', ...
% 'NumInputs', 1, ...
% 'NumOutputs', 1, ...
% 'InputNames', {'in'}, ...
% 'OutputNames', {'out'});
% printShapeLayer2 = functionLayer(@printShape, ...
% 'Name', 'printShapeLayer2', ...
% 'NumInputs', 1, ...
% 'NumOutputs', 1, ...
% 'InputNames', {'in'}, ...
% 'OutputNames', {'out'});
% printShapeLayer3 = functionLayer(@printShape, ...
% 'Name', 'printShapeLayer3', ...
% 'NumInputs', 1, ...
% 'NumOutputs', 1, ...
% 'InputNames', {'in'}, ...
% 'OutputNames', {'out'});
net = addLayers(net, PatchEmbeddingLayer1);
net = addLayers(net, EmbeddingConcatenationLayer1);
net = addLayers(net, PositionEmbeddingLayer1);
net = addLayers(net, addLayer4);
net = addLayers(net, addLayer5);
net = addLayers(net, addLayer6);
net = addLayers(net, dropoutLayer1);
net = addLayers(net, dropoutLayer2);
net = addLayers(net, LayerNormalizationLayer1);
net = addLayers(net, LayerNormalizationLayer2);
net = addLayers(net, SelfAttentionLayer);
net = addLayers(net, FullyConnectedLayer);
net = addLayers(net, ReshapeLayer);
net = addLayers(net, index1dLayer);
% net = addLayers(net, printShapeLayer1);
% net = addLayers(net, printShapeLayer2);
% net = addLayers(net, printShapeLayer3);
% net = disconnectLayers(net, 'encoderImageInputLayer', 'Encoder-Stage-1-Conv-1');
% net = disconnectLayers(net, 'encoderImageInputLayer', 'Encoder-Stage-1-BN-Ident-1');
% net = connectLayers(net, 'encoderImageInputLayer', 'printShapeLayer3');
% net = connectLayers(net, 'printShapeLayer3', 'Encoder-Stage-1-Conv-1');
% net = connectLayers(net, 'printShapeLayer3', 'Encoder-Stage-1-BN-Ident-1');
net = disconnectLayers(net, 'Encoder-Stage-3-DropOut', 'Encoder-Stage-3-MaxPool');
% net = connectLayers(net, 'Encoder-Stage-3-ReLU-2', 'printShapeLayer1');
% net = connectLayers(net, 'printShapeLayer1', 'Transformer-PatchEmbedding');
net = connectLayers(net, 'Encoder-Stage-3-DropOut', 'Transformer-PatchEmbedding');
net = connectLayers(net, 'Transformer-PatchEmbedding', 'Transformer-EmbeddingConcatenation');
net = connectLayers(net, 'Transformer-EmbeddingConcatenation', 'Transformer-PositionEmbedding');
net = connectLayers(net, 'Transformer-PositionEmbedding', 'Encoder-Stage-4-Add-1/in1');
net = connectLayers(net, 'Transformer-EmbeddingConcatenation', 'Encoder-Stage-4-Add-1/in2');
net = connectLayers(net, 'Encoder-Stage-4-Add-1', 'Transformer-DropOut-1');
net = connectLayers(net, 'Transformer-DropOut-1', 'Transformer-LN-1');
net = connectLayers(net, 'Transformer-LN-1', 'Transformer-SelfAttention');
net = connectLayers(net, 'Transformer-SelfAttention', 'Transformer-DropOut-2');
net = connectLayers(net, 'Transformer-DropOut-2', 'Encoder-Stage-4-Add-2/in1');
net = connectLayers(net, 'Transformer-DropOut-1', 'Encoder-Stage-4-Add-2/in2');
net = connectLayers(net, 'Encoder-Stage-4-Add-2', 'Transformer-LN-2');
net = connectLayers(net, 'Transformer-LN-2', 'Transformer-index1d');
net = connectLayers(net, 'Transformer-index1d', 'Transformer-fc');
net = connectLayers(net, 'Transformer-fc', 'Encoder-Stage-4-Add-3/in1');
net = connectLayers(net, 'Encoder-Stage-4-Add-2', 'Encoder-Stage-4-Add-3/in2');
net = connectLayers(net, 'Encoder-Stage-4-Add-3', 'Transformer-reshape');
% net = connectLayers(net, 'Transformer-reshape', 'Encoder-Stage-3-DropOut');
net = connectLayers(net, 'Transformer-reshape', 'Encoder-Stage-3-MaxPool');
% net = connectLayers(net, 'Transformer-reshape', 'encoderDecoderSkipConnectionCrop3/in');
% net = disconnectLayers(net, 'Encoder-Stage-3-MaxPool', 'LatentNetwork-Bridge-Conv-1');
% net = connectLayers(net, 'Encoder-Stage-3-MaxPool', 'printShapeLayer2');
net = removeLayers(net, 'Encoder-Stage-3-MaxPool');
net = connectLayers(net, 'Transformer-reshape', 'LatentNetwork-Bridge-Conv-1');
% net = connectLayers(net, 'Encoder-Stage-3-MaxPool', 'LatentNetwork-Bridge-Conv-1');
% Add the attention gates
relulayer1 = reluLayer('Name', 'AttentionGate-Stage-1-relu');
relulayer2 = reluLayer('Name', 'AttentionGate-Stage-2-relu');
relulayer3 = reluLayer('Name', 'AttentionGate-Stage-3-relu');
sigmoidlayer1 = sigmoidLayer('Name', 'AttentionGate-Stage-1-sigmoid');
sigmoidlayer2 = sigmoidLayer('Name', 'AttentionGate-Stage-2-sigmoid');
sigmoidlayer3 = sigmoidLayer('Name', 'AttentionGate-Stage-3-sigmoid');
convolution3dlayer11 = convolution3dLayer(1, 512, 'Padding', 'same', 'Name', 'AttentionGate-Stage-1-conv-1');
convolution3dlayer12 = convolution3dLayer(1, 256, 'Padding', 'same', 'Name', 'AttentionGate-Stage-1-conv-2');
convolution3dlayer13 = convolution3dLayer(1, 256, 'Padding', 'same', 'Name', 'AttentionGate-Stage-1-conv-3');
convolution3dlayer21 = convolution3dLayer(1, 256, 'Padding', 'same', 'Name', 'AttentionGate-Stage-2-conv-1');
convolution3dlayer22 = convolution3dLayer(1, 128, 'Padding', 'same', 'Name', 'AttentionGate-Stage-2-conv-2');
convolution3dlayer23 = convolution3dLayer(1, 128, 'Padding', 'same', 'Name', 'AttentionGate-Stage-2-conv-3');
convolution3dlayer31 = convolution3dLayer(1, 128, 'Padding', 'same', 'Name', 'AttentionGate-Stage-3-conv-1');
convolution3dlayer32 = convolution3dLayer(1, 64, 'Padding', 'same', 'Name', 'AttentionGate-Stage-3-conv-2');
convolution3dlayer33 = convolution3dLayer(1, 64, 'Padding', 'same', 'Name', 'AttentionGate-Stage-3-conv-3');
net = addLayers(net, relulayer1);
net = addLayers(net, relulayer2);
net = addLayers(net, relulayer3);
net = addLayers(net, sigmoidlayer1);
net = addLayers(net, sigmoidlayer2);
net = addLayers(net, sigmoidlayer3);
net = addLayers(net, convolution3dlayer11);
net = addLayers(net, convolution3dlayer12);
net = addLayers(net, convolution3dlayer13);
net = addLayers(net, convolution3dlayer21);
net = addLayers(net, convolution3dlayer22);
net = addLayers(net, convolution3dlayer23);
net = addLayers(net, convolution3dlayer31);
net = addLayers(net, convolution3dlayer32);
net = addLayers(net, convolution3dlayer33);
net = disconnectLayers(net, 'Decoder-Stage-1-UpReLU', 'encoderDecoderSkipConnectionCrop3/ref');
net = disconnectLayers(net, 'Decoder-Stage-2-UpReLU', 'encoderDecoderSkipConnectionCrop2/ref');
net = disconnectLayers(net, 'Decoder-Stage-3-UpReLU', 'encoderDecoderSkipConnectionCrop1/ref');
net = disconnectLayers(net, 'Encoder-Stage-3-DropOut', 'encoderDecoderSkipConnectionCrop3/in');
net = disconnectLayers(net, 'Encoder-Stage-2-ReLU-2', 'encoderDecoderSkipConnectionCrop2/in');
net = disconnectLayers(net, 'Encoder-Stage-1-ReLU-2', 'encoderDecoderSkipConnectionCrop1/in');
net = connectLayers(net, 'Decoder-Stage-1-UpReLU', 'AttentionGate-Stage-1-conv-1');
net = connectLayers(net, 'Decoder-Stage-2-UpReLU', 'AttentionGate-Stage-2-conv-1');
net = connectLayers(net, 'Decoder-Stage-3-UpReLU', 'AttentionGate-Stage-3-conv-1');
net = connectLayers(net, 'Encoder-Stage-3-DropOut', 'AttentionGate-Stage-1-conv-2');
net = connectLayers(net, 'Encoder-Stage-2-ReLU-2', 'AttentionGate-Stage-2-conv-2');
net = connectLayers(net, 'Encoder-Stage-1-ReLU-2', 'AttentionGate-Stage-3-conv-2');
net = connectLayers(net, 'AttentionGate-Stage-1-conv-1', 'encoderDecoderSkipConnectionCrop3/ref');
net = connectLayers(net, 'AttentionGate-Stage-2-conv-1', 'encoderDecoderSkipConnectionCrop2/ref');
net = connectLayers(net, 'AttentionGate-Stage-3-conv-1', 'encoderDecoderSkipConnectionCrop1/ref');
net = connectLayers(net, 'AttentionGate-Stage-1-conv-2', 'encoderDecoderSkipConnectionCrop3/in');
net = connectLayers(net, 'AttentionGate-Stage-2-conv-2', 'encoderDecoderSkipConnectionCrop2/in');
net = connectLayers(net, 'AttentionGate-Stage-3-conv-2', 'encoderDecoderSkipConnectionCrop1/in');
net = disconnectLayers(net, 'encoderDecoderSkipConnectionCrop3', 'encoderDecoderSkipConnectionFeatureMerge3/in1');
net = disconnectLayers(net, 'encoderDecoderSkipConnectionCrop2', 'encoderDecoderSkipConnectionFeatureMerge2/in1');
net = disconnectLayers(net, 'encoderDecoderSkipConnectionCrop1', 'encoderDecoderSkipConnectionFeatureMerge1/in1');
net = connectLayers(net, 'encoderDecoderSkipConnectionCrop3', 'AttentionGate-Stage-1-relu');
net = connectLayers(net, 'encoderDecoderSkipConnectionCrop2', 'AttentionGate-Stage-2-relu');
net = connectLayers(net, 'encoderDecoderSkipConnectionCrop1', 'AttentionGate-Stage-3-relu');
net = connectLayers(net, 'AttentionGate-Stage-1-relu', 'AttentionGate-Stage-1-conv-3');
net = connectLayers(net, 'AttentionGate-Stage-3-relu', 'AttentionGate-Stage-3-conv-3');
net = connectLayers(net, 'AttentionGate-Stage-2-relu', 'AttentionGate-Stage-2-conv-3');
net = connectLayers(net, 'AttentionGate-Stage-1-conv-3', 'AttentionGate-Stage-1-sigmoid');
net = connectLayers(net, 'AttentionGate-Stage-3-conv-3', 'AttentionGate-Stage-3-sigmoid');
net = connectLayers(net, 'AttentionGate-Stage-2-conv-3', 'AttentionGate-Stage-2-sigmoid');
net = connectLayers(net, 'AttentionGate-Stage-1-sigmoid', 'encoderDecoderSkipConnectionFeatureMerge3/in1');
net = connectLayers(net, 'AttentionGate-Stage-2-sigmoid', 'encoderDecoderSkipConnectionFeatureMerge2/in1');
net = connectLayers(net, 'AttentionGate-Stage-3-sigmoid', 'encoderDecoderSkipConnectionFeatureMerge1/in1');
% Set the training options: use the GPU, enable verbose output, and set the other key training parameters
options = trainingOptions('adam', ...
    'InitialLearnRate', 1e-4, ...
    'LearnRateSchedule', 'piecewise', ... % learning rate schedule
    'LearnRateDropFactor', 0.5, ... % learning rate drop factor
    'LearnRateDropPeriod', 5, ... % drop the learning rate every 5 epochs
    'L2Regularization', 1e-4, ... % L2 regularization helps prevent overfitting
    'MaxEpochs', 10, ...
    'MiniBatchSize', 4, ...
    'Verbose', true, ...
    'ValidationData', dsVal, ...
    'ValidationFrequency', 5, ...
    'ValidationPatience', 20, ...
    'Plots', 'training-progress', ...
    'ExecutionEnvironment', 'gpu', ...
    'CheckpointPath', 'X:MATLAB codesStatisticsModeling2');
analyzeNetwork(net);
net = initialize(net);
% Read one sample from the validation dataset
[data, info] = read(dsVal);
image = data{1}; % image data
label = data{2}; % ground-truth label
image = double(image);
% Run prediction
predictedLabel = predict(net, image);
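For reference, a minimal debugging sketch (assuming the layer names above and the initialized dlnetwork; this is illustrative and not part of the training script) of how the two activations feeding 'Encoder-Stage-4-Add-1' could be inspected with the 'Outputs' option of predict. The 65 tokens are presumably the 8 x 8 x 1 = 64 patches produced by the [4 4 2] patch embedding plus the class token added by embeddingConcatenationLayer.
% Illustrative only: probe the two inputs of 'Encoder-Stage-4-Add-1' using a
% formatted dlarray built from the validation sample read above.
Xprobe = dlarray(single(image), 'SSSCB'); % 128 x 128 x 8 x 2 volume; B is a singleton batch dimension
posEmb = predict(net, Xprobe, 'Outputs', 'Transformer-PositionEmbedding');      % branch after position embedding
tokens = predict(net, Xprobe, 'Outputs', 'Transformer-EmbeddingConcatenation'); % branch before position embedding
disp(size(posEmb)); % the post reports 65 x 1024 x 1 (SCB) here; check whether the two sizes really match
disp(size(tokens)); % a mismatch between these two is what makes the addition fail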
Incorrect use of dlnetwork/predict (line 658)
Execution failed during layer ‘Transformer PositionEmbedding, Encoder Stage-4 Add-1’.
Error Unet3dTrain (line 288)
PredictedLabel=predict (net, image);
Reason:
Incorrect use of matlab. internal. path. cnn MLFusedNetwork/forwardExampleInputs
Arrays are not compatible for addition
Problem seems to occur when add the vector before position embedding and after position embedding.
There are no issues with adding custom print input size layers before and after this layer.
Here is part of the structure of the network.
My native language is not English, and I am using translation software. Please forgive any errors. The following is the code, which includes non English comments.
Code:
clc; clear;
rng(1);
% ========== 数据读取和数据集创建阶段 ==========
% 指定图像和标签文件的位置
imageDir = ‘X:BaiduDownloadbrats2021ProcessedData’;
labelDir = ‘X:BaiduDownloadbrats2021ProcessedData’; % 标签数据存储在同一位置
% 定义类别名和对应的标签ID
categories = ["background", "necrotic_tumor_core", "peritumoral_edema", "enhancing_tumor"]; % 有4类
labelIDs = [0, 1, 2, 4]; % 分别对应上述类别
% 假定输入数据为 128×128 的体积,有一个背景类和一个肿瘤类
inputSize = [128 128 8 2]; % 最后一个维度1表示1种不同的模态
numClasses = 4; % 类别数(背景和肿瘤)
% 创建图像和标签的数据存储
imds = imageDatastore(imageDir, ‘FileExtensions’,’.mat’, ‘ReadFcn’, @customReadData);
pxds = pixelLabelDatastore(labelDir, categories, labelIDs, ‘FileExtensions’,’.mat’, ‘ReadFcn’, @customReadLabels);
% 分割训练集和验证集
numFiles = numel(imds.Files);
idx = randperm(numFiles); % 随机打乱索引
% numFiles = round(0.001 * numFiles); % 选取小数据集测试
numTrain = round(0.9 * numFiles); % 假设80%的数据用于训练
% 使用索引分割数据
trainImds = subset(imds, idx(1:numTrain));
trainPxds = subset(pxds, idx(1:numTrain));
valImds = subset(imds, idx(numTrain+1:end));
valPxds = subset(pxds, idx(numTrain+1:end));
% 组合训练和验证数据
dsTrain = combine(trainImds, trainPxds);
dsVal = combine(valImds, valPxds);
% 补充函数
function labels = customReadLabels(filename)
fileContent = load(filename);
segmentation = fileContent.segmentation(:,:,74:81);
% 假设原始大小为240×240,计算裁剪偏移
cropSize = 200;
startCrop = (size(segmentation,1) – cropSize) / 2 + 1;
endCrop = startCrop + cropSize – 1;
% 四周均匀裁剪为160×160
croppedSegmentation = segmentation(startCrop:endCrop, startCrop:endCrop, :);
% 重置三维数据大小到128×128,使用最近邻插值方法
segmentationResized = imresize3(croppedSegmentation, [128, 128, size(croppedSegmentation, 3)], ‘Method’, ‘nearest’);
% 创建分类数据,确保使用正确的类别名
labels = categorical(segmentationResized, [0, 1, 2, 4], {‘background’, ‘necrotic_tumor_core’, ‘peritumoral_edema’, ‘enhancing_tumor’});
end
function data = customReadData(filename)
fileContent = load(filename);
% 提取特定切片
originalData = squeeze(fileContent.combinedData(:,:,74:81,[1, 3]));
% 同样计算裁剪偏移
cropSize = 200;
startCrop = (size(originalData,1) – cropSize) / 2 + 1;
endCrop = startCrop + cropSize – 1;
% 四周均匀裁剪为160×160
croppedData = originalData(startCrop:endCrop, startCrop:endCrop, :, :);
% 初始化一个新的四维数组,用于存储调整后的数据
resizedData = zeros(128, 128, size(croppedData, 3), size(croppedData, 4));
% 循环处理每一个通道
for i = 1:size(croppedData, 4)
% 调整每个通道的数据大小并进行灰度化
resizedData(:,:,:,i) = imresize3(mat2gray(croppedData(:,:,:,i)), [128, 128, size(croppedData, 3)]);
end
% 输出处理后的数据
data = resizedData;
end
% 创建3D U-Net网络
net = unet3d(inputSize, numClasses, Encoderdepth = 3);
% ========== Unet网络改造阶段 ==========
% 改造ResBlock
% 对Stage-1进行的操作
% 添加一个1x1x1卷积层以适应通道数
adjustConvLayer = convolution3dLayer([1, 1, 1], 64, ‘Name’, ‘Encoder-Stage-1-Conv-Ident-1’, ‘Padding’, ‘same’);
adjustBnLayer = batchNormalizationLayer(‘Name’, ‘Encoder-Stage-1-BN-Ident-1’);
addLayer = additionLayer(2, ‘Name’, ‘Encoder-Stage-1-Add-1’);
% 添加层到图
net = addLayers(net, adjustConvLayer);
net = addLayers(net, adjustBnLayer);
net = addLayers(net, addLayer);
% 连接新层
net = connectLayers(net, ‘encoderImageInputLayer’, ‘Encoder-Stage-1-Conv-Ident-1’);
net = connectLayers(net, ‘Encoder-Stage-1-Conv-Ident-1’, ‘Encoder-Stage-1-BN-Ident-1’);
net = disconnectLayers(net,’Encoder-Stage-1-BN-2′, ‘Encoder-Stage-1-ReLU-2’);
net = connectLayers(net, ‘Encoder-Stage-1-BN-2’, ‘Encoder-Stage-1-Add-1/in1’);
net = connectLayers(net, ‘Encoder-Stage-1-BN-Ident-1’, ‘Encoder-Stage-1-Add-1/in2’);
net = connectLayers(net, ‘Encoder-Stage-1-Add-1’, ‘Encoder-Stage-1-ReLU-2’);
% 对Stage-2进行的操作
% 添加一个1x1x1卷积层以适应通道数
adjustConvLayer2 = convolution3dLayer([1, 1, 1], 128, ‘Name’, ‘Encoder-Stage-2-Conv-Ident-1’, ‘Padding’, ‘same’);
adjustBnLayer2 = batchNormalizationLayer(‘Name’, ‘Encoder-Stage-2-BN-Ident-1’);
addLayer2 = additionLayer(2, ‘Name’, ‘Encoder-Stage-2-Add-1’);
% 添加层到图
net = addLayers(net, adjustConvLayer2);
net = addLayers(net, adjustBnLayer2);
net = addLayers(net, addLayer2);
% 连接新层
net = connectLayers(net, ‘Encoder-Stage-1-MaxPool’, ‘Encoder-Stage-2-Conv-Ident-1’);
net = connectLayers(net, ‘Encoder-Stage-2-Conv-Ident-1’, ‘Encoder-Stage-2-BN-Ident-1’);
net = disconnectLayers(net,’Encoder-Stage-2-BN-2′, ‘Encoder-Stage-2-ReLU-2’);
net = connectLayers(net, ‘Encoder-Stage-2-BN-2’, ‘Encoder-Stage-2-Add-1/in1’);
net = connectLayers(net, ‘Encoder-Stage-2-BN-Ident-1’, ‘Encoder-Stage-2-Add-1/in2’);
net = connectLayers(net, ‘Encoder-Stage-2-Add-1’, ‘Encoder-Stage-2-ReLU-2’);
% 对Stage-3进行操作
% 添加一个1x1x1卷积层以适应通道数
adjustConvLayer3 = convolution3dLayer([1, 1, 1], 256, ‘Name’, ‘Encoder-Stage-3-Conv-Ident-1’, ‘Padding’, ‘same’);
adjustBnLayer3 = batchNormalizationLayer(‘Name’, ‘Encoder-Stage-3-BN-Ident-1’);
addLayer3 = additionLayer(2, ‘Name’, ‘Encoder-Stage-3-Add-1’);
% 添加层到图
net = addLayers(net, adjustConvLayer3);
net = addLayers(net, adjustBnLayer3);
net = addLayers(net, addLayer3);
% 连接新层
net = connectLayers(net, ‘Encoder-Stage-2-MaxPool’, ‘Encoder-Stage-3-Conv-Ident-1’);
net = connectLayers(net, ‘Encoder-Stage-3-Conv-Ident-1’, ‘Encoder-Stage-3-BN-Ident-1’);
net = disconnectLayers(net,’Encoder-Stage-3-BN-2′, ‘Encoder-Stage-3-ReLU-2’);
net = connectLayers(net, ‘Encoder-Stage-3-BN-2’, ‘Encoder-Stage-3-Add-1/in1’);
net = connectLayers(net, ‘Encoder-Stage-3-BN-Ident-1’, ‘Encoder-Stage-3-Add-1/in2’);
net = connectLayers(net, ‘Encoder-Stage-3-Add-1’, ‘Encoder-Stage-3-ReLU-2’);
% BatchNormalization改造为GroupNormalization
% 获取网络中所有层的名称
layerNames = {net.Layers.Name};
% 循环遍历所有层的名称,寻找匹配“BN”的层
for i = 1:length(layerNames)
if contains(layerNames{i}, ‘BN’)
% 创建新的组归一化层
gnLayer = groupNormalizationLayer(4, ‘Name’, layerNames{i});
% 替换现有的 BN 层
net = replaceLayer(net, layerNames{i}, gnLayer);
end
end
% 添加Vision Transformer Layer
PatchEmbeddingLayer1 = patchEmbeddingLayer([4 4 2], 1024, ‘Name’, ‘Transformer-PatchEmbedding’);
EmbeddingConcatenationLayer1 = embeddingConcatenationLayer(‘Name’, ‘Transformer-EmbeddingConcatenation’);
PositionEmbeddingLayer1 = positionEmbeddingLayer(1024, 1024, ‘Name’, ‘Transformer-PositionEmbedding’);
addLayer4 = additionLayer(2, ‘Name’, ‘Encoder-Stage-4-Add-1’);
addLayer5 = additionLayer(2, ‘Name’, ‘Encoder-Stage-4-Add-2’);
addLayer6 = additionLayer(2, ‘Name’, ‘Encoder-Stage-4-Add-3’);
dropoutLayer1 = dropoutLayer(0.1, ‘Name’, ‘Transformer-DropOut-1’);
dropoutLayer2 = dropoutLayer(0.1, ‘Name’, ‘Transformer-DropOut-2’);
LayerNormalizationLayer1 = layerNormalizationLayer(‘Name’,’Transformer-LN-1′);
LayerNormalizationLayer2 = layerNormalizationLayer(‘Name’,’Transformer-LN-2′);
SelfAttentionLayer = selfAttentionLayer(8, 32, ‘Name’, ‘Transformer-SelfAttention’);
FullyConnectedLayer = fullyConnectedLayer(1024, ‘Name’, ‘Transformer-fc’);
ReshapeLayer = reshapeLayer(‘Transformer-reshape’);
index1dLayer = indexing1dLayer(‘Name’, ‘Transformer-index1d’);
% printShapeLayer1 = functionLayer(@printShape, …
% ‘Name’, ‘printShapeLayer1’, …
% ‘NumInputs’, 1, …
% ‘NumOutputs’, 1, …
% ‘InputNames’, {‘in’}, …
% ‘OutputNames’, {‘out’});
% printShapeLayer2 = functionLayer(@printShape, …
% ‘Name’, ‘printShapeLayer2’, …
% ‘NumInputs’, 1, …
% ‘NumOutputs’, 1, …
% ‘InputNames’, {‘in’}, …
% ‘OutputNames’, {‘out’});
% printShapeLayer3 = functionLayer(@printShape, …
% ‘Name’, ‘printShapeLayer3’, …
% ‘NumInputs’, 1, …
% ‘NumOutputs’, 1, …
% ‘InputNames’, {‘in’}, …
% ‘OutputNames’, {‘out’});
net = addLayers(net, PatchEmbeddingLayer1);
net = addLayers(net, EmbeddingConcatenationLayer1);
net = addLayers(net, PositionEmbeddingLayer1);
net = addLayers(net, addLayer4);
net = addLayers(net, addLayer5);
net = addLayers(net, addLayer6);
net = addLayers(net, dropoutLayer1);
net = addLayers(net, dropoutLayer2);
net = addLayers(net, LayerNormalizationLayer1);
net = addLayers(net, LayerNormalizationLayer2);
net = addLayers(net, SelfAttentionLayer);
net = addLayers(net, FullyConnectedLayer);
net = addLayers(net, ReshapeLayer);
net = addLayers(net, index1dLayer);
% net = addLayers(net, printShapeLayer1);
% net = addLayers(net, printShapeLayer2);
% net = addLayers(net, printShapeLayer3);
% net = disconnectLayers(net, ‘encoderImageInputLayer’, ‘Encoder-Stage-1-Conv-1’);
% net = disconnectLayers(net, ‘encoderImageInputLayer’, ‘Encoder-Stage-1-BN-Ident-1’);
% net = connectLayers(net, ‘encoderImageInputLayer’, ‘printShapeLayer3’);
% net = connectLayers(net, ‘printShapeLayer3’, ‘Encoder-Stage-1-Conv-1’);
% net = connectLayers(net, ‘printShapeLayer3’, ‘Encoder-Stage-1-BN-Ident-1’);
net = disconnectLayers(net,’Encoder-Stage-3-DropOut’, ‘Encoder-Stage-3-MaxPool’);
% net = connectLayers(net,’Encoder-Stage-3-ReLU-2′, ‘printShapeLayer1’);
% net = connectLayers(net,’printShapeLayer1′, ‘Transformer-PatchEmbedding’);
net = connectLayers(net,’Encoder-Stage-3-DropOut’, ‘Transformer-PatchEmbedding’);
net = connectLayers(net, ‘Transformer-PatchEmbedding’, ‘Transformer-EmbeddingConcatenation’);
net = connectLayers(net, ‘Transformer-EmbeddingConcatenation’, ‘Transformer-PositionEmbedding’);
net = connectLayers(net, ‘Transformer-PositionEmbedding’, ‘Encoder-Stage-4-Add-1/in1’);
net = connectLayers(net, ‘Transformer-EmbeddingConcatenation’, ‘Encoder-Stage-4-Add-1/in2’);
net = connectLayers(net, ‘Encoder-Stage-4-Add-1’, ‘Transformer-DropOut-1’);
net = connectLayers(net, ‘Transformer-DropOut-1’, ‘Transformer-LN-1’);
net = connectLayers(net, ‘Transformer-LN-1’, ‘Transformer-SelfAttention’);
net = connectLayers(net, ‘Transformer-SelfAttention’, ‘Transformer-DropOut-2’);
net = connectLayers(net, ‘Transformer-DropOut-2’, ‘Encoder-Stage-4-Add-2/in1’);
net = connectLayers(net, ‘Transformer-DropOut-1’, ‘Encoder-Stage-4-Add-2/in2’);
net = connectLayers(net, ‘Encoder-Stage-4-Add-2’, ‘Transformer-LN-2’);
net = connectLayers(net, ‘Transformer-LN-2’, ‘Transformer-index1d’);
net = connectLayers(net, ‘Transformer-index1d’, ‘Transformer-fc’);
net = connectLayers(net, ‘Transformer-fc’, ‘Encoder-Stage-4-Add-3/in1’);
net = connectLayers(net, ‘Encoder-Stage-4-Add-2’, ‘Encoder-Stage-4-Add-3/in2’);
net = connectLayers(net, ‘Encoder-Stage-4-Add-3’, ‘Transformer-reshape’);
% net = connectLayers(net, ‘Transformer-reshape’, ‘Encoder-Stage-3-DropOut’);
net = connectLayers(net, ‘Transformer-reshape’, ‘Encoder-Stage-3-MaxPool’);
% net = connectLayers(net, ‘Transformer-reshape’, ‘encoderDecoderSkipConnectionCrop3/in’);
% net = disconnectLayers(net, ‘Encoder-Stage-3-MaxPool’, ‘LatentNetwork-Bridge-Conv-1’);
% net = connectLayers(net, ‘Encoder-Stage-3-MaxPool’, ‘printShapeLayer2’);
net = removeLayers(net, ‘Encoder-Stage-3-MaxPool’);
net = connectLayers(net, ‘Transformer-reshape’, ‘LatentNetwork-Bridge-Conv-1’);
% net = connectLayers(net, ‘Encoder-Stage-3-MaxPool’, ‘LatentNetwork-Bridge-Conv-1’);
% 添加Attention Gate
relulayer1 = reluLayer(‘Name’, ‘AttentionGate-Stage-1-relu’);
relulayer2 = reluLayer(‘Name’, ‘AttentionGate-Stage-2-relu’);
relulayer3 = reluLayer(‘Name’, ‘AttentionGate-Stage-3-relu’);
sigmoidlayer1 = sigmoidLayer(‘Name’,’AttentionGate-Stage-1-sigmoid’);
sigmoidlayer2 = sigmoidLayer(‘Name’,’AttentionGate-Stage-2-sigmoid’);
sigmoidlayer3 = sigmoidLayer(‘Name’,’AttentionGate-Stage-3-sigmoid’);
convolution3dlayer11 = convolution3dLayer(1, 512, ‘Padding’,’same’, ‘Name’,’AttentionGate-Stage-1-conv-1′);
convolution3dlayer12 = convolution3dLayer(1, 256, ‘Padding’,’same’, ‘Name’,’AttentionGate-Stage-1-conv-2′);
convolution3dlayer13 = convolution3dLayer(1, 256, ‘Padding’,’same’, ‘Name’,’AttentionGate-Stage-1-conv-3′);
convolution3dlayer21 = convolution3dLayer(1, 256, ‘Padding’,’same’, ‘Name’,’AttentionGate-Stage-2-conv-1′);
convolution3dlayer22 = convolution3dLayer(1, 128, ‘Padding’,’same’, ‘Name’,’AttentionGate-Stage-2-conv-2′);
convolution3dlayer23 = convolution3dLayer(1, 128, ‘Padding’,’same’, ‘Name’,’AttentionGate-Stage-2-conv-3′);
convolution3dlayer31 = convolution3dLayer(1, 128, ‘Padding’,’same’, ‘Name’,’AttentionGate-Stage-3-conv-1′);
convolution3dlayer32 = convolution3dLayer(1, 64, ‘Padding’,’same’, ‘Name’,’AttentionGate-Stage-3-conv-2′);
convolution3dlayer33 = convolution3dLayer(1, 64, ‘Padding’,’same’, ‘Name’,’AttentionGate-Stage-3-conv-3′);
net = addLayers(net, relulayer1);
net = addLayers(net, relulayer2);
net = addLayers(net, relulayer3);
net = addLayers(net, sigmoidlayer1);
net = addLayers(net, sigmoidlayer2);
net = addLayers(net, sigmoidlayer3);
net = addLayers(net, convolution3dlayer11);
net = addLayers(net, convolution3dlayer12);
net = addLayers(net, convolution3dlayer13);
net = addLayers(net, convolution3dlayer21);
net = addLayers(net, convolution3dlayer22);
net = addLayers(net, convolution3dlayer23);
net = addLayers(net, convolution3dlayer31);
net = addLayers(net, convolution3dlayer32);
net = addLayers(net, convolution3dlayer33);
net = disconnectLayers(net, ‘Decoder-Stage-1-UpReLU’, ‘encoderDecoderSkipConnectionCrop3/ref’);
net = disconnectLayers(net, ‘Decoder-Stage-2-UpReLU’, ‘encoderDecoderSkipConnectionCrop2/ref’);
net = disconnectLayers(net, ‘Decoder-Stage-3-UpReLU’, ‘encoderDecoderSkipConnectionCrop1/ref’);
net = disconnectLayers(net, ‘Encoder-Stage-3-DropOut’, ‘encoderDecoderSkipConnectionCrop3/in’);
net = disconnectLayers(net, ‘Encoder-Stage-2-ReLU-2’, ‘encoderDecoderSkipConnectionCrop2/in’);
net = disconnectLayers(net, ‘Encoder-Stage-1-ReLU-2’, ‘encoderDecoderSkipConnectionCrop1/in’);
net = connectLayers(net, ‘Decoder-Stage-1-UpReLU’, ‘AttentionGate-Stage-1-conv-1’);
net = connectLayers(net, ‘Decoder-Stage-2-UpReLU’, ‘AttentionGate-Stage-2-conv-1’);
net = connectLayers(net, ‘Decoder-Stage-3-UpReLU’, ‘AttentionGate-Stage-3-conv-1’);
net = connectLayers(net, ‘Encoder-Stage-3-DropOut’, ‘AttentionGate-Stage-1-conv-2’);
net = connectLayers(net, ‘Encoder-Stage-2-ReLU-2’, ‘AttentionGate-Stage-2-conv-2’);
net = connectLayers(net, ‘Encoder-Stage-1-ReLU-2’, ‘AttentionGate-Stage-3-conv-2’);
net = connectLayers(net, ‘AttentionGate-Stage-1-conv-1’, ‘encoderDecoderSkipConnectionCrop3/ref’);
net = connectLayers(net, ‘AttentionGate-Stage-2-conv-1’, ‘encoderDecoderSkipConnectionCrop2/ref’);
net = connectLayers(net, ‘AttentionGate-Stage-3-conv-1’, ‘encoderDecoderSkipConnectionCrop1/ref’);
net = connectLayers(net, ‘AttentionGate-Stage-1-conv-2’, ‘encoderDecoderSkipConnectionCrop3/in’);
net = connectLayers(net, ‘AttentionGate-Stage-2-conv-2’, ‘encoderDecoderSkipConnectionCrop2/in’);
net = connectLayers(net, ‘AttentionGate-Stage-3-conv-2’, ‘encoderDecoderSkipConnectionCrop1/in’);
net = disconnectLayers(net, ‘encoderDecoderSkipConnectionCrop3’, ‘encoderDecoderSkipConnectionFeatureMerge3/in1’);
net = disconnectLayers(net, ‘encoderDecoderSkipConnectionCrop2’, ‘encoderDecoderSkipConnectionFeatureMerge2/in1’);
net = disconnectLayers(net, ‘encoderDecoderSkipConnectionCrop1’, ‘encoderDecoderSkipConnectionFeatureMerge1/in1’);
net = connectLayers(net, ‘encoderDecoderSkipConnectionCrop3’, ‘AttentionGate-Stage-1-relu’);
net = connectLayers(net, ‘encoderDecoderSkipConnectionCrop2’, ‘AttentionGate-Stage-2-relu’);
net = connectLayers(net, ‘encoderDecoderSkipConnectionCrop1’, ‘AttentionGate-Stage-3-relu’);
net = connectLayers(net, ‘AttentionGate-Stage-1-relu’, ‘AttentionGate-Stage-1-conv-3’);
net = connectLayers(net, ‘AttentionGate-Stage-3-relu’, ‘AttentionGate-Stage-3-conv-3’);
net = connectLayers(net, ‘AttentionGate-Stage-2-relu’, ‘AttentionGate-Stage-2-conv-3’);
net = connectLayers(net, ‘AttentionGate-Stage-1-conv-3’, ‘AttentionGate-Stage-1-sigmoid’);
net = connectLayers(net, ‘AttentionGate-Stage-3-conv-3’, ‘AttentionGate-Stage-3-sigmoid’);
net = connectLayers(net, ‘AttentionGate-Stage-2-conv-3’, ‘AttentionGate-Stage-2-sigmoid’);
net = connectLayers(net, ‘AttentionGate-Stage-1-sigmoid’, ‘encoderDecoderSkipConnectionFeatureMerge3/in1’);
net = connectLayers(net, ‘AttentionGate-Stage-2-sigmoid’, ‘encoderDecoderSkipConnectionFeatureMerge2/in1’);
net = connectLayers(net, ‘AttentionGate-Stage-3-sigmoid’, ‘encoderDecoderSkipConnectionFeatureMerge1/in1’);
% 设置训练选项,使用GPU,启用详细输出,以及其他重要训练参数
options = trainingOptions(‘adam’, …
‘InitialLearnRate’, 1e-4, …
‘LearnRateSchedule’, ‘piecewise’, … % 学习率计划
‘LearnRateDropFactor’, 0.5, … % 学习率降低因子
‘LearnRateDropPeriod’, 5, … % 每5个epochs降低学习率
‘L2Regularization’, 1e-4, … % L2正则化,有助于防止过拟合
‘MaxEpochs’, 10, …
‘MiniBatchSize’, 4, …
‘Verbose’, true, …
‘ValidationData’, dsVal, …
‘ValidationFrequency’, 5, …
‘ValidationPatience’, 20, …
‘Plots’, ‘training-progress’, …
‘ExecutionEnvironment’, ‘gpu’, …
‘CheckpointPath’, ‘X:MATLAB codesStatisticsModeling2’);
analyzeNetwork(net);
net = initialize(net);
% 从验证数据集中读取一个样本
[data, info] = read(dsVal);
image = data{1}; % 图像数据
label = data{2}; % 真实标签
image = double(image);
% 进行预测
predictedLabel = predict(net, image); I am trying to add a ViT module to the UNet constructed by the updated unet3d in MATLAB r2024a, and everything is normal during the training process. I have verified the performance of the model after a certain period of time. The analyzeNetwork function shows no errors, and the size of the front and back connections is 65 * 1024 * 1 (SCB). This is the result of serializing the image.
Incorrect use of dlnetwork/predict (line 658)
Execution failed during layer ‘Transformer PositionEmbedding, Encoder Stage-4 Add-1’.
Error Unet3dTrain (line 288)
PredictedLabel=predict (net, image);
Reason:
Incorrect use of matlab. internal. path. cnn MLFusedNetwork/forwardExampleInputs
Arrays are not compatible for addition
Problem seems to occur when add the vector before position embedding and after position embedding.
There are no issues with adding custom print input size layers before and after this layer.
Here is part of the structure of the network.
My native language is not English, and I am using translation software. Please forgive any errors. The following is the code, which includes non English comments.
Code:
clc; clear;
rng(1);
% ========== 数据读取和数据集创建阶段 ==========
% 指定图像和标签文件的位置
imageDir = ‘X:BaiduDownloadbrats2021ProcessedData’;
labelDir = ‘X:BaiduDownloadbrats2021ProcessedData’; % 标签数据存储在同一位置
% 定义类别名和对应的标签ID
categories = ["background", "necrotic_tumor_core", "peritumoral_edema", "enhancing_tumor"]; % 有4类
labelIDs = [0, 1, 2, 4]; % 分别对应上述类别
% 假定输入数据为 128×128 的体积,有一个背景类和一个肿瘤类
inputSize = [128 128 8 2]; % 最后一个维度1表示1种不同的模态
numClasses = 4; % 类别数(背景和肿瘤)
% 创建图像和标签的数据存储
imds = imageDatastore(imageDir, ‘FileExtensions’,’.mat’, ‘ReadFcn’, @customReadData);
pxds = pixelLabelDatastore(labelDir, categories, labelIDs, ‘FileExtensions’,’.mat’, ‘ReadFcn’, @customReadLabels);
% 分割训练集和验证集
numFiles = numel(imds.Files);
idx = randperm(numFiles); % 随机打乱索引
% numFiles = round(0.001 * numFiles); % 选取小数据集测试
numTrain = round(0.9 * numFiles); % 假设80%的数据用于训练
% 使用索引分割数据
trainImds = subset(imds, idx(1:numTrain));
trainPxds = subset(pxds, idx(1:numTrain));
valImds = subset(imds, idx(numTrain+1:end));
valPxds = subset(pxds, idx(numTrain+1:end));
% 组合训练和验证数据
dsTrain = combine(trainImds, trainPxds);
dsVal = combine(valImds, valPxds);
% 补充函数
function labels = customReadLabels(filename)
fileContent = load(filename);
segmentation = fileContent.segmentation(:,:,74:81);
% 假设原始大小为240×240,计算裁剪偏移
cropSize = 200;
startCrop = (size(segmentation,1) – cropSize) / 2 + 1;
endCrop = startCrop + cropSize – 1;
% 四周均匀裁剪为160×160
croppedSegmentation = segmentation(startCrop:endCrop, startCrop:endCrop, :);
% 重置三维数据大小到128×128,使用最近邻插值方法
segmentationResized = imresize3(croppedSegmentation, [128, 128, size(croppedSegmentation, 3)], ‘Method’, ‘nearest’);
% 创建分类数据,确保使用正确的类别名
labels = categorical(segmentationResized, [0, 1, 2, 4], {‘background’, ‘necrotic_tumor_core’, ‘peritumoral_edema’, ‘enhancing_tumor’});
end
function data = customReadData(filename)
fileContent = load(filename);
% 提取特定切片
originalData = squeeze(fileContent.combinedData(:,:,74:81,[1, 3]));
% 同样计算裁剪偏移
cropSize = 200;
startCrop = (size(originalData,1) – cropSize) / 2 + 1;
endCrop = startCrop + cropSize – 1;
% 四周均匀裁剪为160×160
croppedData = originalData(startCrop:endCrop, startCrop:endCrop, :, :);
% 初始化一个新的四维数组,用于存储调整后的数据
resizedData = zeros(128, 128, size(croppedData, 3), size(croppedData, 4));
% 循环处理每一个通道
for i = 1:size(croppedData, 4)
% 调整每个通道的数据大小并进行灰度化
resizedData(:,:,:,i) = imresize3(mat2gray(croppedData(:,:,:,i)), [128, 128, size(croppedData, 3)]);
end
% 输出处理后的数据
data = resizedData;
end
% 创建3D U-Net网络
net = unet3d(inputSize, numClasses, Encoderdepth = 3);
% ========== Unet网络改造阶段 ==========
% 改造ResBlock
% 对Stage-1进行的操作
% 添加一个1x1x1卷积层以适应通道数
adjustConvLayer = convolution3dLayer([1, 1, 1], 64, ‘Name’, ‘Encoder-Stage-1-Conv-Ident-1’, ‘Padding’, ‘same’);
adjustBnLayer = batchNormalizationLayer(‘Name’, ‘Encoder-Stage-1-BN-Ident-1’);
addLayer = additionLayer(2, ‘Name’, ‘Encoder-Stage-1-Add-1’);
% 添加层到图
net = addLayers(net, adjustConvLayer);
net = addLayers(net, adjustBnLayer);
net = addLayers(net, addLayer);
% 连接新层
net = connectLayers(net, ‘encoderImageInputLayer’, ‘Encoder-Stage-1-Conv-Ident-1’);
net = connectLayers(net, ‘Encoder-Stage-1-Conv-Ident-1’, ‘Encoder-Stage-1-BN-Ident-1’);
net = disconnectLayers(net,’Encoder-Stage-1-BN-2′, ‘Encoder-Stage-1-ReLU-2’);
net = connectLayers(net, ‘Encoder-Stage-1-BN-2’, ‘Encoder-Stage-1-Add-1/in1’);
net = connectLayers(net, ‘Encoder-Stage-1-BN-Ident-1’, ‘Encoder-Stage-1-Add-1/in2’);
net = connectLayers(net, ‘Encoder-Stage-1-Add-1’, ‘Encoder-Stage-1-ReLU-2’);
% 对Stage-2进行的操作
% 添加一个1x1x1卷积层以适应通道数
adjustConvLayer2 = convolution3dLayer([1, 1, 1], 128, ‘Name’, ‘Encoder-Stage-2-Conv-Ident-1’, ‘Padding’, ‘same’);
adjustBnLayer2 = batchNormalizationLayer(‘Name’, ‘Encoder-Stage-2-BN-Ident-1’);
addLayer2 = additionLayer(2, ‘Name’, ‘Encoder-Stage-2-Add-1’);
% 添加层到图
net = addLayers(net, adjustConvLayer2);
net = addLayers(net, adjustBnLayer2);
net = addLayers(net, addLayer2);
% 连接新层
net = connectLayers(net, ‘Encoder-Stage-1-MaxPool’, ‘Encoder-Stage-2-Conv-Ident-1’);
net = connectLayers(net, ‘Encoder-Stage-2-Conv-Ident-1’, ‘Encoder-Stage-2-BN-Ident-1’);
net = disconnectLayers(net,’Encoder-Stage-2-BN-2′, ‘Encoder-Stage-2-ReLU-2’);
net = connectLayers(net, ‘Encoder-Stage-2-BN-2’, ‘Encoder-Stage-2-Add-1/in1’);
net = connectLayers(net, ‘Encoder-Stage-2-BN-Ident-1’, ‘Encoder-Stage-2-Add-1/in2’);
net = connectLayers(net, ‘Encoder-Stage-2-Add-1’, ‘Encoder-Stage-2-ReLU-2’);
% 对Stage-3进行操作
% 添加一个1x1x1卷积层以适应通道数
adjustConvLayer3 = convolution3dLayer([1, 1, 1], 256, ‘Name’, ‘Encoder-Stage-3-Conv-Ident-1’, ‘Padding’, ‘same’);
adjustBnLayer3 = batchNormalizationLayer(‘Name’, ‘Encoder-Stage-3-BN-Ident-1’);
addLayer3 = additionLayer(2, ‘Name’, ‘Encoder-Stage-3-Add-1’);
% 添加层到图
net = addLayers(net, adjustConvLayer3);
net = addLayers(net, adjustBnLayer3);
net = addLayers(net, addLayer3);
% 连接新层
net = connectLayers(net, ‘Encoder-Stage-2-MaxPool’, ‘Encoder-Stage-3-Conv-Ident-1’);
net = connectLayers(net, ‘Encoder-Stage-3-Conv-Ident-1’, ‘Encoder-Stage-3-BN-Ident-1’);
net = disconnectLayers(net,’Encoder-Stage-3-BN-2′, ‘Encoder-Stage-3-ReLU-2’);
net = connectLayers(net, ‘Encoder-Stage-3-BN-2’, ‘Encoder-Stage-3-Add-1/in1’);
net = connectLayers(net, ‘Encoder-Stage-3-BN-Ident-1’, ‘Encoder-Stage-3-Add-1/in2’);
net = connectLayers(net, ‘Encoder-Stage-3-Add-1’, ‘Encoder-Stage-3-ReLU-2’);
% BatchNormalization改造为GroupNormalization
% 获取网络中所有层的名称
layerNames = {net.Layers.Name};
% 循环遍历所有层的名称,寻找匹配“BN”的层
for i = 1:length(layerNames)
if contains(layerNames{i}, ‘BN’)
% 创建新的组归一化层
gnLayer = groupNormalizationLayer(4, ‘Name’, layerNames{i});
% 替换现有的 BN 层
net = replaceLayer(net, layerNames{i}, gnLayer);
end
end
% 添加Vision Transformer Layer
PatchEmbeddingLayer1 = patchEmbeddingLayer([4 4 2], 1024, ‘Name’, ‘Transformer-PatchEmbedding’);
EmbeddingConcatenationLayer1 = embeddingConcatenationLayer(‘Name’, ‘Transformer-EmbeddingConcatenation’);
PositionEmbeddingLayer1 = positionEmbeddingLayer(1024, 1024, ‘Name’, ‘Transformer-PositionEmbedding’);
addLayer4 = additionLayer(2, ‘Name’, ‘Encoder-Stage-4-Add-1’);
addLayer5 = additionLayer(2, ‘Name’, ‘Encoder-Stage-4-Add-2’);
addLayer6 = additionLayer(2, ‘Name’, ‘Encoder-Stage-4-Add-3’);
dropoutLayer1 = dropoutLayer(0.1, ‘Name’, ‘Transformer-DropOut-1’);
dropoutLayer2 = dropoutLayer(0.1, ‘Name’, ‘Transformer-DropOut-2’);
LayerNormalizationLayer1 = layerNormalizationLayer(‘Name’,’Transformer-LN-1′);
LayerNormalizationLayer2 = layerNormalizationLayer(‘Name’,’Transformer-LN-2′);
SelfAttentionLayer = selfAttentionLayer(8, 32, ‘Name’, ‘Transformer-SelfAttention’);
FullyConnectedLayer = fullyConnectedLayer(1024, ‘Name’, ‘Transformer-fc’);
ReshapeLayer = reshapeLayer(‘Transformer-reshape’);
index1dLayer = indexing1dLayer(‘Name’, ‘Transformer-index1d’);
% printShapeLayer1 = functionLayer(@printShape, …
% ‘Name’, ‘printShapeLayer1’, …
% ‘NumInputs’, 1, …
% ‘NumOutputs’, 1, …
% ‘InputNames’, {‘in’}, …
% ‘OutputNames’, {‘out’});
% printShapeLayer2 = functionLayer(@printShape, …
% ‘Name’, ‘printShapeLayer2’, …
% ‘NumInputs’, 1, …
% ‘NumOutputs’, 1, …
% ‘InputNames’, {‘in’}, …
% ‘OutputNames’, {‘out’});
% printShapeLayer3 = functionLayer(@printShape, …
% ‘Name’, ‘printShapeLayer3’, …
% ‘NumInputs’, 1, …
% ‘NumOutputs’, 1, …
% ‘InputNames’, {‘in’}, …
% ‘OutputNames’, {‘out’});
net = addLayers(net, PatchEmbeddingLayer1);
net = addLayers(net, EmbeddingConcatenationLayer1);
net = addLayers(net, PositionEmbeddingLayer1);
net = addLayers(net, addLayer4);
net = addLayers(net, addLayer5);
net = addLayers(net, addLayer6);
net = addLayers(net, dropoutLayer1);
net = addLayers(net, dropoutLayer2);
net = addLayers(net, LayerNormalizationLayer1);
net = addLayers(net, LayerNormalizationLayer2);
net = addLayers(net, SelfAttentionLayer);
net = addLayers(net, FullyConnectedLayer);
net = addLayers(net, ReshapeLayer);
net = addLayers(net, index1dLayer);
% net = addLayers(net, printShapeLayer1);
% net = addLayers(net, printShapeLayer2);
% net = addLayers(net, printShapeLayer3);
% net = disconnectLayers(net, ‘encoderImageInputLayer’, ‘Encoder-Stage-1-Conv-1’);
% net = disconnectLayers(net, ‘encoderImageInputLayer’, ‘Encoder-Stage-1-BN-Ident-1’);
% net = connectLayers(net, ‘encoderImageInputLayer’, ‘printShapeLayer3’);
% net = connectLayers(net, ‘printShapeLayer3’, ‘Encoder-Stage-1-Conv-1’);
% net = connectLayers(net, ‘printShapeLayer3’, ‘Encoder-Stage-1-BN-Ident-1’);
net = disconnectLayers(net,’Encoder-Stage-3-DropOut’, ‘Encoder-Stage-3-MaxPool’);
% net = connectLayers(net,’Encoder-Stage-3-ReLU-2′, ‘printShapeLayer1’);
% net = connectLayers(net,’printShapeLayer1′, ‘Transformer-PatchEmbedding’);
net = connectLayers(net,’Encoder-Stage-3-DropOut’, ‘Transformer-PatchEmbedding’);
net = connectLayers(net, ‘Transformer-PatchEmbedding’, ‘Transformer-EmbeddingConcatenation’);
net = connectLayers(net, ‘Transformer-EmbeddingConcatenation’, ‘Transformer-PositionEmbedding’);
net = connectLayers(net, ‘Transformer-PositionEmbedding’, ‘Encoder-Stage-4-Add-1/in1’);
net = connectLayers(net, ‘Transformer-EmbeddingConcatenation’, ‘Encoder-Stage-4-Add-1/in2’);
net = connectLayers(net, ‘Encoder-Stage-4-Add-1’, ‘Transformer-DropOut-1’);
net = connectLayers(net, ‘Transformer-DropOut-1’, ‘Transformer-LN-1’);
net = connectLayers(net, ‘Transformer-LN-1’, ‘Transformer-SelfAttention’);
net = connectLayers(net, ‘Transformer-SelfAttention’, ‘Transformer-DropOut-2’);
net = connectLayers(net, ‘Transformer-DropOut-2’, ‘Encoder-Stage-4-Add-2/in1’);
net = connectLayers(net, ‘Transformer-DropOut-1’, ‘Encoder-Stage-4-Add-2/in2’);
net = connectLayers(net, ‘Encoder-Stage-4-Add-2’, ‘Transformer-LN-2’);
net = connectLayers(net, ‘Transformer-LN-2’, ‘Transformer-index1d’);
net = connectLayers(net, ‘Transformer-index1d’, ‘Transformer-fc’);
net = connectLayers(net, ‘Transformer-fc’, ‘Encoder-Stage-4-Add-3/in1’);
net = connectLayers(net, ‘Encoder-Stage-4-Add-2’, ‘Encoder-Stage-4-Add-3/in2’);
net = connectLayers(net, ‘Encoder-Stage-4-Add-3’, ‘Transformer-reshape’);
% net = connectLayers(net, ‘Transformer-reshape’, ‘Encoder-Stage-3-DropOut’);
net = connectLayers(net, ‘Transformer-reshape’, ‘Encoder-Stage-3-MaxPool’);
% net = connectLayers(net, ‘Transformer-reshape’, ‘encoderDecoderSkipConnectionCrop3/in’);
% net = disconnectLayers(net, ‘Encoder-Stage-3-MaxPool’, ‘LatentNetwork-Bridge-Conv-1’);
% net = connectLayers(net, ‘Encoder-Stage-3-MaxPool’, ‘printShapeLayer2’);
net = removeLayers(net, ‘Encoder-Stage-3-MaxPool’);
net = connectLayers(net, ‘Transformer-reshape’, ‘LatentNetwork-Bridge-Conv-1’);
% net = connectLayers(net, ‘Encoder-Stage-3-MaxPool’, ‘LatentNetwork-Bridge-Conv-1’);
% 添加Attention Gate
relulayer1 = reluLayer('Name', 'AttentionGate-Stage-1-relu');
relulayer2 = reluLayer('Name', 'AttentionGate-Stage-2-relu');
relulayer3 = reluLayer('Name', 'AttentionGate-Stage-3-relu');
sigmoidlayer1 = sigmoidLayer('Name', 'AttentionGate-Stage-1-sigmoid');
sigmoidlayer2 = sigmoidLayer('Name', 'AttentionGate-Stage-2-sigmoid');
sigmoidlayer3 = sigmoidLayer('Name', 'AttentionGate-Stage-3-sigmoid');
convolution3dlayer11 = convolution3dLayer(1, 512, 'Padding', 'same', 'Name', 'AttentionGate-Stage-1-conv-1');
convolution3dlayer12 = convolution3dLayer(1, 256, 'Padding', 'same', 'Name', 'AttentionGate-Stage-1-conv-2');
convolution3dlayer13 = convolution3dLayer(1, 256, 'Padding', 'same', 'Name', 'AttentionGate-Stage-1-conv-3');
convolution3dlayer21 = convolution3dLayer(1, 256, 'Padding', 'same', 'Name', 'AttentionGate-Stage-2-conv-1');
convolution3dlayer22 = convolution3dLayer(1, 128, 'Padding', 'same', 'Name', 'AttentionGate-Stage-2-conv-2');
convolution3dlayer23 = convolution3dLayer(1, 128, 'Padding', 'same', 'Name', 'AttentionGate-Stage-2-conv-3');
convolution3dlayer31 = convolution3dLayer(1, 128, 'Padding', 'same', 'Name', 'AttentionGate-Stage-3-conv-1');
convolution3dlayer32 = convolution3dLayer(1, 64, 'Padding', 'same', 'Name', 'AttentionGate-Stage-3-conv-2');
convolution3dlayer33 = convolution3dLayer(1, 64, 'Padding', 'same', 'Name', 'AttentionGate-Stage-3-conv-3');
net = addLayers(net, relulayer1);
net = addLayers(net, relulayer2);
net = addLayers(net, relulayer3);
net = addLayers(net, sigmoidlayer1);
net = addLayers(net, sigmoidlayer2);
net = addLayers(net, sigmoidlayer3);
net = addLayers(net, convolution3dlayer11);
net = addLayers(net, convolution3dlayer12);
net = addLayers(net, convolution3dlayer13);
net = addLayers(net, convolution3dlayer21);
net = addLayers(net, convolution3dlayer22);
net = addLayers(net, convolution3dlayer23);
net = addLayers(net, convolution3dlayer31);
net = addLayers(net, convolution3dlayer32);
net = addLayers(net, convolution3dlayer33);
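% Re-route the encoder/decoder skip connections through the attention gates:
% the decoder UpReLU outputs feed conv-1 (gating signal), the encoder features
% feed conv-2, and their outputs drive the crop layers' ref and in ports.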
net = disconnectLayers(net, 'Decoder-Stage-1-UpReLU', 'encoderDecoderSkipConnectionCrop3/ref');
net = disconnectLayers(net, 'Decoder-Stage-2-UpReLU', 'encoderDecoderSkipConnectionCrop2/ref');
net = disconnectLayers(net, 'Decoder-Stage-3-UpReLU', 'encoderDecoderSkipConnectionCrop1/ref');
net = disconnectLayers(net, 'Encoder-Stage-3-DropOut', 'encoderDecoderSkipConnectionCrop3/in');
net = disconnectLayers(net, 'Encoder-Stage-2-ReLU-2', 'encoderDecoderSkipConnectionCrop2/in');
net = disconnectLayers(net, 'Encoder-Stage-1-ReLU-2', 'encoderDecoderSkipConnectionCrop1/in');
net = connectLayers(net, 'Decoder-Stage-1-UpReLU', 'AttentionGate-Stage-1-conv-1');
net = connectLayers(net, 'Decoder-Stage-2-UpReLU', 'AttentionGate-Stage-2-conv-1');
net = connectLayers(net, 'Decoder-Stage-3-UpReLU', 'AttentionGate-Stage-3-conv-1');
net = connectLayers(net, 'Encoder-Stage-3-DropOut', 'AttentionGate-Stage-1-conv-2');
net = connectLayers(net, 'Encoder-Stage-2-ReLU-2', 'AttentionGate-Stage-2-conv-2');
net = connectLayers(net, 'Encoder-Stage-1-ReLU-2', 'AttentionGate-Stage-3-conv-2');
net = connectLayers(net, 'AttentionGate-Stage-1-conv-1', 'encoderDecoderSkipConnectionCrop3/ref');
net = connectLayers(net, 'AttentionGate-Stage-2-conv-1', 'encoderDecoderSkipConnectionCrop2/ref');
net = connectLayers(net, 'AttentionGate-Stage-3-conv-1', 'encoderDecoderSkipConnectionCrop1/ref');
net = connectLayers(net, 'AttentionGate-Stage-1-conv-2', 'encoderDecoderSkipConnectionCrop3/in');
net = connectLayers(net, 'AttentionGate-Stage-2-conv-2', 'encoderDecoderSkipConnectionCrop2/in');
net = connectLayers(net, 'AttentionGate-Stage-3-conv-2', 'encoderDecoderSkipConnectionCrop1/in');
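% Replace the direct crop-to-merge links with the attention path: each crop
% output goes through ReLU, a 1x1 convolution, and a sigmoid, and that result
% is fed into the skip-connection feature-merge layers instead.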
net = disconnectLayers(net, 'encoderDecoderSkipConnectionCrop3', 'encoderDecoderSkipConnectionFeatureMerge3/in1');
net = disconnectLayers(net, 'encoderDecoderSkipConnectionCrop2', 'encoderDecoderSkipConnectionFeatureMerge2/in1');
net = disconnectLayers(net, 'encoderDecoderSkipConnectionCrop1', 'encoderDecoderSkipConnectionFeatureMerge1/in1');
net = connectLayers(net, 'encoderDecoderSkipConnectionCrop3', 'AttentionGate-Stage-1-relu');
net = connectLayers(net, 'encoderDecoderSkipConnectionCrop2', 'AttentionGate-Stage-2-relu');
net = connectLayers(net, 'encoderDecoderSkipConnectionCrop1', 'AttentionGate-Stage-3-relu');
net = connectLayers(net, 'AttentionGate-Stage-1-relu', 'AttentionGate-Stage-1-conv-3');
net = connectLayers(net, 'AttentionGate-Stage-3-relu', 'AttentionGate-Stage-3-conv-3');
net = connectLayers(net, 'AttentionGate-Stage-2-relu', 'AttentionGate-Stage-2-conv-3');
net = connectLayers(net, 'AttentionGate-Stage-1-conv-3', 'AttentionGate-Stage-1-sigmoid');
net = connectLayers(net, 'AttentionGate-Stage-3-conv-3', 'AttentionGate-Stage-3-sigmoid');
net = connectLayers(net, 'AttentionGate-Stage-2-conv-3', 'AttentionGate-Stage-2-sigmoid');
net = connectLayers(net, 'AttentionGate-Stage-1-sigmoid', 'encoderDecoderSkipConnectionFeatureMerge3/in1');
net = connectLayers(net, 'AttentionGate-Stage-2-sigmoid', 'encoderDecoderSkipConnectionFeatureMerge2/in1');
net = connectLayers(net, 'AttentionGate-Stage-3-sigmoid', 'encoderDecoderSkipConnectionFeatureMerge1/in1');
% Set the training options: use the GPU, enable verbose output, and set other key training parameters
options = trainingOptions('adam', ...
    'InitialLearnRate', 1e-4, ...
    'LearnRateSchedule', 'piecewise', ...   % learning-rate schedule
    'LearnRateDropFactor', 0.5, ...         % learning-rate drop factor
    'LearnRateDropPeriod', 5, ...           % drop the learning rate every 5 epochs
    'L2Regularization', 1e-4, ...           % L2 regularization helps prevent overfitting
    'MaxEpochs', 10, ...
    'MiniBatchSize', 4, ...
    'Verbose', true, ...
    'ValidationData', dsVal, ...
    'ValidationFrequency', 5, ...
    'ValidationPatience', 20, ...
    'Plots', 'training-progress', ...
    'ExecutionEnvironment', 'gpu', ...
    'CheckpointPath', 'X:MATLAB codesStatisticsModeling2');
analyzeNetwork(net);
net = initialize(net);
% Read one sample from the validation datastore
[data, info] = read(dsVal);
image = data{1};   % image data
label = data{2};   % ground-truth label
image = double(image);
% Run prediction
predictedLabel = predict(net, image);
unet3d, visiontransformer, transunet MATLAB Answers — New Questions
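If the predict call above returns a raw score volume with one channel per class, a minimal sketch for turning it into a label map and comparing it with the ground truth could look like the following (the class names are hypothetical placeholders, not from the original post):
% Minimal sketch (assumptions: the scores are H-by-W-by-D-by-numClasses and
% "label" is the categorical ground-truth volume read from dsVal above).
scores = predictedLabel;
if isa(scores, 'dlarray')
    scores = extractdata(scores);    % strip the dlarray wrapper if present
end
[~, classIdx] = max(scores, [], 4);  % pick the highest-scoring class per voxel
classNames = ["background", "foreground"];   % hypothetical class names
segMap = categorical(classIdx, 1:numel(classNames), classNames);
voxelAccuracy = mean(segMap(:) == label(:))  % rough voxel-wise agreement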
Help with nested loop and plot
Hello everyone, I need help with the following code.
I am trying to plot the boundary of the region that satisfies the following conditions (solve and plot for x and y):
x + y + e + t >= 0
and
x*y - e*t >= 0
where x and y are the two variables, while e and t are two constants whose values have to vary over a range.
So far I have the following, which works fine for fixed e and t:
n = 101;
x = linspace(-100, 100, n);
y = linspace(-100, 100, n);
[X, Y] = meshgrid(x, y);
Z = zeros(n, n);
e = 10;
t = -25;
B = X + Y + e + t;
D = X.*Y - e.*t;
for i = 1:n
    for j = 1:n
        if B(i,j) >= 0
            Z(i,j) = D(i,j);
        else
            Z(i,j) = -1;
        end
    end
end
v = [0,0];
contour(X, Y, Z, v, 'LineWidth', 1.5)
grid on
axis equal
xline(0, 'Color', 'k', 'LineWidth', 0.5);
yline(0, 'Color', 'k', 'LineWidth', 0.5);
Now I would like to see the effect of the two constants e and t on the boundary above. I would like to plot several curves, with varying e and t, on the same graph, but I am having trouble finding an efficient way to do it.
e and t are two arrays such as linspace(-25, 25, 3), so I want to check how the plot evolves over the 3x3 combinations of e and t.
I tried nesting for loops, but it didn’t work: I got a blank plot. Could anybody please give me suggestions on how to do it with for loops, or in any other way?
I know I could do it “manually”, changing e and t every time and using hold on to plot the curves on the same figure, but that is rather inefficient.
Thanks to anyone who will help.
nested loops, for loop, plot, hold, loops, cycles, grid, meshgrid MATLAB Answers — New Questions
Can we change the Constraints in MPC?
Hello,
I’m using the online feature of the MPC block, i.e. varying the MV constraints with simulation time.
Is it possible to change the inequality of the constraints in the MPC block in MATLAB/Simulink?
For example:
If the ECR value (V) is 0 and the scale factor is 1, then
the default constraint is: u_min <= u <= u_max
and I would like to change it to: u_min < u < u_max
constraints, simulink, model predictive control, mpc MATLAB Answers — New Questions
Sharepoint list with nested IFs
Hi
I have stared myself blind at this, but I keep getting a syntax error when trying to use this IF statement in a calculated column.
=IF([Type Betaling] = "Månedlig", [Kost pr mnd] * 12, IF([Type Betaling] = "Årlig", [Kost pr mnd], IF([Type Betaling] = "Halvår", [Kost pr mnd] * 2, IF([Type Betaling] = "Kvartalsvis", [Kost pr mnd] * 4, [Kost pr mnd]))))
I hope someone can spot my mistake, as I thought this should be possible.
Read More
Microsoft Store Downloads Not Progressing
I have noticed several issues with my Windows updates, such as updates retrying, being stuck at 0% downloaded, or displaying error messages. I believe these issues might be affecting my Microsoft Store downloads, as they are also stuck at 1%. Despite trying various solutions from online sources like YouTube, I haven’t been successful. What steps should I take next to resolve this issue?
Read More
There was a problem when resetting your PC. No changes were made.
After attempting to reset my PC on Windows 11, I encountered an error message stating, “There was a problem when resetting your PC. No changes were made.” This issue arose after installing the Microsoft .NET SDK on my system. Upon restarting, I was met with a blue screen displaying the error “Critical Process Died.”
Unfortunately, I can’t use the system restore point from the troubleshooting page because it requires enabling system protection on the drive. Additionally, I am unable to reset the PC or retrieve my data.
Please help me.
Read More
Microsoft Confirms Recent Windows 11 Updates Disrupt Taskbar Functionality
Recent non-security updates for Windows 11 versions 22H2 and 23H2 introduced several new features and minor fixes. Unfortunately, the update KB5039302 has caused significant issues. Microsoft has confirmed that this update leads to infinite restart loops on some systems.
Additionally, Microsoft has issued a warning on the official Windows Health Dashboard website about another problem. The update KB5039302 is also breaking the taskbar on specific editions of Windows 11, particularly Windows N.
https://learn.microsoft.com/en-us/windows/release-health/status-windows-11-23H2#3345msgdesc
Read More
WhatsApp Widget Appearing on Lock Screen – Need Help
In recent weeks, I’ve noticed a WhatsApp “player” widget appearing on my PC’s lock screen when I wake it up. This widget includes back, pause, and forward buttons, similar to what you’d see on a phone’s lock screen when media is playing. However, the WhatsApp desktop app isn’t playing any audio or video; it’s just idle, waiting for messages as usual.
I’ve checked the WhatsApp desktop app settings but found nothing related to the lock screen. I also looked through Windows 11 settings under Lock Screen, but there’s no mention of WhatsApp there either.
Does anyone know why this is happening? Any help would be appreciated!
Read More
Unable to Switch from Beta to Release Preview for 24H2 Update
I am currently on version 23H2 and need to switch to the Release Preview to update to 24H2. It appears that I need to perform a clean install of Windows 11. How can I do this while retaining all my settings and applications?
Read More
SharePoint Lookup Column Not Working Anymore
Since today, I have been unable to use the lookup columns: when I type a search term, the system does not return any results.
The browser’s development tools report the following error:
POST https://MYTENANT.sharepoint.com/sites/JORTODEL/_api/web/GetListUsingPath(DecodedUrl=@v1)/SearchLookupFieldChoices(targetFieldName='Title',pagingInfo='',?@v1='%2Fsites%2FJORTODEL%2FLists%2FTEST%20LOOKUP2')
RESPONSE CODE 400 BAD REQUEST
{
  "error": {
    "code": "-1, Microsoft.SharePoint.Client.InvalidClientQueryException",
    "message": {
      "lang": "en-US",
      "value": "The expression 'web/GetListUsingPath(DecodedUrl=@v1)/SearchLookupFieldChoices(targetFieldName='Title',pagingInfo=',' is not valid."
    }
  }
}
The script that generated the request is named: https://res-1.cdn.office.net/files/odsp-web-prod_2024-06-21.012/splistwebpack/plt.odsp-common.js
In another tenant that I manage, the lookup columns work correctly.
In that case, the script that generates the request is named: https://res-1.cdn.office.net/files/odsp-web-prod_2024-06-14.009/splistwebpack/plt.odsp-common.js
Has anyone noticed this problem?
Can I do something to resolve it, or do I just have to wait?
Read More
Sharepoint Permission
I want to set permissions that enable users to edit content without also allowing them to modify folders, so users can only upload and edit documents but cannot create, rename, or delete folders.
Thank you.
Read More
How Can I Successfully Migrate from Q.B Desktop to Quick-Books Online?
I’m planning to migrate from Q.B Desktop to Quick-Books Online and need some help with the process. I’m concerned about potential data loss and want to ensure that all my financial information, including transactions, customer details, and historical data, transfers correctly. Can someone provide a step-by-step guide or share tips to make the migration seamless? Any advice on common issues and how to avoid them would also be greatly appreciated.
Read More
Coach by Copilot in Outlook
Is it possible to change the prompt of the “Coach by Copilot” feature in Outlook?
It’s annoying to always get the same suggestion, with unhelpful advice on tone telling me to write my email more formally.
It would be wonderful if we could customize the instruction prompt in “Coach by Copilot” so that we can get exactly what we want from it, just like premium ChatGPT.
Regards,
Eddy
Read More