Month: July 2024
Table of contents and page number not working: Mac
Hello,
I’m a Mac user. I routinely create documents with a table of contents and headers/footers for page numbers. I use the Table of Contents option under the References Menu.
Today it stopped working. Instead of a TOC, curly brackets and field code are inserted into my document, e.g. {TOC} or {PAGE}.
I uninstalled and reinstalled the Word app. I restarted my computer. I checked for updates for my operating system. I uninstalled and reactivated my license.
Is anyone else having this experience?
M
Can’t find correct RBAC permissions to approve AIR actions
I’ve been configuring custom RBAC roles, and even though the “Response (manage)” permission in the Security Operations permissions group includes “approve or dismiss pending remediation actions,” it doesn’t work. I’ve tried it with pending “soft delete emails” actions in the Action Center, and I get an error. The only way we can approve or reject these actions is with the Entra Security Administrator role checked out.
Does anyone know which RBAC permission is supposed to grant the rights to approve these remediation actions?
Defender XDR RBAC and Cloud Apps
Is there any roadmap for integrating Defender XDR RBAC with Defender for Cloud Apps?
Cloud Apps Score Metrics per category
Hi All,
I am trying to create a Cloud App discovery policy that applies to only a specific category of apps, and I want to fine-tune the “Score metrics” for only one category.
Settings -> Cloud Discovery -> Score metrics applies to all apps. I need a way to apply this only to a specific category.
From what I can see this is not possible. Does anyone have any idea if there is a way to do this?
Regards,
Andrew
Unwrap with tolerance other than default (=pi)
I encountered a problem with the MATLAB function “unwrap” when I tried to unwrap the phase of a signal with a jump tolerance other than pi.
I used the following phase for testing the behavior of the “unwrap” function:
phs = [0.1, 0.2, 0.3, 0.4, 0.4+pi-0.1, 0.4+pi-0.05, 0.4+pi-0.01];
There is a jump in the phase angle between elements four and five that is smaller than pi.
I then tried to eliminate this jump using the “unwrap” function and the tolerance pi/2:
phs = unwrap(phs, pi/2);
The phs vector didn’t change, nor did it change with any other value for the tolerance.
Shouldn’t it eliminate the jump by adding +/- 2pi or by adding +/- pi/2?
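A minimal stdlib Python sketch of why nothing changes, re-implementing the rule stated in the unwrap documentation (corrections only ever in multiples of 2*pi, and a jump tolerance below pi behaves as pi) — this is illustrative, not MathWorks' actual code:

```python
import math

def unwrap_like(phase, tol=math.pi):
    # Mirrors the documented behavior of MATLAB's unwrap: jumps are
    # corrected ONLY by multiples of 2*pi, and a jump tolerance less
    # than pi has the same effect as a tolerance of pi.
    tol = max(tol, math.pi)
    out = [phase[0]]
    offset = 0.0
    for prev, cur in zip(phase, phase[1:]):
        d = cur - prev
        if d > tol:
            offset -= 2 * math.pi   # jump up: shift the tail down
        elif d < -tol:
            offset += 2 * math.pi   # jump down: shift the tail up
        out.append(cur + offset)
    return out

phs = [0.1, 0.2, 0.3, 0.4,
       0.4 + math.pi - 0.1, 0.4 + math.pi - 0.05, 0.4 + math.pi - 0.01]
# The jump between elements 4 and 5 is pi - 0.1, i.e. smaller than pi,
# so even with tol = pi/2 (effectively raised to pi) nothing is corrected
# -- and unwrap never adds +/- pi/2, only multiples of 2*pi.
print(unwrap_like(phs, math.pi / 2) == phs)
```

So a jump smaller than pi can never be removed this way: adding or subtracting 2*pi would make the discontinuity larger, not smaller.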
eliminate phase jumps in unwrapping
Hi everyone. I’m unwrapping the phase of a hemisphere image using the MATLAB unwrap function. The image contains vertical lines. The unwrap function works well for a constant surface, but not for the hemisphere: there are some jumps in the unwrapped image.
I use the function in the form of:
UnwrappedImage7=unwrap(unwrap(PHI,[],2),[],1);
Do you have any suggestions to eliminate the jumps?
how to align two columns with different times
Hi, I have to align two timestamps, but I need help!
Input data: ML
The first two columns contain the event codes (column 1) and the corresponding timestamps (column 2, in milliseconds) from a time 0, giving when each event occurs.
ML(:,1) = (event codes)
9 62 15 40 54 50 18 9 63 15 40 54 50 18 9 64 15 40 54 50 18 9 65 15 40 54 50 18 9 66 15 40 18 9 67 15 40 54 50 18 9 68 15 40 54 50 18 9 69 15 40 54 50 18 9 70 15 40 54 50 18 9 71 15 40 54 50 18 9 72 15 40 54 50 18 9 73 15 40 54 50 18 9 74 15 40 54 50 18 9 75 15 40 54 50 18 9 76 15 40 54 50 18 9 77 15 40 54 50 18 9 78 15 40 54 50 18 9 79 15 40 54 50 18 9 80 15 40 54 50 18 9 81 15 40 54 50 18 ;
ML(:,2) = ( timestamp 1, milliseconds of the times of each code from 0)
4974 4980 5112 5579 5968 6042 6374 6877 6882 6912 7545 8066 8133 8419 8921 8925 9012 9612 10115 10247 10533 11036 11040 11112 11679 12082 12102 12393 12896 12899 12979 13445 16592 17095 17098 17179 17545 17917 18000 18307 18809 18813 18913 19645 20148 20163 20462 20964 20968 21079 21612 21999 22065 22422 22926 22930 23112 23613 24010 24081 24384 24888 24893 24912 25379 25946 26006 26420 26922 26925 27112 27612 27991 28075 28368 28870 28873 28979 29546 30045 30061 30374 30875 30878 30946 31512 32029 32090 32349 32851 32854 33046 33712 34107 34164 34487 34988 34991 35046 35579 36093 36165 36460 36962 36965 37046 37446 37843 37919 38211 38712 38715 38780 39279 39655 39672 39975 40477 40479 40580 41146 41691 41753 42002 42504 42507 42613 43146 43657 43728 44019 44520 44523 44579 45180 45576 45641 45936
The third column is a separate timestamp (timestamp 2), with the times when code 40 occurs from a time 0 relative to THAT timestamp (not to timestamp 1). The absolute values are different. This timestamp contains only the values for code 40; the zero values correspond to NaN.
ML(:,3)
0 0 0 1989 0 0 0 0 0 0 3464 0 0 0 0 0 0 5014 0 0 0 0 0 0 6564 0 0 0 0 0 0 7889 0 0 0 0 10964 0 0 0 0 0 0 12539 0 0 0 0 0 0 14014 0 0 0 0 0 0 15514 0 0 0 0 0 0 16839 0 0 0 0 0 0 18514 0 0 0 0 0 0 19964 0 0 0 0 0 0 21439 0 0 0 0 0 0 23089 0 0 0 0 0 0 24489 0 0 0 0 0 0 25889 0 0 0 0 0 0 27265 0 0 0 0 0 0 28665 0 0 0 0 0 0 30165 0 0 0 0 0 0 31690 0 0 0
Even though the absolute values of the two timestamps are different, the difference between two consecutive times of code 40 in column 2 should be the same as the difference between the consecutive times of code 40 in column 3. If this is the case, I can fill the zero values of column 3 and obtain a complete timestamp for column 3 (timestamp 2) which corresponds to the event codes:
%% select only the values which correspond to the code 40
gt = find(ML(:,1) == 40);
new_ML = ML(gt,:);
%% differences between times of consecutive codes 40:
timestamp1_diff = new_ML(2:end,2) - new_ML(1:end-1,2)
timestamp2_diff = new_ML(2:end,3) - new_ML(1:end-1,3)
However, the values of timestamp1_diff and timestamp2_diff are not the same. The values of timestamp1_diff are always between 400 and 550 ms bigger than those of timestamp2_diff. That means that the values of column 3 do not correspond to code 40. I have to find which code the values of timestamp 2 (ML(:,3)) correspond to.
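One way to hunt for the matching code is to compare the inter-event intervals of each candidate code in timestamp 1 against the intervals of timestamp 2. A hypothetical stdlib Python sketch with made-up toy data (the helper name and data are illustrative, not from the original post):

```python
def best_matching_code(codes, t1, t2_events):
    """Return the event code whose inter-event intervals in timestamp 1
    best match the intervals between the nonzero entries of timestamp 2."""
    def diffs(xs):
        return [b - a for a, b in zip(xs, xs[1:])]

    d2 = diffs(t2_events)
    best, best_err = None, float("inf")
    for c in sorted(set(codes)):
        # times of this code in timestamp 1
        tc = [t for code, t in zip(codes, t1) if code == c]
        dc = diffs(tc)
        n = min(len(dc), len(d2))
        if n == 0:
            continue
        # mean absolute difference between the two interval sequences
        err = sum(abs(a - b) for a, b in zip(dc, d2)) / n
        if err < best_err:
            best, best_err = c, err
    return best

# Toy data: code 54's intervals (510, 620) match timestamp 2's intervals.
codes = [40, 54, 40, 54, 40, 54]
t1    = [100, 130, 600, 640, 1200, 1260]
t2    = [5, 515, 1135]
print(best_matching_code(codes, t1, t2))
```

Because the comparison uses only consecutive differences, the different zero points of the two timestamps drop out.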
I tried, but I gave up!
any idea??
thanks
Anna
Error in calculating path to workspace goal region in robot’s path planning algorithm
Hi there, I’m having an issue calculating a path for my rigid body tree object to follow to a specified goal region. I’m developing a script to find the path from the home position of the robot to a specified goal region – much like the "Plan Path to a Workspace Goal Region" example demonstrated in the manipulatorRRT documentation page. My issue specifically is that the plan function cannot find a suitable path (outputting IsPathFound = 0 and ExitFlag = 2 – meaning that the maximum number of iterations was reached without finding a suitable path).
I have been following the previously cited MATLAB example nearly word-for-word, and my inputs to the plan function (the rrt object, the initial robot configuration, and the goal region) all function correctly and are passed without any errors.
I checked the following areas to try to fix this issue:
The robot (the rrt object) has the joint position limits set to [-inf, inf] and can therefore be rotated manually through the script to any angle – I believe the joints have the freedom to rotate during the pathing calculations.
I have placed the goal region very close to the final link and set the bounds to have a very large positional and angular tolerance – I believe this should clear up any issues with difficult to reach angles.
Perhaps my lack of an end effector is affecting the pathing algorithm but every link (and frame of each link) resides in the goal region.
I have attached the URDF file and the smallest working example of the code as a .zip file to demonstrate the issue I’m having. Any help and/or advice would be greatly appreciated.
Thank you, Scott Brown.
Cal Poly SLO – Mechanical Engineering Master’s Student
cannot get account targets from Get-MgBetaSecurityAttackSimulationTrainingCampaign
Dear all,
I’m trying to read (and later modify) Microsoft Defender Attack Simulation training campaigns. There is a Microsoft Graph BETA(!) API available.
When I query the list of training campaigns with
Get-MgBetaSecurityAttackSimulationTrainingCampaign
I get the list of campaigns that I can also see in the Microsoft Defender web UI.
When I query more details for a specific training campaign with
Get-MgBetaSecurityAttackSimulationTrainingCampaign -TrainingCampaignId MY-TRAINING-ID
I get a result like this:
CampaignSchedule : Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphCampaignSchedule
CreatedBy : Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphEmailIdentity
CreatedDateTime : 7/8/2024 1:55:57 PM
Description :
DisplayName : 2024-07 – Basic IT security training for regular users
EndUserNotificationSetting : Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphEndUserNotificationSetting
ExcludedAccountTarget : Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphAccountTargetContent
Id : 11111111-2222-3333-4444-555555555
IncludedAccountTarget : Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphAccountTargetContent
LastModifiedBy : Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphEmailIdentity
LastModifiedDateTime : 7/8/2024 3:20:25 PM
Report : Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphTrainingCampaignReport
TrainingSetting : Microsoft.Graph.Beta.PowerShell.Models.MicrosoftGraphTrainingSetting
AdditionalProperties : {[@odata.context,
https://graph.microsoft.com/beta/$metadata#security/attackSimulation/trainingCampaigns/$entity]}
I was not able to get any results for IncludedAccountTarget or any other nested object – neither with ForEach-Object nor with anything else.
Any idea? Any hint?
Thank you so much for your help!
Best regards
Daniel
Named Ranges – Is there a way to easily copy from one spreadsheet to another?
I have spreadsheets that have already been filled out and I want to “upgrade” them to Named Ranges. Is there a method to easily copy a list of named ranges from one spreadsheet to another?
How will i add another value (e.g. TLOG_MMF_COR_0003) in the below
Hi,
How will I add another value (e.g. TLOG_MMF_COR_0003) to the ‘Type:=xlCaptionDoesNotBeginWith’ filter below, where the existing
Value1 ‘ATLOG_MMF_0003’ is already set:
=======================================================
ActiveSheet.PivotTables("PivotTable9").PivotFields("USSMG_File-list"). _
    PivotFilters.Add2 Type:=xlCaptionDoesNotBeginWith, Value1:="ATLOG_MMF_0003"
Removing an attachment strips non-Microsoft X-* headers from a message
I’ve run into an odd behavior that doesn’t seem to be documented. When I delete an attachment from an email message via Remove-MgUserMessageAttachment, Graph appears to strip all non-Microsoft X-* Internet message headers from the message.
For example, an existing X-Spam header will disappear, but X-MS-Exchange* headers will remain.
Is this behavior documented anywhere either as a bug or a feature? Is it just me?
This site is read only at the farm administrator’s request.
SharePoint will not allow me to upload my reports.
How to check collision between several robots?
I have created four robots (rigidBodyTrees) with the Robotics Toolbox.
In reality, this is one robot with four arms.
I chose to create four different robots, because I want to ignore self collision (the self being the same arm).
I want to generate configurations for each arm (each robot) and then check whether there is collision between them.
How do I do this? The documentation says:
checkCollision(robot,config,worldObjects)
This suggests that I can only enter the configuration for one robot and the worldObjects cannot be placed depending on the configuration that is generated but are ‘fixed’.
I really hope someone can help me!
If statement in SharePoint
Good Evening
I am trying to enter what I feel is a rather simple “IF” statement but I can’t figure it out.
I have a field titled “Item Type” and one titled “FLW – Business Days Remaining”, and I want to return a “2” if the Item Type equals “Prospect” and “FLW – Business Days Remaining” is greater than 2.
This is what I have
=IF(AND([Item Type]=”Prospect”,[FLW – Business Days Remaining] >2,”2″)))
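For comparison, a form of the formula that at least parses (hedged: this assumes the SharePoint calculated-column signature IF(condition, value_if_true, value_if_false), so the AND must close before the result arguments, and straight quotes must be used):

```
=IF(AND([Item Type]="Prospect",[FLW - Business Days Remaining]>2),"2","")
```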
Any help would be greatly appreciated
Copilot Studio – Questions about Knowledge
Hello!
We’re testing Copilot Studio and we have questions about the knowledge files we’re uploading.
We’re using it for employees to ask internal questions.
Is there anyone who is using it and performing well?
Do you have any recommendations on what the file should look like?
Pdf, Word, Excel, with or without topics
I would appreciate it if you could support us
Thank you for your attention
Bell Notifications doesn’t work. Again.
and above link forwards on
But I definitely shall have them.
Leveraging phi-3 for an Enhanced Semantic Cache in RAG Applications
The field of Generative AI (GenAI) is rapidly evolving, with Large Language Models (LLMs) playing a central role. Building responsive and efficient applications using these models is crucial. Retrieval-Augmented Generation (RAG) applications, which combine retrieval and generation techniques, have emerged as a powerful solution for generating high-quality responses. However, a key challenge arises in handling repeat queries efficiently while maintaining contextually accurate and diverse responses. This blog post explores a solution that addresses this challenge. We propose a multi-layered approach that utilizes a semantic cache layer and phi-3, a Small Language Model (SLM) from Microsoft, to rewrite responses. This approach enhances both performance and user experience.
Demystifying RAG: Retrieval Meets Generation
Retrieval-Augmented Generation (RAG) is a cutting-edge framework that extends the capabilities of natural language generation models by incorporating information retrieval.
Here’s how it works:
User Query: This is the initial input from the user.
App service: Central component that orchestrates the entire RAG workflow, managing user queries, interacting with the cache and search service, and delivering final responses.
Vectorize Query: Leverage OpenAI Embedding models to vectorize the user query into numerical representations. These vectors, similar to fingerprints, allow for efficient comparison and retrieval of relevant information from the vector store and semantic cache.
Semantic Cache Store: This component stores responses to previously encountered queries and is checked to see whether the current user query aligns with any stored query. If a response is found in the cache (cache-hit), it is fetched and sent to the user.
Vector Store: If no matching query is found in the cache (cache-miss), leverage Azure AI Search service to scour the vast corpus of text to identify relevant documents or snippets based on the user’s query.
Azure OpenAI LLM (GPT 3.5/4/4o): The retrieved documents from AI Search are fed to these LLMs to craft a response, in-context to the user’s query.
Logs: These are used to monitor and analyze system performance.
What is Semantic Cache?
Semantic caching plays a pivotal role in enhancing the efficiency and responsiveness of Retrieval-Augmented Generation (RAG) applications. This section delves into its significance and functionality within the broader architecture:
Understanding Semantic Cache
Storage and Retrieval: The semantic cache acts as a specialized storage unit that stores responses to previously encountered queries. It indexes these responses based on the semantic content of the queries, allowing for efficient retrieval when similar queries are encountered in the future.
Query Matching: When a user query is received, it undergoes vectorization using embedding models to create a numerical representation. This representation is compared against stored queries in the semantic cache. If a match is found (cache-hit), the corresponding response is fetched without the need for additional computation.
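The matching step above can be sketched with plain cosine similarity over the stored embeddings. This assumes queries are already vectorized; the 0.9 threshold is illustrative, and a production system would tune it and use an index rather than a linear scan.

```python
# Minimal sketch of semantic cache matching over embedded queries.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity; returns 0.0 for zero-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def find_cached_response(query_vec, cache_entries, threshold=0.9):
    """cache_entries: list of (embedding, response) pairs. Returns the
    response of the most similar stored query if it clears the threshold
    (cache-hit), else None (cache-miss)."""
    best_score, best_response = 0.0, None
    for vec, response in cache_entries:
        score = cosine_similarity(query_vec, vec)
        if score > best_score:
            best_score, best_response = score, response
    return best_response if best_score >= threshold else None
```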
Benefits of Semantic Cache:
Speed: Responses retrieved from the semantic cache are delivered almost instantaneously, significantly reducing latency compared to generating responses from scratch.
Resource Efficiency: By reusing pre-computed responses, semantic caching optimizes resource utilization, allowing computational resources to be allocated more effectively.
Consistency: Cached responses ensure consistency in answers to frequently asked questions or similar queries, maintaining a coherent user experience.
Scalability: As the volume of queries increases, semantic caching scales efficiently by storing and retrieving responses based on semantic similarities rather than raw text matching.
Implementing Semantic Cache in RAG
Integration with RAG Workflow: The semantic cache is seamlessly integrated into the RAG workflow, typically managed by the application service. Upon receiving a user query, the application service first checks the semantic cache for a matching response.
Update and Refresh: Regular updates and maintenance of the semantic cache are essential to ensure that responses remain relevant and up to date. This may involve periodic pruning of outdated entries and adding new responses based on recent user interactions.
Performance Monitoring: Monitoring tools track the performance of the semantic cache, providing insights into cache-hit rates, response retrieval times, and overall system efficiency. These metrics guide optimization efforts and ensure continuous improvement.
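The cache-hit rate mentioned above is straightforward to track. One possible shape for such a counter, with illustrative names:

```python
# A simple hit/miss counter for monitoring semantic cache effectiveness.
from dataclasses import dataclass

@dataclass
class CacheMetrics:
    hits: int = 0
    misses: int = 0

    def record(self, cache_hit: bool) -> None:
        # Call once per handled query with the cache-hit outcome.
        if cache_hit:
            self.hits += 1
        else:
            self.misses += 1

    @property
    def hit_rate(self) -> float:
        # Fraction of queries served from the cache; 0.0 before any traffic.
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```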
Challenges in RAG with Semantic Caching
While RAG models are undeniably powerful, they encounter some hurdles:
Repetitive Queries: When users pose similar or identical queries repeatedly, it can lead to redundant processing, resulting in slower response times.
Response Consistency: Ensuring responses maintain contextual accuracy and relevance, especially for similar queries, is crucial.
Computational Burden: Generating responses from scratch for every query can be computationally expensive, impacting resource utilization.
Improving the Semantic Cache with phi-3
To address these challenges, we propose a multi-layered approach built on top of RAG architecture with semantic caching that leverages phi-3, a Small Language Model (SLM) from Microsoft, to dynamically rewrite cached responses retrieved from the semantic cache for similar repeat queries. This ensures responses remain contextually relevant and varied, even when served from the cache.
The major change in the architecture above is the addition of phi-3. When a matching query is found in the cache, the retrieved cached response is routed through phi-3. This SLM analyzes the cached response and the current user query, dynamically rewriting the cached response to better suit the nuances of the new query.
By integrating phi-3 into the semantic cache layer, we can achieve the following:
Dynamic Rewriting: When a query matching a cached response is received, phi-3 steps in. It analyses the cached response and the user’s current query, identifying nuances and differences. Subsequently, phi-3 rewrites the cached response to seamlessly incorporate the specific context of the new query while preserving the core meaning. This ensures that even cached responses feel fresh, relevant, and up to date.
Reduced Computational Load: By leveraging phi-3 for rewriting cached responses, we significantly reduce the burden on the larger, computationally expensive LLMs (like GPT-3.5 or GPT-4). This frees up resources for the LLM to handle complex or novel queries that require its full generative power.
Improved Response Diversity: Even for repetitive queries, phi-3 injects variation into the responses through rewriting. This prevents users from encountering identical responses repeatedly, enhancing the overall user experience.
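The rewrite step can be sketched as a prompt to the SLM combining the cached answer with the new query. The template wording is illustrative, and `call_phi3` is a placeholder for however you host phi-3 (e.g. an Azure AI endpoint); here it is stubbed so the example runs.

```python
# Sketch of routing a cached response through phi-3 for rewriting.
# REWRITE_TEMPLATE and call_phi3 are illustrative, not a real API.

REWRITE_TEMPLATE = (
    "Rewrite the answer below so it directly addresses the new question, "
    "preserving the core meaning.\n"
    "New question: {query}\n"
    "Cached answer: {cached}\n"
    "Rewritten answer:"
)

def call_phi3(prompt: str) -> str:
    # Placeholder for a real phi-3 inference call; this stub simply
    # echoes the cached answer back out of the prompt.
    return prompt.rsplit("Cached answer: ", 1)[1].split("\n")[0]

def rewrite_cached_response(query: str, cached_response: str) -> str:
    """On a cache-hit, adapt the stored answer to the new query's wording."""
    prompt = REWRITE_TEMPLATE.format(query=query, cached=cached_response)
    return call_phi3(prompt)
```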
Implementation Considerations
Integrating phi-3 into your RAG application requires careful planning and execution:
Semantic Cache Management: Efficient management of the semantic cache is crucial to ensure quick access to relevant cached responses. Regular updates and pruning of the cache can help maintain its effectiveness.
Fine-Tuning phi-3: Fine-tuning phi-3 to handle specific rewriting tasks can further enhance its performance and ensure it aligns well with the context of your application.
Monitoring and Analytics: Continuous monitoring and analytics can help identify patterns in user queries and optimize the caching strategy. Logs play a crucial role in this aspect, providing insights into the system’s performance and areas for improvement.
Conclusion
The integration of phi-3 into the semantic cache layer of a RAG application represents a significant advancement in handling repeat queries efficiently while maintaining contextually accurate and diverse responses. By leveraging the dynamic rewriting capabilities of phi-3, we can enhance both the performance and user experience of RAG applications.
This multi-layered approach not only addresses the challenges of repetitive queries and computational burden but also ensures that responses remain fresh and relevant, even when served from the cache. As Generative AI continues to evolve, such innovations will play a crucial role in building responsive and efficient applications that can meet the diverse needs of users.
Incorporating these strategies into your RAG application can help you stay ahead in the rapidly evolving field of Generative AI, delivering high-quality and contextually accurate responses that enhance user satisfaction and engagement.
Microsoft Tech Community – Latest Blogs – Read More
Why are the gradients not backpropagating into the encoder in this custom loop?
I am building a convolutional autoencoder using a custom training loop. When I attempt to reconstruct the images, the network’s output degenerates to guessing the same incorrect value for all inputs. However, training the autoencoder in a single stack with the trainnet function works fine, indicating that the gradient updates are unable to bridge the bottleneck layer in the custom training loop. Unfortunately, I need to use the custom training loop for a different task and am prohibited from using TensorFlow or PyTorch.
What is the syntax to ensure that the encoder is able to update based on the decoder’s reconstruction performance?
%% Functional 'trainnet' loop
clear
close all
clc
% Get handwritten digit data
xTrain = digitTrain4DArrayData;
xTest = digitTest4DArrayData;
% Check that all pixel values are min-max scaled
assert(max(xTrain(:)) == 1); assert(min(xTrain(:)) == 0);
assert(max(xTest(:)) == 1); assert(min(xTest(:)) == 0);
imageSize = [28 28 1];
%% Layer definitions
% Latent projection
projectionSize = [7 7 64];
numInputChannels = imageSize(3);
% Decoder
aeLayers = [
imageInputLayer(imageSize)
convolution2dLayer(3,32,Padding="same",Stride=2)
reluLayer
convolution2dLayer(3,64,Padding="same",Stride=2)
reluLayer
transposedConv2dLayer(3,64,Cropping="same",Stride=2)
reluLayer
transposedConv2dLayer(3,32,Cropping="same",Stride=2)
reluLayer
transposedConv2dLayer(3,numInputChannels,Cropping="same")
sigmoidLayer(Name='Output')
];
autoencoder = dlnetwork(aeLayers);
%% Training Parameters
numEpochs = 150;
miniBatchSize = 25;
learnRate = 1e-3;
options = trainingOptions("adam", ...
    InitialLearnRate=learnRate, ...
    MaxEpochs=30, ...
    Plots="training-progress", ...
    TargetDataFormats="SSCB", ...
    InputDataFormats="SSCB", ...
    MiniBatchSize=miniBatchSize, ...
    OutputNetwork="last-iteration", ...
    Shuffle="every-epoch");
autoencoder = trainnet(dlarray(xTrain, 'SSCB'), dlarray(xTrain, 'SSCB'), ...
    autoencoder, 'mse', options);
%% Testing
YTest = predict(autoencoder, dlarray(xTest, 'SSCB'));
indices = randi(size(xTest, 4), [1, size(xTest, 4)]); % Shuffle YTest & xTest
xTest = xTest(:,:,:,indices); YTest = YTest(:,:,:,indices);
% Display test images
numImages = 64;
figure
subplot(1,2,1)
preds = extractdata(YTest(:,:,:,1:numImages));
I = imtile(preds);
imshow(I)
title("Reconstructed Images")
subplot(1,2,2)
orgs = xTest(:,:,:,1:numImages);
I = imtile(orgs);
imshow(I)
title("Original Images")
%% Nonfunctional Custom Training Loop
clear
close all
clc
% Get handwritten digit data
xTrain = digitTrain4DArrayData;
xTest = digitTest4DArrayData;
% Check that all pixel values are min-max scaled
assert(max(xTrain(:)) == 1); assert(min(xTrain(:)) == 0);
assert(max(xTest(:)) == 1); assert(min(xTest(:)) == 0);
imageSize = [28 28 1];
%% Layer definitions
% Encoder
layersE = [
imageInputLayer(imageSize)
convolution2dLayer(3,32,Padding="same",Stride=2)
reluLayer
convolution2dLayer(3,64,Padding="same",Stride=2)
reluLayer];
% Latent projection
projectionSize = [7 7 64];
numInputChannels = imageSize(3);
% Decoder
layersD = [
imageInputLayer(projectionSize)
transposedConv2dLayer(3,64,Cropping="same",Stride=2)
reluLayer
transposedConv2dLayer(3,32,Cropping="same",Stride=2)
reluLayer
transposedConv2dLayer(3,numInputChannels,Cropping="same")
sigmoidLayer(Name='Output')
];
netE = dlnetwork(layersE);
netD = dlnetwork(layersD);
%% Training Parameters
numEpochs = 150;
miniBatchSize = 25;
learnRate = 1e-3;
% Create training minibatchqueue
dsTrain = arrayDatastore(xTrain,IterationDimension=4);
numOutputs = 1;
mbq = minibatchqueue(dsTrain,numOutputs, ...
    MiniBatchSize=miniBatchSize, ...
    MiniBatchFormat="SSCB", ...
    MiniBatchFcn=@preprocessMiniBatch, ...
    PartialMiniBatch="return");
%Initialize the parameters for the Adam solver.
trailingAvgE = [];
trailingAvgSqE = [];
trailingAvgD = [];
trailingAvgSqD = [];
%Calculate the total number of iterations for the training progress monitor
numIterationsPerEpoch = ceil(size(xTrain, 4) / miniBatchSize);
numIterations = numEpochs * numIterationsPerEpoch;
epoch = 0;
iteration = 0;
%Initialize the training progress monitor.
monitor = trainingProgressMonitor( ...
    Metrics="TrainingLoss", ...
    Info=["Epoch", "LearningRate"], ...
    XLabel="Iteration");
%% Training
while epoch < numEpochs && ~monitor.Stop
epoch = epoch + 1;
% Shuffle data.
shuffle(mbq);
% Loop over mini-batches.
while hasdata(mbq) && ~monitor.Stop
% Assess validation criterion
iteration = iteration + 1;
% Read mini-batch of data.
X = next(mbq);
% Evaluate loss and gradients.
[loss,gradientsE,gradientsD] = dlfeval(@modelLoss,netE,netD,X);
% Update learnable parameters.
[netE,trailingAvgE,trailingAvgSqE] = adamupdate(netE, ...
    gradientsE,trailingAvgE,trailingAvgSqE,iteration,learnRate);
[netD,trailingAvgD,trailingAvgSqD] = adamupdate(netD, ...
    gradientsD,trailingAvgD,trailingAvgSqD,iteration,learnRate);
updateInfo(monitor, ...
    LearningRate=learnRate, ...
    Epoch=string(epoch) + " of " + string(numEpochs));
recordMetrics(monitor,iteration, ...
    TrainingLoss=loss);
monitor.Progress = 100*iteration/numIterations;
end
end
%% Testing
dsTest = arrayDatastore(xTest,IterationDimension=4);
numOutputs = 1;
ntest = size(xTest, 4);
indices = randi(ntest,[1,ntest]);
xTest = xTest(:,:,:,indices);% Shuffle test data
mbqTest = minibatchqueue(dsTest,numOutputs, ...
    MiniBatchSize=miniBatchSize, ...
    MiniBatchFcn=@preprocessMiniBatch, ...
    MiniBatchFormat="SSCB");
YTest = modelPredictions(netE,netD,mbqTest);
% Display test images
numImages = 64;
figure
subplot(1,2,1)
preds = YTest(:,:,:,1:numImages);
I = imtile(preds);
imshow(I)
title("Reconstructed Images")
subplot(1,2,2)
orgs = xTest(:,:,:,1:numImages);
I = imtile(orgs);
imshow(I)
title("Original Images")
%% Functions
function [loss,gradientsE,gradientsD] = modelLoss(netE,netD,X)
% Forward through encoder.
Z = forward(netE,X);
% Forward through decoder.
Xrecon = forward(netD,Z);
% Calculate loss and gradients.
loss = regularizedLoss(Xrecon,X);
[gradientsE,gradientsD] = dlgradient(loss,netE.Learnables,netD.Learnables);
end
function loss = regularizedLoss(Xrecon,X)
% Image Reconstruction loss.
reconstructionLoss = l2loss(Xrecon, X, 'NormalizationFactor', 'all-elements');
% Combined loss.
loss = reconstructionLoss;
end
function Xrecon = modelPredictions(netE,netD,mbq)
Xrecon = [];
shuffle(mbq)
% Loop over mini-batches.
while hasdata(mbq)
X = next(mbq);
% Pass through encoder
Z = predict(netE,X);
% Pass through decoder to get reconstructed images
XGenerated = predict(netD,Z);
% Extract and concatenate predictions.
Xrecon = cat(4,Xrecon,extractdata(XGenerated));
end
end
function X = preprocessMiniBatch(Xcell)
% Concatenate.
X = cat(4,Xcell{:});
end
deep learning, autoencoder, regularization, initialization, custom loops MATLAB Answers — New Questions
Integrating Two Unrelated Values
Hi!
I am trying to integrate two different series that correlate to the same image. I was able to obtain X, Y, and Z values for an image in which the x-values correlate with length, the y-values correlate with width, and the z-values correlate with intensity. However, because these measurements are taken across a rectangular ROI, there are 528 x-values (length) and 504 y-values (width), as the object resembles 1/2 of an ellipse.
I would like to integrate these values so that I can plot (length x width x intensity) for my given shape. I have tried to integrate these values by plotting them on the same scatterplot, but I am not having much success. I also haven't had any luck finding code that will allow me to integrate these values.
Does anyone know an effective way to integrate two "unrelated" values?
image analysis, integration, 3d plots, function, matrix manipulation MATLAB Answers — New Questions