Category: News
Not able to log in from Docker image with 1-month trial subscription?
Hi Support team.
I am trying to log in to MATLAB from a Docker image, but the login is failing.
Note: I am using 1-month free trial login credentials.
The Dockerfile and the error output are given below.
Dockerfile:
# Use MATLAB base image from MathWorks
FROM mathworks/matlab:r2023b
# Set the working directory inside the container
WORKDIR /usr/src/matlab
# Copy your MATLAB conversion script to the container
COPY convert_mf4_to_csv.m .
# Environment variables for MathWorks credentials
ENV MW_USERNAME = "{User_email}"
ENV MW_PASSWORD = "{password}"
# Expose the port for communication (if needed)
EXPOSE 8080
# The default command will pass arguments for input and output paths
CMD ["matlab", "-batch", "convert_mf4_to_csv('/x_input_folder/input_file.mf4', '/x_output_folder/output_file.csv')"]
Error:
MATLAB is selecting SOFTWARE OPENGL rendering.
Please enter your MathWorks Account email address and press Enter:
Sign-in failed. Would you like to retry? y/n [n]
Please check the attached image.
Any support would be appreciated.
Thanks in advance!
LSTM-CNN “The size of the convolution dimension of the padded input data must be larger than or equal to the filter size”
Hello everyone,
I am trying to implement an LSTM-CNN for speech recognition. I have matrices for training and testing which have already been converted to cell arrays. When I executed the code, I got the error below:
net = trainNetwork(AllCellTrain, YCA, layers, options);
Caused by:
Layer 3: The size of the convolution dimension of the padded input data must be larger than or equal to the filter
size. For networks with sequence input, this check depends on the MinLength property of the sequence input layer. To
ensure that this check is accurate, set MinLength to the shortest sequence length of your training data.
The code that I have used:
% Define LSTM-CNN model architecture
numHiddenUnits = 100; % Number of hidden units in the LSTM layer
numFilters = 100; % Number of filters in the CNN layer
%filterSize = [3, 3]; % Size of the filters in the CNN layer
filterSize=3;
num_features = 39;
layers = [
sequenceInputLayer(num_features)
lstmLayer(numHiddenUnits,'OutputMode','sequence')
convolution1dLayer(filterSize, numFilters)
maxPooling2dLayer(2, 'Stride', 2)
fullyConnectedLayer(num_classes)
softmaxLayer
classificationLayer
];
% Specify the training options
max_epochs = 26;
mini_batch_size = 128;
initial_learning_rate = 0.001;
options = trainingOptions('adam', ...
'MaxEpochs', max_epochs, ...
'MiniBatchSize', mini_batch_size, ...
'InitialLearnRate', initial_learning_rate, ...
'GradientThreshold', 1, ...
'Shuffle', 'every-epoch', ...
'Verbose', 1, ...
'ExecutionEnvironment','auto', ...
'Plots', 'training-progress');
% Train the LSTM-CNN model
YCA = categorical(CA);
net = trainNetwork(AllCellTrain, YCA, layers, options);
Thanks in advance!
Share button on Meeting Side Panel is not sharing the CURRENT PAGE to the stage
I am using the teams-js SDK version 2.9.0. I have a web app which can be added to meetings in Teams.
When a user adds my app to a meeting, the app shows a menu of items (the landing page for the meeting) in the meeting side panel. Once the user selects an item from the list, they are redirected to an item details page (a separate page from the landing page). Now, if the user clicks the Share button, the stage shows the landing page (the items list page) instead of the item details page, which is the current page of the side panel.
Sample reproduction scenario:
Create a web app which has 2 pages. One page should show a list of items (PageA). The other page (PageB) should show item details when an item is clicked on PageA.
Create a Teams app for your website with the meetingStage and meetingSidePanel contexts enabled in the config.
1. Create a Teams meeting.
2. Load the app you created into the meeting.
3. It will show a list of items (PageA). Select an item.
4. You will be redirected to the item details page (PageB) on the side panel itself.
5. Click on the Share button.
6. The stage will load the landing page (PageA, the list of items) again instead of the current page (PageB) on the side panel.
I am expecting the current state of the side panel to be shared to the stage when Share is clicked.
Can anyone please help with this?
Exclamation mark instead of unread messages – bad user experience
Hi,
Since the new Teams, I sometimes get the exclamation mark and an “account problem” with one of my customers. Normally my sign-in there has expired and I need to sign in again.
But I do not need those tenants at the moment. The exclamation mark replaces the “unread messages” number on my taskbar, which is needed. Is anyone else experiencing this? Signing in to the customer tenant(s) is no real solution, as this disrupts the workflow.
BR
Stephan
Episode 157 of Microsoft Cloud and Hosting Partner Online Meeting | Tuesday September 17, 2024
I’m looking forward to seeing you online on Tuesday September 17 from 12:30pm Sydney time for the 157th episode of the Microsoft Cloud and Hosting Partner Online Meeting. If you haven’t registered yet it’s not too late. Simply click here for details, the agenda and to register.
Attached here are the slides we’ll cover so you can “follow the bouncing ball” as we cover the usual topics of Commerce and Operations Updates, the feature topic “A Tour of Microsoft Entra Suite”, News, Views, Venues; Did You See; and Technical Updates.
Have a good evening.
Regards, Phil
How to disable two/three-finger zoom (pinch) in app on touchscreen?
I have an app developed with AppDesigner that you can see below on the left. It is being run on a touchscreen. Using two or three fingers (also called a "pinch") on the touchscreen, it is possible to zoom in on the app window contents. The picture on the right shows what is happening (and apologies for the glare). This was reproduced across two OSs and two different touchscreens, which suggests it is something inherent to MATLAB. This only appears to be possible with a touchscreen, since I was unable to zoom like this using combinations of mouse and/or keyboard.
Would anyone know how to disable this zoom?
Error in Modeling and Simulation of an Autonomous Underwater Vehicle MATLAB example
I was trying to familiarize myself with this example made by MATLAB, but I can’t test it because of this error. I haven’t changed anything. Why is this mismatch happening? It only occurs when switching to the high-fidelity simulation in asbAUV.slx.
Assistance Needed: Simulating Power-Exponent-Phase Vortex Beam (PEPVB) Propagation in Oceanic Turbulence
I am attempting to simulate the propagation characteristics of the Power-Exponent-Phase Vortex Beam (PEPVB) in oceanic turbulence based on the theoretical model provided in the paper Propagation properties of rotationally-symmetric power-exponent-phase vortex beam through oceanic turbulence. The model uses the extended Huygens-Fresnel diffraction integral and oceanic turbulence theory, and I’m trying to implement this in MATLAB.
I have followed the mathematical formulas provided in section 2 of the paper, particularly equations (1) to (8), which define the electric field and cross-spectral density for the PEPVB passing through turbulence. However, despite various attempts, I haven’t been able to get the simulation to work as expected.
The first image shows the results I generated using my own code, while the second image shows the results provided by the authors of the PEPVB study. The differences between these results are confusing me, especially regarding the implementation of the electric field formulas and how to handle the oceanic turbulence parameters, such as the rate of dissipation of turbulence kinetic energy (ε), the temperature-salinity contribution ratio (ω), and the dissipation rate of the mean-squared temperature (χT).
Could anyone with experience in PEPVB or similar simulations in MATLAB help me check my code or provide working examples? Any help would be greatly appreciated!
Thank you for your time and assistance.
Beta channel selection disabled and unselectable after reboot
Hello everyone,
Yesterday I downloaded the ISO of the latest 24H2 Release Preview and installed it, then chose the Beta channel as the Insider subscription type. But when I restarted, the selection was not kept: I found myself in Release Preview with no possibility of choosing the Beta channel, which is in fact grayed out and not selectable, while Dev and Canary can be chosen.
Currently, I am in Release Preview with build 26100.1742 ge_release.
I installed all the updates and restarted again, but nothing has changed.
Can you tell me why and where I went wrong?
Thanks everyone.
XDR deception – decoy working – lures not deploying
Hi everyone,
I am trying to create some custom deceptions with the help of this blog post:
Stack Your Deception: Stacking MDE Deception Rules with Thinkst Canarytokens · Attack the SOC
The decoys are working (if I ping a host I specified, alerts are raised).
But I cannot find the lures. I created some special lures for high-privilege personas and placed them into {HOME} and a file path beneath that.
But I cannot find the files (show hidden files is on). Are the folders also created by deception?
It’s 5 days now, so time should not be the problem either.
How do I troubleshoot this?
BR
Stephan
Filter SAP Data at source with Synapse/ADF CDC
Hi everyone,
I’m currently working on a project in Azure Synapse where I’m using the SAP CDC Connector to connect to an S/4HANA system. My goal is to filter data on the source side before storing it in my ADLS Gen2, as there are certain data restrictions that I need to adhere to.
I need to fetch multiple objects from SAP, and I typically use a parameterized approach for this. I have a JSON file that contains parameters and queries for each object I want to retrieve from the source. For instance, I define SQL queries in the JSON file to perform the filtering. This method works well with SQL connectors.
However, with the SAP CDC Connector, I haven’t been able to find any functionality that allows me to apply such filtering directly at the source.
Here’s what I’m doing so far:
I’m currently using a dataflow in a ForEach loop. In the dataflow, however, I cannot pass SQL queries and I’m stuck with the expression builder. I cannot figure out how to dynamically pass query-like filtering, so I’m just getting the unfiltered objects, which is not an option. I have so many objects that I can’t maintain a non-parameterized version.
I tried using a Copy Data activity as well; however, when selecting it, I do not get the option to choose the SAP CDC integration dataset.
Has anyone successfully managed to filter tables at the source when using the SAP CDC linked service? Any insights or suggestions on how to achieve this would be greatly appreciated.
Thanks in advance for your help!
Grades app – filter by TAG
On the Grades app, I can filter by date range and by grading category. I need the ability to filter by TAG.
(I co-teach a class with a colleague: we use categories for the TYPE of assignment, and tags to mark WHICH of us is associated with the assignment, and I’d like to filter on this.)
Hopefully, it should be an uncontentious tweak to make!
Entra Connect Sync duplicated UPN
Hi
I had Entra Connect running for a long time without issues. Out of the blue, Connect Sync started to report a Duplicate Attribute error on the User Principal Name of 3 users.
The 3 users that Connect Sync believes have a conflicting value in Entra do exist in Entra, but with an smtp address which matches the UPN and which is not the users’ UPN.
If I run the following commands against my on-prem AD, the UPN does not exist in any form of domain name:
Get-ADUser -Filter {UserPrincipalName -eq "email address removed for privacy reasons"}
Get-ADUser -Filter {UserPrincipalName -eq "e-mail@domain.local"}
Get-ADUser -Filter {UserPrincipalName -eq "email address removed for privacy reasons"}
All my users’ UPNs are different from the configured on-prem ProxyAddresses, so the above error message makes no sense. Furthermore, the 3 users which sync sees as a conflict do not even have ProxyAddresses configured.
Any ideas how to debug this further?
/Robert
Microsoft Attack Simulator Training Foreign Language
I need some help with changing the Microsoft Attack Simulator video training from the default of English to a foreign language. The chosen video training does support the language, but I have been unsuccessful in finding the setting that activates the foreign language.
License required
Good evening. Today I went to use the stock history function in Excel on my PC, but it always comes up blocked, and when I check, it says I need a license. Do you know what this means? I hope you can help me.
2 stocks missing from the Stocks data type in Excel
Two stocks are missing from the Stocks data type.
1. Premier Energies Ltd, listed on the National Stock Exchange (NSE) of India and the Bombay Stock Exchange (BSE). XNSE:PREMIERENE.
2. Bajaj Housing Finance Ltd, listed on the same exchanges as above.
Both are newly listed stocks. The first one listed on 3rd September 2024. The second listed today, i.e. 16th September 2024.
How do I get these added? I have given feedback on the first one multiple times.
Where do I log a request?
Thanks.
Switch to Azure Business Continuity Center for your at scale BCDR management needs
In response to the evolving customer requirements and environments since COVID-19, including the shift towards hybrid work models and the increase in ransomware attacks, we have observed a growing trend among customers to invest in multiple vendors for data protection. To address these needs, we have developed the Azure Business Continuity (ABC) Center, a streamlined, centralized management center that simplifies backup and disaster recovery across various environments (Azure, hybrid) and solutions (Azure Backup and Azure Site Recovery). Below are a few resources to learn more about Azure Business Continuity Center:
Business Continuity with ABCC: Part 1: Understand Protection Estate Summary – Microsoft Community Hub
Business Continuity with ABCC: Part 2: understand your protectable resources inventory – Microsoft Community Hub
Business Continuity with ABCC: Part 4: optimize security configuration – Microsoft Community Hub
Business Continuity with ABCC: Part 5: Monitoring protection – Microsoft Community Hub
ABCC, currently in public preview since November 2023, is designed as an enhanced version of the Backup Center and will eventually replace it. Getting started is simple, with no prerequisites or costs involved. Even if you’ve been using Backup Center, no additional action is needed to begin viewing your protection estate in Azure Business Continuity Center. To start with, simply navigate to the Azure portal and search for Azure Business Continuity Center.
Azure Business Continuity Center (ABCC) provides enhanced experiences for business continuity, and we want our customers to adapt to it before it replaces the Backup Center. To support this transition, we have removed the Backup Center from the global search in the Azure portal, but there is still an option available from ABCC to go to the Backup Center.
Backup Center will no longer appear in the Azure Portal search results across all regions. We encourage you to explore the Azure Business Continuity Center (ABCC) for your BCDR journey and provide your valuable feedback to help us enhance it to better meet your needs.
If you still want to launch Backup center, you can first go to Azure Business Continuity Center, from the Azure portal search.
Then, from the ABCC help menu, select “Go to Backup Center”.
If you are transitioning to the Backup Center, please share your reasons for doing so, including any missing capabilities, performance issues, or other concerns you may have encountered. Your insights are invaluable in helping us enhance the ABCC experience.
Enhancing Retrieval-Augmented Generation with a Multimodal Knowledge Extraction and Retrieval System
The rapid evolution of AI has led to powerful tools for knowledge retrieval and question-answering systems, particularly with the rise of Retrieval-Augmented Generation (RAG) systems. This blog post introduces my capstone project, created as part of the IXN program at UCL in collaboration with Microsoft, aimed at enhancing RAG systems by integrating multimodal knowledge extraction and retrieval capabilities. The system enables AI agents to process both textual and visual data, offering more accurate and contextually relevant responses. In this post, I’ll walk you through the project’s goals, development journey, technical implementation, and outcomes.
Project Overview
The main goal of this project was to improve the performance of RAG systems by refining how multimodal data is extracted, stored, and retrieved. Current RAG systems primarily rely on text-based data, which limits their ability to generate accurate responses when queries require a combination of text and images. To address this, I developed a system capable of extracting, processing, and retrieving multimodal data from Wikimedia, allowing AI agents to generate more accurate, grounded and contextually relevant answers.
Key features include:
Multimodal Knowledge Extraction: Data from Wikimedia (text, images, tables) is preprocessed, run through the transformation pipeline, and stored in vector and graph databases for efficient retrieval.
Dynamic Knowledge Retrieval: A custom query engine, combined with an agentic approach using the ReAct agent, ensures flexible and accurate retrieval of information by dynamically selecting the best tools and strategies for each query.
The project began by addressing the limitations of existing RAG systems, particularly their difficulties with handling visual data and delivering accurate responses. After reviewing various technologies, a system architecture was developed to support both text and image data. Throughout the process, components were refined to ensure compatibility between LlamaIndex, Qdrant, and Neo4j, while optimising performance for managing large datasets. The primary challenges lay in handling the large volumes of data from Wikimedia, especially the processing of images, and refactoring the system for Dockerisation. These challenges were met through iterative improvements to the system architecture, ensuring efficient multimodal data handling and reliable deployment across environments.
Implementation Overview
This project integrates both textual and visual data to enhance RAG systems’ retrieval and response generation. The system’s architecture is split into two main processes:
Knowledge Extraction: Data is fetched from Wikimedia and transformed into embeddings for text and images. These embeddings are stored in Qdrant for efficient retrieval, while Neo4j captures the relationships between the nodes, ensuring the preservation of data structure.
Knowledge Retrieval: A dynamic query engine processes user queries, retrieving data from both Qdrant (using vector search) and Neo4j (via graph traversal). Advanced techniques like query expansion, reranking, and cross-referencing ensure the most relevant information is returned.
System Architecture Diagram
Tech Stack
The following technologies were used to build and deploy the system:
Python: Core programming language for data pipelines
LlamaIndex: Framework for indexing, transforming, and retrieving multimodal data
Qdrant: Vector database for similarity searches based on embeddings
Neo4j: Graph database used to store and manage relationships between data entities
Azure OpenAI (GPT-4O): Used for handling multimodal inputs, deploying models via Azure App Services
Text Embedding Ada-002: Model for generating text embeddings
Azure Computer Vision: Used for generating image embeddings
Gradio: Provides an interactive interface for querying the system
Docker and Docker Compose: Used for containerization and orchestration, ensuring consistent deployment
Implementation Details
Multimodal Knowledge Extraction
The system starts by fetching both textual and visual data from Wikimedia, using the Wikimedia API and web scraping techniques. The key steps in the knowledge extraction implementation are as follows (a simplified code sketch of the embedding and storage steps follows the list and the graph figure):
Data Preprocessing: Text is cleaned, images are classified into categories such as plots or images for appropriate handling during later transformations, and tables are structured for easier processing.
Node Creation and Transformation: Initial LlamaIndex nodes are created from this data, which then undergo several transformations through the transformation pipeline using GPT-4O model deployed via Azure OpenAI:
Text and Table Transformations: Text data is cleaned, split into smaller chunks using semantic chunking, and new derived nodes are created from various transformations, like key entity extraction or table analysis. Each node has a unique LlamaIndex ID and carries metadata such as title, context, and relationships reflecting the hierarchical structure of the Wikimedia page and parent-child relationships with new transformed nodes.
Image Transformations: Images are processed to generate descriptions, perform plot analysis, and identify key objects based on the image type, resulting in the creation of new text nodes.
Embeddings Generation: The last stage of the pipeline is to generate embeddings for images and transformed text nodes:
Text Embeddings: Generated using the text-embedding-ada-002 model deployed with Azure OpenAI on Azure App Services.
Image Embeddings: Generated using the Azure Computer Vision service.
Storage: Both text and image embeddings are stored in Qdrant with reference node IDs in the payload for fast retrieval. The full nodes and their relationships are stored in Neo4j:
Neo4j graphs (left) and close-up section of the graph (right)
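To make the end of this pipeline more concrete, here is a minimal, illustrative Python sketch of the last two steps (embedding generation and storage). It is not the project's actual code: the endpoint values, deployment name, collection name, and graph schema are assumptions, and the earlier transformation steps are reduced to a plain list of already-transformed text chunks.

import uuid
from openai import AzureOpenAI                      # pip install openai
from qdrant_client import QdrantClient               # pip install qdrant-client
from qdrant_client.models import PointStruct, VectorParams, Distance
from neo4j import GraphDatabase                       # pip install neo4j

# Assumed endpoints and credentials; replace with your own deployment details.
openai_client = AzureOpenAI(azure_endpoint="https://<your-resource>.openai.azure.com",
                            api_key="<key>", api_version="2024-02-01")
qdrant = QdrantClient(url="http://localhost:6333")
neo4j_driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "<password>"))

COLLECTION = "wiki_text_nodes"  # hypothetical collection name
qdrant.recreate_collection(
    collection_name=COLLECTION,
    vectors_config=VectorParams(size=1536, distance=Distance.COSINE),  # ada-002 vector size
)

def embed_text(text: str) -> list:
    # "model" here is the Azure OpenAI deployment name for text-embedding-ada-002.
    response = openai_client.embeddings.create(model="text-embedding-ada-002", input=text)
    return response.data[0].embedding

def store_chunk(page_title: str, chunk_text: str) -> str:
    # Embed one transformed text chunk, upsert it into Qdrant, and mirror it in Neo4j.
    node_id = str(uuid.uuid4())  # stands in for the LlamaIndex node ID
    # Vector store: the payload keeps the node ID so retrieval can jump back to the graph.
    qdrant.upsert(
        collection_name=COLLECTION,
        points=[PointStruct(id=node_id, vector=embed_text(chunk_text),
                            payload={"node_id": node_id, "title": page_title})],
    )
    # Graph store: keep the full node and its parent-child link to the source page.
    with neo4j_driver.session() as session:
        session.run(
            "MERGE (p:Page {title: $title}) "
            "MERGE (c:Chunk {node_id: $node_id}) SET c.text = $text "
            "MERGE (p)-[:HAS_CHUNK]->(c)",
            title=page_title, node_id=node_id, text=chunk_text,
        )
    return node_id

# Example usage with already-transformed chunks:
for chunk in ["First transformed chunk ...", "Second transformed chunk ..."]:
    store_chunk("Example Wikipedia page", chunk)

In the real pipeline, image embeddings follow the same pattern, with the vector produced by the Azure Computer Vision service instead of the text embedding model.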
Knowledge Retrieval
The retrieval process involves several key steps (a simplified code sketch follows the list):
Query Expansion: The system generates multiple variations of the original query, expanding the search space to capture relevant data.
Vector Search: The expanded queries are passed to Qdrant for a similarity-based search using cosine similarity.
Reranking and Cross-Retrieval: Results are then reranked by relevance. Retrieved nodes from Qdrant contain LlamaIndex node IDs in the payload. These are used to fetch the nodes from Neo4j and then to get the nodes with original data from Wikimedia by traversing the graph, ensuring the final response is based only on original Wikipedia content.
ReAct Agent Integration: The ReAct agent dynamically manages the retrieval process by selecting tools based on the query context. It integrates with the custom-built query engine to balance AI-generated insights with the original data from Neo4j and Qdrant.
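The sketch below illustrates these retrieval steps in simplified Python. It reuses the embed_text helper, collection name, and graph schema assumed in the extraction sketch above; the query expansion and reranking shown here are deliberately naive placeholders for the real logic, and the ReAct agent is only described in the final comment.

from qdrant_client import QdrantClient
from neo4j import GraphDatabase

qdrant = QdrantClient(url="http://localhost:6333")
neo4j_driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "<password>"))
COLLECTION = "wiki_text_nodes"  # same hypothetical collection as in the extraction sketch

def expand_query(query: str) -> list:
    # Placeholder for LLM-based query expansion: the original query plus simple variants.
    return [query, f"What is {query}?", f"Explain {query} in detail."]

def retrieve(query: str, top_k: int = 5) -> list:
    hits = []
    for variant in expand_query(query):
        vector = embed_text(variant)  # same embedding helper as in the extraction sketch
        hits.extend(qdrant.search(collection_name=COLLECTION, query_vector=vector,
                                  limit=top_k, with_payload=True))
    # Rerank across all expanded queries by similarity score and de-duplicate node IDs.
    hits.sort(key=lambda h: h.score, reverse=True)
    seen, node_ids = set(), []
    for hit in hits:
        node_id = hit.payload["node_id"]
        if node_id not in seen:
            seen.add(node_id)
            node_ids.append(node_id)
    # Cross-retrieval: traverse the graph to recover the original Wikimedia content.
    with neo4j_driver.session() as session:
        records = session.run(
            "MATCH (p:Page)-[:HAS_CHUNK]->(c:Chunk) WHERE c.node_id IN $ids "
            "RETURN p.title AS title, c.text AS text",
            ids=node_ids[:top_k],
        )
        return [dict(record) for record in records]

# A ReAct agent would wrap retrieve() as a tool and decide, per query, whether and how to call it
# before composing the final grounded answer with the LLM.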
Dockerization with Docker Compose
To ensure consistent deployment across different environments, the entire application is containerised using Docker. Docker Compose orchestrates multiple containers, including the knowledge extractor, retriever, Neo4j, and Qdrant services. This setup simplifies the deployment process and enhances scalability.
Docker Containers
Results and Outcomes
The system effectively enhances the grounding and accuracy of responses generated by RAG systems. By incorporating multimodal data, it delivers contextually relevant answers, particularly in scenarios where visual information was critical. The integration of Qdrant and Neo4j proved to be highly efficient, enabling fast retrieval and accurate results.
Additionally, a user-friendly interface built with Gradio allows users to interact with the system and compare the AI-generated responses with standard LLM output, offering an easy way to evaluate the improvements.
Here is a snapshot of the Gradio UI:
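(The screenshot itself is not reproduced here. As a rough illustration only, a side-by-side comparison page of this kind can be wired up with a few lines of Gradio; the two answer functions below are hypothetical placeholders rather than the project's real pipeline.)

import gradio as gr  # pip install gradio

def rag_answer(question: str) -> str:
    # Placeholder: would call the multimodal retrieval pipeline and then the LLM.
    return "Grounded answer built from the retrieved Wikimedia content."

def plain_llm_answer(question: str) -> str:
    # Placeholder: would call the LLM directly, without retrieval.
    return "Standard LLM answer with no retrieval."

def compare(question: str):
    return rag_answer(question), plain_llm_answer(question)

demo = gr.Interface(
    fn=compare,
    inputs=gr.Textbox(label="Question"),
    outputs=[gr.Textbox(label="RAG answer"), gr.Textbox(label="Plain LLM answer")],
    title="Multimodal RAG demo",
)

if __name__ == "__main__":
    demo.launch()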
Future Development
Several directions for future development have been identified to further enhance the system’s capabilities:
Agentic Framework Expansion: A future version of the system could incorporate an autonomous tool capable of determining whether the existing knowledge base is sufficient for a query. If the knowledge base is found lacking, the system could automatically initiate a knowledge extraction process to address the gap. This enhancement would bring greater adaptability and self-sufficiency to the system.
Knowledge Graph with Entities: Expanding the knowledge graph to include key entities such as individuals, locations, and events or others appropriate for the domain. This would add considerable depth and precision to the retrieval process. The integration of such entities would provide a more comprehensive and interconnected knowledge base, improving both the relevance and accuracy of results.
Enhanced Multimodality: Future iterations could expand the system’s capabilities in handling image data. This may include adding support for image comparison, object detection, or breaking images down into distinct components. Such features would enable more sophisticated queries and increase the system’s versatility in handling diverse data formats.
Incorporating these advancements will position the system to play an important role in the evolving field of multimodal AI, further bridging the gap between text and visual data integration in knowledge retrieval.
Summary
This project demonstrates the potential of enhancing RAG systems by integrating multimodal data, allowing AI to process both text and images more effectively. Through the use of technologies like LlamaIndex, Qdrant, and Neo4j, the system delivers more grounded, contextually relevant answers at high speed. With a focus on accurate knowledge retrieval and dynamic query handling, the project showcases a significant advancement in AI-driven question-answering systems. For more insights and to explore the project, please visit the GitHub repository.
If you’d like to connect, feel free to reach out to me on LinkedIn.
Discover the Hub Page Every JavaScript Developer Needs to Know at Microsoft!
Did you know that Microsoft offers an exclusive Hub Page just for JavaScript developers? JavaScript at Microsoft brings everything you need into one place to start building apps, learn more about JavaScript, and stay updated on the latest from Microsoft!
Let’s dive in and explore this incredible platform, and see how you can make the most of its resources!
What is JavaScript at Microsoft?
On JavaScript at Microsoft, you’ll find practical tutorials, detailed documentation, code samples using Azure, and so much more! Whether you’re a beginner or a seasoned developer, this platform is designed to support and speed up your learning and development, helping you get the most out of JavaScript-related technologies.
What will you find on JavaScript at Microsoft?
There are tons of exciting resources on this portal! What’s great is that everything is super organized and centralized, so you can quickly find all the info you need about the JavaScript world at Microsoft.
Let’s take a closer look at what you can find on JavaScript at Microsoft:
Serverless ChatGPT with RAG using LangChain.js
Right at the top of the page, you’ll find the latest videos, tutorials, articles, and even code samples like the Serverless AI Chat with RAG using LangChain.js. This is an app where you’ll learn how to create your own serverless ChatGPT using the Retrieval-Augmented Generation (RAG) technique with LangChain.js. You can run it locally with Ollama and Mistral, or deploy it on Azure in just a few minutes, using your own data.
We highly recommend exploring this awesome example! There’s so much to learn, and who knows, it might inspire you to create your own version of a chatbot with JavaScript! Fork the project right now and drop a star!
Videos and Series on JavaScript + Azure
In the video section, you’ll find a range of content on how to use JavaScript with Azure. These videos vary from short tutorials to longer talks, from 30 to 45 minutes, showing you how to build amazing applications with JavaScript and Azure.
For example, this year, we had the JavaScript Developer Day with lots of amazing talks from Microsoft experts and the technical community, covering how you can use JavaScript with different Azure services! Some standout sessions include:
Building a versatile RAG Pattern chat bot with Azure OpenAI, LangChain | JavaScript Dev Day
LangChain.js + Azure: A Generative AI App Journey | JavaScript Dev Day
GitHub Copilot Can Do That? | JavaScript Dev Day
JavaScript + Azure Code Samples and Open Source Projects
In this section, you’ll find a variety of open-source projects that you can contribute to! Many of these projects are maintained by the JavaScript Advocacy and Developer Division teams at Microsoft. They’re aimed at enterprise use and follow the best development practices in JavaScript! Dive into these projects, experiment, and help us improve them with your contributions!
Tutorials and More Videos!
In the tutorials section, you’ll find a wide variety of video tutorials covering different needs, from using the Visual Studio Code debugger to deploying apps on Azure Static Web Apps.
Here are some examples of tutorials you’ll find:
End-to-end browser debugging of your Azure Static Web Apps with Visual Studio Code
Azure libraries packages for JavaScript
Introduction to Playwright: What is Playwright?
Deploy React websites to the cloud with Azure Static Web Apps
Workshops and Documentation
Finally, you’ll find various workshops and official documentation on how to use JavaScript with Azure and other Microsoft technologies.
On this hub, you’ll find workshops like:
Microservices in practice with Node.js, Docker and Azure
LAB: Build a serverless web application end-to-end on Microsoft Azure
Create your own ChatGPT with Retrieval-Augmented-Generation
Build JavaScript applications with Node.js
Conclusion
JavaScript at Microsoft is the complete portal for anyone who wants to learn more about JavaScript and how to use it with Microsoft technologies. So, if you’re looking to dive deeper into JavaScript, Azure, TypeScript, Artificial Intelligence, Testing, and more, be sure to check out the portal and explore all the resources available!
I hope you enjoyed this article and that it inspires you to explore more about JavaScript at Microsoft! If you have any questions or suggestions, feel free to leave a comment below!
The New Microsoft 365 Photo Update Settings Policy for User Profile Photos
Photo Update Settings Policy is Long-term Unified Replacement for Other Controls
Given the historical foundation of Microsoft 365 in several on-premises applications, it probably wasn’t surprising that we ended up with a confusing mish-mash of routes by which it was possible to update the profile photos for user accounts through SharePoint, Exchange, Teams, Delve, PowerShell, and so on. Looking back, it took a surprising amount of time before Microsoft acknowledged that the situation was untenable.
A new approach that worked across Microsoft 365 was necessary. That process began in October 2023 with the retirement of the Exchange Online cmdlets to update photos for mailboxes. The foundation for the new approach was a set of Graph APIs surfaced as cmdlets in the Microsoft Graph PowerShell SDK, like Set-MgUserPhotoContent.
A New Photo Update Settings Policy to Control User Profile Updates
In June 2024, Microsoft introduced a new Entra ID policy based on the photoUpdateSettings resource to control who can update photos and the allowed sources for updates. Managing the photo update settings policy requires the PeopleSettings.ReadWrite.All scope. The settings for a tenant can be retrieved as follows:
$Uri = "https://graph.microsoft.com/beta/admin/people/photoupdatesettings"
$Settings = Invoke-MgGraphRequest -Uri $Uri -Method Get
$Settings
Name Value
—- —–
allowedRoles {}
@odata.context https://graph.microsoft.com/beta/$metadata#admin/people/photoUpdateSettings/$entity
Source
The settings shown above are the default. The supported values are described in the photoUpdateSettings documentation.
Controlling From Where Photos Can Be Updated
The source for photo updates can be undefined, meaning that photo updates can be sourced from applications running in either the cloud or on-premises (synchronized to Entra ID from Active Directory). Alternatively, you can set the source to be either cloud or on-premises. For example, to update the settings so that photo changes are only possible through cloud applications, create a hash table with a single item to change the source to cloud and use the hash table as the payload to patch the policy:
$Body = @{}
$Body.Add("Source", "Cloud")
$Settings = Invoke-MgGraphRequest -Uri $Uri -Method Patch -Body $Body
Like any update to an Entra ID policy, it can take 24 hours before the policy update is effective across a tenant.
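If you prefer calling Microsoft Graph directly rather than going through the PowerShell SDK, the same read and update can be done over plain HTTP. The Python sketch below is illustrative only and uses the requests library; acquiring an access token carrying the PeopleSettings.ReadWrite.All permission (for example via MSAL) is assumed and not shown, and the REST payload is assumed to use the camelCase property name "source".

import requests  # pip install requests

GRAPH_URL = "https://graph.microsoft.com/beta/admin/people/photoupdatesettings"
access_token = "<token with PeopleSettings.ReadWrite.All>"  # assumed to be acquired elsewhere
headers = {"Authorization": f"Bearer {access_token}", "Content-Type": "application/json"}

# Read the current photo update settings for the tenant.
current = requests.get(GRAPH_URL, headers=headers)
current.raise_for_status()
print(current.json())

# Restrict photo updates to cloud applications only (same effect as the PowerShell patch above).
update = requests.patch(GRAPH_URL, headers=headers, json={"source": "cloud"})
update.raise_for_status()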
Controlling Who Can Update Photos
By default, any user can update the photo for their account and the value for AllowedRoles is blank. If you want to restrict who can update photos, you can select one or more directory roles and include the GUIDs for these roles in the AllowedRoles property (a string collection).
The roles defined in AllowedRoles must hold the permission to set user photos. In Graph terms, these permissions are either microsoft.directory/users/photo/update or microsoft.directory/users/allProperties/allTasks (only held by the Global administrator role). The following roles can be used:
Directory writers (9360feb5-f418-4baa-8175-e2a00bac4301).
Intune administrator (3a2c62db-5318-420d-8d74-23affee5d9d5).
Partner Tier1 Support (4ba39ca4-527c-499a-b93d-d9b492c50246) – not intended for general use.
Partner Tier2 Support (e00e864a-17c5-4a4b-9c06-f5b95a8d5bd8) – not intended for general use.
User administrator (fe930be7-5e62-47db-91af-98c3a49a38b1).
Global administrator (62e90394-69f5-4237-9190-012177145e10).
All are privileged roles, meaning that these are roles that enjoy a heightened level of access to sensitive information.
To update the photo settings policy to confine updates to specific roles, create a hash table to hold the GUIDs of the selected roles. Create a second hash table to hold the payload to update the settings and include the hash table with the roles. Finally, patch the policy.
$Roles = @{}
$Roles.Add("62e90394-69f5-4237-9190-012177145e10", $null)
$Roles.Add("fe930be7-5e62-47db-91af-98c3a49a38b1", $null)
$Body = @{}
$Body.Add("allowedRoles", $Roles)
$Settings = Invoke-MgGraphRequest -Uri $Uri -Method Patch -Body $Body
To reverse the restriction by removing the roles, run this code:
$Body = '{
  "allowedRoles": []
}'
$Settings = Invoke-MgGraphRequest -Uri $Uri -Method Patch -Body $Body
The result of limiting photo updates for user accounts to the user administrator and global administrator roles is that, after the new policy percolates throughout the tenant, any account that doesn’t hold a specified role cannot change its profile photo.
The Teams client is probably the best example. The implementation here is not yet optimal. The block on photo updates imposed by an OWA mailbox policy causes Teams to inform the user that administrative restrictions stop photo updates. If the photo update settings policy restricts updates to specific roles, Teams allows the user to go through the process of selecting and uploading a photo before failing (Figure 1).
Figure 1: A failure to update a profile photo due to policy restrictions
An Early Implementation of the Photo Update Settings Policy
This kind of thing happens in the early stages of implementation. It will take time for Microsoft to update clients to allow and block profile updates based on the photo settings policy. And it will take time for tenants to move from the previous block imposed by OWA mailbox policies. In doing so, you’ll notice that the only restriction supported by the new policy is through roles. The OWA mailbox policy setting allows per-user control and multiple policies can exist within a tenant. We’re therefore heading to a less granular policy.
Maybe a less granular mechanism will be acceptable if it helps with the rationalization of photo updates across Microsoft 365. However, I can’t help thinking that this is a retrograde step. Perhaps Microsoft will address the need for more granular control through Entra ID administrative units, which seems to be the answer for this kind of requirement everywhere else in Entra ID.
Insight like this doesn’t come easily. You’ve got to know the technology and understand how to look behind the scenes. Benefit from the knowledge and experience of the Office 365 for IT Pros team by subscribing to the best eBook covering Office 365 and the wider Microsoft 365 ecosystem.