Category: News
Copy existing formulas into Advanced Formula Environment module?
I haven’t found any guidance here or elsewhere on copying existing LAMBDA functions from a workbook into a module in AFE so that I can export it into a GitHub Gist for reuse in other workbooks.
I could copy & paste one at a time … but in some cases it would be onerous!
Situation: I have a few workbooks with several LAMBDA functions defined as “Names”. I can bring up the list in AFE:
The “duplicate” button creates the copy only in the workbook, with no option to move it to a module. I can paste the definitions into the module text via Formulas > Use in Formula > Paste Names, but I then lose the comments (“Creates string with…” in the image).
Am I missing something?
Thanks!
MVP’s Favorite Content: Surface, Azure, Microsoft AI
In this blog series dedicated to Microsoft’s technical articles, we’ll highlight our MVPs’ favorite article along with their personal insights.
SungKi Park, Windows and Devices MVP, Korea
Surface MVP showcase: Enabling commercial experiences on Surface – Microsoft Community Hub
“This joint blog article provides accurate product information about Microsoft Surface, which offers both AI PCs and Copilot PCs, as well as insights from five Surface MVPs.”
*Relevant activities:
– Event/ FY24 Surface Partner Day: Post | LinkedIn
– Blog/ Surface Pro 10 for Business Product Review: “비즈니스용 서피스 프로10” AI PC 시대.. : 네이버블로그 (naver.com)
– Blog/ Surface Laptop 6 for Business Product Review: 비즈니스용 마이크로소프트 서피스 랩탑 6, A.. : 네이버블로그 (naver.com)
Hamid Sadeghpour Saleh, Microsoft Azure MVP, Azerbaijan
“The Well-Architected Framework is an important design framework to learn, to build an architectural mindset from, and to keep those best practices in your pocket!”
Mohamed Azarudeen Z, AI Platform MVP, India
Machine learning Archives | Microsoft AI Blogs
“The Microsoft AI Blogs is an invaluable resource that offers the latest insights from the forefront of artificial intelligence advancements. Covering a diverse array of topics, it delves into the transformative power of AI across multiple industries and its seamless integration within Microsoft’s ecosystem of products and services. As an AI MVP, I recommend it to everyone eager to learn AI.
The blog also explores the wider impact of AI on various industry landscapes, analyzing how this groundbreaking technology is revolutionizing business operations, fostering innovation, and significantly influencing individuals’ lives on a global scale.”
Tomoitsu Kusaba, Developer Technologies MVP, Japan
Generative AI for Beginners – Full Videos Series Released! (microsoft.com)
“This video series provides a well-organized summary of what developers should learn about generative AI. By making the content available in video format, it caters both to those who prefer learning through text and to those who prefer video. From an accessibility standpoint, this approach is excellent.”
(In Japanese: 生成AIについて開発者が学ぶべき事柄がよくまとまっています。動画で公開されたことで、テキストで学習したい方、動画で学習したい方それぞれに対応しアクセシビリティの観点から見ても素晴らしい対応と感じています。)
Downloading Microsoft Store apps using Windows Package Manager
By: Carlos Britos and Jason Sandys – Principal Product Managers | Microsoft Intune
Offline apps is the last remaining significant function of the Microsoft Store for Business on its path to full retirement. Offline apps allows customers to download packaged apps from the Microsoft Store for Business or Education for distribution through alternate mechanisms like a Windows Provisioning Package.
With the impending retirement of the Microsoft Store for Business and Education on August 15, 2024, this offline apps functionality will also retire, but the ability to download and distribute packaged apps from the Microsoft Store to devices with restricted connectivity to the Microsoft Store remains. For this reason, starting with version 1.8, Windows Package Manager (WinGet) added the capability to download packages from the Microsoft Store. Unless explicitly disabled, all Windows devices will have automatically updated to this version already. To check the version running locally, run winget --version from a command prompt. For troubleshooting guidance, see Debugging and troubleshooting issues with the WinGet tool.
Keep in mind that just as with offline applications from the Microsoft Store for Business and Education, the download feature in WinGet is limited to packaged apps where the publisher has permitted offline licensing and distribution for organizations. This is controlled by the app publisher, not Microsoft. All unpackaged apps published to the Microsoft Store are available for download.
Also note, packaged apps include UWP apps packaged in the AppX format as well as apps packaged in the MSIX format. Unpackaged apps include all Win32 apps packaged in an alternate format such as MSI or EXE.
Downloading a Microsoft Store app using WinGet
Using the WinGet command line interface (CLI) to download an app from the Microsoft Store is straightforward. The following example walks through the download of the Microsoft Remote Desktop app, which is published by Microsoft and allows offline downloads. For more information on any of the steps below or on the new download option, refer to the WinGet download command documentation. Note that WinGet leverages Delivery Optimization to download apps from the Microsoft Store.
Locate the package you wish to download using the WinGet CLI. This step is optional if you already know the exact package name or ID of the desired package, in which case you can skip directly to step 2 below.
winget search "remote desktop" --source MSStore
Use the new download command line argument for the CLI along with the package ID previously returned. By default, files for the specified package are downloaded to the Downloads subfolder of the current user’s profile folder. To override this location, use the -d or --download-directory option on the WinGet command line.
winget download --id 9WZDNCRFJ3PS
Note: You can limit the scope of the downloaded package using additional filtering options on the WinGet command line, e.g., use -a or --architecture to only download content related to a specific OS architecture.
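For example, a combined invocation might look like the following (the download folder is illustrative and not from the original post):
winget download --id 9WZDNCRFJ3PS --architecture x64 --download-directory C:\WinGetDownloads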
Review the initial information shown and accept the linked agreements by pressing Y and then Enter. If the account is not currently signed in to Microsoft Entra ID, you will be presented with a standard Entra ID authentication prompt and must successfully authenticate to proceed. Additionally, the account used requires one of the following roles:
Global Administrator
User Administrator
License Administrator
WinGet creates a new folder in the default or specified download folder, named for the package ID you specified, and proceeds to download the package and its dependencies to this subfolder. Additionally, WinGet retrieves a license for the package, as all packaged apps from the Microsoft Store require a license.
You can now deploy the downloaded package using your management tool of choice.
Installing a WinGet downloaded package in a Windows provisioning package
Using packages downloaded by WinGet within a Windows provisioning package allows you to install the downloaded apps while provisioning a Windows device for management by Microsoft Intune. To do this, follow these steps:
Download the Windows Configuration Designer (WCD) app from the Microsoft Store.
Launch WCD and choose the Provision desktop devices option on the Start page.
Provide a name and location for the project.
Provide information on the Set up device, Set up network, and Account Management pages as needed.
For the Add applications page, click Add an Application.
Provide the Application name, Installer path and License path for the application that you are adding.
Add all Required appx dependencies and click Add to finish. The following screenshot shows the completed Add applications page in WCD for Microsoft Remote Desktop including its x64 dependencies.
Complete the Add certificates page as needed and under the Finish step, select Create to complete the process.
Your provisioning package is now ready.
Installing a WinGet downloaded package using Intune
In general, we recommend using the built-in Intune functionality to distribute Microsoft Store apps to managed Windows devices. However, you can also use other device management tools to deploy packaged apps separately downloaded using WinGet download. Scenarios where you may consider this include the following:
Managed clients cannot access or are restricted from connecting to the Microsoft Store.
Strict app version control is required.
To use Intune for this, follow the steps at Add a Windows line-of-business app to Microsoft Intune. Note that managed Windows endpoints must be able to connect to the Microsoft license server to retrieve a license for any apps deployed this way as Intune has no built-in capability to do this. Additionally, Microsoft Store apps will automatically update from the Microsoft Store if devices have connectivity to the Microsoft Store and Automatic Store app updates is not disabled, regardless of the app deployment method.
By following these steps, you can effectively utilize WinGet and Intune to manage app deployments, ensuring all necessary licenses and dependencies are correctly handled. This approach facilitates a streamlined and controlled deployment process across managed Windows devices.
If you have any questions or feedback, leave a comment below or reach out to us on X @IntuneSuppTeam.
Why did I get duplicate variables in my workspace after changing their names?
(I did solve this problem but I haven’t been able to repeat it. I hope it’s okay to ask here anyway because I am curious if anyone has an explanation. I will describe below what solved it and how I have tried to repeat it. Couldn’t find any threads describing this anywhere.)
I had a few variables in my script with names starting with capital letters that I wanted to change to lowercase first letters. For example I wanted to change "Flux_th" to "flux_th". See code example below with only the relevant lines from my script.
load('dataNucGlobal_nuc.mat')
load('dataNucGlobal_reac.mat')
Flux_th = 42078172084928.6; % [neutrons/cm2/s]
I used the Shift+Enter rename function in MATLAB to change "Flux_th" to "flux_th" (and likewise for the other variables).
I cleared my workspace, I ran the script (like below) and got BOTH "Flux_th" and "flux_th" in my workspace (they have the same value).
load('dataNucGlobal_nuc.mat')
load('dataNucGlobal_reac.mat')
flux_th = 42078172084928.6; % [neutrons/cm2/s]
I tried changing back to capital letters on "Flux_th" and doing the same thing again but it still gave me duplicates.
I am pretty sure that I tried manually deleting the duplicates with capital letters in the workspace and then running the script, but that also gave me duplicates.
It happened again even after restarting MATLAB.
What DID actually solve the problem is the following sequence:
I cleared the workspace, then I ran this separate script that creates and saves the .mat files, that script looks like this:
%% Global nuclear data
% data for reactions
dataNucGlobal_reac = table('Size',[0,3], 'VariableTypes',["double", "double", "string"], 'VariableNames',["barn";"isotopefrac";"reference"]);
dataNucGlobal_reac('O16 (n,p) N16: s-f', :) = {2.026e-5, 99.76, "ENDL"};
save('dataNucGlobal_reac')
% data for nuclides
dataNucGlobal_nuc = table('Size',[0,3], 'VariableTypes',["double", "double", "string"], 'VariableNames',["halflife_s"; "decayconst"; "reference"]);
dataNucGlobal_nuc('N16', :) = {7.12, log(2)/7.12, "?"};
save('dataNucGlobal_nuc')
Then I ran my original script without the load lines like this:
% load('dataNucGlobal_nuc.mat')
% load('dataNucGlobal_reac.mat')
flux_th = 42078172084928.6; % [neutrons/cm2/s]
This time I did not get any duplicates, so I cleared my workspace again and reinserted the load lines and finally ran the script like this:
load('dataNucGlobal_nuc.mat')
load('dataNucGlobal_reac.mat')
flux_th = 42078172084928.6; % [neutrons/cm2/s]
Now everything is working and I can’t reproduce what happened by changing variable names from capital to lowercase letters.
I am super confused because as you can see, the .mat-files DO NOT contain any of the variables that became duplicates, so it doesn’t make any sense that this would help. I have no idea what happened so that is why I am asking here.
Field-Oriented Control of PMSM Using Reinforcement Learning code
Does anyone know where to find the MATLAB code for the example
Field-Oriented Control of PMSM Using Reinforcement Learning?
The link is shown below, but MATLAB does not seem to include this code:
https://www.mathworks.com/help/mcb/gs/foc-of-pmsm-using-reinforcement-learning.html
New to Powerpoint
How do I add narration to a Ppt and have the slide show play like a movie? Additionally, the movie should last for 60 seconds or less. Is there a way to ensure this with Powerpoint? Any help that you can give me would be greatly appreciated. Thank you.
Field Type Modification
I performed an import of Excel data into SharePoint List. I brought the Excel (ID) field into the Title column. I need this to be sortable so I created a Numeric column (IDMaster) with the values from the Title column and there are over 1000 records. How do I format the IDMaster so that it does not include a “,” (i.e., 1,000 -> 1000)?
Editing the Pinned Section in my Viva Engage Community
I am trying to re-sort & edit the Pinned Section of my Viva Engage page/community. I can add links & delete links, but cannot change the order of all the links on the page for some reason. Does anyone have a solution besides a complete manual re-do of the list?
Outlook contact lists are not syncing on mobile outlook app (iOS)
Hello All,
Outlook contact lists which I created on my desktop are not syncing in my Outlook mobile app. Individual contacts are syncing without any issues, but not the contact lists. I see many have reported the same issue. I would like to know whether this is actually a bug or whether it was designed to work that way.
Linked Server to Excel Spreadsheet
I have a perplexing problem which I suspect is permission related.
I have created a Linked Server in SSMS (SQL Express) that connects to an Excel Document on a Network Share. A stored procedure (spUpdateProducts) uses this Linked Server to Merge data into an existing table.
When I execute the stored procedure in SSMS, it works correctly (domain admin).
Similarly, when I execute a script that runs a SQLCMD command from the server, that also executes correctly.
SQLCMD -S server\SQLEXPRESS -E -d OutEnd24 -Q "EXEC [dbo].[spUploadProducts]"
However, if I try and execute that as a scheduled task the sproc does not appear to run (or runs and fails to initialise the Linked Server – see below).
Similarly, if I try running the SQL CMD from my client laptop, I get the error:
Msg 7303, Level 16, State 1, Server *****\SQLEXPRESS, Procedure spUploadProducts, Line 22
Cannot initialize the data source object of OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "OUTEND DATA".
OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "OUTEND DATA" returned message "Unspecified error".
My User account on the laptop has permission to execute the sproc, but it appears the Linked Server is unable to access the file in this case (which I suspect is also what is happening in the scheduled task).
I have set the advanced option in SQL Server to allow Ad Hoc Distributed Queries.
The SQL Server instance is running under NT Service\MSSQL$SQLEXPRESS
I have granted Full Control on the network share and file path to Domain\ServerName$
I have tried experimenting with various combinations of login mapping with the Linked Server, but either I get a user not recognized error, or the same failure as above.
Any thoughts appreciated.
Unrecognized property Value for class Axes?
Hi all,
I have been trying to create a CT scan viewer in MATLAB.
It is up and running; however, I am getting the error "Unrecognized property Value for class Axes" whenever I change an image.
It highlights these two lines of code for me.
% Add listener to the slider to call a function when its value changes
addlistener(hSlider, 'Value', 'PostSet', @(src, event) updateImage(src, event, hAxes, dicomImages));
and
% Get the current value of the slider
sliderValue = round(get(hAxes.Parent.Children(2), 'Value')); % hAxes.Parent.Children(2) is the slider handle
Would anyone know where I went wrong?
How do I save each response in an excel file every time I click the submit button?
It currently only saves the most recent response, not the previous responses as well.
function SubmitButtonPushed(app, event)
    data = {firstName, lastName, dob, email, nationality, sex, mobileNumber, type};
    passengerData = cell2table(data, 'VariableNames', {'First Name', 'Last Name', 'Date of Birth', 'Email', 'Nationality', 'Sex', 'Mobile Number', 'Type'});
    disp('New Passenger Data: ')
    disp(passengerData)
    filename = 'passengerDetails.xlsx';
    if isfile(filename)
        existingData = readtable(filename);
        combinedData = [existingData; passengerData];
    else
        combinedData = passengerData;
    end
    writetable(combinedData, filename, 'WriteMode', 'append');
    disp('Combined Data: ')
    disp(combinedData)
    new_line = randn(1,9);
    sheetName = sprintf('Submission_%d', submissionCount);
    writematrix(new_line, filename, 'Sheet', sheetName, 'WriteMode', 'overwrite');
    submissionCount = submissionCount + 1;
    msgbox('Successfully submitted', 'Success');
    delete(app);
How can you use MATLAB Grader in Moodle Quizzes?
MATLAB Grader can be integrated into Moodle as an external tool. How can you use MATLAB Grader in Moodle Quizzes, to implement MATLAB Grader as a quiz question?
SharePoint List – Select from a Choice Field and Display an Image
I am attempting to set a column in a Project Tracker List to allow three choices, Red, Amber and Green. Depending on which status the user selects, I am trying to display an image, either a red circle (image1.jpg), an amber circle (image2.jpg) or a green circle (image3.jpg). I am not certain how to have a Choice column return an image.
Thanks,
John
Speech Recognition for Alphanumeric
Hi,
I am using Azure Communication Services with Cognitive Services for handling voice call scenarios (STT and TTS). One of our customer use cases requires alphanumeric input in a workflow. The Azure speech recognizer performs well for numbers and other patterns. However, when the user spells out letters for alphanumeric values, the recognition success rate is very low.
For example, the product ID pattern is like “P-43246”. In most cases, “P” is recognized as “D”, “B”, or “3”.
I have tested this on both mobile phone networks and VoIP. The success rate is significantly lower on mobile networks.
Are there any settings available to improve the recognition success rate?
Azure Services used:
ACS Phone Number
Azure Cognitive Service
Event Grid Subscriptions
Thanks,
Aravind
Optimizing Query Performance with Work_Mem
work_mem plays a crucial role in optimizing query performance in Azure Database for PostgreSQL. By allocating sufficient memory for sorting, hashing, and other internal operations, you can improve overall database performance and responsiveness, especially under heavy load or in complex query scenarios. Fine-tuning work_mem based on workload characteristics is key to achieving optimal performance in your PostgreSQL environment.
Understanding work_mem
Purpose:
Memory for Operations: work_mem sets the maximum amount of memory that can be used by operations such as sorting, hashing, and joins before PostgreSQL writes data to temporary disk files. This includes the operations needed to accomplish:
ORDER BY: Sort nodes are introduced in the plan when ordering cannot be satisfied by an index.
DISTINCT and GROUP BY: These can introduce Aggregate nodes with a hashing strategy, which require memory to build hash tables, and potentially Sort nodes when the Aggregate is parallelized.
Merge Joins: When sorting of one or both of the relations being joined is not satisfied via indexes.
Hash Joins: To build hash tables.
Nested Loop Joins: When memoize nodes are introduced in the plan because the estimated number of duplicates is high enough that caching results of lookups is estimated to be cheaper than doing the lookups again.
Default Value: The default work_mem value is 4 MB (or 4096 KB). This means that any operation can use up to 4 MB of memory. If the operation requires more memory, it will write data to temporary disk files, which can significantly slow down query performance.
Concurrent Operations:
Multiple Operations: A single complex query may involve several sorts or hash operations that run in parallel. Each operation can utilize the work_mem allocated, potentially leading to high total memory consumption if multiple operations are occurring simultaneously.
Multiple Sessions: If there are several active sessions, each can also use up to the work_mem value for their operations, which further increases memory usage. For example, if you set work_mem to 10 MB and have 100 concurrent connections, the total potential memory usage for sorting and hashing operations could reach 1,000 MB (or 1 GB).
Impact of Disk Usage:
Spilling to Disk: When the memory allocated for an operation exceeds work_mem, PostgreSQL writes data to temporary files on disk. Disk I/O is significantly slower than memory access, which can lead to degraded performance. Therefore, optimizing work_mem is crucial to minimize disk spills.
Disk Space Considerations: Excessive disk spills can also lead to increased disk space usage, particularly for large queries, which may affect overall database performance and health.
Hash Operations:
Sensitivity to Memory: Hash-based operations (e.g., hash joins, hash aggregates) are particularly sensitive to memory availability. PostgreSQL can use a hash_mem_multiplier to allow these operations to use more memory than specified by work_mem. This multiplier can be adjusted to allocate a higher memory limit for hash operations when needed.
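For instance, a minimal session-level sketch (the multiplier value is illustrative):
SHOW hash_mem_multiplier;
SET hash_mem_multiplier = 2.0; -- hash-based operations may then use up to 2 × work_mem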
Adjusting work_mem at Different Levels
Server Parameter:
Affects all connections unless overridden.
Configured globally, via REST APIs, Azure CLI or the Azure portal. For more information, read Server parameters in Azure Database for PostgreSQL – Flexible Server
Session Level:
Adjusted using SET work_mem = '32MB';
Affects only the current session.
Reverts to default after the session ends.
Useful for optimizing specific queries.
Role or user level:
Set using ALTER ROLE username SET work_mem = '16MB';
Applied automatically upon user login.
Tailors settings to user-specific workloads.
Database Level:
Set using ALTER DATABASE dbname SET work_mem = '20MB';
Affects all connections to the specified database.
Function, Procedure Level:
Adjusted within a stored procedure/function using SET work_mem = '64MB';
Valid for the duration of the procedure/function execution.
Allows fine-tuning of memory settings based on specific operations.
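As an illustration of the function level, here is a minimal sketch; the function name and body are hypothetical and not part of the original article:
CREATE OR REPLACE FUNCTION refresh_monthly_report() -- hypothetical name
RETURNS void
LANGUAGE plpgsql
SET work_mem = '64MB' -- applies only while this function executes
AS $$
BEGIN
    -- memory-intensive sorts/aggregates would go here
    PERFORM 1; -- placeholder body
END;
$$;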
Server Parameter: work_mem
The formula provided, work_mem = Total RAM / Max Connections / 16, is a guideline to ensure that the memory is distributed effectively without over-committing resources. Refer to the official Microsoft documentation on managing high memory utilization in Azure Database for PostgreSQL here.
Breaking Down the Formula
Total RAM:
This is the total physical memory available on your PostgreSQL server. It’s the starting point for calculating memory allocation for various PostgreSQL operations.
Max Connections:
This is the maximum number of concurrent database connections allowed. PostgreSQL needs to ensure that each connection can operate efficiently without causing the system to run out of memory.
Division by 16:
The factor of 16 is a conservative estimate to prevent overallocation of memory. This buffer accounts for other memory needs of PostgreSQL and the operating system.
If your server has a significant amount of RAM and you are confident that other memory requirements (e.g., operating system, cache, other processes) are sufficiently covered, you might reduce the divisor (e.g., to 8 or 4) to allocate more memory per operation.
Analytical workloads often involve complex queries with large sorts and joins. For such workloads, increasing work_mem by reducing the divisor can improve query performance significantly.
Step-by-Step Calculation of work_mem
Total RAM:
The server has 512 GB of RAM.
Convert 512 GB to MB: 512 * 1024 = 524,288 MB
Max Connections:
The server allows up to 2000 maximum connections.
Base Memory Per Connection:
Divide the total RAM by the number of connections: 524,288 / 2000 = 262.144 MB
Apply the Conservative Factor (Divide by 16):
262.144 / 16 = 16.384 MB
You should set work_mem to approximately 16 MB (rounded from 16.384 MB).
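The same arithmetic can be reproduced directly in SQL, assuming the 512 GB of RAM and 2000 connections used above:
SELECT round(((512 * 1024.0) / 2000) / 16, 1) AS suggested_work_mem_mb; -- returns 16.4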
If you need help setting up server parameters or require more information, refer to the official documentation at Azure PostgreSQL Flexible Server Server Parameters. This resource provides comprehensive insights into the server parameters and their configurations.
Query Execution with EXPLAIN ANALYZE
Fine-Tune work_mem with EXPLAIN ANALYZE
To determine the optimal work_mem value for your query, you’ll need to analyze the EXPLAIN ANALYZE output to understand how much memory the query is using and where it is spilling to disk. Here’s a step-by-step guide to help you:
Execute the query with EXPLAIN ANALYZE to get detailed execution statistics:
EXPLAIN (ANALYZE, BUFFERS)
SELECT
*
FROM DataForWorkMem
WHERE time BETWEEN '2006-01-01 05:00:00+00' AND '2006-03-31 05:10:00+00'
ORDER BY name;
Analyze the Output
Look for the following details in the output:
Sort Operation: Check if there is a Sort operation and whether it mentions “external sort” or “external merge”. This indicates that the sort operation used more memory than allocated in work_mem and had to spill to disk.
Buffers Section: The Buffers section shows the amount of data read from and written to disk. High values here may indicate that increasing work_mem could reduce the amount of data spilled to disk.
Here is the output generated by the above query:
Gather Merge  (cost=8130281.85..8849949.13 rows=6168146 width=47) (actual time=2313.021..3848.958 rows=6564864 loops=1)
  Workers Planned: 2
  Workers Launched: 1
  Buffers: shared hit=72278, temp read=97446 written=97605
  ->  Sort  (cost=8129281.82..8136992.01 rows=3084073 width=47) (actual time=2296.884..2726.374 rows=3282432 loops=2)
        Sort Key: name
        Sort Method: external merge  Disk: 193200kB
        Buffers: shared hit=72278, temp read=97446 written=97605
        Worker 0:  Sort Method: external merge  Disk: 196624kB
        ->  Parallel Bitmap Heap Scan on dataforworkmem  (cost=88784.77..7661339.18 rows=3084073 width=47) (actual time=206.138..739.962 rows=3282432 loops=2)
              Recheck Cond: (("time" >= '2006-01-01 05:00:00+00'::timestamp with time zone) AND ("time" <= '2006-03-31 05:10:00+00'::timestamp with time zone))
              Rows Removed by Index Recheck: 62934
              Heap Blocks: exact=15199 lossy=17800
              Buffers: shared hit=72236
              ->  Bitmap Index Scan on dataforworkmem_time_idx  (cost=0.00..86934.32 rows=7401775 width=0) (actual time=203.416..203.417 rows=6564864 loops=1)
                    Index Cond: (("time" >= '2006-01-01 05:00:00+00'::timestamp with time zone) AND ("time" <= '2006-03-31 05:10:00+00'::timestamp with time zone))
                    Buffers: shared hit=5702
Planning:
  Buffers: shared hit=5
Planning Time: 0.129 ms
Execution Time: 4169.774 ms
Let’s break down the details from the execution plan:
Gather Merge
Purpose: Gather Merge is used to combine results from parallel workers. It performs an order-preserving merge of the results produced by each of its child node instances.
Cost and Rows:
Planned Cost: 8130281.85..8849949.13
This is the estimated cost of the operation.
Planned Rows: 6168146
This is the estimated number of rows to be returned.
Actual Time: 2313.021..3848.958
The actual time taken for the Gather Merge operation.
Actual Rows: 6564864
The actual number of rows returned.
Workers:
Planned: 2
The planned number of parallel workers for this operation.
Launched: 1
The number of workers that were actually used.
Buffers
Shared Hit: 72278
This represents the number of buffer hits for shared buffers.
Temp Read: 97446
This indicates the amount of temporary disk space read.
Approximately 798.3 MB (97,446 blocks × 8 KB per block)
Temp Written: 97605
This indicates the amount of temporary disk space written.
Approximately 799.6 MB (97,605 blocks × 8 KB per block)
Sort Node
Sort:
Cost: 8129281.82..8136992.01
The estimated cost for the sorting operation includes both the startup cost and the cost of retrieving all available rows from the operator.
The startup cost represents the estimated time required to begin the output phase, such as the time needed to perform the sorting in a sort node.
Rows: 3084073
The estimated number of rows returned.
Actual Time: 2296.884..2726.374
The actual time taken for the sorting operation.
The first number represents the startup time for the operator, i.e., the time it took to begin executing this part of the plan. The second number represents the total time elapsed from the start of the execution of the plan to the completion of this operation. The difference between these two values is the actual duration that this operation took to complete.
Actual Rows: 3282432
The actual number of rows returned.
Sort Method
External Merge:
This indicates that an external merge sort was used, meaning that the sort could not be handled entirely in memory and required temporary files.
Disk:
Main Process: 193200 kB
The amount of disk space used by the main process for sorting.
Worker 0: 196624 kB
The amount of disk space used by the worker process for sorting.
To optimize PostgreSQL query performance and avoid disk spills, set the work_mem to cover the total memory usage observed during sorting:
Main Process Memory Usage: 193200 kB
Worker Memory Usage: 196624 kB
Total Memory Required: 389824 kB (approximately 380 MB)
Recommended work_mem Setting: 380 MB
This setting ensures that the sort operation can be performed entirely in memory, improving query performance and avoiding disk spills.
Increasing work_mem to 380 MB at the session level resolved the issue. The execution plan confirms that this memory allocation is now adequate for your sorting operations. The absence of temporary read/write stats in the Buffers section suggests that sorting is being managed entirely in memory, which is a favorable result.
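For reference, a minimal sketch of the session-level change that produced the plan below, reusing the query from earlier:
SET work_mem = '380MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT
*
FROM DataForWorkMem
WHERE time BETWEEN '2006-01-01 05:00:00+00' AND '2006-03-31 05:10:00+00'
ORDER BY name;
RESET work_mem; -- optional: revert to the server default afterwards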
Here is the updated execution plan:
Gather Merge  (cost=4944657.91..5664325.19 rows=6168146 width=47) (actual time=1213.740..2170.445 rows=6564864 loops=1)
  Workers Planned: 2
  Workers Launched: 1
  Buffers: shared hit=72244
  ->  Sort  (cost=4943657.89..4951368.07 rows=3084073 width=47) (actual time=1207.758..1357.753 rows=3282432 loops=2)
        Sort Key: name
        Sort Method: quicksort  Memory: 345741kB
        Buffers: shared hit=72244
        Worker 0:  Sort Method: quicksort  Memory: 327233kB
        ->  Parallel Bitmap Heap Scan on dataforworkmem  (cost=88784.77..4611250.25 rows=3084073 width=47) (actual time=238.881..661.863 rows=3282432 loops=2)
              Recheck Cond: (("time" >= '2006-01-01 05:00:00+00'::timestamp with time zone) AND ("time" <= '2006-03-31 05:10:00+00'::timestamp with time zone))
              Heap Blocks: exact=34572
              Buffers: shared hit=72236
              ->  Bitmap Index Scan on dataforworkmem_time_idx  (cost=0.00..86934.32 rows=7401775 width=0) (actual time=230.774..230.775 rows=6564864 loops=1)
                    Index Cond: (("time" >= '2006-01-01 05:00:00+00'::timestamp with time zone) AND ("time" <= '2006-03-31 05:10:00+00'::timestamp with time zone))
                    Buffers: shared hit=5702
Planning:
  Buffers: shared hit=5
Planning Time: 0.119 ms
Execution Time: 2456.604 ms
It confirms that:
Sort Method: “quicksort” or “other in-memory method” instead of “external merge.”
Memory Usage: The allocated work_mem (380 MB) is used efficiently.
Execution Time: Decreased to 2456.604 ms from 4169.774 ms.
Adjusting work_mem Using pg_stat_statements Data
To estimate the memory needed for a query based on the temp_blks_read parameter from PostgreSQL’s pg_stat_statements, you can follow these steps:
Get the Block Size:
PostgreSQL uses a default block size of 8 KB. You can verify this by running SHOW block_size; in any SQL client.
Calculate Total Temporary Block Usage:
Sum the temp_blks_read to get the total number of temporary blocks used by the query.
Convert Blocks to Bytes:
Multiply the total temporary blocks by the block size (usually 8192 bytes) to get the total temporary data in bytes.
Convert Bytes to a Human-Readable Format:
Convert the bytes to megabytes (MB) or gigabytes (GB) as needed.
To identify queries that might benefit from an increased work_mem setting, use the following query to retrieve key performance metrics from PostgreSQL’s pg_stat_statements view:
SELECT
query,
calls,
total_exec_time AS total_time,
mean_exec_time AS mean_time,
stddev_exec_time AS stddev_time,
rows,
local_blks_written,
temp_blks_read,
temp_blks_written,
blk_read_time,
blk_write_time
FROM
pg_stat_statements
ORDER BY
total_exec_time DESC
LIMIT 10;
Example Calculation
Suppose we have the following values from pg_stat_statements:
temp_blks_read: 5000
block_size: 8192 bytes
Calculation:
Total Temporary Data (bytes) = 5000 × 8192 = 40,960,000 bytes
Total Temporary Data (MB) = 40,960,000 / (1024 × 1024) ≈ 39.06 MB
This estimate indicates that to keep operations in memory and avoid temporary disk storage, work_mem should ideally be set to a value higher than 39 MB.
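Based on that estimate, a session-level adjustment might look like the following; the exact figure is a judgment call as long as it sits comfortably above the estimate:
SET work_mem = '48MB'; -- illustrative value, above the ~39 MB estimated above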
Here is a query that provides the total amount of temporary data in megabytes for each query recorded in pg_stat_statements. This information can help identify which queries might benefit from an increase in work_mem to potentially improve performance by reducing temporary disk usage.
SELECT
query,
total_temp_data_bytes / (1024 * 1024) AS total_temp_data_mb
FROM
(
SELECT
query,
temp_blks_read * 8192 AS total_temp_data_bytes
FROM pg_stat_statements
) sub;
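If it helps to see the heaviest consumers first, the same idea can be ordered and limited (a minor variation on the query above, not part of the original article):
SELECT
query,
temp_blks_read * 8192 / (1024 * 1024) AS total_temp_data_mb
FROM pg_stat_statements
ORDER BY total_temp_data_mb DESC
LIMIT 10;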
Using Query Store to Determine work_mem
PostgreSQL’s Query Store is a powerful feature designed to provide insights into query performance, identify bottlenecks, and monitor execution patterns.
Here is how to use Query Store to analyze query performance and estimate the disk storage space required for temporary blocks read (temp_blks_read).
Analyzing Query Performance with Query Store
To analyze query performance, Query Store offers execution statistics, including temp_blks_read, which indicates the number of temporary disk blocks read by a query. Temporary blocks are used when query results or intermediate results exceed available memory.
Retrieving Average Temporary Blocks Read
Use the following SQL query to get the average temp_blks_read for individual queries:
SELECT
query_id,
AVG(temp_blks_read) AS avg_temp_blks_read
FROM query_store.qs_view
GROUP BY query_id;
This query calculates the average temp_blks_read for each query. For example, if query_id 378722 shows an average temp_blks_read of 87,348, this figure helps you understand temporary storage usage.
Estimating Disk Storage Space Required
Estimate disk storage based on temp_blks_read to gauge temporary storage impact:
Know the Block Size: PostgreSQL’s default block size is 8 KB.
Calculate Disk Space in Bytes: Multiply the average temp_blks_read by the block size:
Space (bytes) = avg_temp_blks_read × Block Size (bytes)
Space (bytes) = 87,348 × 8192 = 715,554,816 bytes
Convert Bytes to Megabytes (MB):
Space (MB) = 715,554,816 / (1024 × 1024) ≈ 682 MB
Consider adjusting work_mem at the session level or within stored procedures/functions to optimize performance.
Query Store is an invaluable tool for analyzing and optimizing query performance in PostgreSQL. By examining metrics like temp_blks_read, you can gain insights into query behavior and estimate the disk storage required. This knowledge enables better resource management, performance tuning, and cost control, ultimately leading to a more efficient and reliable database environment.
Best Practices for Setting work_mem
Monitor and Adjust: Regularly monitor the database’s performance and memory usage. Tools like pg_stat_statements and pg_stat_activity can provide insights into how queries are using memory.
Incremental Changes: Adjust work_mem incrementally and observe the impact on performance and resource usage. Make small adjustments and evaluate their effects before making further changes.
Set Appropriately for Workloads: Tailor work_mem settings based on the types of queries and workloads running on your database. For example, batch operations or large sorts might need higher settings compared to simple, small queries.
Consider Total Memory: Calculate the total memory usage, considering the number of concurrent connections and operations, to ensure it does not exceed available physical RAM.
Balancing work_mem involves understanding your workload, monitoring performance, and adjusting settings to optimize both memory usage and query performance.
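As a starting point for the monitoring step, temporary-file activity per database is exposed in PostgreSQL’s standard statistics views; a minimal sketch:
SELECT
datname,
temp_files,
temp_bytes / (1024 * 1024) AS temp_mb
FROM pg_stat_database
ORDER BY temp_bytes DESC;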
New on Azure Marketplace: July 18-24, 2024
We continue to expand the Azure Marketplace ecosystem. For this volume, 153 new offers successfully met the onboarding criteria and went live. See details of the new offers below:
Get it now in our marketplace
Access Patient Flow Manager: Access Patient Flow Management provides real-time bed occupancy updates, improving patient care, reducing risk, and saving time. It interfaces with existing patient administration systems and departmental solutions, standardizes data capture, and digitally manages bed supply and demand.
ACSC-Compliant Red Hat Enterprise Linux 7: Foundation Security offers an ACSC-compliant Red Hat Enterprise Linux 7 virtual machine image with built-in security controls to protect sensitive data. The image is regularly updated and ideal for organizations needing a secure and compliant environment. Foundation Security’s team of experts provides ongoing support, and its solutions are trusted by Fortune 500 companies.
ACSC Essential Eight-Compliant Red Hat Enterprise Linux 8 (RHEL 8): Foundation Security offers an ACSC Essential Eight-compliant RHEL 8 virtual machine image with built-in security controls to protect sensitive data. The preconfigured image reduces the time and resources required for security implementation and is regularly updated to keep up with the latest threats and compliance regulations. Foundation Security’s experienced team provides ongoing support, and its solutions are used by several Fortune 500 companies.
ACSC Essential Eight-Compliant Rocky Linux 8: Foundation Security offers an ACSC Essential Eight-compliant Rocky Linux 8 virtual machine image with built-in security controls to protect sensitive data. The preconfigured image reduces the time and resources required for security implementation and is regularly updated to keep up with the latest threats and compliance regulations. Foundation Security’s experienced team provides ongoing support.
ACSC Essential Eight-Compliant Rocky Linux 9: Foundation Security offers an ACSC Essential Eight-compliant Rocky Linux 9 virtual machine image with hundreds of built-in security controls. This preconfigured image reduces the time and resources required for security implementation, ensuring the confidentiality, integrity, and availability of sensitive data. Foundation Security’s experienced team provides ongoing support, making it an ideal solution for organizations that need a secure and compliant environment.
ACSC ISM-Compliant Red Hat Enterprise Linux 8 (RHEL 8): This preconfigured ACSC ISM-compliant RHEL 8 virtual machine image is designed to meet Australian government security standards, reducing time and resources required for security implementation and compliance efforts. Foundation Security updates the image regularly to address evolving threats and compliance requirements, with ongoing support provided by a team of experts.
ACSC ISM-Compliant Red Hat Enterprise Linux 9 (RHEL 9): The ACSC ISM-compliant RHEL 9 virtual machine image is a preconfigured solution that aligns with Australian government security standards. It reduces the time and resources required for security implementation and compliance efforts. Foundation Security updates the image regularly to address evolving threats and compliance requirements, providing ongoing support to address any security concerns or compliance queries.
ACSC ISM-Compliant Rocky Linux 8: The ACSC ISM-compliant Rocky Linux 8 virtual machine image is preconfigured with security controls aligned with Australian government standards. Foundation Security updates the image to address evolving threats and compliance requirements, providing ongoing support to meet the highest security standards required by the ACSC ISM.
ACSC ISM-Compliant Rocky Linux 9: Foundation Security offers an ACSC ISM-compliant Rocky Linux 9 virtual machine image with built-in security controls to align with Australian government standards. This preconfigured image reduces the time and resources required for security implementation and compliance efforts. The team provides ongoing support, and its solutions are trusted by various Australian government agencies and contractors.
AffableBPM AI-Based Data Analytics Copilot: AffableBPM’s Data Analytics is powered by Microsoft Azure OpenAI to convert your questions into database searches, presenting the results in an intuitive visual format. It provides instant insights without the need for complex tools or technical skills. It is perfect for quickly making decisions and enhances productivity by bypassing traditional setup and configuration steps.
AI Anomaly Detection: AI Anomaly Detection monitors databases and business indicators to detect anomalies. Users are notified via email or preferred channels. The app monitors database schema and business indicators and sends notifications with interactive data visualizations and AI-generated descriptive analytics.
Apache Solr on Ubuntu: Apache Solr is an open-source search platform that excels in handling large volumes of data efficiently. It facilitates full-text search, supports advanced features such as faceted search and hit highlighting, and can handle diverse document types. Solr integrates seamlessly with various programming languages and frameworks, making it a cornerstone technology for organizations looking to enhance search functionality and improve user experience.
CCN Advanced Level-Compliant Red Hat Enterprise Linux 9 (RHEL 9): Foundation Security offers a preconfigured CCN Advanced Level-compliant RHEL 9 virtual machine image fortified with numerous security controls to meet the rigorous standards set by the CCN for high-security environments. The image is designed and implemented to meet the highest security standards required by the CCN Advanced Level profile, and regularly updated to address evolving threats and compliance requirements. Foundation Security also provides ongoing support to address any security concerns or compliance queries.
CCN Basic Level-Compliant Red Hat Enterprise Linux 9 (RHEL 9): Foundation Security offers a preconfigured CCN Basic Level-compliant RHEL 9 virtual machine image that meets Spanish government standards for public or low-sensitivity information systems. This reduces the time and effort required for basic security implementation and compliance. The image is regularly updated to address common threats and evolving compliance requirements, and Foundation Security offers support to address security concerns or compliance questions.
CCN Intermediate Level-Compliant Red Hat Enterprise Linux 9 (RHEL 9): Foundation Security offers a CCN Intermediate Level-compliant RHEL 9 virtual machine image with robust security controls for moderately sensitive environments in Spain. Foundation Security’s expertise in Spanish security frameworks ensures compliance with government standards, reducing complexity and time for implementation.
CJIS-Compliant Red Hat Enterprise Linux 7 (RHEL 7): Foundation Security offers a preconfigured CJIS-compliant RHEL 7 virtual machine image with numerous security controls to meet CJIS Security Policy requirements. Foundation Security’s team of experts provides ongoing support to address security concerns and compliance queries.
CJIS-Compliant Red Hat Enterprise Linux 8 (RHEL 8): Foundation Security offers a preconfigured CJIS-compliant RHEL 8 virtual machine image with numerous security controls to meet CJIS Security Policy requirements. Foundation Security’s team of experts provides ongoing support to address concerns and compliance queries.
Debian 10 with Minecraft Bedrock Game Server: Virtual Pulse offers a simplified solution for hosting a Minecraft Bedrock game server on Debian 10. The image provides a user-friendly interface and comprehensive documentation to guide users through every step of the configuration process, allowing them to focus on enjoying the game rather than troubleshooting technical issues. The solution is designed for both enthusiasts and server administrators seeking a robust and customizable hosting solution.
Docker: ATH Infosystems offers this image providing Docker, a containerization platform that simplifies application development, deployment, and scaling. Docker provides a consistent and isolated environment, runs on any system, optimizes resource utilization, and enhances security.
EMQX: ATH Infosystems has configured this image providing EMQX on CentOS. EMQX is designed for large-scale IoT deployments. It offers reliable communication, advanced security features, and easy customization through a plugin-based architecture.
Fedora 40 with Trusted Launch: Ntegral has configured this virtual machine image containing Fedora Server 40, a stable and flexible Linux operating system suitable for organizations and individuals. It offers the latest open-source technology, modularity, easy administration, and advanced identity management. Ntegral has optimized and packaged it for Azure, ensuring it is always up-to-date and secure.
fieldWISE: fieldWISE by Vassar Labs uses GIS, remote sensing, AI, machine learning, and data analytics to provide timely insights up to the agriculture field level. It offers customizable and scalable products to help growers plan, monitor, optimize, protect, and earn the most from their fields. The platform benefits food manufacturing, agriculture inputs, insurance, and government institutions.
Flask: Flask is a lightweight and flexible web framework for Python, offering easy-to-use tools and libraries for building web applications quickly and efficiently. It follows the WSGI specification and supports extension with various libraries and frameworks. ATH Infosystems has configured this virtual machine image containing Flask on CentOS 8.5.
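As a minimal sketch of what building on this image might look like (the route and port below are hypothetical and not part of the listing), a Flask application can be defined and served in a few lines:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    # Return a small JSON payload to confirm the app is serving requests.
    return jsonify(message="Hello from Flask on CentOS 8.5")

if __name__ == "__main__":
    # Bind to all interfaces so the VM's address can reach the app; for production,
    # a WSGI server such as gunicorn would normally sit in front of Flask.
    app.run(host="0.0.0.0", port=5000)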
Health AI247: This AI-powered database built on Microsoft Azure allows medical professionals to access patient records and research symptoms via efficient workflows. By entering an ID number, doctors can retrieve medical records and provide informed care.
HyperStream Data Processor: HyperStream Data Processor is a high-performance platform for real-time data processing and analytics. It offers advanced tools for processing large data streams, enabling organizations to gain immediate insights and make data-driven decisions.
Jitsi: Jitsi is an open-source video conferencing tool that offers secure and flexible online meetings and video calls with high-quality audio and video, robust security features, screen sharing, integration with various tools, and customization options. ATH Infosystems has configured this image with Jitsi on CentOS 8.5.
Linux Stream 9 Minimal with OpenVPN: OpenVPN provides a reliable solution for secure remote access, catering to diverse user personas and addressing the growing need for enhanced privacy and security. It encrypts data transmission, protecting against cyber threats and unauthorized access to sensitive data. Virtual Pulse has packaged this image for easy installation on Microsoft Azure.
MongoDB on AlmaLinux 8: MongoDB on AlmaLinux 8 offers a flexible approach to data storage and management, allowing developers to work with unstructured and volatile data. Its ability to store data in the BSON document format makes it ideal for a variety of applications, from mobile apps to big data analytics. Tidal Media has configured and provides this image.
MongoDB on AlmaLinux 9: MongoDB is an open-source NoSQL database that offers flexibility, scalability, and high performance. It supports multiple programming languages and platforms, has a dynamic schema, and reduces the complexity of database management. MongoDB is ideal for modern applications and businesses of all sizes and is accessible even to small companies and startups. Tidal Media has configured and provides this image.
MongoDB on Debian 11: MongoDB on Debian 11 is a flexible and scalable NoSQL database that allows for easy handling of complex data structures. It integrates with various development frameworks and languages and provides comprehensive security features, automated backup, and recovery solutions. MongoDB is ideal for modern applications that require dynamic and robust data management solutions. Tidal Media has configured and provides this image.
MongoDB on Oracle Linux 8: MongoDB is a flexible and scalable database that allows for efficient storage and processing of data of any size and type. Its replication system ensures data reliability and availability, while its intuitive interface and natural integration with modern programming languages make application development fast and convenient. It easily scales both vertically and horizontally, making it easy and flexible to manage data. Tidal Media has configured and provides this image.
MongoDB on Red Hat Enterprise Linux 8: MongoDB is a flexible and high-performance database that can store and process various types of data. It offers powerful tools for data aggregation, indexing, replication, and sharding, making it suitable for projects of any scale. Tidal Media has configured and provides this image.
MongoDB on Rocky 8: MongoDB on Rocky 8 is a flexible and high-performance database management system that stores information as documents and collections, making data management easier and query processing faster. It offers data replication, indexing on any field, GridFS technology, load balancing, and support for ACID transactions across multiple documents. With MongoDB, businesses can efficiently process large volumes of data and have a reliable and efficient database for any task. Tidal Media has configured and provides this image.
MongoDB on SUSE 15 SP5: MongoDB on SUSE 15 SP5 is a reliable and scalable database solution for modern applications. It offers enhanced security features, stability, and enterprise-grade support. This solution is perfect for enterprises seeking a dependable database system to handle complex and data-intensive workloads. Tidal Media has configured and provides this image.
MongoDB on Ubuntu 22.04 LTS: MongoDB on Ubuntu 22.04 is a NoSQL database solution that offers high performance, scalability, and flexibility for managing vast amounts of unstructured data. It supports real-time analytics, content management, and more, making it ideal for developers, data scientists, and systems administrators. Tidal Media has configured and provides this image.
MongoDB on Ubuntu 24.04 LTS: MongoDB is a flexible and scalable database that allows you to store and manage data as documents. It has no strict data schema, making it ideal for projects that require rapid adaptation to changing needs. MongoDB simplifies the process of storing and retrieving data, resulting in increased performance and reduced overhead. It is an ideal choice for various types of applications, including big data analytics, web development, and mobile applications. Tidal Media has configured and provides this image.
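All of the MongoDB images above expose the same schemaless document model described in these listings. The following is a minimal sketch using the pymongo driver with hypothetical database and collection names; it assumes mongod is listening on the default port inside one of these virtual machines, which is not something the listings themselves guarantee.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["demo"]          # databases and collections are created lazily
orders = db["orders"]

# Documents need no predeclared schema; fields can vary per document (stored as BSON).
orders.insert_one({"customer": "Contoso", "items": ["sensor", "gateway"], "total": 129.5})
orders.insert_one({"customer": "Fabrikam", "total": 42, "priority": "high"})

# Index on any field and query with operators.
orders.create_index("customer")
for doc in orders.find({"total": {"$gt": 100}}):
    print(doc["customer"], doc["total"])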
Neo4j: Neo4j is a scalable graph database management system with ACID transactions, horizontal scaling, and seamless integration with programming languages and analytics tools. ATH Infosystems has configured this image containing Neo4j on CentOS 8.5.
NeuralNet Integrator: NeuralNet Integrator is an AI platform that integrates and deploys neural network models across various applications. It offers tools for developing, training, and managing neural networks, ensuring optimal performance and scalability.
Next.js: Next.js is an open-source framework for building modern web applications with powerful features like server-side rendering, static site generation, and built-in CSS. ATH Infosystems has configured this image providing Next.js on CentOS 8.5.
Oracle Linux 8.10 for Arm64 Architecture: Oracle Linux Server 8.10 is a reliable, secure, and performant enterprise operating system that brings the latest open-source innovations and business-critical performance and security optimizations. It delivers virtualization, management, and cloud-native computing tools, as well as application binary compatibility with Red Hat Enterprise Linux. Ntegral has configured this image.
OSPP-Compliant Red Hat Enterprise Linux 7 (RHEL 7): Foundation Security offers an OSPP-compliant RHEL 7 virtual machine image with numerous security controls to meet the comprehensive requirements of the Operating System Protection Profile. Foundation provides ongoing support to address any security concerns or compliance queries.
OSPP-Compliant Red Hat Enterprise Linux 8 (RHEL 8): Foundation Security offers an OSPP-compliant RHEL 8 virtual machine image with numerous security controls to meet the comprehensive requirements of the Operating System Protection Profile. Foundation provides ongoing support to address any security concerns or compliance queries.
OSPP-Compliant Red Hat Enterprise Linux 9 (RHEL 9): Foundation Security offers an OSPP-compliant RHEL 9 virtual machine image with hardened security controls for organizations requiring high assurance in their operating systems. The preconfigured image reduces time and resources needed for security implementation and evaluation, with ongoing support from experts in compliance standards.
OSPP-Compliant Rocky Linux 8: Foundation Security offers an OSPP-compliant Rocky Linux 8 virtual machine image with numerous security controls to meet the comprehensive requirements of the Operating System Protection Profile. Foundation provides ongoing support to address any security concerns or compliance queries.
OSPP-Compliant Rocky Linux 9: Foundation Security offers an OSPP-compliant Rocky Linux 9 virtual machine image with numerous security controls to meet the comprehensive requirements of the Operating System Protection Profile. Foundation provides ongoing support to address any security concerns or compliance queries.
PCI-Compliant Red Hat Enterprise Linux 7 (RHEL 7): Foundation Security offers a preconfigured RHEL 7 virtual machine image with numerous security controls to meet the latest PCI DSS standard. Foundation provides ongoing support to address security concerns and compliance queries.
PCI-Compliant Red Hat Enterprise Linux 8 (RHEL 8): Foundation Security offers a preconfigured RHEL 8 virtual machine image with numerous security controls to establish a PCI-compliant environment. Foundation provides ongoing support to address security concerns and compliance queries.
PCI-Compliant Red Hat Enterprise Linux 9 (RHEL 9): Foundation Security offers a preconfigured RHEL 9 virtual machine image with numerous security controls to meet the latest PCI DSS standard. Foundation provides ongoing support to address security concerns and compliance queries.
PCI-Compliant Rocky Linux 8: This Rocky Linux 8 virtual machine image is designed to meet the latest security standards for companies handling payment card data. It includes numerous security controls and is regularly updated to address evolving threats and compliance requirements. Foundation Security’s team of experts provides ongoing support to ensure a consistently secure and compliant platform.
PCI-Compliant Rocky Linux 9: This Rocky Linux 9 virtual machine image is designed to meet the latest security standards for organizations handling payment card data. It includes numerous security controls and is regularly updated to address evolving threats and compliance requirements. Foundation Security’s team of experts provides ongoing support to ensure a consistently secure and compliant platform.
phpMyAdmin: ATH Infosystems has configured this image providing phpMyAdmin on CentOS 8.5. phpMyAdmin is an open-source web-based administration tool that offers a user-friendly interface for managing MySQL and MariaDB databases.
PortalTalk Governance Solution for Microsoft Teams: PortalTalk by QS Solutions streamlines administrative duties with automated site provisioning, offers robust access control, and empowers Microsoft Teams channel owners to manage their domains autonomously while IT staff retain comprehensive control over the system. It enhances an organization’s security stance and simplifies administrative processes, delivering a secure and compliant environment for Teams and SharePoint document management.
Prometheus on Ubuntu: Anarion has configured this image providing Prometheus, an open-source monitoring and alerting toolkit used to collect and store metrics as time series data. Prometheus employs a powerful query language called PromQL, allowing for complex aggregations and transformations of data.
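To illustrate the scrape-and-query model the listing describes, here is a minimal sketch that exposes a metric for the Prometheus server to collect; it uses the prometheus_client Python package and a hypothetical metric name, neither of which ships with the Anarion image.

import random
import time

from prometheus_client import Counter, start_http_server

# A counter that a Prometheus server could scrape and then analyze with PromQL,
# e.g. rate(demo_requests_total[5m]) for the per-second request rate.
REQUESTS = Counter("demo_requests_total", "Total demo requests handled")

if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at http://<host>:8000/metrics
    while True:
        REQUESTS.inc()
        time.sleep(random.uniform(0.1, 0.5))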
Quantive StrategyAI Gold (US): Quantive StrategyAI is an AI-powered tool that helps businesses plan, execute, and adapt their strategies quickly. It provides real-time insights, digital collaboration tools, and flexible executive dashboards to track business KPIs and goals.
Quantive StrategyAI Gold (UK): Quantive StrategyAI is an AI-powered tool that helps businesses plan, execute, and adapt their strategies quickly. It provides real-time insights, digital collaboration tools, and flexible executive dashboards to track business KPIs and goals. This offer is available in the United Kingdom.
Red Hat Enterprise Linux 8.10 with Trusted Launch: Red Hat Enterprise Linux includes built-in security features like SELinux and mandatory access controls. Configured by Ntegral, this Trusted Launch virtual machine helps protect against advanced attacks.
Redgate Flyway Enterprise: Flyway Enterprise simplifies and accelerates database delivery with automation, object-level version control, and flexible deployment options. It supports multiple database platforms and integrates with common CI and release tools.
Redgate Monitor Enterprise: Redgate Monitor enhances productivity and efficiency, simplifies collaboration, and boosts skills portability. It ensures operational continuity, reduces security risks, ensures compliance, and saves time on manual database tasks.
Redgate Test Data Manager: Redgate Test Data Manager streamlines the data provisioning workflow, enabling developers and testers to self-serve dedicated, compliant copies of production environments within seconds. It automates the delivery of high-quality test data as part of your CI/CD pipeline, and simplifies data security with automated data discovery, classification, and masking practices.
RH-CCP Compliant Red Hat Enterprise Linux 7 (RHEL 7): Foundation Security offers a preconfigured RHEL 7 virtual machine image with numerous security controls to meet the requirements of the Red Hat Common Criteria Profile. Foundation provides ongoing support to address security concerns and compliance queries.
RH CCP-Compliant Red Hat Enterprise Linux 8 (RHEL 8): Foundation Security offers a preconfigured RHEL 8 virtual machine image with numerous security controls to meet the requirements of the Red Hat Common Criteria Profile. Foundation provides ongoing support to address security concerns and compliance queries.
Rocky Linux 9.4 Generation 2 VM: Rinne Labs offers a lightweight and secure Rocky Linux 9.4 image built from the official ISO with only essential packages for optimal performance. The image is updated with the latest security patches and updates, making it ideal for rapid deployment of web applications, efficient development and testing environments, stable and secure server infrastructure, data analytics, and machine learning.
Rocky Linux 8.10 on Arm64 Architecture: Rocky Linux is a premier Linux distribution for enterprise cloud environments, offering additional security and compliance. Ntegral has packaged this image to work out of the box.
Smart RDM Data-Driven Decision Support System for Manufacturing: This Smart RDM offer delivers a decision support system that provides real-time insights and action recommendations for manufacturing processes. It analyzes past execution history and applies data analytics to suggest the best-performing solution scenarios. The system also supports what-if analysis and digital twin modeling so multiple scenarios can be tested in a safe environment.
Smart RDM Energy Efficiency Management for Manufacturing: ConnectPoint helps you optimize energy consumption in manufacturing processes through real-time data insights, predictive analytics, and a decision support system. Via this offer, you can use Smart RDM to calculate energy costs, set alarms for abnormal consumption, and maximize the use of renewable sources.
Standard System Security Compliant Red Hat Enterprise Linux 7 (RHEL 7): Foundation Security offers this RHEL 7 virtual machine image that incorporates industry-standard practices for system security. The preconfigured image reduces the complexity and time required for security implementation and is regularly updated to address evolving threats.
Standard System Security-Compliant Red Hat Enterprise Linux 8 (RHEL 8): Foundation Security offers this RHEL 8 virtual machine image with preconfigured security controls to establish a secure environment based on industry-standard practices. The image is regularly updated to address evolving threats and incorporates the latest security best practices.
STIG-Compliant Red Hat Enterprise Linux 8 (RHEL 8): Foundation Security offers this RHEL 8 virtual machine image fortified with hundreds of security controls to meet Department of Defense standards. Foundation provides ongoing support to address security concerns and compliance queries.
STIG-Compliant Red Hat Enterprise Linux 9 (RHEL 9): Foundation Security offers this RHEL 9 virtual machine image fortified with hundreds of security controls to meet Department of Defense standards. Foundation provides ongoing support to address security concerns and compliance queries.
STIG-Compliant Rocky Linux 8: Foundation Security’s Rocky Linux 8 virtual machine image is preconfigured with hundreds of security controls to meet Department of Defense standards. It reduces complexity and time required for security implementation and is regularly updated to address evolving threats and compliance requirements.
STIG-Compliant Rocky Linux 9: Foundation Security offers this Rocky Linux 9 virtual machine image fortified with hundreds of security controls to meet Department of Defense standards. Foundation provides ongoing support to address security concerns and compliance queries.
Sullexis Hierarchy Management Powered by LinqIQ: LinqIQ helps manage hierarchies for analytics, AI, and data management platforms. It creates and maintains hierarchies based on how businesses interact with customers, vendors, and partners. LinqIQ can be customized and connected to existing systems, with a user-friendly interface for easy adjustments. It can be implemented quickly on Microsoft Azure.
Webmin Server on Oracle Linux 9: Webmin Server provides a web-based interface that simplifies system administration tasks for IT professionals. It offers a comprehensive suite of tools and features for managing Unix-like systems, including user account management, software package installation, and system monitoring. With its intuitive interface and remote access capabilities, Webmin enhances productivity and reduces errors. Tidal Media has configured this image providing Webmin on Oracle Linux 9.
Webmin Server on Red Hat Enterprise Linux 9: Webmin Server provides a user-friendly interface for managing servers remotely. It offers powerful server management features, including user and group creation, network configuration, and database management. With Webmin, you can monitor system resources, configure firewalls, and manage configuration files to keep your server secure and performing optimally. Tidal Media has configured this image providing Webmin on Red Hat Enterprise Linux 9.
Go further with workshops, proofs of concept, and implementations
AltaML AI: 8-Week Proof of Concept: AltaML’s engagement includes ideation, feasibility assessment, and AI/ML model experimentation using Microsoft Azure AI. AltaML efficiently de-risks the ML process, hastens ROI realization, and supports informed decisions for full-scale deployment.
Application Modernization and Migration to Azure: Implementation: Click2Cloud offers migration and modernization services, evaluating existing custom applications and providing detailed information on migration costs to Microsoft Azure.
Database Migration to Azure: Implementation: Click2Cloud’s Database Migration Service helps you migrate, innovate, and modernize data using AI on Microsoft Azure. The solution ensures a seamless, efficient transition, eliminating physical infrastructure and end-of-life software issues. Key deliverables include a business value assessment report and proof of concept.
Migrate Legacy Data to Azure: 4-Week Proof of Concept: T-Systems Managed Application Retirement Services (M.A.R.S.) is a consultancy-to-cloud capability for sunsetting legacy applications. It enables the transfer of data to a single platform, eliminating business and security risks, and reducing overall costs.
Rackspace Managed XDR Powered by Microsoft Sentinel: Rackspace Managed XDR, built on Microsoft Sentinel, offers advanced threat detection capabilities, certified security analysts, and AI-assisted remediation for detection and response to cybersecurity threats across your digital estate. It integrates with over 300 security technologies and log sources, conducts proactive threat hunts, and speeds up containment and eradication of threats through cloud-native security orchestration and automated response.
Unisys Cloud Transformation: Implementation: Unisys Cloud Transformation offers a secure and phased approach to Azure migrations and modernization. Unisys begins with workshops to gather information about your business case and technical requirements. Experts design and build Azure target environments to host your applications.
Contact our partners
ACSC Essential Eight-Compliant Red Hat Enterprise Linux 9 (RHEL 9)
App and Infrastructure: 3-Day Assessment
BreachRisk Copilot for Security
Cognizant – Oracle Databases to Oracle Database@Azure: Migration
Copilot for Microsoft 365: 1-Week Assessment
GravityZone Small Business Security
HCLTech Cloud Security Foundation (CsaaS) for Azure
Imperium Co-Managed Service for Microsoft Dynamics 365 (SaaS)
Imperium Co-Managed Service for Microsoft Fabric (SaaS)
Infisical Secured and Supported by HOSSTED
Kelvin Autonomous Operations Software
Linux Stream 9 Minimal with iPerf3 Server
Linux Stream 9 with iPerf3 Server
Managed Service Provider (MSP) for Azure
Metric Insights BI Portal – Virtual Machine Image
Octave Immersive Data Service: 2- to 3-Week Assessment
PacketFabric Network Solutions
PositivityTech Financial Services Industry Benchmark Platform
Rocky 8.10 Generation 2 with Support by Rinne Labs
Rocky 8.6 Generation 2 with Support by Rinne Labs
Rocky Linux 8.10 Generation 2 with Support by Rinne Labs
Rocky Linux 8.6 Generation 2 with Support by Rinne Labs
Senseye Predictive Maintenance
STIG-Compliant Red Hat Enterprise Linux 7 (RHEL 7)
XENA VISION – Smart City Active Surveillance
Yobi Signal as a Service: Data Enrichment
ZingWorks Distribution Requirement Planning
This content was generated by Microsoft Azure OpenAI and then revised by human editors.
Microsoft Tech Community – Latest Blogs – Read More
How to use all cores for running a Simulink model?
Hi,
I modeled a thermal-fluid network in Simulink with Simscape modules and I want to use all cores in order to speed up my simulation.
I am using a workstation with 64 physical cores and 128 logical cores, but I get the same run time as when I run the model on my laptop with 6 physical cores and 12 logical cores. What should I do so that Simulink uses the full capacity of my workstation when running a model?
I would appreciate if you could help me.
MATLAB Answers — New Questions
MSSQL – availability group issues
Hello,
I have a cluster set up with an availability group, and a database was part of this AG. Something happened, and I can still see the database in the AG, but with an exclamation mark.
I tried to alter the availability group and remove the database, but I got an error saying the database is not part of the AG.
Can someone tell me how I can remove that database from the AG?
Thanks,
Daniel
Excel copy and paste error
Usually, when I copy something in Excel, it highlights the cell and keeps it highlighted until I finish pasting it in Excel or copy something else, even in another app or browser.
But now when I copy something from Excel, paste it into the browser (URL bar), copy something else from the opened page, and then try to paste the browser data back into Excel, the initially copied cell is still highlighted, and Excel pastes that cell's data instead of the browser data. I then have to paste the browser data from the clipboard.
Can someone confirm this issue or suggest a solution?