Category: Microsoft
Deleted Teams App still being looked for on startup
Hi Everyone, we have a puzzling one: we published and subsequently deleted an app from our Teams portal. All users now get a message saying that the app can’t be found, and we’re struggling with how to remedy this. The app is 100% gone from the portal. We have tried uninstalling and reinstalling Teams, including deleting the cache, but this has not helped at all. Has anyone any idea of where to look to resolve this one? I think there’s something somewhere in the Teams portal but can’t find anything.
Kind Regards
Pete
Public preview checkbox keeps unchecking
Hi 🙂
When I tick the ‘Public preview’ box within settings, it is not clear if this instantly changes to the preview version or if it needs a restart. In Classic Teams, enabling public preview used to relaunch the app.
In addition, the checkbox also regularly shows as unticked. I suspect this is happening after Teams updates (which it does a few times a day sometimes).
Does anyone else notice this? I would like to stay in preview mode the majority of the time, or at least until I choose to turn it off. With the current behaviour I am often taken out of it without being notified.
Thanks,
Ian
Azure Devops Teams Notification
Hello Team,
I recently added the Azure Repos app to my dev release channel. I configured “@Azure Repos subscriptions” and added a notification for when a pull request is created. Is there a way I can exclude draft pull requests from sending notifications?
ADODB performance problem
Hi All, (Win10, Win11)
To highlight a performance problem with ADODB/Access in a desktop (thick-client) application, I wrote two small programs, one in VB.NET (and VBA) and the other in C++/MFC, both using the same component (msado15.dll).
This piece of code does nothing but connect to two Access files (with a single table of 15,000 records) and, via a recordset, copy from one database to the other.
Under VB.NET, it takes 1 second.
Under C++, on a machine that has the Access 2013 x64 runtime, it takes 9 seconds (99% of the time is spent in AddNew()).
Under C++, on a machine that has the Access 2016 x64 runtime OR Office 365, it takes 20 minutes (again, 99% of the time is spent in AddNew()).
The target computer must have Office 365…
I need a solution…
PS: My two solutions (exe/cpp/…) are available… There is no sensitive data…
A short profiling run:
|Function name|Total [unit, %]|Self [unit, %]|Number of calls|Module|
|-|-|-|-|-|
|*+*D:\users\phili\Downloads\Test_ADO_Cpp_VB\test_minimal\x64\Release\test.exe (PID : )|115,73s (100,00 %)|0ns (0,00 %)|0|Multiple modules|
||*+*__report_gsfailure|115,73s (100,00 %)|9,20 µs (0,00 %)|1|test|
|||*+*__scrt_common_main_seh|115,73s (100,00 %)|13,20 µs (0,00 %)|1|test|
||||*+*main|115,73s (100,00 %)|654,83ms (0,57 %)|1|test|
|||||*-*[Native] msado15.dll!0x00007ffe8947bc40|104,82s (90,58 %)|104,82s (90,58 %)|6215|msado15| <= AddNew() !!!
|||||*-*[Native] ucrtbase.dll!0x00007fff373ccad0|9,31s (8,05 %)|9,31s (8,05 %)|1|ucrtbase|
|||||*-*ADODB::Connection15::Open|413,29ms (0,36 %)|8,80 µs (0,00 %)|2|test|
|||||*-*[Native] msado15.dll!0x00007ffe8947c010|101,83ms (0,09 %)|101,83ms (0,09 %)|2|msado15|
|||||*-*printf|96,90ms (0,08 %)|157,30 µs (0,00 %)|214|test|
|||||*-*[Native] msado15.dll!0x00007ffe8947d3d0|91,69ms (0,08 %)|91,69ms (0,08 %)|74568|msado15|
|||||*-*[Native] msado15.dll!0x00007ffe8947d380|50,46ms (0,04 %)|50,46ms (0,04 %)|74568|msado15|
|||||*-*[Native] msado15.dll!0x00007ffe894d8fc0|30,92ms (0,03 %)|30,92ms (0,03 %)|149136|msado15|
|||||*-*[Native] msado15.dll!0x00007ffe8947bfc0|28,68ms (0,02 %)|28,68ms (0,02 %)|6214|msado15|
|||||*-*[Native] combase.dll!0x00007fff39403f10|26,94ms (0,02 %)|26,94ms (0,02 %)|5|combase|
|||||*-*[Native] msado15.dll!0x00007ffe89513fc0|17,39ms (0,02 %)|17,39ms (0,02 %)|149137|msado15|
|||||*-*[Native] msado15.dll!0x00007ffe894c1480|15,11ms (0,01 %)|15,11ms (0,01 %)|1|msado15|
|||||*-*[Native] msado15.dll!0x00007ffe8947c230|14,46ms (0,01 %)|14,46ms (0,01 %)|149137|msado15|
|||||*-*[Native] msado15.dll!0x00007ffe8947d2b0|11,32ms (0,01 %)|11,32ms (0,01 %)|149136|msado15|
Regards
Adjacent column formulas not updated correctly on power query refresh
I have a query that reads a few csv files. It’s a simple query. There are a couple thousand rows, but it’s not complicated.
That data is loaded into a table.
I then added columns in excel, adjacent to the table with formula. Fairly straight forward.
Problem: When I update the query, new data comes in as expected, but the adjacent cells do not autofill down UNTIL you get to the last row of the new data, where (somehow) the formulas are filled across that row in all the adjacent columns. There’s literally a gap between the previous last row of formulas and the newly minted last row in the table.
I’ve tried several fixes; none have worked. Hoping someone can point me in the right direction. One thing I’m wondering is whether the formula structure is causing an issue. I am not using structured formulas, rather simply referencing cells.
MS Bookings Calendars being viewable in Outlook Shared Calendars
The trouble we have is that we are unable to see/share the Bookings calendars in our Outlook calendars with stakeholders and managers. All employees go through O365 to the Bookings page and enter the calendar from there to look at it. Being able to see the MS Bookings calendar where we usually select shared calendars, to view them side by side, would be a huge benefit.
Does anyone know if this is at all possible? Or is anyone else having this challenge?
Fraudulent use
Lately, I have been seeing email with my user name and an addition to it like below:
email address removed for privacy reasons email address removed for privacy reasons.S7S <email address removed for privacy reasons email address removed for privacy reasons.S7S>
I want to know how to stop this kind of garbage from sending me what can only be described as junk and phishing emails. I cannot block them because Outlook states I cannot block my own email address, even though it really isn’t my email.
Anyone else experiencing this kind of garbage?
Frustrated Long-Time User (If you couldn’t tell from my hotmail.com address),
Ray
Sharepoint Online – 4000 site permissions adjustment query
Hi everyone, I’m hoping for some recommendations/advice. We have around 4,000 SharePoint sites for Devices, each with a single document library named Projects containing some sub-folders, created using PnP site provisioning. Every site is identical.
The permissions are currently dynamic, whereby the Members group has a security group added that ensures all active licensed users can access.
We have a requirement to start provisioning new sites where two of the folders within the Projects library will have custom permissions, which means breaking inheritance and then adding another security group. However, from what I have read this is against best practice.
It also doesn’t help us, as this would then need doing on the 4,000 existing sites, and having that many sites to manage at a unique level seems very scary.
What can be done here to block access to a couple of folders at tenant level for so many sites without needing to break permission inheritance?
Or, if there is no way, would one alternative be to create a new master site where every “Device” is a document library in that site (around 4,000), with the two folders inside, and a security group assigned to the root members of that site, fixing the permission requirement?
Thank you,
Windows Insider Program Feedback Sign In
I am getting an error message “Can’t sign you in”, error code CAA20003, when accessing Windows Insider Program Feedback from Windows Update. I have Windows 11. I follow the prompt to check Outlook and acknowledge the request. This completes just fine. I also tried the Feedback Hub app from the website.
Copy existing formulas into Advanced Formula Environment module?
I haven’t found any guidance here or elsewhere on copying existing LAMBDA functions from a workbook into a module in AFE so that I can export it into a GitHub Gist for reuse in other workbooks.
I could copy & paste one at a time … but in some cases it would be onerous!
Situation: I have a few workbooks with several LAMBDA functions defined as “Names”. I can bring up the list in AFE:
The “duplicate” button creates the copy only in the workbook with no option to move it to a module. I can “paste” the definitions in “Formulas>Use In Formula>Paste Names” from the menu and paste them into the module text, but then lose the comments (“Creates string with…” in the image).
Am I missing something?
Thanks!
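For context (and as a possible stopgap), an AFE module and the Gist it exports are plain text: each name is defined as `name = LAMBDA(...);`, and comments live in the module source. So after pasting the definitions via “Paste Names”, the lost comments can be retyped by hand above each definition, e.g. (hypothetical function name):

```
// Creates string with the given separator (comment re-added by hand).
JOINWITH = LAMBDA(sep, a, b, a & sep & b);
```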
MVP’s Favorite Content: Surface, Azure, Microsoft AI
In this blog series dedicated to Microsoft’s technical articles, we’ll highlight our MVPs’ favorite article along with their personal insights.
SungKi Park, Windows and Devices MVP, Korea
Surface MVP showcase: Enabling commercial experiences on Surface – Microsoft Community Hub
“This joint blog article provides accurate product information about Microsoft Surface, which offers both AI PC and Copilot PC, as well as insights from five Surface MVPs.”
Relevant activities:
– Event/ FY24 Surface Partner Day: Post | LinkedIn
– Blog/ Surface Pro 10 for Business Product Review: “비즈니스용 서피스 프로10” AI PC 시대.. : 네이버블로그 (naver.com)
– Blog/ Surface Laptop 6 for Business Product Review: 비즈니스용 마이크로소프트 서피스 랩탑 6, A.. : 네이버블로그 (naver.com)
Hamid Sadeghpour Saleh, Microsoft Azure MVP, Azerbaijan
“The Well-Architected Framework is an important design framework to learn, to build an architectural mindset from, and to keep those best practices in your pocket!”
Mohamed Azarudeen Z, AI Platform MVP, India
Machine learning Archives | Microsoft AI Blogs
“The Microsoft AI Blogs are an invaluable resource offering the latest insights on the forefront of artificial intelligence advancements. Covering a diverse array of topics, they delve into the transformative power of AI across multiple industries and its seamless integration within Microsoft’s ecosystem of products and services. As an AI Platform MVP, I recommend them to everyone eager to learn AI.
The blog also explores the wider impact of AI on various industry landscapes, analyzing how this groundbreaking technology is revolutionizing business operations, fostering innovation, and significantly influencing individuals’ lives on a global scale.”
Tomoitsu Kusaba, Developer Technologies MVP, Japan
Generative AI for Beginners – Full Videos Series Released! (microsoft.com)
“This video provides a well-organized summary of what developers should learn about generative AI. By making it available in video format, it caters to those who prefer learning through text as well as those who prefer video. From an accessibility standpoint, this approach is excellent.”
Downloading Microsoft Store apps using Windows Package Manager
By: Carlos Britos and Jason Sandys – Principal Product Managers | Microsoft Intune
Offline apps is the last remaining significant function of the Microsoft Store for Business on its path to full retirement. Offline apps allows customers to download packaged apps from the Microsoft Store for Business or Education for distribution through alternate mechanisms like a Windows Provisioning Package.
With the impending retirement of the Microsoft Store for Business and Education on August 15, 2024, this offline apps functionality will also retire, but the ability to download and distribute packaged apps from the Microsoft Store to devices with restricted connectivity to the Microsoft Store remains. For this reason, starting with version 1.8, Windows Package Manager (WinGet) added the capability to download packages from the Microsoft Store. Unless explicitly disabled, all Windows devices will have automatically updated to this version already. To check the version running locally, you can run winget -v from a command prompt. For troubleshooting guidance, see Debugging and troubleshooting issues with the WinGet tool.
Keep in mind that just as with offline applications from the Microsoft Store for Business and Education, the download feature in WinGet is limited to packaged apps where the publisher has permitted offline licensing and distribution for organizations. This is controlled by the app publisher, not Microsoft. All unpackaged apps published to the Microsoft Store are available for download.
Also note, packaged apps include UWP apps packaged in the AppX format as well as apps packaged in the MSIX format. Unpackaged apps include all Win32 apps packaged in an alternate format such as MSI or EXE.
Downloading a Microsoft Store app using WinGet
Using the WinGet command line interface (CLI) to download an app from the Microsoft Store is straight-forward. The following example walks through the download of the Microsoft Remote Desktop app. This is an app published by Microsoft and allows offline downloads. For more information on any of the below steps or information related to the new download option, please refer to the WinGet download command documentation. Note that WinGet leverages Delivery Optimization to download apps from the Microsoft Store.
1. Locate the package you wish to download using the WinGet CLI. This step is optional if you already know the exact package name or ID of the desired package, in which case you can skip directly to step 2 below.
winget search "remote desktop" --source MSStore
2. Use the new download command along with the package ID previously returned. By default, files for the specified package are downloaded to the Downloads subfolder of the current user's profile folder. To override this location, use the -d or --download-directory option on the WinGet command line.
winget download --id 9WZDNCRFJ3PS
Note: You can limit the scope of the downloaded package using additional filtering options on the WinGet command line, e.g., use -a or --architecture to only download content related to a specific OS architecture.
3. Review the initial information shown and accept the agreements linked by pressing Y and then Enter. If the current account is not currently signed in to Microsoft Entra, you will be presented with a standard Entra ID authentication prompt and must successfully authenticate to proceed. Additionally, the account used requires one of the following roles:
Global Administrator
User Administrator
License Administrator
WinGet creates a new folder in the default or specified download folder, named for the package ID you specified, and proceeds to download the package and its dependencies to this subfolder. Additionally, WinGet retrieves a license for the package, as all packaged apps from the Microsoft Store require a license.
You can now deploy the downloaded package using your management tool of choice.
Installing a WinGet downloaded package in a Windows provisioning package
Using packages downloaded by WinGet within a Windows provisioning package allows you to install the downloaded apps while provisioning a Windows device for management by Microsoft Intune. To do this, follow these steps:
Download the Windows Configuration Designer (WCD) app from the Microsoft Store.
Launch WCD and choose the Provision desktop devices option on the Start page.
Provide a name and location for the project.
Provide information on the Set up device, Set up network, and Account Management pages as needed.
For the Add applications page, click Add an Application.
Provide the Application name, Installer path and License path for the application that you are adding.
Add all Required appx dependencies and click Add to finish. The following screenshot shows the completed Add applications page in WCD for Microsoft Remote Desktop including its x64 dependencies.
Complete the Add certificates page as needed and under the Finish step, select Create to complete the process.
Your provisioning package is now ready.
Installing a WinGet downloaded package using Intune
In general, we recommend using the built-in Intune functionality to distribute Microsoft Store apps to managed Windows devices. However, you can also use other device management tools to deploy packaged apps separately downloaded using WinGet download. Scenarios where you may consider this include the following:
Managed clients cannot access or are restricted from connecting to the Microsoft Store.
Strict app version control is required.
To use Intune for this, follow the steps at Add a Windows line-of-business app to Microsoft Intune. Note that managed Windows endpoints must be able to connect to the Microsoft license server to retrieve a license for any apps deployed this way as Intune has no built-in capability to do this. Additionally, Microsoft Store apps will automatically update from the Microsoft Store if devices have connectivity to the Microsoft Store and Automatic Store app updates is not disabled, regardless of the app deployment method.
By following these steps, you can effectively utilize WinGet and Intune to manage app deployments, ensuring all necessary licenses and dependencies are correctly handled. This approach facilitates a streamlined and controlled deployment process across managed Windows devices.
If you have any questions or feedback, leave a comment below or reach out to us on X @IntuneSuppTeam.
New to Powerpoint
How do I add narration to a PowerPoint presentation and have the slide show play like a movie? Additionally, the movie should last 60 seconds or less. Is there a way to ensure this with PowerPoint? Any help you can give me would be greatly appreciated. Thank you.
Field Type Modification
I performed an import of Excel data into a SharePoint list. I brought the Excel (ID) field into the Title column. I need this to be sortable, so I created a numeric column (IDMaster) with the values from the Title column; there are over 1000 records. How do I format IDMaster so that it does not include a comma (i.e., 1,000 -> 1000)?
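One possible workaround (a sketch, assuming SharePoint Online and that JSON column formatting is acceptable here): applying a column format that renders the raw field value typically drops the locale thousands separator from a Number column.

```json
{
  "$schema": "https://developer.microsoft.com/json-schemas/sp/v2/column-formatting.schema.json",
  "elmType": "div",
  "txtContent": "@currentField"
}
```

This is applied via Column settings > Format this column > Advanced mode on the IDMaster column.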
Editing the Pinned Section in my Viva Engage Community
I am trying to re-sort & edit the Pinned Section of my Viva Engage page/community. I can add links & delete links, but cannot change the order of all the links on the page for some reason. Does anyone have a solution besides a complete manual re-do of the list?
Outlook contact lists are not syncing on mobile outlook app (iOS)
Hello All,
Outlook contact lists which I created on my desktop are not syncing to my Outlook mobile app. Individual contacts are syncing without any issues, but not the contact lists. I see many people have reported the same issue. I would like to know whether this is actually a bug or whether it was designed that way.
Linked Server to Excel Spreadsheet
I have a perplexing problem which I suspect is permission related.
I have created a Linked Server in SSMS (SQL Express) that connects to an Excel Document on a Network Share. A stored procedure (spUpdateProducts) uses this Linked Server to Merge data into an existing table.
When I execute the stored procedure in SSMS, it works correctly (domain admin).
Similarly, when I execute a script that holds a SQLCMD from the server, that also executes correctly.
SQLCMD -S server\SQLEXPRESS -E -d OutEnd24 -Q "EXEC [dbo].[spUploadProducts]"
However, if I try and execute that as a scheduled task the sproc does not appear to run (or runs and fails to initialise the Linked Server – see below).
Similarly, if I try running the SQL CMD from my client laptop, I get the error:
Msg 7303, Level 16, State 1, Server *****\SQLEXPRESS, Procedure spUploadProducts, Line 22
Cannot initialize the data source object of OLE DB provider “Microsoft.ACE.OLEDB.12.0” for linked server “OUTEND DATA”.
OLE DB provider “Microsoft.ACE.OLEDB.12.0” for linked server “OUTEND DATA” returned message “Unspecified error”.
My User account on the laptop has permission to execute the sproc, but it appears the Linked Server is unable to access the file in this case (which I suspect is also what is happening in the scheduled task).
I have set the advanced option in SQL Server to allow Ad Hoc Distributed Queries.
The SQL Server instance is running under NT Service\MSSQL$SQLEXPRESS
I have granted Full Control on the network share and file path to Domain\ServerName$
I have tried experimenting with various combinations of login mapping with the Linked Server, but either I get a user not recognized error, or the same failure as above.
Any thoughts appreciated.
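For what it’s worth, two things often explain exactly this pattern (works locally, fails remotely or as a scheduled task). First, when the call comes from another machine, the Kerberos double-hop problem can leave the linked server accessing the share without the caller’s credentials. Second, the ACE provider frequently needs to be allowed to run in-process. A commonly cited mitigation is below (a sketch; this changes provider settings instance-wide, so verify in a test environment first):

```sql
-- Run as sysadmin on the SQL Express instance; affects every linked server
-- that uses the ACE provider.
EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', N'AllowInProcess', 1;
EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', N'DynamicParameters', 1;
```

When the provider runs in-process, file access happens under the service account (NT Service\MSSQL$SQLEXPRESS), so the share and NTFS permissions for that account, not the caller, are what matter.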
SharePoint List – Select from a Choice Field and Display an Image
I am attempting to set a column in a Project Tracker List to allow three choices, Red, Amber and Green. Depending on which status the user selects, I am trying to display an image, either a red circle (image1.jpg), an amber circle (image2.jpg) or a green circle (image3.jpg). I am not certain how to have a Choice column return an image.
Thanks,
John
Speech Recognition for Alphanumeric
Hi,
I am using Azure Communication Services with Cognitive Services for handling voice call scenarios (STT and TTS). One of our customer use cases requires alphanumeric input in a workflow. The Azure Speech recognizer performs well for numbers and other patterns. However, when the user spells out letters for alphanumeric values, the recognition success rate is very low.
For example, the product ID pattern is like “P-43246”. In most cases, “P” is recognized as “D”, “B”, or “3”.
I have tested this on both mobile phone networks and VoIP. The success rate is significantly lower on mobile networks.
Are there any settings available to improve the recognition success rate?
Azure Services used:
ACS Phone Number
Azure Cognitive Service
Event Grid Subscriptions
Thanks,
Aravind
Optimizing Query Performance with Work_Mem
work_mem plays a crucial role in optimizing query performance in Azure Database for PostgreSQL. By allocating sufficient memory for sorting, hashing, and other internal operations, you can improve overall database performance and responsiveness, especially under heavy load or in complex query scenarios. Fine-tuning work_mem based on workload characteristics is key to achieving optimal performance in your PostgreSQL environment.
Understanding work_mem
Purpose:
Memory for Operations: work_mem sets the maximum amount of memory that can be used by operations such as sorting, hashing, and joins before PostgreSQL writes data to temporary disk files. This includes operations to accomplish:
ORDER BY: Sort nodes are introduced in the plan when ordering cannot be satisfied by an index.
DISTINCT and GROUP BY: These can introduce Aggregate nodes with a hashing strategy, which require memory to build hash tables, and potentially Sort nodes when the Aggregate is parallelized.
Merge Joins: When sorting of one or both of the relations being joined is not satisfied via indexes.
Hash Joins: To build hash tables.
Nested Loop Joins: When memoize nodes are introduced in the plan because the estimated number of duplicates is high enough that caching results of lookups is estimated to be cheaper than doing the lookups again.
Default Value: The default work_mem value is 4 MB (or 4096 KB). This means that any operation can use up to 4 MB of memory. If the operation requires more memory, it will write data to temporary disk files, which can significantly slow down query performance.
Concurrent Operations:
Multiple Operations: A single complex query may involve several sorts or hash operations that run in parallel. Each operation can utilize the work_mem allocated, potentially leading to high total memory consumption if multiple operations are occurring simultaneously.
Multiple Sessions: If there are several active sessions, each can also use up to the work_mem value for their operations, which further increases memory usage. For example, if you set work_mem to 10 MB and have 100 concurrent connections, the total potential memory usage for sorting and hashing operations could reach 1,000 MB (or 1 GB).
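The worst-case arithmetic above can be sketched as a small helper. This is illustrative only; the function name and the numbers (10 MB, 100 connections) are taken from the example in the text, not from any PostgreSQL API:

```python
# Rough worst-case estimate of the memory that work_mem allocations could
# consume across all active sessions. Each connection may run a query whose
# plan contains several sort/hash nodes, and each node can use up to work_mem.
def worst_case_work_mem_usage_mb(work_mem_mb: float, max_connections: int,
                                 ops_per_query: int = 1) -> float:
    return work_mem_mb * max_connections * ops_per_query

# The example from the text: 10 MB work_mem, 100 concurrent connections.
print(worst_case_work_mem_usage_mb(10, 100))  # 1000 MB, i.e. ~1 GB
```

In practice not every session sorts or hashes at the same moment, so real usage is lower, but this bound is what protects you from over-committing RAM.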
Impact of Disk Usage:
Spilling to Disk: When the memory allocated for an operation exceeds work_mem, PostgreSQL writes data to temporary files on disk. Disk I/O is significantly slower than memory access, which can lead to degraded performance. Therefore, optimizing work_mem is crucial to minimize disk spills.
Disk Space Considerations: Excessive disk spills can also lead to increased disk space usage, particularly for large queries, which may affect overall database performance and health.
Hash Operations:
Sensitivity to Memory: Hash-based operations (e.g., hash joins, hash aggregates) are particularly sensitive to memory availability. PostgreSQL can use a hash_mem_multiplier to allow these operations to use more memory than specified by work_mem. This multiplier can be adjusted to allocate a higher memory limit for hash operations when needed.
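As a minimal sketch of the multiplier in action (hash_mem_multiplier is available in PostgreSQL 13 and later; the values here are illustrative):

```sql
-- Give hash-based nodes headroom beyond work_mem for this session only.
SET work_mem = '8MB';
SET hash_mem_multiplier = 2.0;  -- hash tables may now use up to ~16 MB
SHOW hash_mem_multiplier;
```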
Adjusting work_mem at Different Levels
Server Parameter:
Affects all connections unless overridden.
Configured globally via REST APIs, the Azure CLI, or the Azure portal. For more information, read Server parameters in Azure Database for PostgreSQL – Flexible Server.
Session Level:
Adjusted using SET work_mem = '32MB';
Affects only the current session.
Reverts to default after the session ends.
Useful for optimizing specific queries.
Role or user level:
Set using ALTER ROLE username SET work_mem = '16MB';
Applied automatically upon user login.
Tailors settings to user-specific workloads.
Database Level:
Set using ALTER DATABASE dbname SET work_mem = '20MB';
Affects all connections to the specified database.
Function, Procedure Level:
Adjusted within a stored procedure/function using SET work_mem = '64MB';
Valid for the duration of the procedure/function execution.
Allows fine-tuning of memory settings based on specific operations.
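The four scopes above can be sketched together as follows. The role, database, and function names are placeholders, not objects from this article:

```sql
-- Session level: affects only the current connection.
SET work_mem = '32MB';

-- Role level: applied automatically when this user logs in.
ALTER ROLE reporting_user SET work_mem = '16MB';

-- Database level: affects all connections to this database.
ALTER DATABASE analytics SET work_mem = '20MB';

-- Function level: valid only while the function executes.
CREATE OR REPLACE FUNCTION refresh_report() RETURNS void AS $$
BEGIN
    PERFORM 1;  -- placeholder for a large sort/aggregate
END;
$$ LANGUAGE plpgsql SET work_mem = '64MB';
```

A more specific scope wins over a more general one: a session-level SET overrides the role, database, and server settings for that connection.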
Server Parameter: work_mem
The formula provided, work_mem = Total RAM / Max Connections / 16, is a guideline to ensure that the memory is distributed effectively without over-committing resources. Refer to the official Microsoft documentation on managing high memory utilization in Azure Database for PostgreSQL here.
Breaking Down the Formula
Total RAM:
This is the total physical memory available on your PostgreSQL server. It’s the starting point for calculating memory allocation for various PostgreSQL operations.
Max Connections:
This is the maximum number of concurrent database connections allowed. PostgreSQL needs to ensure that each connection can operate efficiently without causing the system to run out of memory.
Division by 16:
The factor of 16 is a conservative estimate to prevent overallocation of memory. This buffer accounts for other memory needs of PostgreSQL and the operating system.
If your server has a significant amount of RAM and you are confident that other memory requirements (e.g., operating system, cache, other processes) are sufficiently covered, you might reduce the divisor (e.g., to 8 or 4) to allocate more memory per operation.
Analytical workloads often involve complex queries with large sorts and joins. For such workloads, increasing work_mem by reducing the divisor can improve query performance significantly.
Step-by-Step Calculation of work_mem
Total RAM:
The server has 512 GB of RAM.
Convert 512 GB to MB: 512 * 1024 = 524,288 MB
Max Connections:
The server allows up to 2000 maximum connections.
Base Memory Per Connection:
Divide the total RAM by the number of connections: 524,288 / 2000 = 262.144 MB
Apply the Conservative Factor (Divide by 16):
262.144 / 16 = 16.384 MB
One should set work_mem to approximately 16 MB (rounded from 16.384 MB).
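The calculation above can be captured in a small helper, useful for trying different divisors. This is a sketch of the guideline formula, not an official sizing tool:

```python
def suggested_work_mem_mb(total_ram_gb: float, max_connections: int,
                          divisor: int = 16) -> float:
    """work_mem guideline: Total RAM / Max Connections / divisor, in MB."""
    total_ram_mb = total_ram_gb * 1024
    return total_ram_mb / max_connections / divisor

# The worked example: 512 GB RAM, 2000 max connections.
print(suggested_work_mem_mb(512, 2000))            # 16.384 -> round to ~16 MB
print(suggested_work_mem_mb(512, 2000, divisor=8)) # 32.768, a more aggressive setting
```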
If you need help setting up server parameters or require more information, refer to the official documentation at Azure PostgreSQL Flexible Server Server Parameters. This resource provides comprehensive insights into the server parameters and their configurations.
Query Execution with EXPLAIN ANALYZE
Fine-Tune work_mem with EXPLAIN ANALYZE
To determine the optimal work_mem value for your query, you’ll need to analyze the EXPLAIN ANALYZE output to understand how much memory the query is using and where it is spilling to disk. Here’s a step-by-step guide to help you:
Execute the query with EXPLAIN ANALYZE to get detailed execution statistics:
EXPLAIN (ANALYZE, BUFFERS)
SELECT
*
FROM DataForWorkMem
WHERE time BETWEEN '2006-01-01 05:00:00+00' AND '2006-03-31 05:10:00+00'
ORDER BY name;
Analyze the Output
Look for the following details in the output:
Sort Operation: Check if there is a Sort operation and whether it mentions "external sort" or "external merge". This indicates that the sort operation used more memory than allocated in work_mem and had to spill to disk.
Buffers Section: The Buffers section shows the amount of data read from and written to disk. High values here may indicate that increasing work_mem could reduce the amount of data spilled to disk.
Here is the output generated by the above query:
Gather Merge  (cost=8130281.85..8849949.13 rows=6168146 width=47) (actual time=2313.021..3848.958 rows=6564864 loops=1)
  Workers Planned: 2
  Workers Launched: 1
  Buffers: shared hit=72278, temp read=97446 written=97605
  ->  Sort  (cost=8129281.82..8136992.01 rows=3084073 width=47) (actual time=2296.884..2726.374 rows=3282432 loops=2)
        Sort Key: name
        Sort Method: external merge  Disk: 193200kB
        Buffers: shared hit=72278, temp read=97446 written=97605
        Worker 0:  Sort Method: external merge  Disk: 196624kB
        ->  Parallel Bitmap Heap Scan on dataforworkmem  (cost=88784.77..7661339.18 rows=3084073 width=47) (actual time=206.138..739.962 rows=3282432 loops=2)
              Recheck Cond: (("time" >= '2006-01-01 05:00:00+00'::timestamp with time zone) AND ("time" <= '2006-03-31 05:10:00+00'::timestamp with time zone))
              Rows Removed by Index Recheck: 62934
              Heap Blocks: exact=15199 lossy=17800
              Buffers: shared hit=72236
              ->  Bitmap Index Scan on dataforworkmem_time_idx  (cost=0.00..86934.32 rows=7401775 width=0) (actual time=203.416..203.417 rows=6564864 loops=1)
                    Index Cond: (("time" >= '2006-01-01 05:00:00+00'::timestamp with time zone) AND ("time" <= '2006-03-31 05:10:00+00'::timestamp with time zone))
                    Buffers: shared hit=5702
Planning:
  Buffers: shared hit=5
Planning Time: 0.129 ms
Execution Time: 4169.774 ms
Let’s break down the details from the execution plan:
Gather Merge
Purpose: Gather Merge is used to combine results from parallel workers. It performs an order-preserving merge of the results produced by each of its child node instances.
Cost and Rows:
Planned Cost: 8130281.85..8849949.13
This is the estimated cost of the operation.
Planned Rows: 6168146
This is the estimated number of rows to be returned.
Actual Time: 2313.021..3848.958
The actual time taken for the Gather Merge operation.
Actual Rows: 6564864
The actual number of rows returned.
Workers:
Planned: 2
The planned number of parallel workers for this operation.
Launched: 1
The number of workers that were actually used.
Buffers
Shared Hit: 72278
The number of buffer hits in shared buffers.
Temp Read: 97446
The amount of temporary disk space read.
Approximately 761.3 MB (97,446 blocks × 8 KB per block)
Temp Written: 97605
The amount of temporary disk space written.
Approximately 762.5 MB (97,605 blocks × 8 KB per block)
Sort Node
Sort:
Cost: 8129281.82..8136992.01
The estimated cost for the sorting operation includes both the startup cost and the cost of retrieving all available rows from the operator.
The startup cost represents the estimated time required to begin the output phase, such as the time needed to perform the sorting in a sort node.
Rows: 3084073
The estimated number of rows returned.
Actual Time: 2296.884..2726.374
The actual time taken for the sorting operation.
The first number represents the startup time for the operator, i.e., the time it took to begin executing this part of the plan. The second number represents the total time elapsed from the start of the execution of the plan to the completion of this operation. The difference between these two values is the actual duration that this operation took to complete.
Actual Rows: 3282432
The actual number of rows returned.
Sort Method
External Merge:
This indicates that an external merge sort was used, meaning that the sort could not be handled entirely in memory and required temporary files.
Disk:
Main Process: 193200 kB
The amount of disk space used by the main process for sorting.
Worker 0: 196624 kB
The amount of disk space used by the worker process for sorting.
To optimize PostgreSQL query performance and avoid disk spills, set the work_mem to cover the total memory usage observed during sorting:
Main Process Memory Usage: 193200 kB
Worker Memory Usage: 196624 kB
Total Memory Required: 389824 kB (approximately 380 MB)
Recommended work_mem Setting: 380 MB
This setting ensures that the sort operation can be performed entirely in memory, improving query performance and avoiding disk spills.
Increasing work_mem to 380 MB at the session level resolved the issue. The execution plan confirms that this memory allocation is now adequate for your sorting operations. The absence of temporary read/write stats in the Buffers section suggests that sorting is being managed entirely in memory, which is a favorable result.
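The session-level change can be sketched as follows, reusing the example query from earlier (RESET is optional; the setting would otherwise revert when the session ends):

```sql
-- Raise work_mem for the current session only, then re-check the plan.
SET work_mem = '380MB';

EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM DataForWorkMem
WHERE time BETWEEN '2006-01-01 05:00:00+00' AND '2006-03-31 05:10:00+00'
ORDER BY name;

RESET work_mem;  -- return to the server default afterwards
```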
Here is the updated execution plan:
Gather Merge  (cost=4944657.91..5664325.19 rows=6168146 width=47) (actual time=1213.740..2170.445 rows=6564864 loops=1)
  Workers Planned: 2
  Workers Launched: 1
  Buffers: shared hit=72244
  ->  Sort  (cost=4943657.89..4951368.07 rows=3084073 width=47) (actual time=1207.758..1357.753 rows=3282432 loops=2)
        Sort Key: name
        Sort Method: quicksort  Memory: 345741kB
        Buffers: shared hit=72244
        Worker 0:  Sort Method: quicksort  Memory: 327233kB
        ->  Parallel Bitmap Heap Scan on dataforworkmem  (cost=88784.77..4611250.25 rows=3084073 width=47) (actual time=238.881..661.863 rows=3282432 loops=2)
              Recheck Cond: (("time" >= '2006-01-01 05:00:00+00'::timestamp with time zone) AND ("time" <= '2006-03-31 05:10:00+00'::timestamp with time zone))
              Heap Blocks: exact=34572
              Buffers: shared hit=72236
              ->  Bitmap Index Scan on dataforworkmem_time_idx  (cost=0.00..86934.32 rows=7401775 width=0) (actual time=230.774..230.775 rows=6564864 loops=1)
                    Index Cond: (("time" >= '2006-01-01 05:00:00+00'::timestamp with time zone) AND ("time" <= '2006-03-31 05:10:00+00'::timestamp with time zone))
                    Buffers: shared hit=5702
Planning:
  Buffers: shared hit=5
Planning Time: 0.119 ms
Execution Time: 2456.604 ms
It confirms that:
Sort Method: "quicksort" (an in-memory method) instead of "external merge."
Memory Usage: The allocated work_mem (380 MB) is used efficiently.
Execution Time: Decreased to 2456.604 ms from 4169.774 ms.
Adjusting work_mem Using pg_stat_statements Data
To estimate the memory needed for a query based on the temp_blks_read column from PostgreSQL's pg_stat_statements view, you can follow these steps:
Get the Block Size:
PostgreSQL uses a default block size of 8KB. You can verify this by running:
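On most builds this reports 8192 bytes:

```sql
SHOW block_size;
```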
Calculate Total Temporary Block Usage:
Sum the temp_blks_read to get the total number of temporary blocks used by the query.
Convert Blocks to Bytes:
Multiply the total temporary blocks by the block size (usually 8192 bytes) to get the total temporary data in bytes.
Convert Bytes to a Human-Readable Format:
Convert the bytes to megabytes (MB) or gigabytes (GB) as needed.
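The three conversion steps can be sketched as a small helper (the function name is an illustration; only the block size and the arithmetic come from the text):

```python
BLOCK_SIZE = 8192  # bytes; verify with SHOW block_size;

def temp_blocks_to_mb(temp_blks_read: int, block_size: int = BLOCK_SIZE) -> float:
    """Convert a temp block count from pg_stat_statements into megabytes."""
    total_bytes = temp_blks_read * block_size
    return total_bytes / (1024 * 1024)

# The example calculation from below: 5000 temp blocks read.
print(round(temp_blocks_to_mb(5000), 2))  # 39.06
```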
To identify queries that might benefit from an increased work_mem setting, use the following query to retrieve key performance metrics from PostgreSQL’s pg_stat_statements view:
SELECT
query,
calls,
total_exec_time AS total_time,
mean_exec_time AS mean_time,
stddev_exec_time AS stddev_time,
rows,
local_blks_written,
temp_blks_read,
temp_blks_written,
blk_read_time,
blk_write_time
FROM
pg_stat_statements
ORDER BY
total_exec_time DESC
LIMIT 10;
Example Calculation
Suppose we have the following values from the pg_stat_statements:
temp_blks_read: 5000
block_size: 8192 bytes
Calculation:
Total Temporary Data (bytes)= 5000 × 8192 = 40,960,000 bytes
Total Temporary Data (MB) = 40,960,000 / (1024 × 1024) = 39.06 MB
This estimate indicates that to keep operations in memory and avoid temporary disk storage, work_mem should ideally be set to a value higher than 39 MB.
Here is a query that provides the total amount of temporary data in megabytes for each query recorded in pg_stat_statements. This information can help identify which queries might benefit from an increase in work_mem to potentially improve performance by reducing temporary disk usage.
SELECT
query,
total_temp_data_bytes / (1024 * 1024) AS total_temp_data_mb
FROM
(
SELECT
query,
temp_blks_read * 8192 AS total_temp_data_bytes
FROM pg_stat_statements
) sub;
Using Query Store to Determine work_mem
Query Store in Azure Database for PostgreSQL is a powerful feature designed to provide insights into query performance, identify bottlenecks, and monitor execution patterns.
Here is how to use Query Store to analyze query performance and estimate the disk storage space required for temporary blocks read (temp_blks_read).
Analyzing Query Performance with Query Store
To analyze query performance, Query Store offers execution statistics, including temp_blks_read, which indicates the number of temporary disk blocks read by a query. Temporary blocks are used when query results or intermediate results exceed available memory.
Retrieving Average Temporary Blocks Read
Use the following SQL query to get the average temp_blks_read for individual queries:
SELECT
query_id,
AVG(temp_blks_read) AS avg_temp_blks_read
FROM query_store.qs_view
GROUP BY query_id;
This query calculates the average temp_blks_read for each query. For example, if query_id 378722 shows an average temp_blks_read of 87,348, this figure helps understand temporary storage usage.
Estimating Disk Storage Space Required
Estimate disk storage based on temp_blks_read to gauge temporary storage impact:
Know the Block Size: PostgreSQL’s default block size is 8 KB.
Calculate Disk Space in Bytes: Multiply the average temp_blks_read by the block size:
Space (bytes) = avg_temp_blks_read × Block Size (bytes)
Space (bytes) = 87,348 × 8192 = 715,554,816 bytes
Convert Bytes to Megabytes (MB):
Space (MB) = 715,554,816 / (1024 × 1024) ≈ 682 MB
Consider adjusting work_mem at the session level or within stored procedures/functions to optimize performance.
Query Store is an invaluable tool for analyzing and optimizing query performance in PostgreSQL. By examining metrics like temp_blks_read, you can gain insights into query behavior and estimate the disk storage required. This knowledge enables better resource management, performance tuning, and cost control, ultimately leading to a more efficient and reliable database environment.
Best Practices for Setting work_mem
Monitor and Adjust: Regularly monitor the database’s performance and memory usage. Tools like pg_stat_statements and pg_stat_activity can provide insights into how queries are using memory.
Incremental Changes: Adjust work_mem incrementally and observe the impact on performance and resource usage. Make small adjustments and evaluate their effects before making further changes.
Set Appropriately for Workloads: Tailor work_mem settings based on the types of queries and workloads running on your database. For example, batch operations or large sorts might need higher settings compared to simple, small queries.
Consider Total Memory: Calculate the total memory usage, considering the number of concurrent connections and operations, to ensure it does not exceed available physical RAM.
Balancing work_mem involves understanding your workload, monitoring performance, and adjusting settings to optimize both memory usage and query performance.
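As a starting point for the monitoring recommended above, a query along these lines surfaces the statements with the heaviest temporary-file traffic (requires the pg_stat_statements extension; column names are from that view, the derived temp_mb column is illustrative):

```sql
-- Statements with the most temporary block I/O: prime candidates
-- for a higher work_mem at the session, role, or function level.
SELECT query,
       calls,
       temp_blks_read,
       temp_blks_written,
       (temp_blks_read + temp_blks_written) * 8192 / (1024 * 1024) AS temp_mb
FROM pg_stat_statements
WHERE temp_blks_read + temp_blks_written > 0
ORDER BY temp_blks_read + temp_blks_written DESC
LIMIT 10;
```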
Microsoft Tech Community – Latest Blogs –Read More