Category: News
The name of the target entity is incorrect. Unable to generate SSPI context
Hi
when my .NET Framework app tries to connect to SQL Server, this error occurs:
The name of the target entity is incorrect. Unable to generate SSPI context
How do I resolve this?
Due time on a task
Good afternoon, I would like to know whether Microsoft To Do has the option to add a due time to a task; if the answer is no, will that option come in future updates? Many tasks have hard delivery deadlines, and this would help manage that requirement. The question applies to the different versions of Microsoft To Do (Education, Business, and Home).
Best regards, Francisco Gutiérrez
Excel Rounding
In Excel, I have to multiply hours by a bill rate. I need the result to round down when the third decimal digit is 5 or less, and round up to the next digit when the third decimal digit is higher than 5.
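As a hedged illustration of the rounding rule being asked about (a Python sketch, not an answer from the original thread; `round_bill` is a hypothetical helper name), the standard `decimal` module can express it directly: inspect the third decimal digit and round down when it is 5 or less, up when it is higher.

```python
from decimal import Decimal, ROUND_DOWN, ROUND_UP

def round_bill(amount):
    """Round to 2 decimals: keep the value down when the third
    decimal digit is 5 or less, round up when it is higher than 5."""
    third = int(amount * 1000) % 10               # third decimal digit
    mode = ROUND_UP if third > 5 else ROUND_DOWN
    return amount.quantize(Decimal("0.01"), rounding=mode)

print(round_bill(Decimal("3") * Decimal("0.665")))  # 1.995 -> third digit is 5 -> 1.99
print(round_bill(Decimal("1.996")))                 # third digit is 6 -> 2.00
```

In Excel itself, one possible equivalent (an untested assumption, using only standard functions) would be `=IF(MOD(TRUNC(A1*1000),10)>5, ROUNDUP(A1,2), ROUNDDOWN(A1,2))`.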
Run untrusted content safely with Windows Sandbox
As a developer, your work often involves experimenting with various libraries, frameworks, tools and sometimes testing unknown files or executables. But let’s face it – accessing unfamiliar files or repos can sometimes feel like tiptoeing through a minefield. You do not know if they are safe or potential malware. What if I told you there’s a way to explore new files without risking your host OS!
Windows Sandbox (WSB) provides a lightweight desktop environment to safely run applications in isolation from the host OS. Think of it as your digital playground – a safe, isolated environment where you can test and debug apps, explore unknown files, or experiment with tools without risking your host OS. A Windows Sandbox is disposable. When it’s closed, all the software and files and the state are deleted. You get a brand-new instance of the sandbox every time you open the application.
How can you view or run untrusted content using Windows Sandbox?
First, refer to the instructions provided in our documentation to determine if your device meets the requirements and learn how to install Windows Sandbox.
There are multiple ways to share files between the host and the sandbox:
Option A – Drag and Drop files: Launch ‘Windows Sandbox’ by locating and selecting ‘Windows Sandbox’ on the Start menu or searching for ‘Windows Sandbox’. With Clipboard redirection enabled by default, you can easily copy files from the host and paste them into the Windows Sandbox window. This is the simplest way to view your untrusted files and apps in your sandbox. This approach makes a copy within Sandbox, which can take a while depending on the size of the folder.
Option B – Map folders before launching Sandbox: Create a folder, say ‘sandbox-assets’, on your host OS containing all the files to be tested or viewed in Windows Sandbox. Any files or tools that you will need in the sandbox must be placed in this folder before launching the sandbox. You will then use a configuration file to map the ‘sandbox-assets’ folder on your host to the ‘sandbox-assets’ folder in the sandbox.
The configuration file below shows how to share a folder from the host desktop to the sandbox desktop. In this example the file is shared with read-only permissions. Windows Sandbox will not be able to write to the folder, providing an additional layer of security.
<Configuration>
<MappedFolders>
<MappedFolder>
<HostFolder>%USERPROFILE%\Desktop\sandbox-assets</HostFolder>
<SandboxFolder>%USERPROFILE%\Desktop\sandbox-assets</SandboxFolder>
<ReadOnly>true</ReadOnly>
</MappedFolder>
</MappedFolders>
</Configuration>
Save the config file with a .wsb extension. To use the configuration file, double-click it to launch your custom configured Windows Sandbox. This should launch a sandbox with the folder ‘sandbox-assets’ with read-only access on the desktop with all the files you pasted.
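If you create configuration files like the one above regularly, generating them from a script avoids path typos. Below is a hedged Python sketch (`make_wsb` is a hypothetical helper, not an official tool) that emits a minimal configuration with one read-only mapped folder:

```python
import xml.etree.ElementTree as ET

def make_wsb(host_folder, sandbox_folder, read_only=True):
    """Build a minimal Windows Sandbox (.wsb) configuration document."""
    config = ET.Element("Configuration")
    folders = ET.SubElement(config, "MappedFolders")
    folder = ET.SubElement(folders, "MappedFolder")
    ET.SubElement(folder, "HostFolder").text = host_folder
    ET.SubElement(folder, "SandboxFolder").text = sandbox_folder
    ET.SubElement(folder, "ReadOnly").text = "true" if read_only else "false"
    return ET.tostring(config, encoding="unicode")

# Example paths are illustrative; adjust to your own folder layout.
xml_text = make_wsb(r"%USERPROFILE%\Desktop\sandbox-assets",
                    r"%USERPROFILE%\Desktop\sandbox-assets")
print(xml_text)
```

Save the printed XML with a .wsb extension and double-click it to launch the configured sandbox, as described above.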
When you’re finished experimenting, close the sandbox. A dialog box will prompt you to confirm the deletion of all sandbox content. Select “Ok” to confirm.
Learn more about Windows Sandbox and provide feedback
To learn more about Windows Sandbox and its functionality, check out our documentation.
Check out our Windows Sandbox GitHub repo to share your projects that leverage Windows Sandbox, file feature requests, or report issues.
You can also file a bug in Feedback Hub. There is a dedicated option in Feedback Hub to file “Windows Sandbox” bugs and feedback. It is located under “Security and Privacy” subcategory “Windows Sandbox”.
We look forward to you using this feature and receiving your feedback!
Microsoft Tech Community – Latest Blogs –Read More
Announcing general availability for FSLogix 2210 hotfix 4!
FSLogix 2210 hotfix 4
This hotfix, along with the updates from hotfix 3, addresses a wide range of issues associated with the new Microsoft Teams. We wish to express our gratitude to the 30+ customers and partners whose crucial involvement in our validation process was essential to the discoveries and solutions in this release. Additionally, we are reintroducing a previously released and highly requested feature: asynchronous Group Policy processing!
What’s new
Group Policy processing can now occur asynchronously for users during sign-in.
MSIX folders under %LocalAppData%\Packages\<package-name> will automatically get created when an ODFC container is created (new or reset container).
New Microsoft Teams data located in %LocalAppData%\Publishers\8wekyb3d8bbwe\TeamsSharedConfig will now roam with the ODFC container.
Fixed issues
Windows Server 2019 would sometimes fail to query the provisioned AppX applications for the user during sign-out.
MSIX folders that should not be backed up, would be removed during sign-out instead of only removing the contents of those folders.
New Microsoft Teams crashes or fails to start in Windows Server 2019.
New Microsoft Teams would display the error “The parameter is incorrect” during launch.
New Microsoft Teams would display the error “Invalid function” during launch.
New Microsoft Teams would not on-demand register during sign-in when using the ODFC container.
New Microsoft Teams would not on-demand register during profile creation and would not register during future sign-ins, despite being installed.
User-based Group Policy settings would persist in the user’s profile after the policy setting was removed or set to disabled.
Release Notes | FSLogix 2210 hotfix 4
Edge Policy Reference
Is anyone else having trouble loading the Edge policy reference? We’re seeing the page hang on both Edge and Chrome, with little or no text displayed below the Extensions heading. Then the page hangs and shows a “This page is not responding” dialog.
Thanks!
B I T G E T referral code for 2024: qp29
Looking for the B I T G E T referral code? The latest one for 2024 is qp29. With this code you get a 30% discount on trading fees. In addition, new B I T G E T users who register with the promo code “qp29” can also receive an exclusive promotional reward of up to 1,000 USD.
What is the B I T G E T referral code?
The code “qp29” works as a referral code in the B I T G E T referral program. By entering this code you receive a permanent trading-fee reduction as well as a 30% discount on your trades. If you share your referral code with your friends, you also have a chance at a generous 50% bonus. Using this code offers a valuable opportunity to reduce fees and potentially increase your earnings by attracting others to the platform.
What is the best B I T G E T referral code for 2024?
The highly recommended B I T G E T referral code is qp29. If you use this code during registration, you receive a generous 1,000 USD bonus. If you share your code with friends, you have the chance at a huge 50% commission. This essentially gives you the opportunity to receive a sign-up bonus of up to 1,000 USD as a welcome reward. It is a great way to add extra benefits to your trading experience while encouraging others to join and earn their own rewards.
How to use the B I T G E T referral code
The B I T G E T referral code is available to new users who have not yet registered on the exchange. Unfortunately, if you already have an account, you cannot use the referral code.
However, B I T G E T also offers many other ways to participate in promotions and earn rewards. Let's look at these alternatives.
For B I T G E T newcomers, here is a step-by-step guide to claiming the referral code:
To get started, visit B I T G E T and click the blue “Sign up” button.
Provide accurate user details, as they will be checked for compliance with KYC and AML procedures.
When the system asks for the referral code, enter qp29.
Complete the registration process and carry out the required verifications.
Once all conditions are met, you can receive the welcome bonus immediately.
This approach ensures that new users can easily complete the registration process even without a referral code and receive a welcome bonus after meeting the specified requirements.
What is the recommended referral code for B I T G E T?
The B I T G E T referral code is qp29. To get a 30% discount on your B I T G E T commission, simply follow these steps:
Register a new account on B I T G E T.
Make sure you use the B I T G E T referral code qp29.
How much is the referral bonus for B I T G E T?
Invite your friends to join B I T G E T and win together from the referral prize pool! Every friend you refer can earn 50 USD, up to 1,000 USD per user. Users can invite acquaintances to register on B I T G E T. If all requirements are met, you and your friends receive a 50 USD trading bonus, up to the maximum limit.
How do I get the B I T G E T bonus?
Earn points daily and exchange them for USDT. Complete the challenge within seven days to collect all the rewards. Register to receive the welcome package worth 1,000 USD. Deposit at least 50 USD to earn 200 points.
How to install MATLAB without Oracle Java and use OpenJDK instead?
How do I install MATLAB without Oracle Java and use OpenJDK instead? We think we need to update all our installations to use OpenJDK, because Oracle is starting to end free use of Java. We use SCCM to install software silently. open jdk, install matlab MATLAB Answers — New Questions
User not found after removing user from SP groups
First thing I did was list all the SP groups a user was a member of via PowerShell by running:
Get-SPOUser -Site https://xxxx.sharepoint.com -LoginName email address removed for privacy reasons | Select-Object -ExpandProperty Groups | FL *
This spat out a list that included groups and sharing links and looked something like:
SharingLinks.xxxxxxxx.OrganizationView.aca4629b-2156-466f-be31-eead2188db4e
SharingLinks.xxxxxxxx.Flexible.630dde56-f7ca-47bd-86bb-c26dbba1a903
SharingLinks.xxxxxxxx.OrganizationView.04dd59e4-7cae-4290-aff0-477af7626f26
Facilities Management Members
So I ran these commands to define the user and groups (note that $userLoginName must be set from the user object for the later commands to work):
$user = Get-SPOUser -Site https://xxxx.sharepoint.com -LoginName email address removed for privacy reasons
$userLoginName = $user.LoginName
$userGroups = Get-SPOUser -Site https://xxxx.sharepoint.com -LoginName $userLoginName | Select-Object -ExpandProperty Groups
And then I ran this command to remove him from the groups:
foreach ($group in $userGroups) {
Remove-SPOUser -Site https://xxxx.sharepoint.com -LoginName $userLoginName -Group $group.Title
Write-Host "Removed user $userLoginName from group $($group.Title)"
}
This took a long time, and I suspect it removed the user from all the sharing links too. Now when I run the Get-SPOUser command, I get the error:
Get-SPOUser : User cannot be found.
At line:1 char:1
+ Get-SPOUser -Site https://xxxx.sharepoint.com -LoginName users@co…
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Get-SPOUser], ServerException
+ FullyQualifiedErrorId : Microsoft.SharePoint.Client.ServerException,Microsoft.Online.SharePoint.PowerShell.GetSPOUser
So my questions are:
why would it remove the user from SP if I am only removing him from SP groups?
what would be the behavior of the documents he modified if he is removed from SP?
How do I set up a parameter in the PDE Modeler App as a function?
Good morning,
I'm studying the freezing process of mozzarella cheese: for T > 0 °C there is heat exchange by convection due to the water content, but for T < 0 °C the heat exchange is by conduction due to the presence of ice.
How do I set up these coefficients as a function? For example, by referencing values from a MATLAB script? With an if statement? With an overlay of the convection and conduction effects?
I have no idea. 🙁
pde modeler MATLAB Answers — New Questions
Help Restricting Date Choice in List
Hello there! I need to set up a SharePoint list Date field with the following requirements:
1. must flow through to calendar view
2. must be restricted to 5-10 pre-determined dates (these are mostly static but will be added to as time goes on)
Currently, I have a date field with a tooltip of “please enter only 1,2,3,4,5 dates” but am not familiar enough with JSON to add validation. Also, I worry that this is cumbersome for end users as they will have a wide-open calendar but have to select from only 5 dates within that.
I’d love to be able to offer a dropdown choice field with the 5-10 dates as options but I don’t know how to do this in a way that flows into the calendar view. When I tried this, they weren’t recognized as dates so I couldn’t enable the calendar view.
I’m new to SharePoint lists so not sure of best practices here. Thanks in advance!
OneDrive for RISC OS 5
Hi,
The resurgence of vintage computing along with the ever-growing use of Raspberry Pi as a desktop PC, sees more people starting to look at RISC OS as an operating system.
Would Microsoft consider developing tools like a OneDrive module for RISC OS 5 to keep these users within their customer base?
As an existing Microsoft user who has been using RISC OS for a while now, it would be beneficial to access my files on both my Windows 11 and RISC OS machines.
Cheers,
Barry.
Good things, friendship, feelings, and mutual respect in the Microsoft Tech Community in the GenAI era
Even after stepping into GenAI, it was another year with a lot of buzz in the Microsoft Tech Community, and I was impressed by the community's experts, who solve problems and respond with real knowledge. That is what I said in the user research survey, but above all I appreciate the drive of the community moderator team. We hope that in the future innovation will develop far and wide, and that the tech community will continue to work together to solve problems, answer questions, and interact with each other as humans. Human cyber defenders are now enhanced with AI assistants such as Copilot, and even though we can talk to an AI for more than half an hour and get the impression that it is a human being, today's AI is still a lightweight kind that humans have to set up and test.
But in the near future, what used to be just a theory may come true: a strong AI may emerge with neural structures and intelligence similar to humans. Still, it would lack the feeling found in a community.
Because in the days ahead, a guardian may also become a destroyer in the same body.
Buffer pool performance parameters for Azure Database for MySQL
InnoDB is a storage engine for the MySQL database management system. InnoDB manages a buffer pool, which is a dedicated storage zone that’s used to cache data and indexes within memory. Because the data remains readily available in memory, this approach significantly accelerates retrieval of frequently accessed information, easily surpassing the time required for disk-based retrieval.
When a MySQL server stops or restarts, data cached in the buffer pool is lost. However, MySQL includes a new feature that enables you to dump the contents of the buffer pool before you shut down the server. When the server restarts, you can reload these cached contents into memory. You also have the flexibility of dumping the buffer pool at any given time to reload subsequently.
To view details regarding the buffer pool on your system, run the command:
SHOW ENGINE INNODB STATUS
A sample set of results from running this command appears in the following image:
In this instance, the output shows that the buffer pool consists of 84142 database pages.
Note that when you perform a buffer pool dump to disk, only the database pages are saved. When the server restarts, the information stored in these database pages is automatically reloaded into memory.
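The dump does not have to wait for a shutdown. As a general illustration (standard MySQL system variables, not Azure-specific syntax), a dump or reload can also be triggered on demand:

```sql
-- Dump the buffer pool's page list to disk immediately
SET GLOBAL innodb_buffer_pool_dump_now = ON;
SHOW STATUS LIKE 'Innodb_buffer_pool_dump_status';

-- Reload a previously dumped buffer pool into memory
SET GLOBAL innodb_buffer_pool_load_now = ON;
SHOW STATUS LIKE 'Innodb_buffer_pool_load_status';
```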
This blog post explores the effect on memory consumption of enabling and disabling parameters related to the buffer pool memory dump, including:
Parameter
Purpose
innodb_buffer_pool_load_at_startup
Restores the buffer pool when starting MySQL.
innodb_buffer_pool_dump_at_shutdown
Saves the buffer pool when MySQL is shutdown or restarted.
These parameters appear on the Server parameters page for your flexible server, as shown in the following image:
You can use these parameters to help during a restart when the buffer pool is still warm and contains all of the active buffers.
Scenarios
We’ll look at memory usage as we disable and enable the memory dump parameters in Non-High Availability (HA) and HA environments. This blog post evaluates different use cases, because there are scenarios in which we want to ensure that the buffer pool cache remains “hot” (the buffer pages remain in memory) even after a planned/unplanned restart of the database server.
In the first set of scenarios, I’ll disable and enable the buffer pool performance parameters innodb_buffer_pool_load_at_startup and innodb_buffer_pool_dump_at_shutdown to determine memory behavior in Azure Telemetry.
In the second set of scenarios, I’ll show that in HA environments, the buffer cache is always retained during any failover, regardless of whether the parameters are enabled or disabled.
Start up and shut down scenarios
In different working environments, production databases aren’t usually shut down because the resulting downtime can lead to problems such as performance issues. In addition, the buffers in buffer pool cache can be lost and may need to be restored from disk on demand, degrading application performance. MySQL now includes a parameter you can use to back up the buffer pool cache, which you can then restore after the database restarts. This capability can help for both planned and unplanned database instance restarts.
In the following sections, I’ll disable and enable the memory dump parameters to check how the memory in the buffer pool cache behaves and how it’s used in telemetry.
Parameters disabled
In this scenario, with the two buffer pool performance parameters (innodb_buffer_pool_load_at_startup and innodb_buffer_pool_dump_at_shutdown) disabled, the percentage of used memory drops from 45% to 10% as the database server restarts.
This drop shows that the buffer pool is purged of all active pages and that the pool is refreshed and ready to accept pages based on the new active transactions.
Parameters enabled
In this scenario, with the memory parameters (innodb_buffer_pool_load_at_startup and innodb_buffer_pool_dump_at_shutdown) enabled, restarting the MySQL flexible server does not impact memory consumption, which remains at 45% regardless of the restart.
This shows that the buffer pool stays hot: the buffer pages that were in memory are still there even after the database service restarts.
High availability scenarios
In an HA environment, there is a chance of failover to the secondary database if issues on the primary trigger a failover. An application always wants its buffer cache to be hot so that queries don't have to pull buffers from disk. In the next few scenarios, we will check how memory behaves on HA-enabled servers: we will perform a failover, then enable the memory parameters and perform another failover.
Manual failover with parameters disabled
On an HA-enabled MySQL flexible server with the memory dump parameters disabled, after triggering a manual failover, memory usage remains at 45%, the same as before the restart operation.
This is because during failover, systems retain a backup of the memory regardless of whether the dump parameters are disabled.
Manual failover with parameters enabled
In our last scenario, after enabling the memory dump parameters in the previous step, we triggered another failover; even afterward, there was no change in memory usage. This confirms that regardless of whether the dump parameters are ON or OFF, the buffer cache is always backed up during a failover event, ensuring that application performance never degrades as a result.
Conclusion
Based on the information above, it's clear that you can use the InnoDB buffer pool parameters innodb_buffer_pool_dump_at_shutdown and innodb_buffer_pool_load_at_startup to influence application performance. Both parameters are enabled by default; you can disable them, but doing so will significantly impact application performance, because the buffer pool will be purged and the pages must be reloaded into memory, which is an expensive operation. We recommend keeping these parameters enabled so that your buffer pool retains the required buffer pages even after a server restart.
If you have any feedback or questions about the information provided above, please leave a comment below or email us at AskAzureDBforMariaDB@service.microsoft.com. Thank you!
Microsoft Tech Community – Latest Blogs –Read More
What’s new in Windows Holographic, version 24H1
We are pleased to share that we have released Windows Holographic version 24H1! In this article, we’ll cover the highlights of this release. If you are interested in full details, please check out our official release notes.
We take feedback from our customers and partners seriously and strive to add new features and functionality that address top pain points shared by all of you. For many users, the HoloLens 2 is a tool that enables them to get their tasks done more quickly and safely, thanks to our unique heads-up, hands-free form factor. Users want to put the device on and get right into the task at hand, including collaborating with others in real time.
We also understand that HoloLens 2 is unique in that more than one user may use a single HoloLens 2 device throughout the day, but that these users still need access to corporate resources. To address this unique set of requirements, we’ve provided the ability to create a shared account with Microsoft Entra ID credentials. After deployment of the shared Microsoft Entra ID account, users only need to click a single sign-in button to log in to the device. The shared account enables workers to start their tasks quickly.
For IT Admins, we’ve added some new policies to more easily manage:
Whether a user is prompted for credentials when returning to the device when it is in a suspended state.
Whether device maintenance may occur when the device is in standby, such as app or OS updates, and when the maintenance should commence after entering standby.
Read on to learn about other feature highlights. As always, this update includes security updates and other minor bug fixes and improvements.
Feature Highlights:
Shared Microsoft Entra accounts — Using a shared Microsoft Entra account on your HoloLens results in the quickest login experience, since it does not require any credentials. This setup is ideal for scenarios where multiple people share the same set of HoloLens devices, access to Microsoft Entra resources, such as Dynamics 365 Guides content, is required, and tracking who has used the device isn’t required.
Policy to enable auto unlock — Policy to control whether a user is prompted for credentials when returning to the device in suspended state. When enabled, this policy allows a signed-in user to resume using the device without having to enter credentials.
Collect and view network connectivity report — A network connectivity report has been added to Offline Diagnostics to help users investigate network connectivity issues on HoloLens 2 devices. After the user triggers Offline Diagnostics, the device’s IP addresses, Wi-Fi information, proxy settings, and connectivity to known cloud service endpoints are collected.
Enforce time sync during OOBE — During OOBE, the HoloLens device attempts to sync the device time with the time server after the device has connected to Wi-Fi.
Improve Intune app update experience — The Intune LOB app update no longer forces the app to shut down if it is still running on the device. Instead, the new version of the LOB app is installed and replaces the old one once the old app is fully exited via user action, sign-out, or device reboot.
Update to eye tracking calibration — The option to perform eye tracking calibration is shown on the device even if it has been deployed via Autopilot. Customers still have the option to disable this behavior via the existing policy. Any user on the device can still choose to run eye calibration at any time to improve their experience.
Policies to set device standby action — Policies allow the admin to execute supported actions in modern standby.
If you find yourself in need of a quick list of new policies being added for Windows Holographic, version 24H1, check out the IT Admin Checklist.
Also, be sure to check out our other exciting HoloLens 2 news and a mixed reality video:
Microsoft Mixed Reality Toolkit 3 (MRTK3) moves to an independent organization within GitHub – Micro…
Microsoft Customer Story-Sanofi uses the industrial metaverse to revolutionize training and operatio…
3 ways mixed reality empowers frontline workers – Microsoft Industry Blogs
Enhance frontline worker experience anytime, anywhere with Microsoft HoloLens 2 & Mixed Reality Apps
AI + Mixed Reality for the Frontline | Copilot in Dynamics 365 Guides
Partner Blog | Securing our future together
We have learned a lot since then, but there is still more we are doing to ensure that security is the top priority across Microsoft. In a recent blog post, Charlie Bell, EVP of Microsoft Security, announced an expansion of SFI. The expansion will enhance the built-in security of our products and platforms to help protect our Microsoft partners’ organizations, and by extension our shared customers, against evolving threats from cloud, AI, and geopolitical cyber activities. We are asking for your continued support and cooperation as we ensure that our products, solutions, and processes remain the most secure in the industry.
Continue reading here
What’s new with Postgres at Microsoft, 2024 edition
In this post you’ll get a bird’s eye view of all the Postgres work happening at Microsoft—with highlights for each of the workstreams from the last 8 months. And there’s a lot of ground to cover. Our PostgreSQL database investments encompass both Azure and open source, the Postgres core plus the broader Postgres ecosystem, plus PG community work too.
For anyone who still thinks of Microsoft as only a SQL Server company, well, that may be our past—but our present and future very much include PostgreSQL too.
Why does this post exist? Because of this conversation:
[Friend]
It’s hard to keep track of all the Postgres work at Microsoft. Maybe you can write a blog post.
[Me]
Ok.
[Me]
So here is the first version of this “What’s new with Postgres at Microsoft” post, published in late Aug 2023.
[Also Me]
Where do the months go? How have 8 months gone by since I published the previous version of this post?
Pulling together all these highlights is so much fun because I get to interview some of the many super-smart Postgres developers in our team.
Ok, so let’s dive into specifics.
The Microsoft workstreams covered in this post include open-source contributions to the PostgreSQL core; development of Postgres extensions such as Citus; work on key utilities in the PG ecosystem such as PgBouncer, pgcopydb, and Patroni; our community work around events such as the upcoming POSETTE: An Event for Postgres (free and virtual, happening Jun 11-13) as well as talks and podcasts; and of course, our work on the managed database service, Azure Database for PostgreSQL.
This table of contents follows the layout of this meticulously hand-made infographic, starting with the top-left box and proceeding counter-clockwise. Each of these links will take you straight to whichever sections you’re most interested in. Or you can grab an espresso and read the post in its entirety.
Azure Database for PostgreSQL – Flexible Server
Contributing to Postgres open source
Citus open source extension
Postgres ecosystem
Postgres community work
Azure Cosmos DB for PostgreSQL
Azure Database for PostgreSQL – Flexible Server
Our team’s flagship PostgreSQL managed database service is called Azure Database for PostgreSQL – Flexible Server.
Flexible Server meets the needs of myriad customers—from large enterprises to small to medium-sized businesses to early-stage startups—and has been growing in its capability, month after month after month.
Highlights of new features rolled out in the last 8 months
Postgres 16 support: Less than 2 months from the GA of the PostgreSQL 16 open source release, Postgres 16 support was made generally available (GA) in Azure Database for PostgreSQL – Flexible Server. And while major version upgrade support for Postgres 16 took a bit longer, the good news is that major version upgrade for Postgres 16 is now available in Preview, too.
Private Link: With the GA of Private Link in Azure Database for PostgreSQL, traffic between your virtual network and our PostgreSQL service navigates the Microsoft backbone network, which means you no longer need to expose your service to the public internet. Now generally available (GA), this feature has been very much in demand by many of you who are enterprise customers. (More details in the docs.)
Multi-region disaster recovery / GeoDR: The GA of our “Geo-Disaster recovery with read replicas” includes 2 major capabilities (explained in this blog post): virtual endpoints and “promote to primary server”.
Virtual endpoints mean you don’t need to change connection strings in your application. And “promote to primary server” helps to minimize downtime when manually promoting read replicas to primary, such as when you’re facing some type of regional disruption. The term most people use to describe this feature is “hassle-free”.
TLS version 1.3 support: TLS stands for “Transport Layer Security” and is the modern version of SSL, a key component in security for client-server communications. Once you set the ssl_min_protocol_version parameter to the value of TLSv1.3, Azure Database for PostgreSQL – Flexible Server will mandate the use of TLS 1.3 for all of your client connections. In addition to giving you stronger security, TLS 1.3 also improves performance during the encryption process. Tip: Our security experts strongly recommend you use the latest version of TLS to encrypt your connections to Flexible Server. (More details in TLS documentation.)
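Assuming a live connection to your server, you can confirm both the negotiated TLS version of your current session and the server’s minimum allowed version directly from SQL:

```sql
-- What TLS version and cipher is my current connection using?
SELECT ssl, version, cipher
FROM pg_stat_ssl
WHERE pid = pg_backend_pid();

-- What is the minimum TLS version the server will accept?
SHOW ssl_min_protocol_version;
```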
Microsoft Defender integration: Security matters to all of us, and we all need to increase our defenses to stay ahead of the bad actors. We added Microsoft Defender for Cloud support so you can detect anomalous activities that might indicate harmful attempts to access your database. When enabled, this capability provides proactive anomaly detection; real-time security alerts; guided resolution steps; and integration with Microsoft Sentinel. (More details in documentation.)
pgvector 0.6.1 extension support: The pgvector extension to Postgres enables you to store and search vectors in Postgres, which in turn makes PostgreSQL a powerful vector database, enabling all sorts of generative AI capabilities. So, yes, we support pgvector in Flexible Server on Azure—as of the writing of this blog post, pgvector 0.6.1 is supported.
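A minimal pgvector sketch is below; the table, the tiny 3-dimensional vectors, and the data are purely illustrative (real embeddings are much larger, e.g. 1536 dimensions for many OpenAI models), and on Flexible Server the extension must first be allow-listed via the azure.extensions server parameter:

```sql
CREATE EXTENSION IF NOT EXISTS vector;

-- A table with an embedding column
CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3));

INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]');

-- Nearest neighbors by Euclidean distance (the <-> operator)
SELECT id FROM items ORDER BY embedding <-> '[3,1,2]' LIMIT 5;
```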
azure_ai extension (Preview): With the azure_ai extension to Azure Database for PostgreSQL, you can use Azure OpenAI directly from Azure Database for PostgreSQL. This means (quoting from a blog post I wrote a few months ago) “you can generate text embeddings by using SQL queries to call into both Azure OpenAI and Azure AI Language services—without needing a separate application layer.” Here’s a 5-minute demo I made, and here’s the announcement blog post from Nov 2023.
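As a rough sketch of the shape of the API: the endpoint, key, and deployment name below are placeholders, and the exact setting and function names should be checked against the azure_ai extension documentation:

```sql
CREATE EXTENSION IF NOT EXISTS azure_ai;

-- Point the extension at your Azure OpenAI resource (placeholder values)
SELECT azure_ai.set_setting('azure_openai.endpoint',
                            'https://<your-resource>.openai.azure.com');
SELECT azure_ai.set_setting('azure_openai.subscription_key', '<your-key>');

-- Generate an embedding for a piece of text using a deployed model
SELECT azure_openai.create_embeddings('<embedding-deployment>',
                                      'Hello, Postgres!');
```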
Real-time text translation (Preview): The azure_ai extension now includes real-time text translation capabilities using Azure AI Translator. The translation is done in real-time and the translated text can be used immediately or stored in Postgres for future use. You can filter out swear words, too. (More details in the announcement blog post about how to use this feature.)
Real-time ML prediction (Preview): Introduced in Mar 2024, the azure_ai extension enables you to invoke machine learning models hosted on Azure Machine Learning, in real-time. The announcement blog post explains how to use this new feature—and it’s relevant for you if you’re building applications to do fraud detection, product recommendations, transportation routing, equipment maintenance, or healthcare patient predictions, among others.
Migration service, both online & offline: The migration service in Azure Database for PostgreSQL (overview here) is useful for anyone looking to onboard onto Flexible Server. Offline migration to Flex is supported from Single Server; from RDS for PostgreSQL; from on-prem; or from Azure VMs. And if you’re migrating from Single Server to Flex Server, online migration is also supported, giving you a seamless setup—plus continuous operations with zero downtime.
Major version upgrade support for Postgres 16 (Preview): In-place major version upgrade—which uses the popular pg_upgrade capability from core Postgres—enables you to upgrade existing Flexible Servers to newer versions of Postgres with minimal downtime and a simplified upgrade process. With the addition of Postgres 16, major version upgrade is now supported to upgrade to versions 16, 15, 14, 13, and 12. (More details in the documentation.)
Major version upgrade logging: When enabled, this feature gives you access to detailed Postgres upgrade logs during major version upgrades—and gives you access to the PG_Upgrade_Logs either via the Azure Portal or via the CLI. (More details in the documentation.)
Server Logs with CLI support: In November 2023, we enhanced the Server Logs feature for Flexible Server, in both the portal and the CLI. The updated server logs feature is now easy to enable (and disable) through the Azure portal. You can also configure the retention period, with options ranging from 1 to 7 days, and you can access and download your server logs from the Azure portal or by using the CLI.
Grafana Monitoring integration: The Grafana Dashboard for Monitoring with Azure Database for PostgreSQL is just as good as this blog post makes it sound. For those of you who love Grafana it’s worth downloading from the Azure Grafana Gallery. With it, you can monitor your Flex Server database’s availability, active connections, CPU utilization, and storage metrics. Also, there’s a seamless integration between Azure Monitor and Grafana.
30 new monitoring metrics: Over the last 8 months, over 30 new monitoring metrics have been added for Flexible Server. This monitoring concepts page in the documentation spells out what the default metrics are (captured minutely, stored for 93 days, queryable in 30 day intervals)—as well as what the enhanced metrics are (disabled by default.) Also there are autovacuum metrics! And a table that outlines the options for visualizing your Flexible Server metrics: in the Azure Portal, with Azure Monitor’s metrics explorer, and with Grafana.
New regions in Italy, Israel, Norway, Poland, UAE, and USA: In Sep 2023, we introduced support for 3 new regions: Norway West, Poland Central, and US Gov Texas. Then in Jan 2024, we added support for Italy North, Israel Central, and UAE Central. This support for 60 regions is part of an ongoing effort to give you localized cloud services in as many parts of the world as possible, enabling you to meet business and regulatory obligations—as well as requirements for in-country disaster recovery, where needed. And, we plan to add 8-10 more regions in the next 12 months. You can find the list of supported regions in the documentation.
Premium SSD v2 (Preview): The description of “cutting-edge technology with the most advanced general-purpose block storage solution with unparalleled price-performance” is chock full of adjectives, but I’ve seen the (not yet published) performance benchmarks and the results are impressive. More details about Premium SSD v2 are in the docs, including comparisons between Premium SSD v2 vs. Premium SSD. Also, Premium SSD v2 gives you a max disk size of 64 TiB.
Storage autogrow: The optional storage autogrow feature does what it says: it automatically increases the size of the provisioned storage of your Flexible Server when storage consumption reaches 80% or 90%, depending on the size of the disk. Thresholds are spelled out clearly in the documentation.
Near-zero downtime scaling: With near-zero downtime scaling, the server restart has been reduced to less than 30 seconds after modifying your storage or compute tiers, hence the moniker “near-zero”. This feature kicks in when you scale compute and storage (scaled independently, or scaled together.) It’s available in all public regions for non-HA servers. And for HA-enabled servers, near-zero downtime scaling is currently enabled for a limited set of regions, with more regions to be enabled in a phased manner in the future.
Free trial for kicking the tires of Azure Database for PostgreSQL
If you’re looking for a free trial for Flexible Server, this “Use an Azure free account” docs page walks you through how to get 750 hours (monthly) of Burstable B1MS instance with 32 GB of storage and 32 GB of backup storage for the first 12 months.
What’s your Single Server migration plan?
If you’re still running on the first-generation PostgreSQL managed service called “Single Server” (you know, the one that doesn’t run on Linux), then you already know:
Retirement of Single Server: the retirement was announced in March 2023, and Single Server will be retired on 28 March 2025.
Flexible Server performance advantages: If you’re looking for more incentive to make the switch to Flexible Server, maybe these 3rd-party performance benchmarks (using the popular HammerDB benchmarking tooling) will help motivate. The results of this benchmark: Flexible Server processes orders 4.71 times faster than Single Server—and can do 2.85 times more tasks at the same time.
Online (and offline) migration tooling: And these online migration tools (Preview) have been helping lots of customers make the move from Single Server to Flex.
Our Azure team is hiring!
And… our Postgres team on Azure is hiring!
Contributing to Postgres open source
Microsoft has been hiring and growing a team of Postgres open source contributors since 2019—with a focus on contributing to the Postgres core.
From the previous Aug 2023 version of this blog post, our Microsoft commitment to sponsoring the ongoing development of Postgres remains unchanged:
In order to thrive, an open source ecosystem needs commercial support as well as volunteer efforts. Even open source developers need to eat! For the Postgres open source ecosystem to flourish, companies like Microsoft need to support the project by funding development in the Postgres core. Which we do.
PostgreSQL is a complex piece of software that runs mission-critical workloads across the globe. To provide the best possible experience on Azure, it follows that we need to thoroughly understand how it works. Having PostgreSQL committers and contributors on our team means they can share knowledge internally across different orgs, or directly answer internal questions regarding incidents or extension development.
Because today’s cloud operates at a scale most on-prem solutions never encountered, unique cloud data center problems, often relating to performance, now require special attention. Our in-house team of deep Postgres experts are focused on tackling these cloud-scale issues upstream, in the Postgres core. Another benefit: our team’s Postgres expertise gives Azure customers confidence in our cloud database services, too.
Commercial funding of PostgreSQL developers has another benefit: it gives developers the long-term stability to pursue the big things, the groundbreaking changes that are super important to the future. In particular, the Postgres contributor team at Microsoft is focused on some big architectural changes (example: Asynchronous IO) that you wouldn’t be able to do without the funding for a full-time, multi-year effort.
Two (exciting) Postgres contributor updates on Microsoft team
Our Postgres open source contributor team is continuing to grow.
Amit Langote, Postgres committer, has joined the Postgres team at Microsoft! (Yes, Amit is the engineer who committed the json_table work into Postgres 17.)
Melanie Plageman, Postgres contributor extraordinaire on our team, has accepted the invitation to become a PostgreSQL committer. Melanie’s history of contribution and collaboration in the Postgres community made this a well-earned promotion. Many of us echo the sentiment of these words from Álvaro Herrera to congratulate Melanie in her new PG committer role: “May your commits be plenty and your reverts rare”.
And … our Postgres contributor team is hiring!
Highlights of Postgres 17 contributions
There is so much goodness in PostgreSQL 17, which hit code freeze last month in April and is expected to GA later this calendar year, usually in Sep or Oct—with a beta release that typically lands in July or August.
Highlights of the over 300 commits to PG17 authored or co-authored by members of our team are below.
Attribution is part of the culture in the Postgres open source community, so it must be said that among the many contributions our team made to PG17, the work was done with collaboration from PG contributors around the world, both inside and outside Microsoft.
Also want to give a shout-out to our team of Postgres committers (a “committer” is an engineer who has the “commit bit” to merge changes into the Postgres core, equivalent to the term “maintainer” in other open source projects)—because not only did our Postgres committers commit their own work in Postgres 17, but almost a third of their commits were made on behalf of other developers.
Streaming I/O in Postgres 17
Streaming I/O with I/O combining: This new capability added to Postgres 17 is the start of something big. Streaming I/O (with I/O combining) introduces a whole new paradigm into Postgres—and is an important step toward a future of asynchronous I/O in Postgres. Now asynchronous I/O will NOT be available in Postgres 17, but this Streaming I/O work will still improve performance for users (initially when it comes to sequential scans and ANALYZE, as explained in the next bullets.)
Historically, Postgres reads data 1 page at a time. With the new Streaming I/O capability in PG17, Postgres can look further ahead and see what’s coming. So instead of reading 1 page at a time, Postgres can combine those pages and do a single read of 16 pages. This means reading 128K instead of 8K in a single read. To enable this, a new GUC has been added in PG17 called io_combine_limit which has a default setting of 128K. (link to commit for API for streaming I/O / io_combine_limit commit)
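For the curious, the new GUC can be inspected and tuned per session on a Postgres 17 server; the 256kB value below is just an illustrative override:

```sql
-- New in Postgres 17; defaults to 128kB (16 x 8kB pages)
SHOW io_combine_limit;

-- Allow combining up to 32 pages into a single read for this session
SET io_combine_limit = '256kB';
```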
When talking to Thomas Munro, one of the authors of Streaming I/O in Postgres 17, he said this about the project:
“It’s a big project, and it’s hard work to find a pathway that gives incremental benefits through digestible improvements. Each piece has got to make sense on its own. The new stream abstraction already enables I/O combining and advice-based prefetching in 17 as of the time of writing (it hasn’t shipped yet, so watch this space to see if it sticks), but the real story is that it paves the way for a fully modernized I/O stack. That’s the kind of vision that takes long term funding, that Microsoft is bringing to the PostgreSQL community.”
True fact: it was micro-benchmarking the performance of this PG17 Streaming I/O work that started the absurd chain of coincidences that led Andres Freund to discover the xz utils backdoor. Anyone who knows Andres is not surprised he decided to investigate sshd processes that were using a surprising amount of cpu. Maybe that’s the moment Andres got the first of many “that’s weird” feelings that fueled his investigation. Thank you Andres!
If you want to dive deeper into this new capability you have 3 places to look:
Video of talk at PGConf.EU: Andres Freund, who originally proposed the idea of I/O streams as a core abstraction—and provided invaluable feedback on the concrete patches eventually proposed to PostgreSQL—gave a talk at PGConf.EU in Prague in Dec 2023 titled: “The path to using AIO in Postgres”.
Upcoming talk at PGConf.dev: Thomas Munro, one of the authors of Streaming I/O from Microsoft, will be giving a talk at the upcoming talk at PGConf.dev 2024 conference in Vancouver that will go deeper on this topic, titled: Streaming I/O and Vectored I/O.
5mins of Postgres: Lukas Fittl of pganalyze has recorded a short bite-sized overview of the new feature: Waiting for Postgres 17: Streaming I/O for sequential scans & ANALYZE.
Streaming I/O in sequential scans: Thanks to this new ability to do I/O combining and bigger 128K reads (hence fewer system calls too), some SELECT queries tied to sequential scans will be faster in Postgres 17. A significant amount of refactoring went into making sequential scans take advantage of the new streaming I/O API to become what the developers call “async friendly” and while the real motivation is to pave the way to asynchronous I/O in the future, it’s quite nice that some PG17 users will see performance benefits in sequential scans, too! (link to commit)
Streaming I/O in ANALYZE: To save you the trouble of looking it up in the docs, “ANALYZE collects statistics about the contents of tables in Postgres, and stores the results in the pg_statistic catalog.” These statistics are important and get used by the query planner. And in Postgres 17, ANALYZE is the first user for Streaming I/O with random streams, which doesn’t benefit from I/O combining but does benefit from prefetching. The benefit to users: ANALYZE table_name will be faster in Postgres 17. (link to commit)
Query Planner Improvements in Postgres 17
Query Planner to use Merge Append to efficiently UNION queries: This change in PG17 can improve performance significantly if you have a query with a UNION clause. In particular, this change is especially helpful if the top-level UNION contains a LIMIT node that limits the output rows to a small subset of the unioned rows. How is this possible? With this change, Postgres will be able to use presorted input in order to eliminate duplications—where previously, the Postgres query planner had to use a Hash Aggregate or had to sort the entire Append result. (link to commit)
Query Planner to better handle redundant IS [NOT] NULL: When you create a table, you choose each column’s name and type and whether it allows NULLs—and you can put a NOT NULL constraint on the column. When you then write a query with WHERE column IS NOT NULL, before this PG17 change Postgres would always evaluate the qual, even when it knew there couldn’t be any NULLs in the column. As of Postgres 17, the planner is a lot smarter when a column has a NOT NULL constraint and avoids doing that unnecessary work. (link to commit)
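A small sketch of the behavior (the table is illustrative; compare the EXPLAIN output on PG16 vs. PG17 to see the difference):

```sql
CREATE TABLE accounts (
    id   bigint NOT NULL,
    name text
);

-- On Postgres 17 the planner knows "id IS NOT NULL" is always true here,
-- so the redundant filter is dropped from the plan
EXPLAIN SELECT * FROM accounts WHERE id IS NOT NULL;

-- And "id IS NULL" is known to be always false for this column
EXPLAIN SELECT * FROM accounts WHERE id IS NULL;
```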
Performance Improvements in PG17
Vacuum WAL volume decrease & performance improvements: WAL in Postgres is the “write-ahead log” which is used to ensure data integrity and to support backups, point-in-time recovery, and replication. New to Postgres 17, vacuum will now produce less WAL by volume in terms of number of bytes—thereby taking up less space on disk and speeding up replay. Vacuum which freezes tuples may emit 30% less WAL—and writing and syncing these WAL records may take up to 15% less time. (link to commit)
Reduce memory usage in sort & incremental sort by using a bump memory allocator: The benefit of this change is that Postgres will use less memory for doing sorts, so things don’t have to go to disk because work_mem is full, which means improved sort and incremental sort performance. Also, when Postgres data is more compact in memory, that means Postgres can use CPU caches more efficiently too. (link to bump memory allocator commit / link to use of bump memory allocator for tuplesorts)
Improve memory allocation performance: These improvements to the 4 different types of memory allocators in Postgres improve the performance of everything—slightly. The way this was implemented was to optimize for the most common code paths which are described in the commits as “hot” paths—as compared to cold paths which are less common and would generally require a malloc anyway so the cold paths would be slower anyway. (link to allocset commit / link to generation and slab commit.)
Query planner improvements for highly-partitioned tables: By speeding up Bitmapset processing by removing trailing zero words in PG17, the Postgres query planner speed doubled in some of the test cases. In particular, this optimization can make the query planner twice as fast for workloads with a lot of partitioned tables. And, I’m told there is more work that can be done in this area to improve performance even further in the future. (link to Bitmapset commit)
libpq performance optimization: The detailed name of this feature is “avoid needless large memcpys in libpq socket writing”, and the bottom line is that this improvement makes libpq more efficient. The result: improved performance when clients have large amounts of data in one message— such as a SELECT outputting large (>8k) variable length columns such as text; or a big COPY TO STDOUT; or in pg_basebackup. (link to commit)
Reduce memory usage for JIT: Just-in-Time (JIT) compilation in Postgres can make your queries insanely fast for certain workloads, particularly if you are running expression-heavy queries that are CPU-bound. However, there was an issue with JIT that caused it to leak a lot of memory that was causing some people to turn off JIT. With this fix, which some people are celebrating, you can turn JIT back on, and new users won’t have to turn it off. (link to commit)
pg_upgrade performance: This improvement in PG17 makes pg_upgrade run faster, especially during the compatibility check phase, which many users like you may run over and over again to make sure your cluster is ready to be upgraded—or to make sure nothing problematic has snuck into your Postgres cluster as you’re preparing to upgrade. Speedup of the pg_upgrade check varies depending on the Postgres version being upgraded from, but will typically be 2x or better.
For those of you who are Flexible Server customers of Azure Database for PostgreSQL, this improvement will also benefit major version upgrades to Postgres 17 once it’s available. (link to commit / link to mailing list discussion)
Developer experience in PG17
pg_buffercache_evict test tool: A superuser-only developer test utility. From the commit, “When testing buffer pool logic, it is useful to be able to evict arbitrary blocks. This function can be used in SQL queries over the pg_buffercache view to set up a wide range of buffer pool states. Of course, buffer mappings might change concurrently so you might evict a block other than the one you had in mind, and another session might bring it back in at any time. That’s OK for the intended purpose of setting up developer testing scenarios.”
This pg_buffercache_evict tool will enable future enhancements to memory plasticity, which will be explored in this upcoming talk at PGConf.dev by Krishnakumar Ravi and Palak Chaturvedi. (link to commit)
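A sketch of how the utility can be used (superuser only; 'accounts' is a placeholder table name, and as the commit notes, evictions are best-effort since another session may reload a block at any time):

```sql
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Try to evict every cached buffer belonging to one table, and count
-- how many evictions actually succeeded
SELECT count(*) FILTER (WHERE pg_buffercache_evict(bufferid))
FROM pg_buffercache
WHERE relfilenode = pg_relation_filenode('accounts'::regclass);
```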
Meson build system maintenance: Lots of ongoing maintenance on the newer build system for Postgres, called Meson. Meson is an open source build system that is faster, multiplatform, modern, cleaner, popular. And Meson is much more user friendly when compared to the venerable autoconf and make-based system, which has served PostgreSQL very well for decades but is starting to show its age.
Postgres CI maintenance: In Postgres 17 there are myriad commits to maintain the Postgres CI that was first adopted in Postgres 15. As you would expect, the Postgres CI continues to be absolutely transformative. With Cirrus CI, every commit you push into your GitHub repo (cloned from PostgreSQL) will get tested across 4 operating systems, along with extra checks—catching a lot of problems early. The result: reviewers can focus on higher level architectural questions, and the build farm (which runs the test suites after things are committed) no longer turns red as much as it used to.
Postgres 17 release notes (first draft!)
Hot off the press, while I was writing this blog post, Bruce Momjian of the Postgres core team published the first draft of the Postgres 17 release notes in the “PostgreSQL devel” docs branch. While these PG17 release notes are still being worked on and will definitely change, they give a taste of what’s to come.
Citus open source
Citus is an open source extension to Postgres (open source repo on GitHub) that gives you the superpower of distributed tables. Who uses Citus? People with data-intensive applications that need more compute, memory, or scale than they can get from a single Postgres node.
The tagline for the open source project is that “Citus gives you the Postgres you love, at any scale.”
And is Citus popular? There are almost 10,000 stars on GitHub—maybe your star can be the one that pushes Citus over the edge to hit 10K. ⭐
New Citus open source features in last 8 months
Postgres 16 support: Published on the Citus Open Source Blog, this Citus 12.1 release blog post announced the Citus support of Postgres 16 in Citus 12.1, within just one week of the PG16 release. (More details in the Citus 12.1 release notes.)
PG16: JSON aggregate support: As of Citus 12.1, Citus now supports and parallelizes the new JSON_ARRAYAGG() and JSON_OBJECTAGG() aggregates. (Code example in the 12.1 release notes.)
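A minimal sketch of the new SQL/JSON aggregates (the table and column names here are hypothetical):

```sql
-- PG16 SQL/JSON standard aggregates: build a JSON array of values
-- and a JSON object of key : value pairs across rows.
SELECT JSON_ARRAYAGG(product_name) AS names,
       JSON_OBJECTAGG(product_name : price) AS name_to_price
FROM products;
```

On a Citus distributed table, these aggregates are parallelized across the worker nodes.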
PG16: DEFAULT in COPY: By using the new DEFAULT option in PG16 with COPY FROM, you can control in which rows you want to insert the default value of a column (vs. inserting a defined, non-default value.) And as of Citus 12.1, this new DEFAULT option is supported and propagated to the nodes in a distributed cluster. (Code example in the 12.1 release notes.)
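A sketch of the new COPY option (table definition and marker string are illustrative):

```sql
-- Rows whose input contains the marker '\D' get the column's DEFAULT
-- instead of a literal value.
CREATE TABLE orders (id int, status text DEFAULT 'pending');
COPY orders FROM STDIN WITH (FORMAT csv, DEFAULT '\D');
1,shipped
2,\D
\.
-- Row 2 is inserted with status = 'pending'.
```

With Citus 12.1, running this against a distributed table propagates the DEFAULT option to the nodes holding the shards.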
PG16: more DDL propagation: Citus now propagates new CREATE TABLE, VACUUM, and ANALYZE options to worker nodes in the distributed cluster. And according to the 12.1 release notes, Citus can propagate the STORAGE attribute if it is specified when creating a new table. In addition, Citus can now propagate BUFFER_USAGE_LIMIT, PROCESS_MAIN, SKIP_DATABASE_STATS and ONLY_DATABASE_STATS options in VACUUM and/or ANALYZE.
ICU collation rule propagation: Prior to Postgres 16, Citus already supported distributed collations. So with the PG16 addition of custom ICU collation rules that can be created using the new “rules” option in CREATE COLLATION, Citus just needed to support the propagation of this new PG16 collation “rules” option. (Details in 12.1 release notes.)
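For illustration, a contrived custom ICU collation using the PG16 “rules” option (requires Postgres built with ICU; the rule itself is just an example):

```sql
-- A custom ICU collation that sorts 'g' immediately after 'a'.
CREATE COLLATION custom_rules (provider = icu, locale = 'en', rules = '&a < g');
-- On a Citus cluster, the CREATE COLLATION statement, including the
-- rules option, is propagated to all worker nodes.
```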
Support TRUNCATE triggers on Citus foreign tables: Those of you who care about audit logging were probably pleased to see Postgres 16 add support for TRUNCATE triggers for foreign tables. With Citus 12.1 you can use the new TRUNCATE triggers features with Citus foreign tables too. (More details in the announcement blog post.)
Combine query-from-any-node with load balancing: PG16 added a new load balancing feature in libpq that lets you specify load_balance_hosts and set it to random. This new libpq load balancing feature makes it easy to load balance in combination with the Citus query-from-any-node feature. (More details in the 12.1 release notes.)
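A sketch of what such a connection string might look like (host names here are hypothetical); libpq picks one of the listed hosts at random for each new connection:

```
postgres://appuser@coord1:5432,worker1:5432,worker2:5432/mydb?load_balance_hosts=random
```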
Distributed schema move: Citus 12.1 includes some schema-based sharding improvements, including the new citus_schema_move() function, which enables you to move a distributed schema to a different node. (See 12.1 release notes for more details.)
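A minimal sketch of the new function (schema name, node name, and port are hypothetical; consult the 12.1 release notes for the full signature):

```sql
-- Move the distributed schema 'tenant_42' and its tables
-- to a different node in the cluster.
SELECT citus_schema_move('tenant_42', 'worker-node-2', 5432);
```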
GRANT … ON DATABASE propagation: More schema-based sharding improvements in Citus 12.1: now you can propagate GRANT/REVOKE ON DATABASE commands. (Code examples in the release notes.)
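A sketch of the kind of statement that now propagates (database and role names hypothetical):

```sql
-- Run once on the coordinator; Citus 12.1 propagates the GRANT
-- to all nodes in the cluster.
GRANT CONNECT, CREATE ON DATABASE mydb TO tenant_admin;
```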
Distributed schema table from local table when identity column: Before Citus 12.1 it was not possible to create a distributed schema table from a local table if it used an identity column. A code example showing how to take advantage of this new feature is in the 12.1 blog post.
Citus dev containers!: From the Citus commit, the new Citus devcontainer “allows for quick generation of isolated development environments, either local on the machine of a developer or in a cloud, like GitHub Codespaces.” With the introduction of Citus dev containers, it’s now much easier to set up the Citus development environment, making it easier for new contributors to get started. Detailed how-to instructions are in the Contributing.md in the Citus repo.
Postgres ecosystem
Patroni 3.2 and 3.3: Patroni is the most popular High Availability (HA) solution for Postgres. It helps you deploy, manage, and monitor HA clusters using streaming replication—and it’s open source. Alexander Kukushkin from our team is the technical lead and collaborates with engineers from different companies on Patroni—also, Alexander gave a recent talk at Nordic PGDay 2024 titled “Step-by-step Patroni cooking guide.”
In Patroni 3.2, notable new features include priority failover; generating Patroni configuration from a Postgres cluster not yet managed by Patroni; and making permanent physical replication slots on standby nodes. In Patroni 3.3, notable features include improved visibility of pending Postgres restarts; and the ability to run standby nodes without replication by replaying WAL only from the archive.
PgBouncer 1.21.0, 1.22.0, and 1.22.1: PgBouncer is a popular open source connection pooler—and in the last 8 months there have been 3 notable PgBouncer releases that our team has contributed to. I love the “names” the PgBouncer team gives to their releases.
PgBouncer 1.21.0 is called “The one with prepared statements” which adds support for protocol-level named prepared statements, which Jelte tells me was one of the most requested features for PgBouncer. With 1.21.0, JDBC works out of the box, npgsql (a .NET client) works out of the box, and you no longer need to turn off PgBouncer when using prepared statements. Instead you just enable prepared statement support in PgBouncer—and queries are faster in most scenarios where you run the same query over and over again, especially if those SQL queries are large queries.
How fast? According to the changelog for 1.21.x, “in synthetic benchmarks this feature was able to increase query throughput anywhere from 15% to 250%, depending on the workload.”
PgBouncer 1.22.0 is called “DEALLOCATE ALL” and PgBouncer 1.22.1 is called “It’s summer in Bangalore.” (More details in the 1.22.x changelog.)
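Enabling the prepared statement support from PgBouncer 1.21.0 described above comes down to one setting in pgbouncer.ini (the value shown is illustrative):

```ini
[pgbouncer]
; Number of prepared statements PgBouncer tracks per server connection;
; a non-zero value enables protocol-level prepared statement support.
max_prepared_statements = 200
```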
pgcopydb 0.14 and 0.15: The pgcopydb utility (open source repo on GitHub) automates running pg_dump and pg_restore between two running Postgres servers. Example use cases include migrations to newer hardware, migrations to newer instances—and also Postgres major upgrades. And our Migration service in Azure Database for PostgreSQL is built on top of pgcopydb, making it relevant to those of you who run on Azure too. There are boatloads of improvements in the v0.15 release and the v0.14 release, mostly about being able to cater to more use cases. Also some memory usage fixes.
Dimitri Fontaine’s inputs on what to highlight about recent changes in pgcopydb:
“I have been asked a lot about how to resume operations when using pgcopydb, and some users wanted to have a better grasp of how we use snapshots and replication slots and their impact on the ability to resume operations. The new documentation chapter Resuming Operations (snapshots) covers that in details. Oh, the new tutorial is a great place to get started with pgcopydb too.”
HLL and TopN: HyperLogLog (HLL) and TopN are both approximation algorithms, sometimes called sketch algorithms. HLL is used to solve the count-distinct problem. Our team maintains the HLL open source extension and the primary change in this time period was to add Postgres 16 support. The TopN extension, which we also maintain, is used to calculate the top values according to some criteria.
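For illustration, a minimal HLL sketch of the count-distinct use case (table and column names hypothetical; requires the hll extension to be installed):

```sql
-- Approximate number of distinct users, using a HyperLogLog sketch
-- instead of an exact (and more expensive) COUNT(DISTINCT ...).
CREATE EXTENSION IF NOT EXISTS hll;
SELECT hll_cardinality(hll_add_agg(hll_hash_bigint(user_id)))
FROM page_views;
```

The sketch can also be stored in a rollup table and merged incrementally, which is where HLL really shines.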
activerecord-multi-tenant: Our team maintains the activerecord-multi-tenant gem which makes it easy for multi-tenant Ruby on Rails applications to use row-based sharding with Citus. (Whereas for schema-based sharding you can use the acts_as_tenant gem, on which activerecord-multi-tenant is based.)
django-multitenant: Similar to activerecord-multi-tenant, this library our team maintains is for multi-tenant applications that want to use row-based sharding—the difference is that this library is for Python and Django applications. (For schema-based sharding there are other libraries you can use that are maintained by the community, with django-tenants being the most popular.)
Postgres community work
Given my work as head of Postgres open source community initiatives at Microsoft, it’s no surprise: contributing to Postgres in ways beyond code is near and dear to my heart. I’ve even given a few talks about it.
Listed below are highlights of the community work that our Postgres team at Microsoft has contributed in the last 8 months.
Serve on Postgres organizing & talk selection teams: The awesome Postgres community conferences—which happen all around the world—are an opportunity for knowledge sharing, learning, and networking. And if you take advantage of it, the in-person hallway track can open up all sorts of doors for you in the Postgres world. Members of our PG team at Microsoft have served on the organizing teams and/or the talk selection teams of these Postgres community events in the last 8 months:
PGConf.EU 2023
PGConf NYC 2023 and 2024
FOSDEM PGDay 2024
Nordic PGDay 2024
PGDay Chicago 2024
PGConf.dev 2024
Sponsor Postgres conferences: Postgres conferences need financial support or they simply won’t happen. And Microsoft is proud to be able to sponsor all of these Postgres events for the Postgres community over the last 8 months:
PGConf NYC 2023 – Platinum sponsor
PGConf EU 2023 – Platinum sponsor
PGConf India 2024 – Diamond sponsor
Nordic PGDay 2024 – Supporter sponsor
pgDay Paris 2024 – Supporter sponsor
PGConf Germany – Platinum sponsor
Postgres Conference Silicon Valley – Partner sponsor
PGDay Chicago – Gold sponsor
PGConf.dev 2024 – Gold sponsor
POSETTE: An Event for Postgres, happening Jun 11-13: Organized by our Postgres team at Microsoft, POSETTE is a free & virtual developer event, now in its 3rd year, formerly called Citus Con.
This year’s event will take place online Jun 11-13. With 4 livestreams, 4 keynotes, 38 talks, 44 amazing speakers—there’s guaranteed to be something for everyone.
Check out the schedule to see what people are so excited about, and be sure to save the date.
You can also add specific livestream(s) to your calendar: Livestream 1 / Livestream 2 / Livestream 3 / Livestream 4.
Of course you can always watch the talks on YouTube after the livestreams are over, at your leisure, at 2X speed—but then you’ll miss the opportunity to ask the speakers questions via live text chat while the livestream is happening.
If you’re curious, there’s a blog post for why we changed the name to POSETTE.
And in the interest of transparency, a blog post about the process for POSETTE talk selection too.
Host monthly podcast for developers who love Postgres: This monthly podcast for developers who love Postgres started as a pre-event for Year 2 of Citus Con, hence the original name “Path To Citus Con”. The focus is on the human side of Postgres and open source, and we often explore how people in the Postgres community got their start: as developers, as Postgres users, or as Postgres contributors. You can find all 15 past episodes online (on your favorite podcast platform, as well as on YouTube). Oh, and we record LIVE on Discord and it’s quite fun to participate in the live chat that happens in parallel to the live recording.
So many blog posts: You can find many of our Postgres team’s blog posts on Microsoft Tech Community as well as on the Citus Open Source Blog. (And yes, we syndicate our open source blog posts to Planet Postgres.)
Conference talks at PG events: Both in-person and virtually, our Postgres teams have been active on the conference circuit. How active? Our engineers and subject matter experts delivered 49 talks in the 8 months since I published the previous version of this “what’s new” blog post last August.
Later in May, Postgres people on our team will be presenting 6 different sessions at PGConf.dev in Vancouver. And some of my teammates will be presenting virtually at POSETTE in June!
Citus monthly technical newsletter: Our monthly Citus technical newsletter includes links to latest blog posts and releases of the Citus extension. And it’s easy to join the Citus newsletter.
Citus Slack for Q&A: If you’re a Citus open source user, you can join our Slack for Q&A about the Citus extension and distributed PostgreSQL.
PGSQL Phriday contributions: Ryan Booz from Redgate started PGSQL Phriday, a monthly community blog event for the Postgres community. It seems like it started just yesterday, but there have been 16 blogging events so far, so clearly it’s been happening for more than a year. I participated in PGSQL Phriday #014 organized by Pavlo Golub, all about PostgreSQL Events, with this post, an Illustrated Guide to Postgres at PASS Data Summit 2023.
Azure Cosmos DB for PostgreSQL
Azure Cosmos DB for PostgreSQL is a distributed Postgres database service geared toward workloads that need a multi-node database cluster.
Typical workloads for a distributed PostgreSQL database include multi-tenant SaaS, real-time analytics apps such as timeseries, and hybrid transactional and analytical applications.
This “Product updates” page in the docs is a good page to bookmark and is the comprehensive source for new capabilities in Azure Cosmos DB for PostgreSQL. But let’s walk through just a few highlights…
What’s new in Azure Cosmos DB for PostgreSQL in the last 8 months?
Azure Cosmos DB for PostgreSQL is a distributed Postgres service powered by the Citus extension to Postgres—which is geared toward data-intensive applications that need the scale and performance of a multi-node distributed Postgres database cluster.
In the last 8 months, Azure Cosmos DB for PostgreSQL has added GA support for:
Postgres 16
32TiB storage for multi-node clusters
Customer Managed Keys (CMK) in all regions
Geo-redundant backup & restore
Microsoft Entra ID authentication in addition to Postgres roles
This Release Notes page in the Azure documentation has even more details about new capabilities in Azure Cosmos DB for PostgreSQL.
Microsoft <3 Postgres
In putting together this post I was struck by all the places our Postgres team is contributing to Postgres: first by offering a popular managed Postgres database service on Azure—and also by the ways we contribute to PostgreSQL with code, architecture, reviews, bug reports, commitfest management, CVEs, testing, extensions, ecosystem tooling, conference sponsorships, conference talks, organizing events, and all the rest of the ways we contribute beyond code too. The list goes on. Not to mention the long-term architectural investments in Postgres.
And there’s more! I didn’t even mention that Andres Freund serves on the Postgres core team.
When Daniel Gustafsson and I were looking at all the metrics about the team’s PG17 open source work, he summarized it well, “…Microsoft employees are involved in all aspects of Postgres.”