Category Archives: Microsoft
Public preview of Universal Print on macOS
When we developed Universal Print, our goal was to create a single print solution that meets people where they are and “just works”—from every app, on every device. Today, we’re taking a new step toward that vision by announcing the public preview of Universal Print on macOS.
This capability delivers full support for Mac devices within the Universal Print infrastructure. People can now print from any Mac app using the system print dialog without having to install drivers. If you were previously hesitant to adopt Universal Print because of a mixed Windows and Mac environment, you can now move ahead knowing all your workers are supported, regardless of operating system. This capability is available immediately after enabling the feature in the Azure Portal.
Adding printers
A new Universal Print macOS app is now available in the App Store. After installing the app and signing in to a Universal Print eligible Microsoft Entra ID account, people can search for printers by name (e.g., “Building 8” or “Design Studio”) or even narrow down the list by location, then add relevant printers to their devices.
That’s it – no learning curve! Try it today!
Getting started
If you have eligible Universal Print licenses, you already have access to this new capability! Read the documentation or follow these short steps to get started:
1. Configure the macOS Support (preview) setting to “Show all printers”. You can access this new setting on the Settings page.
[Screenshot: The Global settings page for Universal Print in the Microsoft Azure Portal.]
2. Get the Universal Print app from the App Store and sign in with an account that has Universal Print access and access to at least one printer.
[Screenshot: The Universal Print system dialog in macOS showing a list of available printers.]
3. Find and install a printer.
[Screenshot: The Add Printer dialog showing a search for printers by keyword.]
4. Try it out – print something from an app you frequently use!
[Screenshot: The Print dialog in macOS showing the Universal Print printer.]
Share your thoughts
We’d love to hear about your experience with the public preview of Universal Print for macOS. Your input really does guide our future investments. Please share your ideas and feature requests with us, start discussions in the Universal Print Tech Community, and leave a review in the App Store.
Additional resources
macOS support is just one recent announcement. To learn about other capabilities, read the latest installment of What’s new for Universal Print. If a demo is what you are looking for, check out our self-paced guided tour, Get to know Universal Print. Or, hear directly from the experts on Universal Print and cloud printing by watching Tackling Tech.
2024 Microsoft Partner of the Year Awards – nomination window is now open!
Are you a Microsoft partner that has embraced the Microsoft commercial marketplace as key to your GTM strategy, selling either with Microsoft or through other partners via the marketplace? If so, we want to celebrate you!
The nomination window for the 2024 Microsoft Partner of the Year Awards is open now through April 3, 2024. Submit your nomination for a chance to be recognized as the Commercial Marketplace Microsoft Partner of the Year!
About the Commercial Marketplace Partner of the Year Award:
Recognizes partners that have successfully adopted the marketplace as core to their go-to-market and joint-selling opportunities with Microsoft.
Partners positioned to win this award will demonstrate marketplace sales and marketing expertise, as well as consistent, high-quality, and predictable delivery to marketplace customers.
Successful entries will also include quantifiable success through growth in customer acquisition, billed sales, or Microsoft Azure consumption – either selling digital direct, through channel partners, or co-selling with Microsoft.
Preferred qualification – your organization has a publicly available offer that is transactable and surfaced on digital storefronts (either Azure Marketplace or Microsoft AppSource).
Competitive submissions will describe how your use of the marketplace helped address your customers’ needs. (Examples: Simplified procurement, utilization of cloud consumption commitments, faster time-to-value, or customization with private offers.)
Call to Action
Prepare your nomination & submit before April 3, 2024! Visit https://aka.ms/POTYA for more details.
Additional resources:
Complete award guidelines: https://aka.ms/POTYA_Guidelines
Guidance from the judges: https://aka.ms/POTYA_JudgesGuidance
Tips on creating award-winning entries: https://aka.ms/POTYA_NominationTips
Frequently asked questions: https://aka.ms/POTYA_FAQ
Nomination tool: https://aka.ms/POTYA_Nominate
About the Microsoft Partner of the Year Awards program
The Commercial Marketplace award is just one of many celebrated as part of the annual Microsoft Partner of the Year Awards program. The awards are driven by a self-nomination process and acknowledge outstanding successes and innovations by partners across our global ecosystem – recognizing achievements in categories spanning solution areas, industries, and cloud-to-edge technologies.
These successes enable digital transformation, showcase entrepreneurial spirit, and deliver impactful solutions to customers. Award winners and finalists receive recognition from Microsoft and the partner community and are celebrated at Microsoft Inspire. Nominations are accepted from February 7, 2024, until April 3, 2024, at 6:00 PM Pacific Time.
Abnormal and swXtch.io offer new transactable partner solutions in Azure Marketplace
Microsoft partners like Abnormal and swXtch.io deliver transact-capable offers, which allow you to purchase directly from Azure Marketplace. Learn about these offers below:
Abnormal – Cloud Email Security: Abnormal’s modern, cloud-native solution uses an API-based architecture to augment the built-in protection of Microsoft 365 to block malicious and unwanted emails that are text-based, lack attack signatures or originate from inside your company — all of which often bypass legacy secure email gateways (SEGs) and cloud email providers.
cloudSwXtch VM Image: cloudSwXtch from swXtch.io is a virtual overlay network that adds feature-rich, high-performance networking to your cloud or edge applications. Deployed as a VM image on Microsoft Azure, cloudSwXtch requires no code changes to your existing applications and enables true and seamless IP multicast and broadcast. It optimizes your network configuration and may reduce your cloud distribution and egress costs.
Integrity protect your Azure blob storage data with Azure confidential ledger
To support customers in regulated industries and compliance scenarios who have asked for stronger integrity protection of storage blobs, the Azure confidential ledger team has launched a preview of a managed Marketplace application that further protects data: Blob Storage Digests Backed by Confidential Ledger (Preview).
Data signatures from blobs can be harvested and stored in a confidential ledger for tamper protection. Later, to demonstrate tamper-proofness for compliance and auditing purposes, the signatures can be recalculated and validated against the signatures in Azure confidential ledger. To express your interest, join the preview.
Customers needing confidentiality guarantees in a key-value store can continue using Azure confidential ledger directly, and can leverage the integration between the Azure SQL Database ledger and digest stores in Azure confidential ledger to continue protecting their relational data.
The Azure confidential ledger team has also launched new features to enhance product and auditing experience:
The Azure confidential ledger Portal experience has been improved with a new Ledger Explorer feature that allows observing transactions and validating the cryptographic proofs of ledger transactions.
The multi-admin delete (upcoming) feature will allow deleting ledgers only with approval from multiple administrators, compared to today’s single-administrator-requested delete.
Quick Glance of the new Marketplace App: Blob Storage Digests Backed by Confidential Ledger (Preview)
Step 1: When a new blob is created, the transaction table is populated with a new entry.
Step 2: When enough blob records have been created, a digest is calculated and stored in Azure Confidential Ledger. An entry is created in the block table containing the Azure Confidential Ledger transaction ID.
Step 3: When it comes time to perform an audit, a history of audit records can be viewed for every blob storage container.
Step 4: The audit record verifies transactions and demonstrates the integrity protection status by comparing the digest hash stored in Azure Confidential Ledger against the recalculated digest hash.
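Conceptually, the check in Step 4 boils down to recomputing a digest over the blob data and comparing it with the digest recorded in Azure confidential ledger. Here is a minimal sketch of that idea using the Azure CLI; the storage account, container, and blob names are hypothetical, and the managed application automates this end to end for you:
# Hypothetical names – substitute your own storage account, container, and blob.
az storage blob download --account-name mystorageaccount --container-name reports --name report.pdf --file ./report.pdf --auth-mode login
# Compare this locally computed hash with the digest previously recorded in the ledger.
sha256sum ./report.pdf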
Preview Sign up: Join the preview
Watch the demo (~15:00):
Getting Started & Resources
To get started with Azure confidential ledger, read our documentation and past blog posts:
ACL Get Started
ACL Documentation
BeekeeperAI Case Study
ACL Hardware Supply Chain Post
ACL Securing Compliance Data Post
“On the minds of OneDrive admins” 🎙 – The Intrazone podcast
Whether you’re looking to chat about the best new drivers of 2024 – FOUR! – OR best practices for OneDrive administration, Drew Madelung is one of the best SMEs who will lead you to a hole in one – honing your superpowers for the underlying solution that powers collaborative file experiences across Microsoft 365.
On this episode, we hear from Drew Madelung (Microsoft MVP | Solution Architect – Protiviti), who expertly focuses on all things OneDrive, specifically all the deep-dive details admins need to know. Drew is all about managing OneDrive to the fullest for your organization: how to make it better, how to optimize what you have, AND how to invest in the right places to keep up with the times – aka, your trusted advisor for all things OneDrive in overdrive, at your disposal and within your control.
We grabbed time with Drew recently at 365 EduCon (now “TechCon365”) to see what he was up to and to ask him a few questions everyone would benefit from hearing his answers to – especially when it comes to deployment, configuration, troubleshooting, and governance. Drew covers it all and addresses what’s on the minds of OneDrive admins!
Our full discussion is embedded below and now available through your favorite podcast player.
The Intrazone, episode 106:
Subscribe to The Intrazone podcast + show links and more below.
The Intrazone guest: Drew Madelung presenting the depths of OneDrive administration at a recent 365 EduCon event (2023); note the t-shirt.
Fun, past pic of our guest, Drew Madelung, presenting “Sharing Strategy” back in the days of SharePoint Fest 2019, the previous (previous) name of TechCon365. Did you note the similari-tee?
Links to important on-demand recordings and articles mentioned in this episode:
Hosts, guests, and related products
Drew Madelung | LinkedIn | X | Blog [guest]
Mark Kashman |@mkashman [co-host]
Chris McNulty |@cmcnulty2000 [co-host]
SharePoint | Facebook | @SharePoint | SharePoint community blog | Feedback
Articles and sites
“Unveiling the next generation of OneDrive” by Jason Moore [on the OneDrive community blog].
OneDrive on adoption.microsoft.com
Microsoft Docs – The home for Microsoft documentation for end users, developers, and IT professionals.
Microsoft Tech Community Home
Stay on top of Office 365 changes
Listen to other Microsoft podcasts
Upcoming Events
CollabDays – Bremen 2024 | Feb 9-10, 2024, near Hamburg in Germany
M365 Saturday DC | Feb 10, 2024 (Reston, VA)
Teams Nation 2024 | Feb 21, 2024, 9:00 – 19:30 CET
CollabDays – Calgary 2024 | Feb. 24, 2024
Microsoft Fabric Community Conference | March 26-28, 2024, in Las Vegas, NV
AIIM | April 3-5, 2024, in San Antonio, TX
North American Cloud & Collaboration Summit (NACS) | April 9-11, 2024, in Dallas, TX
Microsoft 365 Conference | April 28 – May 4, 2024, in Orlando, FL
SharePoint Intranet Festival | May 22, 2024 (online)
TechCon365 – Seattle (formerly 365 EduCon) | June 3-7, 2024, in Seattle, WA
Subscribe today!
Thanks for listening! If you like what you hear, we’d love for you to Subscribe, Rate and Review on iTunes or wherever you get your podcasts.
Be sure to visit our show page to hear all episodes, access the show notes, and get bonus content. And stay connected to the SharePoint community blog, where we’ll share more information per episode and guest insights, and take any questions or suggestions from our listeners and SharePoint users (TheIntrazone@microsoft.com).
Get Intrazone anywhere and everywhere
Show page
Apple Podcasts
Google Play Music
Spotify
Pandora
Stitcher
Overcast
TuneIn
RadioPublic
iHeart
RSS
+ Listen to other Microsoft podcasts at aka.ms/microsoft/podcasts.
The Intrazone, a show about the Microsoft 365 intelligent intranet (aka.ms/TheIntrazone)
Firewall considerations for gMSA on Azure Kubernetes Service
This week I spent some time helping a customer with a gMSA environment in which they were running into issues deploying their app. The trouble started when they were trying to figure out why a Kerberos ticket was not being issued for the Windows pod with gMSA configured in AKS. I decided to write this blog post to list some firewall considerations for the different scenarios in which security rules might block the authentication process.
gMSA and its moving parts
To use gMSA on AKS, you must understand that there are many moving parts in play. First, your Kubernetes cluster on AKS comprises both Linux and Windows nodes. Your nodes will all be part of a virtual network, but only the Windows nodes will try to reach the Domain Controller (DC).
The DC itself might be in another virtual network, in the same virtual network, or even outside of Azure. Then you have Azure Key Vault (AKV), in which the secret (username and password) is securely stored. Your AKV should be available only to the proper Windows nodes – no one else.
The problem, though, comes when you have Windows nodes on AKS and DCs running on different networks or even sites, and you need to open the proper ports between the Windows nodes and the Active Directory DC.
Ports to open for Active Directory and gMSA
We have had documentation on which ports to open for Active Directory for a while. That is relatively well known and can be leveraged here.
The thing to understand is that when using gMSA on AKS, not all of these ports need to be opened, and allowing unnecessary traffic might expose you to threats needlessly. For gMSA, there’s no computer or user account being used interactively, so we can narrow it down to the following list:
Protocol and port – Purpose
TCP and UDP 53 – DNS
TCP and UDP 88 – Kerberos
TCP 139 – NetLogon
TCP and UDP 389 – LDAP
TCP 636 – LDAP SSL
Keep in mind this list does not take into consideration ports that your application might need to query AD or perform any other action against the DC. You might need to check for those with the application owner.
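As an illustration, a single outbound NSG rule scoped to just these ports might look like the following Azure CLI sketch. The resource group, NSG name, and DC addresses are hypothetical – replace them with your own:
# Hypothetical names and addresses – adjust to your environment.
az network nsg rule create \
  --resource-group rg-aks \
  --nsg-name nsg-aks-windows-nodes \
  --name Allow-AD-gMSA \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol '*' \
  --source-address-prefixes VirtualNetwork \
  --destination-address-prefixes 10.10.0.4 10.10.0.5 \
  --destination-port-ranges 53 88 139 389 636
Using '*' for the protocol covers both the TCP and UDP entries above with one rule; split it into separate TCP and UDP rules if you prefer tighter scoping.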
Domain Controllers in Azure
You might mitigate a lot of firewall issues by simply adding one (or more) DC to Azure as a VM. By doing that, you have two things that play in your favor:
You keep the authentication process within Azure. Your Windows pods and nodes don’t need to reach an on-premises environment – unless the DC(s) in Azure are down.
You have a better understanding of ports to open between NSGs in Azure rather than traffic between workloads on Azure and DCs on-premises.
On the other hand, you must consider that the DCs in Azure do need to replicate with the DCs on-premises. However, this is a preferable scenario because you know which machines the DCs are, versus workload machines that might scale out, or new workloads/clusters that might be added in the future. At the end of the day, the scope of the ports you open is smaller, which minimizes exposure. Please refer to the documentation to understand the ports required for AD replication as well.
Hopefully this will help you fix any issues you might be having with gMSA caused by blocked traffic. Keep in mind the ports listed above might not be the full list of ports you need to open, but they are the minimal set for proper authentication. As always, let us know in the comments what your thoughts are and whether you have a different scenario.
ADX Continuous Export to Delta Table – Preview
We’re excited to announce that continuous export to Delta table is now available in Preview.
Continuous export in ADX allows you to export data from Kusto to an external table with a periodically run query. The results are stored in the external table, which defines the destination, such as Azure Blob Storage, and the schema of the exported data. This process guarantees that all records are exported “exactly once”, with some exceptions. Continuous export previously supported CSV, TSV, JSON, and Parquet formats.
Starting today, you can continuously export to a delta table.
To define continuous export to a delta table:
Create an external delta table, as described in Create and alter delta external tables on Azure Storage.
(.create | .alter | .create-or-alter) external table TableName [(Schema)] kind = delta (StorageConnectionString ) [with (Property [, …])]
Define continuous export to this table using the commands described in Create or alter continuous export.
.create-or-alter continuous-export continuousExportName [over (T1, T2 )] to table externalTableName [with (propertyName = propertyValue [, …])] <| query
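Putting the two commands together, a minimal end-to-end sketch might look like this. The table names, schema, storage URI, and export interval below are hypothetical – substitute your own:
.create external table DeltaEvents (Timestamp: datetime, DeviceId: string, Reading: real) kind = delta (h@'abfss://events@mystorageaccount.dfs.core.windows.net/delta;impersonate')
.create-or-alter continuous-export ExportEvents over (Events) to table DeltaEvents with (intervalBetweenRuns=5m) <| Events | project Timestamp, DeviceId, Reading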
A few things to note:
If the schema of the delta table isn’t provided when defining the external table, Kusto will try to infer it automatically based on the delta table defined in the target storage container.
If the schema of the delta table is provided when defining the external table and there is no delta table defined in the target storage container, continuous export will create a delta table during the first export.
The schema of the delta table must be in sync with the continuous export query. If the underlying delta table changes, the export might start failing with unexpected behavior.
Delta table partitioning is not supported today.
Read more: Continuous data export – Azure Data Explorer & Real-Time Analytics | Microsoft Learn
As always, we’d love to hear your feedback and comments.
CoPilot for genomics report interpretation!
Contributors – Manoj Kumar, Sam Bhatia, Kelly Johnson, Roberto Llaras, Luis Apellaniz Fontanez, Sean Hamill, Mary Burke, Anukool A, Christina Tillbrook
Imagine having a special friend who is a genomics expert: someone who can read your genetic profile and help you understand it in a simple, easy-to-understand way. Well, that’s exactly what we built during the recent Microsoft Hackathon in September 2024.
Picture this: You visit the doctor’s office, and they give you a bunch of papers filled with big, fancy words and numbers as part of your recent genomic analysis. Whoa, that can be confusing, right?
Physicians today use genomic reports to diagnose and treat a wide range of genetic conditions. These reports typically provide information about an individual’s genetic makeup, including any mutations or variations that may be associated with specific diseases or disease conditions.
Interpreting genomic reports can be complex and requires specialized training and expertise. Physicians typically work with genetic counselors and other specialists to help interpret and understand the information contained in these reports. In general, physicians will look for any genetic mutations or variations that are known to be associated with the patient’s symptoms or family history. They will also consider the patient’s overall health, medical history, and other factors when interpreting genomic reports.
The interpretation of genomic reports is constantly evolving as new research and discoveries are made in the field of genetics. As a result, physicians must stay up to date with the latest advances in genomics and work closely with other experts to ensure that they are providing the most accurate and effective care possible.
That’s where the CoPilot for genomic report interpretation jumps in to save the day! It’s like having a secret decoder that turns all those complex things into simple stories just for you.
What is a CoPilot?
A copilot is an intelligent virtual assistant that uses large language models to facilitate natural, human-like conversational interactions. It can assist users in various tasks and decision-making processes across multiple domains within an enterprise environment. By leveraging these models, AI copilots have the capability to understand, analyze, and process vast amounts of data. Microsoft, for instance, has rebranded all of its AI-powered companions as Copilot, adding specialized monikers to each of them. These AI copilots can provide support in areas such as coding, writing, learning, and more. They are designed to understand context, provide relevant suggestions, and even generate content, making them a valuable tool in many professional settings.
This copilot takes the report or genomic impression from tertiary analysis as input and creates a narrative on the impression in layman’s terms for non-experts to understand.
Watch the following video to understand how it actually works.
The following is the architecture for this hackathon project.
The Health copilot was a Power Apps application that different personas interacted with. The reports were stored in Azure Storage, and the reference data was available in a SharePoint list. Power Automate was used to connect to Azure Form Recognizer to convert the reports from PDF to text. This text was then sent to the Azure OpenAI GPT-3.5 model with a specific prompt to generate a response. Text Analytics for Health was also used to highlight medical terms in the report for easy reading.
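Under the hood, the call Power Automate makes to Azure OpenAI looks roughly like the following sketch; the resource name, deployment name, and key are hypothetical placeholders:
# Hypothetical resource and deployment names – substitute your own.
curl "https://<your-resource>.openai.azure.com/openai/deployments/<your-gpt-35-deployment>/chat/completions?api-version=2023-05-15" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_KEY" \
  -d '{
    "messages": [
      {"role": "system", "content": "<the report-generation prompt shown below>"},
      {"role": "user", "content": "<text extracted from the genomics report by Form Recognizer>"}
    ],
    "temperature": 0.2
  }'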
The following prompt was sent to the large language model in order to produce a consistent and understandable summary report at a 5th-grade reading level.
#### **Introduction**
Please create a report that will help patients with their diagnosis based on the lab and genomics results. Use simple language to explain the findings and what they mean. Be sure to explain any scientific terms you use in a glossary at the end of the report.
#### **Section Breakdown**
1. **Your Test**
– Describe the type of test carried out in a manner easy for a patient to understand. Mention that a small sample of their blood was taken for the test.
2. **What We Found**
– Summarize the findings, including lab parameters seen in the report if any. Briefly explain what was looked at in the genes without using technical jargon.
3. **What It Means**
– Break down what the findings mean in simple terms, focusing on mapping lab results to what was found in the genetic tests.
4. **Your Health Team’s Advice**
– Provide advice on potential lifestyle changes like diet and physical activity in a patient-friendly way, encouraging healthy habits rather than instilling fear.
5. **Questions You Might Have**
– Anticipate and answer a couple of basic questions a patient might have, framing it as a collaborative effort to maintain health.
6. **Glossary**
– Define any technical terms used in the report in simple, easy-to-understand language.
#### **Guidance on Tone**
– Maintain a friendly, reassuring tone throughout the report.
– Avoid using scary words or phrases that might cause anxiety.
– Use encouraging language that promotes a positive outlook and emphasizes the proactive steps the patient can take to maintain their health.
#### **Instructions**
1. Start with a friendly greeting to make the patient feel seen and respected.
2. For each section, provide clear, concise information without overwhelming the reader with too much detail.
3. Use analogies or comparisons where possible to explain complex concepts in a way that a patient can understand.
4. Include colorful illustrations or diagrams to help explain the concepts visually (if possible).
5. Review the report to ensure it is free of medical jargon, and replace any complex terms with simpler alternatives.
6. Conclude with a positive note, reassuring the patient that their health team is there to support them.
#### **Review**
After drafting the report, please review it to ensure:
– The language is appropriate for a 5th-grade reading level.
– The report conveys all necessary information without being too detailed or technical.
– It maintains a friendly, positive tone throughout.
Thank you for helping to create a report that respects and empowers our young patients by providing them with the information they need in a way they can understand!
—
[Start of text]
Text extracted from complex genomics reports
[End of text]
Sample output from the model using the above prompt.
Dear Sarah Taylor,
We hope this report finds you well. We have conducted a test to help diagnose your condition and provide you with valuable information. Below, we have summarized the findings and explained what they mean in simple terms. If you have any questions, please don’t hesitate to reach out to your health team.
Section 1: Your Test
We conducted a test called FoundationOne®Liquid, which involved taking a small sample of your blood. This test uses next-generation sequencing to identify genomic alterations related to your condition.
Section 2: What We Found
Based on the test results, we found some biomarker and genomic findings related to your diabetes type 1. Autoantibodies to other antigens have been reported, but they occur infrequently and are not used for prediction. We also found some genomic alterations, including HLA-DR3 & HLA-DR4 combined, CTLA4 +49G>A and CT60, and IL2RA rs2104286, rs12722489, and rs11594656. These variations were found in a certain percentage of the population.
Section 3: What It Means
The presence of HLA-DR3 and HLA-DR4 alleles is associated with an increased risk of type 1 diabetes. Treatment strategies for type 1 diabetes typically involve insulin therapy, diet management, regular exercise, and monitoring of blood glucose levels. Immunomodulatory agents can also be used to prevent or delay the onset of type 1 diabetes.
For patients with CTLA4 alterations, immunomodulatory agents can be used to prevent or delay the onset of type 1 diabetes. These agents work by modulating the immune system to prevent the destruction of insulin-producing beta cells in the pancreas. Other potential treatment strategies include standard therapies for type 1 diabetes, such as insulin therapy, diet management, regular exercise and monitoring of blood glucose levels.
Patients with IL2RA alterations can also benefit from immunomodulatory agents to prevent or delay the onset of type 1 diabetes. These agents work by modulating the immune system to prevent the destruction of insulin-producing beta cells in the pancreas.
Section 4: Your Health Team’s Advice
We advise you to work closely with your healthcare provider to develop a personalized plan for managing your risk of developing type 1 diabetes. These include insulin therapy, diet management, regular exercise, and monitoring your blood glucose levels. These proactive steps can contribute to better overall health and help you maintain stable blood sugar levels. Your healthcare provider can also discuss the potential use of immunomodulatory agents and other treatment options based on your specific situation. Remember, your health team is here to support you every step of the way.
Section 5: Questions You Might Have
You may have some questions about your diagnosis and the recommended lifestyle changes. We anticipate a few common questions you might have:
Q: What lifestyle changes can I make to manage my risk of developing type 1 diabetes?
A: Maintaining a healthy diet, engaging in regular physical activity, and monitoring your blood glucose levels can help manage your risk of developing type 1 diabetes. It’s important to work with your healthcare provider to develop a personalized plan that suits your needs.
Q: Are there any experimental treatments available for type 1 diabetes?
A: Cell transplantation is an experimental treatment that involves transplanting insulin producing cells from a donor pancreas into a patient with type 1 diabetes. However, this procedure is not widely available and should be discussed with your healthcare provider.
Q: How can I manage my diet effectively?
A: Your healthcare team can provide you with a personalized diet plan that takes into account your specific needs and preferences. They can guide you on portion sizes, food choices, and meal timings to help you maintain stable blood sugar levels.
Q: What types of physical activity are recommended?
A: Engaging in regular physical activity, such as walking, swimming, or cycling, can have a positive impact on your overall health and help manage diabetes. Your healthcare team can provide guidance on the appropriate level of activity for you and any precautions you should take.
Section 6: Glossary
To ensure you understand all the terms used in this report, we have included a glossary at the end. It provides simple, easy-to-understand definitions of any technical terms used.
– Autoantibodies: Antibodies produced by the immune system that mistakenly target and attack the body’s own cells or tissues.
– Genomic alterations: Changes or variations in the DNA sequence that can affect the function of genes.
– HLA-DR3 & HLA-DR4: Specific genetic variations associated with an increased risk of type 1 diabetes.
– CTLA4: A gene involved in regulating the immune response.
– IL2RA: A gene that encodes a protein involved in the immune response.
– Immunomodulatory agents: Medications that modify or regulate the immune system’s response.
– Insulin therapy: Treatment involving the use of insulin to manage blood glucose levels in individuals with diabetes.
– Diet management: Making healthy food choices and following a balanced diet to maintain blood glucose levels.
– Regular exercise: Engaging in physical activity to improve overall health and manage blood glucose levels.
– Monitoring of blood glucose levels: Regularly checking blood glucose levels to ensure they are within a healthy range.
We hope this report helps you better understand your diagnosis and the steps you can take to manage your health. Remember, your health team is always available to answer any further questions you may have. Stay positive and take proactive steps towards a healthier lifestyle.
Best regards,
[Your Name]
Semantic Kernel introduces SQLPlugin and UploadPlugin for efficient data insights.
Semantic Kernel Bot in-a-box
The era of information has made it incredibly valuable to interact with and gain insights from data. With the emergence of advanced AI technologies, integrating natural language processing with data has opened up new possibilities for data interaction. Semantic Kernel is a technology that combines artificial intelligence with software development to bring together large language models and software functionality. It is designed to show how AI can work together with human expertise and technical knowledge to create a solution where we can use intelligent tools that blend seamlessly into our digital world.
In this blog post, we are thrilled to discuss two new features for Semantic Kernel’s AI-powered assistant – SQLPlugin and UploadPlugin. SQLPlugin uses SQL to extract insights that can transform the way professionals interact with data, while UploadPlugin lets users upload documents and retrieve knowledge. These cutting-edge advancements are transforming how businesses operate, making them more efficient and productive.
1. Architecture
The process for message flow is as follows:
End-users connect to a messaging channel where your bot is published, for example, the web, a Power BI dashboard, or Teams.
Messages are processed through Azure Bot Services, which communicates with a .NET application running on App Services.
The .NET application has a Semantic Kernel Stepwise Planner at its core, which elaborates a series of steps to process the user’s request and then executes it.
Each plan step is formulated through Azure OpenAI and then executed against Cognitive Search (traditional RAG pattern) or Azure SQL (structured data RAG).
Cognitive Search simplifies data retrieval from multiple cloud sources by ingesting and indexing the data, while Azure SQL contains customer data from the AdventureWorksLT sample.
Azure OpenAI decides which data source each question gets routed to. Questions may also span multiple data sources.
2. Pre-requisites
For running locally:
Install .NET;
Install Bot Framework Emulator;
For deploying to Azure:
Install Azure CLI
Install Azure Developer CLI
Log into your Azure subscription
azd auth login
3. Recommended deployment parameters
For this tutorial, you will deploy Document Intelligence, SQL, and Cognitive Search resources using the Semantic Kernel Bot-in-a-box accelerator. This will ensure a seamless and efficient setup process with optimal deployment parameters.
Azure_location = "East US 2"
gptModel = "gpt-4"
gptVersion = "1106-preview"
deployDocIntel = true
deploySql = true
deploySearch = true
publicNetworkAccess = true
4. Deploy to Azure
Clone this repository locally:
git clone https://github.com/Azure/AI-in-a-Box
Then, change into the semantic-kernel-bot-in-a-box directory (the path below follows the repository layout linked under Resources):
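cd AI-in-a-Box/gen-ai/semantic-kernel-bot-in-a-box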
Deploy resources:
Run the following command in your terminal:
azd up
Add the parameters following the recommended values above to enable the SQL assistant and document upload for search using retrieval-augmented generation.
5. Use Case 1: SQL-powered GenAI assistant.
Consider the task of extracting the top 10 products with their prices and categories. With the SQL Assistant plugin, an analyst can do this in plain English: the GPT-4 model interprets the request and translates it into SQL, providing a structured representation of the data sourced directly from the database.
Select your Azure Bot resource from the menu on the Azure portal for this first use case.
Test the bot by asking some sample questions:
Can you provide the top 10 list of all products, their prices, and the categories they belong to?
How much in total sales do we have?
What are my biggest customers?
What are my top-selling products?
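For the first question above, the SQL generated by the model might look something like this illustrative T-SQL against the AdventureWorksLT sample schema (the exact query the model produces will vary from run to run):
-- Illustrative only: top 10 products with price and category, ordered by price.
SELECT TOP 10 p.Name AS Product, p.ListPrice, c.Name AS Category
FROM SalesLT.Product AS p
JOIN SalesLT.ProductCategory AS c ON p.ProductCategoryID = c.ProductCategoryID
ORDER BY p.ListPrice DESC;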
6. Use Case 2: AI Search retrieval-augmented generation.
Envision a scenario where users effortlessly extract valuable information from uploaded documents. Integrated with GPT-4, a document processing system adeptly analyzes diverse document types, including PDFs. Simultaneously, the system collaborates with HR processes, extracting insights from employee data for strategic workforce planning. This seamless integration enhances efficiency and reduces manual effort in extracting information from various document formats.
To retrieve the information, select the document you wish to use.
After uploading, type in your prompt.
Sample prompt :
How does HRM contribute to achieving an organization’s strategic goals through its functional objectives?
7. Embed the web chat.
In the context of communication applications, a channel refers to a means of establishing a connection between a bot and its users. By registering with Azure, a bot can leverage channels to facilitate effective communication with its users.
To configure a bot to connect to a channel, complete the following steps:
Select the bot that you want to configure.
In the left pane, select Channels under Settings.
Select the channel icon you want to add to your bot in the right pane. You may need to scroll down to see the list of all Available Channels.
Embed the web chat control on a web page.
Under Settings, select Channels. Then select Web Chat.
The Web Chat page will open. Select the Default Site from the list of Sites.
Copy the first Secret key and the Embed code.
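The embed code is a small iframe snippet along these lines, where the bot handle and secret key are placeholders for the values you copied from the portal:
<iframe src="https://webchat.botframework.com/embed/<YOUR_BOT_HANDLE>?s=<YOUR_SECRET_KEY>" style="min-width: 400px; width: 100%; min-height: 500px;"></iframe>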
8. Integrate the Bot in a Power BI Dashboard
To enrich your reports and dashboards, you can easily incorporate HTML content by either manually entering the embedded code or copying and pasting it from popular websites like YouTube, Twitter, and embed.ly. This feature provides a convenient way to enhance your reports and dashboards’ visual appeal and functionality.
From the Add tile window, select Web content > Next.
Add information to the Add web content tile window:
To display a title above the tile, select Display title and subtitle and enter a Title and an optional Subtitle.
Enter the embed code. In this example, we paste the Web Chat embed code copied in the previous section.
Select Apply.
On the dashboard, resize and move the web content tile as needed.
Congratulations on exploring the Semantic Kernel Bot in-a-box, the innovative solution revolutionizing data interaction. This advanced platform features the SQL-powered GenAI assistant and the document upload plugin, which can easily manage complex analyses and document processing. Azure’s infrastructure ensures a secure and scalable environment that’s easy to set up. With Power BI dashboards, you get real-time insights and analytics. The Semantic Kernel Bot in-a-box makes data interaction more user-friendly and accessible, and it’s a great example of how AI can transform how we work with data.
9. Resources
Create AI agents with Semantic Kernel | Microsoft Learn
How to quickly start with Semantic Kernel | Microsoft Learn
Azure OpenAI Service models – Azure OpenAI | Microsoft Learn
AI-in-a-Box/gen-ai/semantic-kernel-bot-in-a-box at main · Azure/AI-in-a-Box (github.com)
Continuous export in ADX allows you to export data from Kusto to an external table with a periodically run query. The results are stored in the external table, which defines the destination, such as Azure Blob Storage, and the schema of the exported data. This process guarantees that all records are exported “exactly once”, with some exceptions. Continous export previously supported CSV, TSV, JSON and Parquet formats.
Starting today, you can continuously export to a delta table.
To define continuous export to a delta table:
Create an external delta table, as described in Create and alter delta external tables on Azure Storage.
(.create | .alter | .create-or-alter) external table TableName [(Schema)] kind = delta (StorageConnectionString ) [with (Property [, …])]
Define continuous export to this table using the commands described in Create or alter continuous export.
.create-or-alter continuous-export continuousExportName [over (T1, T2 )] to table externalTableName [with (propertyName = propertyValue [, …])] <| query
Few things to note:
If the schema of delta table while defining the external table isn’t provided, Kusto will try to infer it automatically based on the delta table defined in the target storage container.
If the schema of delta table while deining the external table is provided and there is no delta table defined in the target storage container, continous export will create a delta table during the first export.
The schema of the delta table must be in sync with the continuous export query. If the underlying delta table changes, the export might start failing with unexpected behavior.
Delta table partitioning is not supported today.
Read more : Continuous data export – Azure Data Explorer & Real-Time Analytics | Microsoft Learn
As always, we’d love to hear your feedback and comments.
Microsoft Tech Community – Latest Blogs –Read More
ADX Continuous Export to Delta Table – Preview
We’re excited to announce that continuous export to Delta table is now available in Preview.
Continuous export in ADX allows you to export data from Kusto to an external table with a periodically run query. The results are stored in the external table, which defines the destination, such as Azure Blob Storage, and the schema of the exported data. This process guarantees that all records are exported “exactly once”, with some exceptions. Continous export previously supported CSV, TSV, JSON and Parquet formats.
Starting today, you can continuously export to a delta table.
To define continuous export to a delta table:
Create an external delta table, as described in Create and alter delta external tables on Azure Storage.
(.create | .alter | .create-or-alter) external table TableName [(Schema)] kind = delta (StorageConnectionString ) [with (Property [, …])]
Define continuous export to this table using the commands described in Create or alter continuous export.
.create-or-alter continuous-export continuousExportName [over (T1, T2 )] to table externalTableName [with (propertyName = propertyValue [, …])] <| query
Few things to note:
If the schema of delta table while defining the external table isn’t provided, Kusto will try to infer it automatically based on the delta table defined in the target storage container.
If the schema of delta table while deining the external table is provided and there is no delta table defined in the target storage container, continous export will create a delta table during the first export.
The schema of the delta table must be in sync with the continuous export query. If the underlying delta table changes, the export might start failing with unexpected behavior.
Delta table partitioning is not supported today.
Read more : Continuous data export – Azure Data Explorer & Real-Time Analytics | Microsoft Learn
As always, we’d love to hear your feedback and comments.
Microsoft Tech Community – Latest Blogs –Read More