Category Archives: Microsoft
Graph Semantics for Kusto are Generally Available
Graph semantics is a new feature of Kusto that allows users to model their data as graphs and perform graph queries and analytics using the Kusto Query Language (KQL). It supports various graph types, such as directed, undirected, weighted, and labelled graphs, and provides a set of graph operators and functions.
Graph semantics can help users gain deeper insights into complex datasets that have inherent or implicit relationships among the data entities. It can also help users discover hidden patterns, anomalies, and communities in their data, as well as measure the importance and influence of different entities. Examples of use cases that can benefit from graph semantics are social network analysis, fraud detection, recommendation systems, cybersecurity, and more.
We are proud to announce that the Kusto Graph Semantics are now generally available and ready for production use. We encourage all users to integrate this powerful tool into their data analysis workflows. Whether you are looking to enhance your understanding of complex datasets or uncover hidden patterns in your data, the graph semantics extension offers a robust framework for achieving these goals.
How to get started with the graph semantics in Kusto?
Users can learn more about graph semantics by reading our documentation or by playing the Kusto Detective Agency game. Additionally, we recommend following our graph samples repository on GitHub, which we use to describe scenarios in depth.
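To give a flavour of what a graph query looks like, here is a minimal sketch that runs a KQL graph query from Python using the azure-kusto-data package. The cluster URI, database name, and the inline sample data are hypothetical placeholders for illustration, not part of the announcement:

# pip install azure-kusto-data
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Hypothetical cluster and database; replace with your own.
cluster_uri = "https://help.kusto.windows.net"
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster_uri)
client = KustoClient(kcsb)

# Build a small in-memory graph with make-graph and traverse it with graph-match.
query = """
let employees = datatable(name: string, role: string)
[
    "Alice", "manager",
    "Bob", "engineer",
    "Carol", "engineer"
];
let reports = datatable(source: string, target: string)
[
    "Bob", "Alice",
    "Carol", "Alice"
];
reports
| make-graph source --> target with employees on name
| graph-match (employee)-[reportsTo]->(manager)
    where manager.role == "manager"
    project employee = employee.name, manager = manager.name
"""

response = client.execute("MyDatabase", query)
for row in response.primary_results[0]:
    print(row["employee"], "reports to", row["manager"])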
Now, have fun with Graphs in Kusto.
PS: Are you ready for some fun? Launch your Kusto Explorer and pick any database… Do you want to discover the secrets behind the “Show Entities Graph” option?
Introducing the Community News Desk with essential content from Microsoft 365 Community Conference
We are excited to announce the launch of the Community News Desk on the Microsoft Tech Community, where you will find community experts and calls, content from events and community members, and shows like Mondays at Microsoft to help you stay up to date on all that is happening across Microsoft products. We are launching our News Desk with our first set of content from the Microsoft 365 Community Conference. With over 3k attendees and experts from inside and outside of Microsoft, the content we delivered there will help you build your skills and get all you can from your Microsoft 365 services.
We have so many great insights to offer with our lineup of sessions from the M365 Community Conference! With new content coming each week, you can skill up from anywhere:
OPENING KEYNOTE | “The Age of Copilots” with Jeff Teper (President of Collaboration Apps and Platforms) |
GENERAL SESSIONS
“Reshaping productivity with Copilot for Microsoft 365” with Bobby Kishore and Dan Parish
“The future of work with Microsoft Teams” with Sumi Sing, Manik Gupta, and Mark Swift
“Transforming Communications with AI” with Steve Clayton [part of the Transformation track]
“Content Management and Collaboration for the AI Era” with Zach Rosenfield, Lincoln DeMaris, Melissa Torres, Sesha Mani, and Ashu Rawat
“What’s new and next for Microsoft Viva and the employee experience” with Kirk Gregersen, Kristi Kelly, Michael Holste, and Nick DeFalco
CLOSING KEYNOTE | “Delivering Business Value and User Satisfaction in the Era of AI” with Karuana Gatimu
We have also made available the first of our brand new content from our Transformation track at the conference that empowers communications and HR professionals with insights on how to improve the employee experience. Our first sessions include:
The Transformation track for communicators, HR, and business stakeholders in workplace experience [sessions] + full details
Dive deep into strategies, insights, and best practices to modernize communications and engage employees. Develop an actionable plan that aligns stakeholders to drive the transformation that matters to your organization, whether you want to nurture a high-performance organization, foster culture and diversity, accelerate innovation, or ensure successful adoption of technologies like Copilot and AI. Take away prescriptive product guidance from experts, inspiration from thought leaders, and real-world learnings from peers across industries.
“Planning a corporate communications strategy with SharePoint News and Viva Amplify” with Maeneka Grewal, Naomi Moneypenny, and Dave Cohen
“Leveraging AI for communications” with Dan Holme
“How Microsoft is transforming communications & engagement” with John Cirone
There's more to come on this important topic right here over the coming weeks.
Our launch also includes the expansion of the already successful Microsoft 365 and Power Platform YouTube channel to serve even more parts of our community by renaming it Microsoft Community Learning. Subscribe to this channel – and don’t forget to turn on notifications – to get our weekly drops of skilling content, see all our community calls and learn from community members like yourselves.
This is the continuation of the work we’ve been doing in our Global Community Initiative and we’re so excited about all we will be bringing to you through this channel. Now do these three things:
Subscribe to this blog to stay up to date on all the Microsoft Community has to share
Subscribe to our Microsoft Community Learning YouTube channel
Tell your friends so they can discover more with you!
Stay tuned about how you can submit YOUR content to our channels and our partnership with Microsoft Learn through our Community Skills Challenge in coming weeks! We hope you are as excited as we are about the Community News Desk and we look forward to highlighting all that’s happening across our space.
To-Do-tasks gone
Hi community
I'm devastated.
Without any obvious reason, my To Do list suddenly looks the way it did some weeks ago. I've worked with it and assembled (and deleted) numerous new tasks since then – but it's all gone. How is that even possible?
Would be grateful for any hint.
Thx
Christian
Booking notifications sent automatically to “Deleted Items”
For some reason, this week we realized that our notifications from our Booking page are not being sent to us any more.
After some review, we realized that they’re being sent, but automatically delivered into our “Deleted Items”, instead of our main Inbox.
Migrate from Sybase IQ to SQL Server
Can SSMA be used to do the migration?
If not, what is the best way to migrate from Sybase IQ to SQL Server?
Contacts related to Organizations, Schools, and Employers in Microsoft for Non-Profit
One of the requests from a client is to relate Contacts to an Employer, School, and/or Organizations. We are debating between using stand-alone custom table(s) or just the OOTB Account table. We know Fundraising & Engagement already has this concept built in but only for Organizations. We want to make sure we are keeping in mind native functionality when deciding on whether or not to use OOTB or our own custom table.
For more context:
We would be leveraging the msnfp_accounttype choice column on the Account table, and extending it from the existing choice values to also include School and Organization.
As each Contact could have a different "Accounts" for each of School/Organization/Employer, we will be configuring custom lookups to hold each of these potential distinct relationships between a single Contact and one or more Accounts.
We’re also curious to know if there is anything in place that we’re missing or anything in the future pipeline for MC4NP / Fundraising & Engagement that could impact our decision.
Executing a subscription to run report gives error “Authentication failed”
We have configured a subscription to automatically run and email a report and every time that it tries to execute, we receive the error “Authentication failed because the remote party has closed the transport stream.”
We have researched the error online and tried suggested solutions, but still are receiving the error.
Subtracting Hours
I’m having trouble finding a solution to my problem.
I have a table with the following Headers NAME, DATE and HOUR METER VALUE.
There are about 1800 entries from about 12 different names. In the Hour Meter column, the entries are the hours shown on an engine's run-time meter. So for instance:
Vehicle 1 1/3/2024 10015
Vehicle 1 1/2/2024 10013
Vehicle 2 1/2/2024 955
Vehicle 1 1/1/2024 10008
Vehicle 2 1/1/2024 945
What I need to do is find the difference between each entry and total them to find out total run time of the engines.
Vehicle 1 = 7 hours
Vehicle 2 = 10 Hours
Thanks for looking.
Configuring the archive period for tables in bulk for Data Retention within Log Analytics Workspace
How this blog helps in configuring the archive period for tables in bulk for data retention in a Log Analytics Workspace:
Simplified Data Archival: Implementing archival within Log Analytics Workspace provides a straightforward and integrated solution for retaining log data over extended periods. This ensures compliance with regulatory requirements, making it easier for organizations to meet data retention mandates without resorting to complex external storage solutions.
Efficient Data Management: The article’s primary focus on mass applying archival to multiple tables within Log Analytics Workspace streamlines the process of managing a diverse range of log data. This efficiency is invaluable for organizations dealing with large volumes of logs from various sources, simplifying the management of data retention policies and significantly reducing the administrative overhead.
Cost and Complexity Optimization: By leveraging Log Analytics Workspace for archival, organizations can maintain a balance between cost-effective storage and data accessibility. This approach eliminates the need for more complex and potentially costly alternatives like Blob Storage and Azure Data Explorer (ADX) for archival, thus reducing both operational complexity and storage expenses. It provides a practical solution for long-term data retention while optimizing both cost and management efforts.
Step 0: Default approach to perform archival at a table level in Log Analytics Workspace
Navigate to Log Analytics Workspace > Table > Manage Table
To replicate the above for multiple tables, use the PowerShell commands below.
Step 1: Fetch the list of tables on which archiving is required, using this KQL query to list the active tables:
search * | distinct $table
Step 2: Export the KQL result set to CSV using the export functionality.
Step 3: Open the exported CSV with Excel.
Step 4: Rename the "$table" column to "Table".
Step 5: Rename the Excel file as well, from "query_data" to "SentinelTable", so that it matches the file name used in the command below.
Step 6: Open Cloud Shell in the Azure portal and upload the new file from your local machine.
Step 7: Check the uploaded file using the "ls" command.
Step 8: Once the file upload completes, run the following PowerShell command in Cloud Shell:
Import-CSV "SentinelTable.csv" | foreach {Update-AzOperationalInsightsTable -ResourceGroupName sentineltraining -WorkspaceName sentineltrainingworkspace -TableName $_.Table -TotalRetentionInDays 2556}
Before running the command, make sure to update:
* -TotalRetentionInDays as required for your scenario.
* The resource group name and the Log Analytics workspace name.
Step 9: Check the archive settings of the Log Analytics tables that were updated.
Step 10: The exported tables now have the updated archive period, while the remaining tables keep the default retention from the Log Analytics settings.
Navigation: Log Analytics Workspace > Settings > Tables > Archive Period.
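If you prefer to verify the change programmatically rather than in the portal, a sketch along these lines could be used. This assumes the azure-identity and azure-mgmt-loganalytics Python packages; the resource group and workspace names are the sample values from the steps above, and the subscription ID is a placeholder:

# pip install azure-identity azure-mgmt-loganalytics
from azure.identity import DefaultAzureCredential
from azure.mgmt.loganalytics import LogAnalyticsManagementClient

# Sample values from the walkthrough above; replace with your own.
subscription_id = "<subscription-id>"
resource_group = "sentineltraining"
workspace = "sentineltrainingworkspace"

client = LogAnalyticsManagementClient(DefaultAzureCredential(), subscription_id)

# Print the interactive retention and the total (archive-inclusive) retention for each table.
for table in client.tables.list_by_workspace(resource_group, workspace):
    print(table.name, table.retention_in_days, table.total_retention_in_days)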
Conclusion:
1. This blog covers the default, table-level approach to configuring archival for long-term storage within a Log Analytics workspace.
2. It also covers the steps to scale archival across multiple tables, which is a key production requirement.
3. All the steps can be implemented in a lab environment, and the archive period can be observed in the table blade of the Log Analytics workspace.
User self-service BitLocker recovery key access with Intune Company Portal website now available
By: Aasawari Navathe – Sr. Product Manager | Microsoft Intune
With the May (2405) service release of Microsoft Intune, users are now able to access the BitLocker recovery key of their Intune enrolled devices using the Intune Company Portal website. This enables users to self-resolve, rather than contacting their helpdesk, when they’re locked out of their machines and need to access their BitLocker recovery key.
What are the prerequisites?
A Windows device enrolled into your Intune tenant
Ability to log into the Intune Company Portal website from a device (doesn’t need to be enrolled)
Permission to view your BitLocker recovery key (if one exists in Microsoft Entra ID)
We’re working to add the ability to view the BitLocker recovery key from the native Company Portal apps on other platforms like Apple iOS/iPadOS and macOS. The Intune Company Portal website can be used on other platforms.
How does this work?
After opening the Intune Company Portal website, navigate to the Devices node, select the enrolled Windows device, and click “Get recovery key” under Device Encryption. If there are multiple recovery keys found, click “Show recovery key” under the one with the key ID that is needed. Users may then use this recovery key to complete the recovery process on their enrolled Windows device without reaching out to the helpdesk.
Features for BitLocker recovery key access in Microsoft Entra ID
We heard the customer feedback on what level of control IT admins need within their organization for this scenario. While Intune helps configure policy to define the escrow of BitLocker recovery keys, these keys are stored within Entra ID. There are three capabilities within Entra ID that are helpful to use in conjunction with self-service BitLocker recovery key access for users.
Tenant-wide toggle to prevent recovery key access for non-admin users
This setting is located in the Entra ID > Devices > Device settings.
This setting determines if users can self-service to recover their BitLocker key(s). The default value is ‘No’ which allows all users to recover their BitLocker key(s). ‘Yes’ restricts non-admin users from being able to see the BitLocker key(s) for their own devices if there are any. Learn more: Manage devices in Microsoft Entra ID using the Microsoft Entra admin center.
In the event that the admin has restricted recovery key access for users, users will receive the message “Recovery key could not be retrieved” in the Company Portal website.
Auditing for recovery key access
Audit Logs within the Entra ID portal show the history of activities within the tenant. Any user recovery key accesses made through the Company Portal website will be logged in Audit Logs under the Key Management category as a “Read BitLocker key” activity type. The user’s User Principal Name and additional info such as key ID is also logged.
Learn more: Learn about the audit logs in Microsoft Entra ID.
Entra Conditional Access policy requiring a compliant device to access BitLocker Recovery Key
With Conditional Access policy (CA), you can restrict the access to certain corporate resources if a device is not compliant with the “Require compliant device” setting. If this is set up within your organization, and a device fails to meet the Compliance requirements configured in the Intune Compliance policy, that device cannot be used to access the BitLocker Recovery Key as it is considered a corporate resource which is access controlled by CA.
In this case, you may see an error like below which suggests using a compliant device for recovery key access.
With the 2405 release, get started on this new capability for user self-service BitLocker recovery key access with the Intune Company Portal website!
Let us know your thoughts, or if you have any questions, by leaving a comment below or by reaching out to us on X @IntuneSuppTeam.
LLM based development tools: PromptFlow vs LangChain vs Semantic Kernel
Introduction
Prerequisites
Azure OpenAI Service – the LLM we will be using for our simple application
Visual Studio Code – IDE
Refer to the blog's GitHub repository
What are they?
Semantic Kernel: an open-source SDK that allows you to orchestrate your existing code and more with AI.
LangChain: a framework to build LLM applications easily, giving you insight into how the application works
PromptFlow: a set of developer tools that helps you build end-to-end LLM applications. Using PromptFlow, you can take your application from an idea to production.
Semantic Kernel
Kernel: the kernel is at the center stage of your development process as it contains the plugins and services necessary for you to develop your AI application.
Planners: special prompts that allow an agent to generate a way to complete a task such as using function calling to complete a task.
Plugins: they allow you to give your copilot skills, using both code and prompts
Memories: in addition to connecting your application to LLMs and creating various tasks, Semantic Kernel has a memory feature to store context and embeddings giving additional information to your prompts.
1. Install the necessary libraries using: pip install semantic-kernel==0.9.8b1 openai
2. Add your keys and endpoint from .env to your notebook.
3. Create a services.py module that defines the available services as an enum (this is the Service we import in the next step):
"""
This module defines an enumeration representing different services.
"""
from enum import Enum


class Service(Enum):
    """
    Attributes:
        OpenAI (str): Represents the OpenAI service.
        AzureOpenAI (str): Represents the Azure OpenAI service.
        HuggingFace (str): Represents the HuggingFace service.
    """
    OpenAI = "openai"
    AzureOpenAI = "azureopenai"
    HuggingFace = "huggingface"
4. Create a new Kernel to host your application, then import Service, which will allow you to add your LLM to the application.
# Import the Kernel class from the semantic_kernel module
from semantic_kernel import Kernel
from services import Service
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

# Create an instance of the Kernel class
kernel = Kernel()

# Select a service to use for this notebook (available services: OpenAI, AzureOpenAI, HuggingFace)
selectedService = Service.AzureOpenAI

# Set the deployment name, API key, and endpoint variables (loaded from .env earlier)
deployment = model
api_key = api_key
endpoint = azure_endpoint

# Set the service_id variable to "default"
service_id = "default"

# Add an instance of the AzureChatCompletion class to the kernel's services
kernel.add_service(
    AzureChatCompletion(service_id=service_id, deployment_name=deployment, endpoint=endpoint, api_key=api_key),
)
5. Next, we will create and add our plugin. We have the plugin folder TranslatePlugin; within it is our Swahili plugin, with its config and prompt txt files which guide the model on how to perform its task. Once imported, we invoke the Swahili function in our application.
# Set the directory path where the plugins are located
plugins_directory = ".prompt_templates_samples"

# Add the TranslatePlugin to the kernel and store the returned plugin functions in the translateFunctions variable
translateFunctions = kernel.add_plugin(parent_directory=plugins_directory, plugin_name="TranslatePlugin")

# Retrieve the Swahili translation function from the translateFunctions dictionary and store it in the swahiliFunction variable
swahiliFunction = translateFunctions["Swahili"]

# Invoke the 'swahiliFunction' with the specified parameters and print the result
result = await kernel.invoke(swahiliFunction, question="what is the WiFi password", time_of_day="afternoon", style="professional")
print(result)
6. The output will be the requested translation.
LangChain
Model I/O: this is where you can bring in your LLM and format its inputs and outputs
Retrieval: In RAG applications, this component specifically helps you load your data, connect with vector databases and transform your documents to meet the needs of your application.
Other higher-level components
Tools: allow you to create integrations with external services and applications
Agents: these act as a guide, deciding which step to take next
Chains: these are sequences of calls linking various components to create LLM apps
1. Install the necessary libraries (the snippets below also use langchain-openai and azure-identity): pip install langchain langchain-openai openai azure-identity
2. Log in to Azure CLI using az login --use-device-code and authenticate your connection.
3. Add your keys and endpoint from .env to your notebook, then set the environment variables for your API key and type for authentication.
import os
from azure.identity import DefaultAzureCredential
# Get the Azure Credential
credential = DefaultAzureCredential()
# Set the API type to `azure_ad`
os.environ["OPENAI_API_TYPE"] = "azure_ad"
# Set the API_KEY to the token from the Azure credential
os.environ["OPENAI_API_KEY"] = credential.get_token("https://cognitiveservices.azure.com/.default").token
4. Create your model class and configure it to interact with Azure OpenAI
# Import the necessary modules
from langchain_core.messages import HumanMessage
from langchain_openai import AzureChatOpenAI
model = AzureChatOpenAI(
openai_api_version=AZURE_OPENAI_API_VERSION,
azure_deployment=AZURE_OPENAI_CHAT_DEPLOYMENT_NAME
)
5. Use ChatPromptTemplate to curate your prompt
# Import the necessary modules
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# Create a ChatPromptTemplate object with messages
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that translates tasks into Kiswahili. Follow these guidelines:\n"
            "The translation must be accurate and culturally appropriate.\n"
            "Use the {time_of_day} to determine the appropriate greeting to use during translation.\n"
            "Be creative and accurate to communicate effectively.\n"
            "Incorporate the {style} suggestion, if provided, to determine the tone for the translation.\n"
            "After translating, add an English translation of the task in the specified language.\n"
            "For example, if the question is 'what is the WiFi password', your response should be:\n"
            "'Habari ya mchana! Tafadhali nipe nenosiri la WiFi.' (Translation: Good afternoon! Please provide me with the WiFi password.)"
        ),
        ("human", "{question}"),
    ]
)
6. Chain your model and prompt together to get a response
# Chain the prompt and the model together
chain = prompt | model

# Invoke the chain with the input parameters
response = chain.invoke(
    {
        "question": "what is the WiFi password",
        "time_of_day": "afternoon",
        "style": "professional",
    }
)
# Print the response
response
7. The output will be the requested translation.
PromptFlow
First, install the Prompt flow extension in Visual Studio Code.
2. Next, ensure you install the necessary dependencies and libraries you will need for the project.
3. In our case, we will build a chat flow from a template. Create a new chat flow for the application.
4. Once the flow is ready, we can open flow.dag.yaml and click on the visual editor to see how our application is structured.
5. We will need to connect to our LLM; you can do this by creating a new connection. Update your Azure OpenAI endpoint and your connection name, then click Create connection and your connection will be ready.
6. Update the connection and run the flow to test your application.
7. Update the chat.jinja2 file to customize the prompt template.
8. Edit the YAML file to add more functionality to your flow; in our case, for the Tutor, we will add more inputs.
In Summary:
GitHub Repository: https://github.com/BethanyJep/Swahili-Tutor
Semantic Kernel: microsoft/semantic-kernel: Integrate cutting-edge LLM technology quickly and easily into your apps (github.com)
Semantic Kernel documentation: Create AI agents with Semantic Kernel | Microsoft Learn
Promptflow documentation: Prompt flow — Prompt flow documentation (microsoft.github.io)
LangChain: Introduction | 🦜🔗 LangChain
Getting started with Azure Cosmos Database (A Deep Dive)
What is Azure Cosmos Database
It's a fully managed, distributed NoSQL and relational database.
Topics to be covered
NoSQL vs Relational Databases
What is Azure Cosmos DB
Azure Cosmos DB Architecture
Azure Cosmos DB Components
Multi-model (APIs)
Partitions
Request Units
Azure Cosmos DB Access Methods
Scenario: Event Database
Creating Resources using Azure Cosmos DB for NoSQL
What is a NoSQL Database?
NoSQL stands for "Not only SQL." It's a highly scalable storage mechanism for structured, semi-structured, and unstructured data.
What is a Relational Database?
A relational database is a way of storing and organizing data that emphasizes precision and interconnection. It uses a structured table with predefined schemas to store data.
Structural Difference
Relational Vs NoSQL
What Is Azure Cosmos DB?
Simply put, it's Microsoft's premium NoSQL database service.
Key Benefits
Fully-managed Service – Focus on your app, and let Microsoft handle the rest.
No Schema – NoSQL, no schema, no problem.
No Index Management – All data is automatically indexed.
Multi-Model – It helps you cover a variety of databases by providing APIs to interact with.
Cosmos DB for NoSQL API – It’s the default API which provides support for querying items in an SQL style. It also supports ACID transactions, stored procedures and triggers.
Table API – stores simple key-value data. This is geared towards users of Azure Table storage to use this API as a premium feature.
Apache Gremlin API – It’s for working with graph databases.
Apache Cassandra API – it’s a wide-column store database, well known for distributing petabytes of data with high reliability and performance.
MongoDB API – a document database compatible with MongoDB.
Global Distribution – Azure is in more than 60 regions and 140+ countries, and Azure Cosmos DB is available in all of them. This is not the case for all other services offered in Azure.
Guaranteed Performance and Availability – Azure Cosmos DB provides a 99.99% Service Level Agreement (SLA) for throughput, consistency, availability and latency.
Elastically scalable – you can achieve this by:
Provisioned – you specify what your service will scale up to.
Autoscale – the service will scale automatically according to the workloads.
Azure Cosmos DB Architecture
What Are the Azure Cosmos DB Components?
Database Account – the top-level resource that determines the public name and API.
Database – a namespace for your containers; manage users & permissions here.
Container – a collection of items (similar to a table). Your API choice determines the form the container takes, i.e. table, collection, graph, etc.
Item – the atomic data structure of a container, i.e. document, row, node, edge.
How is multi-model possible?
The database engine of Azure Cosmos DB is capable of efficiently translating and projecting the data models onto the atom-record-sequence (ARS) based data model.
By utilizing the ARS abstraction layer, Cosmos DB can offer various popular data models and translate them back to ARS. This all happens under the same database engine, efficiently and at a global scale.
Available APIs
Partitions
These are the chunks in which your data is stored. These are the fundamental units of scalability and distribution.
Logical – each partition is divided based on a partition key of your choice.
Physical – the physical storage of your data, with one or more logical partitions mapped to it. Azure maps the logical partitions to the physical partitions for you. As you increase throughput, Azure will automatically create new physical partitions and remap the logical ones as needed to satisfy those requests.
Partitions: Tips to keep in mind
The partition key will affect the database's performance and ability to scale.
Avoid hot partitions (partitions which are not evenly used) by choosing keys with high cardinality and distinctness over time. In our example above, a phone's serial number is unique and hence creates evenly distributed partitions; the phone model is not a great key because it will cause many items to land in one partition instead of being evenly distributed.
Hot partitions result in rate limiting and inefficient use of the throughput that you've provisioned, as well as potentially higher costs.
Microsoft transparently handles the physical distribution – your job is to choose a partition key that is good for your application and data, as well as for the throughput and storage associated with it.
Request Units
Request units normalize database operation costs and serve as a uniform currency for Azure Cosmos DB throughput. A query operation requires more RUs than the other operations because it uses more system resources.
Flavors for Provisioning RUs
Provisioned – in this case, you know what you want and just provision it. You get dependable billing because you know how many RUs you're going to be billed for. The main drawback is hitting rate limits.
Autoscale – it lets you set an upper bound and the system will scale the RUs up and down as necessary should you hit a peak of work.
Serverless – pay only for what you consume. This option frees you from having to pick specific parameters as with autoscale, or being locked into a specific value with the provisioned option.
Planning Your Request Units
Two granularities – Provisioning your throughput at the database or container level or both.
Database level – The throughput you choose will be shared among all containers under that database.
Container Level – you have a specific throughput to a certain container.
Billing Hourly – no matter which method you use, you'll be billed for the highest RU/s of the hour. For example, if autoscale peaks at 1,000 RU/s for ten minutes and idles at 400 RU/s for the rest of the hour, that hour is billed at 1,000 RU/s.
Azure Cosmos DB Access Methods
Data Explorer – A graphical data utility built straight into the Azure Portal
SDK – use your favorite language to consume Azure Cosmos DB
.Net
Java
Spring data
Node.js
Python
Go
REST APIs – manage data using HTTPS requests
Creating Resources in the Azure Portal
Let’s Create Account
Search for Azure Cosmos DB
Create Azure Cosmos DB Account
Choose the API according to your use case. I’ll go with NoSQL option for this demo.
On the Create Azure Cosmos DB Account page:
Choose your subscription.
Choose or create a resource group.
Create the account name (make it unique).
Choose the availability zone option if you want to improve your app's availability and resilience.
Choose the location of your DB according to the available data centers.
Capacity Mode enables you to define the throughput. The Provisioned option also comes with a free tier option.
Selecting Geo-Redundancy will enable your database to be available in the paired region, e.g. East US and West US, or South Africa North and South Africa West. For this demo, 'South Africa West' is not included in my subscription.
Multi-region writes capability allows you to take advantage of the provisioned throughput for your databases and containers across the globe.
Under networking, your Azure Cosmos DB account can be accessed either publicly, via public IP addresses or service endpoints, or privately, using a private endpoint. Choose according to your use case.
Connection Security Settings – I will go with TLS 1.2
Backup policy defines the way your backup will occur.
Periodic lets you define the backup interval (in minutes or hours), the backup retention (how long you would like your backups to be kept, in hours or days) and the backup storage redundancy (geo, zone or local).
Continuous (7 days) – Provides backup window of 7 days / 168 hours and you can restore to any point of time within the window. This mode is available for free.
Continuous (30 days) – Provides a backup window of 30 days / 720 hours and you can restore to any point of time within the window. This mode has cost impact.
Data Encryption – I will let Microsoft encrypt my account using service-managed keys. Feel free to use your customer-managed key if you have any.
I don’t need to create a tag for now, just review and create.
Let’s Create Event Database Using the Scenario below
For our scenario, we need to store data from sports events (e.g., marathon, triathlon, cycling, etc.). Users should be able to select an event and view a leaderboard. The amount of data that will be stored is estimated at 100 GB.
The schema of the data is different for various events and likely to change. As a result, this requires the database to be schema-agnostic and therefore we decided to use Azure Cosmos DB as our database.
Identify access patterns
To design an efficient data model it is important to understand how the client application will interact with Azure Cosmos DB. The most important questions are:
Is the access pattern more read-heavy or write-heavy?
What are the main queries?
What is the expected document size?
If the access pattern is read-heavy you want to choose a partition key that appears frequently as a filter in your queries. Queries can be efficiently routed to only the relevant physical partitions by including the partition key in the filter predicate.
When the access pattern is write-heavy you might want to choose item ID as the partition key. Item ID does a great job with evenly balancing partitioned throughput (RUs) and data storage since it’s a unique value. For more information, see Partitioning and horizontal scaling in Azure Cosmos DB | Microsoft Docs
Finally, we need to understand the document size. 1 kb documents are very efficient in Azure Cosmos DB. To understand the impact of large documents on RU utilization see the capacity calculator and change the item size to a larger value. As a starting point you should start with only one container and embed all values of an entity in a single JSON document. This provides the best reading performance. However, if your document size is unpredictable and can grow to hundreds of kilobytes you might want to split these in different documents within the same container. For more information, see Modeling data in Azure Cosmos DB – Azure Cosmos DB | Microsoft Docs.
Sample document structure
{
  "eventId": "unique_event_id",
  "eventName": "Marathon",
  "eventDate": "2024-05-20",
  "participants": [
    { "participantId": "participant1", "name": "Alice", "score": 1200 },
    { "participantId": "participant2", "name": "Bob", "score": 1100 }
    // ... more participants
  ]
}
The eventId serves as the unique identifier for each event.
Create Container
Create a new container
Give it a unique database id
Select autoscale for automatic throughput, or select manual, which can be useful for a single container with a predictable throughput. The advantage of autoscale is that it doesn't cause any downtime. For more information, see How to choose between manual and autoscale on Azure Cosmos DB.
At the container level the partition key is specified, which in our case is /eventId
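The same database and container can also be created from code. Below is a minimal sketch using the azure-cosmos Python SDK; the account URI and key are placeholders, and the Events/Events2024 names are taken from this walkthrough:

# pip install azure-cosmos
from azure.cosmos import CosmosClient, PartitionKey

# Placeholders; take these values from your Cosmos DB account's Keys blade.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<primary-key>")

# Create the database and the container, with /eventId as the partition key.
database = client.create_database_if_not_exists(id="Events")
container = database.create_container_if_not_exists(
    id="Events2024",
    partition_key=PartitionKey(path="/eventId"),
)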
Add a document
Click data explorer
Click on Events
Expand Events2024 then items
Click New item
Let’s replace the default Json object with our data
Save a single document
Add the document
save
Save many documents
Let's say you have your data saved in a JSON file like the one below; follow these steps to insert that data.
[
  {
    "eventId": "event_1",
    "eventName": "Coding Competition",
    "eventDate": "2024-05-21",
    "participants": [
      { "participantId": "p1", "name": "John", "score": 980 },
      { "participantId": "p2", "name": "Jane", "score": 890 },
      { "participantId": "p3", "name": "Mike", "score": 1020 }
    ]
  },
  {
    "eventId": "event_2",
    "eventName": "CodeFest",
    "eventDate": "2024-06-15",
    "participants": [
      { "participantId": "p4", "name": "Lily", "score": 950 },
      { "participantId": "p5", "name": "Alex", "score": 1120 }
    ]
  },
  {
    "eventId": "event_3",
    "eventName": "Hackathon Challenge",
    "eventDate": "2024-07-10",
    "participants": [
      { "participantId": "p6", "name": "Sarah", "score": 1180 },
      { "participantId": "p7", "name": "Kevin", "score": 1035 }
    ]
  },
  {
    "eventId": "event_4",
    "eventName": "Byte Battle",
    "eventDate": "2024-08-05",
    "participants": [
      { "participantId": "p8", "name": "Olivia", "score": 1005 },
      { "participantId": "p9", "name": "Ethan", "score": 1150 }
    ]
  },
  {
    "eventId": "event_5",
    "eventName": "Code Warriors Championship",
    "eventDate": "2024-09-20",
    "participants": [
      { "participantId": "p10", "name": "Ava", "score": 1085 },
      { "participantId": "p11", "name": "Noah", "score": 1070 }
    ]
  }
]
Click Upload item
Locate the file you want to upload from the file explorer then click upload.
A successful upload will show you the number of records uploaded.
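As an alternative to the portal upload, the same JSON file can be loaded programmatically. Here is a rough sketch with the azure-cosmos Python SDK; the account URI, key, and the events.json file name are assumptions for illustration:

import json
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<primary-key>")
container = client.get_database_client("Events").get_container_client("Events2024")

# Every Cosmos DB item needs an "id" property; here we simply reuse the eventId.
with open("events.json", encoding="utf-8") as f:
    events = json.load(f)

for event in events:
    event["id"] = event["eventId"]
    container.upsert_item(event)

print(f"Uploaded {len(events)} events")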
Let’s Query our Database
Click on New SQL Query
Write your SQL query
Run the query
View our results – as you can see, our object has some metadata appended to it.
More Queries
Query 1: View Top Ranked Participants for a Selected Event:
Query 2: View All Events for a Selected Year a Person Has Participated In:
Query 3: View All Registered Participants per Event:
Query 4: View Total Score for a Single Participant per Event:
You can also check the cost of the query; this operation consumed 2.9 RUs.
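For reference, Query 1 might look something like the sketch below when run through the Python SDK. The query text and the event_1 value are illustrative, sorting is done client-side for simplicity, and the request charge is read from the response headers:

from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<primary-key>")
container = client.get_database_client("Events").get_container_client("Events2024")

# Query 1 (sketch): participants of one event, ranked by score.
query = (
    "SELECT p.participantId, p.name, p.score "
    "FROM e JOIN p IN e.participants "
    "WHERE e.eventId = @eventId"
)
participants = list(container.query_items(
    query=query,
    parameters=[{"name": "@eventId", "value": "event_1"}],
    partition_key="event_1",
))
leaderboard = sorted(participants, key=lambda p: p["score"], reverse=True)

# The RU charge for the last operation is reported in the x-ms-request-charge header.
print("Request charge:", container.client_connection.last_response_headers["x-ms-request-charge"])
print(leaderboard)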
Read More
Databases, containers, and items in Azure Cosmos DB
Queries in Azure Cosmos DB for NoSQL
How to model and partition data on Azure Cosmos DB using a real-world example
Implement a data modeling and partitioning strategy for Azure Cosmos DB for NoSQL
Data modeling in Azure Cosmos DB
Server-side programming
Migrate data to Azure Cosmos DB using the desktop data migration tool
Trustworthy AI: Copilot for Microsoft 365 data security and privacy commitments
Copilot for Microsoft 365—your AI assistant for work—is built on our existing Microsoft 365 commitments to data security and privacy in the enterprise, enabling you to always stay in control. Watch our video series to learn how our comprehensive approach to privacy, security, and compliance safeguards your data.
Copilot for Microsoft 365 became publicly available to Enterprise customers just over six months ago. Since release, we've been on a mission to answer all of the thoughtful customer questions and discuss exciting use cases that are changing the way we work. AI at work is here; now comes the hard part – we just released new research and insights in the Work Trend Index 2024 Annual Report.
Employees want AI at work—and won’t wait for companies to catch up: AI use at work has nearly doubled in the last six months, with 75% of knowledge workers using generative AI. Employees are bringing their own AI tools to work, and leaders recognize AI as a business imperative despite lacking a clear plan for implementation. 78% of AI users are bringing their own AI to work (BYOAI)
The rise of the AI power user—and what they reveal about the future: A spectrum of AI users exists, from skeptics to power users. Power users, who are familiar with AI and use it several times a week, report significant benefits in managing workload, boosting creativity (92%), and enjoying work more.
As customers adopt and scale plans for Copilot in their organizations we receive excellent questions about how Copilot for Microsoft 365 works with confidential organizational data in the Microsoft 365 Graph. I’m thrilled to share the latest on how Copilot is revolutionizing the way we work while upholding the highest standards of data protection.
Built on a foundation of trust
A key concept of Microsoft 365 is the ‘tenant’—a secure, encrypted construct to support manageability and data privacy of your organizational data that is distinct, unique, and separate from all other Microsoft 365 tenants. Copilot is an orchestrator that integrates with your tenant, inheriting all your existing Microsoft 365 security, privacy, identity, and compliance requirements.
Watch this video to learn more about how Copilot is built upon a foundation of trust.
Defending your data
At Microsoft, we believe that your data is your business, and you should control its collection, use and distribution, as well as its location. Watch the following video to learn how data is stored, encrypted, processed, and defended.
Securely powered by Azure OpenAI Service
When you interact with Copilot, your prompts are securely processed using Azure OpenAI services, ensuring that your organizational data remains protected. For a deeper understanding of how Azure OpenAI services power Copilot while prioritizing data integrity, watch this video.
Join us in embracing the future of work with Copilot for Microsoft 365—where your data’s security is our top priority. For all the latest updates and deep dive information start at Microsoft Copilot for Microsoft 365 documentation | Microsoft Learn and aka.ms/copilotlab to learn more about how to use Copilot! Find adoption and skilling best practices at adoption.microsoft.com
Incidents and Alerts blades missing in Defender portal
Hi,
We recently found out that the incidents and alerts blades have disappeared from our Defender portal. This is true for both Global Admin and Security Administrator roles. We use A5 licenses in our tenant. Not sure what happened. Microsoft Unified support has not been very helpful in even replying to our query. Can someone please point us in the right direction. We don’t know what has happened.
Thanks in advance,
Bookings confirmation not matching my calendar
I just had a few appointments that got booked for 15 minutes and it is showing 45 minutes in my calendar. I’ve double-checked the duration and that is correct.
For example:
Confirmation came to my email 11:55am – 12:10 pm
Calendar shows: 11:40 – 12:30 for the same appointment
Conditional Formatting is not showing properly
I am trying to find the duplicate value of the serial No.
But I can't find the duplicate value; it is showing an error instead.
Could you please guide me on how to solve this issue?
Search engine positioning on Bing
I would like to ask for information. I have positioned the site with the keyword Industrial Manipulator; in Italian it is positioned on the first page, in first position. However, the site does not display an image related to the product. In the search snippet inside the site I entered the title, description and image, but the image does not appear. Do you have any suggestions for improving the search result and making the image appear in addition to the title and description?
Thank you
Multitenant collaboration – share users – can’t choose groups
Hi all, I am configuring the new multitenant collaboration now that it’s out of preview.
When I last tested it in preview, when I clicked "Share users" I was able to select an Entra ID group of users to share. Now the behaviour is different: it only allows me to select users, not groups. Am I missing something obvious here?
Thanks!
Copilot does not log in
Copilot does not log in on my smartphone; it says there is a problem with my account. When I log in in another browser, it is fine.
Sharepoint sync issues
Hello
Please, I need your help with this issue.
One of our users is experiencing issues with SharePoint sync. They can see folders on the web version, but they are not syncing to their laptop or Explorer.