Tag Archives: microsoft
To-Do-tasks gone
Hi community
I'm devastated.
Without any obvious reason, my To Do list has suddenly reverted to the state it was in some weeks ago. I've worked with it and created (and deleted) numerous new tasks since then – but it's all gone. How is that even possible?
I would be grateful for any hint.
Thx
Christian
Booking notifications sent automatically to “Deleted Items”
For some reason, this week we realized that our notifications from our Booking page are not being sent to us any more.
After some review, we realized that they’re being sent, but automatically delivered into our “Deleted Items”, instead of our main Inbox.
Migrate from Sybase IQ to SQL Server
Can SSMA be used to do the migration?
If not, what is the best way to migrate from Sybase IQ to SQL Server?
Contacts related to Organizations, Schools, and Employers in Microsoft for Non-Profit
One of the requests from a client is to relate Contacts to an Employer, School, and/or Organizations. We are debating between using stand-alone custom table(s) or just the OOTB Account table. We know Fundraising & Engagement already has this concept built in but only for Organizations. We want to make sure we are keeping in mind native functionality when deciding on whether or not to use OOTB or our own custom table.
For more context:
We would be leveraging the msnfp_accounttype choice column on the Account table, extending the existing choice values to also include School and Organization. As each Contact could have a different Account for each of School/Organization/Employer, we will configure custom lookups to hold each of these potentially distinct relationships between a single Contact and one or more Accounts.
We’re also curious to know if there is anything in place that we’re missing or anything in the future pipeline for MC4NP / Fundraising & Engagement that could impact our decision.
Executing a subscription to run report gives error “Authentication failed”
We have configured a subscription to automatically run and email a report and every time that it tries to execute, we receive the error “Authentication failed because the remote party has closed the transport stream.”
We have researched the error online and tried the suggested solutions, but are still receiving it.
Subtracting Hours
I’m having trouble finding a solution to my problem.
I have a table with the following headers: NAME, DATE, and HOUR METER VALUE.
There are about 1,800 entries from about 12 different names. The Hour Meter column holds the readings from an engine's run-time meter. For instance:
Vehicle 1 1/3/2024 10015
Vehicle 1 1/2/2024 10013
Vehicle 2 1/2/2024 955
Vehicle 1 1/1/2024 10008
Vehicle 2 1/1/2024 945
What I need to do is find the difference between the entries and total them to get the total run time of each engine.
Vehicle 1 = 7 hours
Vehicle 2 = 10 Hours
Thanks for looking.
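For reference, the arithmetic being asked for (highest minus lowest meter reading per name) can be sketched in Python; the rows below mirror the sample data above:

```python
# Sketch of the requested calculation: for each name, total engine run time
# is the highest hour-meter reading minus the lowest (assumes the meter
# only ever counts up).
from collections import defaultdict

rows = [
    ("Vehicle 1", "1/3/2024", 10015),
    ("Vehicle 1", "1/2/2024", 10013),
    ("Vehicle 2", "1/2/2024", 955),
    ("Vehicle 1", "1/1/2024", 10008),
    ("Vehicle 2", "1/1/2024", 945),
]

meters = defaultdict(list)
for name, _date, value in rows:
    meters[name].append(value)

# Run time per vehicle = max reading - min reading
run_time = {name: max(values) - min(values) for name, values in meters.items()}
print(run_time)  # {'Vehicle 1': 7, 'Vehicle 2': 10}
```

In Excel, the same result could come from a MAXIFS-minus-MINIFS pair per name.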
Configuring archive period for tables in bulk for Data Retention within Log Analytics Workspace
How this blog helps in configuring the archive period for tables in bulk for data retention in Log Analytics Workspace:
Simplified Data Archival: Implementing archival within Log Analytics Workspace provides a straightforward and integrated solution for retaining log data over extended periods. This ensures compliance with regulatory requirements, making it easier for organizations to meet data retention mandates without resorting to complex external storage solutions.
Efficient Data Management: The article’s primary focus on mass applying archival to multiple tables within Log Analytics Workspace streamlines the process of managing a diverse range of log data. This efficiency is invaluable for organizations dealing with large volumes of logs from various sources, simplifying the management of data retention policies and significantly reducing the administrative overhead.
Cost and Complexity Optimization: By leveraging Log Analytics Workspace for archival, organizations can maintain a balance between cost-effective storage and data accessibility. This approach eliminates the need for more complex and potentially costly alternatives like Blob Storage and Azure Data Explorer (ADX) for archival, thus reducing both operational complexity and storage expenses. It provides a practical solution for long-term data retention while optimizing both cost and management efforts.
Step 0: Default approach to perform archival at a table level in Log Analytics Workspace
Navigate to Log Analytics Workspace > Table > Manage Table
Consider replicating the above for multiple tables using the PowerShell commands below.
Step 1: Fetch the list of tables on which archiving is required using KQL.
KQL to fetch the active table list:
search * | distinct $table
Step 2: Export the KQL result set:
Export the table list to CSV using the export functionality.
Step 3: Open the exported CSV with Excel.
Step 4: Rename the "$table" column to "Table".
Step 5: Rename the Excel file as well, from "query_data" to "Sentinel".
Step 6: Open Cloud Shell on Azure portal and upload this new file:
Upload the file from your local machine.
Step 7: Verify the uploaded file using the "ls" command.
Step 8: Once the file upload completes, run the following PowerShell command in Cloud Shell:
Import-CSV "SentinelTable.csv" | ForEach-Object {Update-AzOperationalInsightsTable -ResourceGroupName sentineltraining -WorkspaceName sentineltrainingworkspace -TableName $_.Table -TotalRetentionInDays 2556}
Before running the command, make sure to update:
* -TotalRetentionInDays as required for your scenario.
* The resource group name and Log Analytics workspace name.
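Before pointing the Step 8 loop at a live workspace, the CSV parsing side can be sanity-checked with a small Python sketch (the sample table names here are illustrative; the real file is the one uploaded in Step 6):

```python
# Sketch: preview which table names the Step 8 loop would update, by reading
# the same "Table" column that Import-CSV consumes. Uses an in-memory CSV
# sample; in practice you would open the uploaded file instead.
import csv
import io

csv_text = "Table\nSecurityEvent\nSigninLogs\nAzureActivity\n"

tables = [row["Table"] for row in csv.DictReader(io.StringIO(csv_text))]
print(tables)  # ['SecurityEvent', 'SigninLogs', 'AzureActivity']
```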
Step 9: Check the archive settings for the Log Analytics tables.
Step 10: The exported tables now have the updated archive period, while the others keep the default retention per the Log Analytics settings.
Navigation: Log Analytics Workspace > Settings > Tables > Archive Period.
Conclusion:
1. This blog covers the default table-level approach to performing archival for long-term storage within Log Analytics Workspace.
2. It covers the steps to scale archival across multiple tables, which is a key production requirement.
3. All steps can be implemented in a lab environment, and the archive period can be observed in the Log Analytics Workspace table blade.
Microsoft Tech Community – Latest Blogs –Read More
User self-service BitLocker recovery key access with Intune Company Portal website now available
By: Aasawari Navathe – Sr. Product Manager | Microsoft Intune
With the May (2405) service release of Microsoft Intune, users are now able to access the BitLocker recovery key of their Intune enrolled devices using the Intune Company Portal website. This enables users to self-resolve, rather than contacting their helpdesk, when they’re locked out of their machines and need to access their BitLocker recovery key.
What are the prerequisites?
A Windows device enrolled in the Intune tenant
Ability to log into the Intune Company Portal website from a device (doesn’t need to be enrolled)
Permission to view your BitLocker recovery key (if one exists in Microsoft Entra ID)
We’re working to add the ability to view the BitLocker recovery key from the native Company Portal apps on other platforms like Apple iOS/iPadOS and macOS. The Intune Company Portal website can be used on other platforms.
How does this work?
After opening the Intune Company Portal website, navigate to the Devices node, select the enrolled Windows device, and click “Get recovery key” under Device Encryption. If there are multiple recovery keys found, click “Show recovery key” under the one with the key ID that is needed. Users may then use this recovery key to complete the recovery process on their enrolled Windows device without reaching out to the helpdesk.
Features for BitLocker recovery key access in Microsoft Entra ID
We heard the customer feedback on what level of control IT admins need within their organization for this scenario. While Intune helps configure policy to define the escrow of BitLocker recovery keys, these keys are stored within Entra ID. There are three capabilities within Entra ID that are helpful to use in conjunction with self-service BitLocker recovery key access for users.
Tenant-wide toggle to prevent recovery key access for non-admin users
This setting is located in the Entra ID > Devices > Device settings.
This setting determines if users can self-service to recover their BitLocker key(s). The default value is ‘No’ which allows all users to recover their BitLocker key(s). ‘Yes’ restricts non-admin users from being able to see the BitLocker key(s) for their own devices if there are any. Learn more: Manage devices in Microsoft Entra ID using the Microsoft Entra admin center.
In the event that the admin has restricted recovery key access for users, users will receive the message “Recovery key could not be retrieved” in the Company Portal website.
Auditing for recovery key access
Audit Logs within the Entra ID portal show the history of activities within the tenant. Any user recovery key accesses made through the Company Portal website will be logged in Audit Logs under the Key Management category as a “Read BitLocker key” activity type. The user’s User Principal Name and additional info such as key ID is also logged.
Learn more: Learn about the audit logs in Microsoft Entra ID.
Entra Conditional Access policy requiring a compliant device to access BitLocker Recovery Key
With a Conditional Access (CA) policy, you can restrict access to certain corporate resources to compliant devices using the "Require compliant device" setting. If this is set up within your organization and a device fails to meet the compliance requirements configured in the Intune compliance policy, that device cannot be used to access the BitLocker recovery key, as the key is considered a corporate resource whose access is controlled by CA.
In this case, you may see an error like below which suggests using a compliant device for recovery key access.
With the 2405 release, get started on this new capability for user self-service BitLocker recovery key access with the Intune Company Portal website!
Let us know your thoughts or questions by leaving a comment below, or reach out to us on X @IntuneSuppTeam.
LLM based development tools: PromptFlow vs LangChain vs Semantic Kernel
Introduction
Prerequisites
Azure OpenAI Service, LLM we will be using for our simple application
Visual Studio Code – IDE
Refer to the blog GitHub Repository
What are they?
Semantic Kernel: an open-source SDK that allows you to orchestrate your existing code and more with AI.
LangChain: a framework for building LLM applications easily that gives you insight into how the application works.
PromptFlow: a set of developer tools that helps you build end-to-end LLM applications. Using PromptFlow, you can take your application from an idea to production.
Semantic Kernel
Kernel: the kernel is at the center stage of your development process as it contains the plugins and services necessary for you to develop your AI application.
Planners: special prompts that allow an agent to generate a way to complete a task such as using function calling to complete a task.
Plugins: they allow you to give your copilot skills, using both code and prompts
Memories: in addition to connecting your application to LLMs and creating various tasks, Semantic Kernel has a memory feature to store context and embeddings giving additional information to your prompts.
Install the necessary libraries using: pip install semantic-kernel==0.9.8b1 openai
Add your keys and endpoint from .env to your notebook.
"""
This module defines an enumeration representing different services.
"""
from enum import Enum

class Service(Enum):
    """
    Attributes:
        OpenAI (str): Represents the OpenAI service.
        AzureOpenAI (str): Represents the Azure OpenAI service.
        HuggingFace (str): Represents the HuggingFace service.
    """
    OpenAI = "openai"
    AzureOpenAI = "azureopenai"
    HuggingFace = "huggingface"
4. Create a new Kernel where you will host your application then import Service into your application which will allow you to add your LLM into our application.
# Import the Kernel class from the semantic_kernel module
from semantic_kernel import Kernel
from services import Service
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

# Create an instance of the Kernel class
kernel = Kernel()

# Select a service to use for this notebook (available services: OpenAI, AzureOpenAI, HuggingFace)
selectedService = Service.AzureOpenAI

# Set the deployment name, API key, and endpoint variables
deployment = model
api_key = api_key
endpoint = azure_endpoint

# Set the service_id variable to "default"
service_id = "default"

# Add an instance of the AzureChatCompletion class to the kernel's services
kernel.add_service(
    AzureChatCompletion(service_id=service_id, deployment_name=deployment, endpoint=endpoint, api_key=api_key),
)
5. Next, we will create and add our plugin. Inside the plugin folder TranslatePlugin we have our Swahili plugin, with config and prompt .txt files that guide the model on how to perform its task. Once imported, we invoke the Swahili function in our application.
# Set the directory path where the plugins are located
plugins_directory = ".prompt_templates_samples"

# Add the TranslatePlugin to the kernel and store the returned plugin functions in the translateFunctions variable
translateFunctions = kernel.add_plugin(parent_directory=plugins_directory, plugin_name="TranslatePlugin")

# Retrieve the Swahili translation function from the translateFunctions dictionary and store it in the swahiliFunction variable
swahiliFunction = translateFunctions["Swahili"]

# Invoke the 'swahiliFunction' with the specified parameters and print the result
result = await kernel.invoke(swahiliFunction, question="what is the WiFi password", time_of_day="afternoon", style="professional")
print(result)
6. The output will be the requested translation.
LangChain
Model I/O: this is where you can bring in your LLM and format its inputs and outputs
Retrieval: In RAG applications, this component specifically helps you load your data, connect with vector databases and transform your documents to meet the needs of your application.
Other Higher level Components
Tools: allow you to create integrations with external services and applications.
Agents: these are responsible for deciding which step to take next.
Chains: these are a sequence of calls linking various components to create LLM apps
Install the necessary libraries: pip install langchain openai
Log in to Azure CLI using az login --use-device-code and authenticate your connection.
Add your keys and endpoint from .env to your notebook, then set the environment variables for your API key and type for authentication.
import os
from azure.identity import DefaultAzureCredential
# Get the Azure Credential
credential = DefaultAzureCredential()
# Set the API type to `azure_ad`
os.environ["OPENAI_API_TYPE"] = "azure_ad"

# Set the API_KEY to the token from the Azure credential
os.environ["OPENAI_API_KEY"] = credential.get_token("https://cognitiveservices.azure.com/.default").token
4. Create your model class and configure it to interact with Azure OpenAI
# Import the necessary modules
from langchain_core.messages import HumanMessage
from langchain_openai import AzureChatOpenAI
model = AzureChatOpenAI(
    openai_api_version=AZURE_OPENAI_API_VERSION,
    azure_deployment=AZURE_OPENAI_CHAT_DEPLOYMENT_NAME
)
5. Use ChatPromptTemplate to curate your prompt
# Import the necessary modules
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
# Create a ChatPromptTemplate object with messages
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that translates tasks into Kiswahili. Follow these guidelines:\n"
            "The translation must be accurate and culturally appropriate.\n"
            "Use the {time_of_day} to determine the appropriate greeting to use during translation.\n"
            "Be creative and accurate to communicate effectively.\n"
            "Incorporate the {style} suggestion, if provided, to determine the tone for the translation.\n"
            "After translating, add an English translation of the task in the specified language.\n"
            "For example, if the question is 'what is the WiFi password', your response should be:\n"
            "'Habari ya mchana! Tafadhali nipe nenosiri la WiFi.' (Translation: Good afternoon! Please provide me with the WiFi password.)"
        ),
        ("human", "{question}"),
    ]
)
6. Chain your model and prompt together to get a response
# Chain the prompt and the model together
chain = prompt | model

# Invoke the chain with the input parameters
response = chain.invoke(
    {
        "question": "what is the WiFi password",
        "time_of_day": "afternoon",
        "style": "professional",
    }
)

# Print the response
response
7. The output will be the requested translation.
PromptFlow
First, install the Prompt flow extension in Visual Studio Code.
2. Next, ensure you install the necessary dependencies and libraries you will need for the project.
3. In our case we will build a chat flow from a template: create a new chat flow for the application.
4. Once the flow is ready, we can open flow.dag.yaml and click on the visual editor to see how our application is structured.
5. We will need to connect to our LLM; you can do this by creating a new connection. Update your Azure OpenAI endpoint and your connection name, then click create connection and your connection will be ready.
6. Update the connection and run the flow to test your application.
7. Update the chat.jinja2 file to customize the prompt template.
8. Edit the YAML file to add more functionality to your flow; in our case, for the tutor, we will add more inputs.
In Summary:
GitHub Repository: https://github.com/BethanyJep/Swahili-Tutor
Semantic Kernel: microsoft/semantic-kernel: Integrate cutting-edge LLM technology quickly and easily into your apps (github.com)
Semantic Kernel documentation: Create AI agents with Semantic Kernel | Microsoft Learn
Promptflow documentation: Prompt flow — Prompt flow documentation (microsoft.github.io)
LangChain: Introduction | LangChain
Getting started with Azure Cosmos DB (A Deep Dive)
What is Azure Cosmos DB?
It's a fully managed, distributed NoSQL and relational database.
Topics to be covered
NoSQL vs Relational Databases
What is Azure Cosmos DB
Azure Cosmos DB Architecture
Azure Cosmos DB Components
Multi-model (APIs)
Partitions
Request Units
Azure Cosmos DB Access Methods
Scenario: Event Database
Creating Resources using Azure Cosmos DB for NoSQL
What is a NoSQL Database?
NoSQL stands for "Not only SQL." It's a highly scalable storage mechanism for structured, semi-structured, and unstructured data.
What is a Relational Database?
A relational database is a way of storing and organizing data that emphasizes precision and interconnection. It uses a structured table with predefined schemas to store data.
Structural Difference
Relational Vs NoSQL
What Is Azure Cosmos DB?
Simply put, it's Microsoft's premium NoSQL database service.
Key Benefits
Fully-managed Service – Focus on your app, and let Microsoft handle the rest.
No Schema – NoSQL, no schema, no problem.
No Index Management – All data is automatically indexed.
Multi-Model – It helps you cover a variety of databases by providing APIs to interact with.
Cosmos DB for NoSQL API – It’s the default API which provides support for querying items in an SQL style. It also supports ACID transactions, stored procedures and triggers.
Table API – stores simple key-value data. It is geared towards users of Azure Table storage, who can use this API as a premium offering.
Apache Gremlin API – It’s for working with graph databases.
Apache Cassandra API – it’s a wide-column store database, well known for distributing petabytes of data with high reliability and performance.
MongoDB API – Document database built on MongoDB.
Global Distribution – Azure is available in 60+ regions and 140+ countries, and Azure Cosmos DB is available in all of them. This is not the case for all other services offered in Azure.
Guaranteed Performance and Availability – Azure Cosmos DB provides a 99.99% Service Level Agreement (SLA) for throughput, consistency, availability, and latency.
Elastically scalable – You can achieve this via:
Provisioned – you specify what your service will scale up to.
Autoscale – the service scales automatically according to the workload.
Azure Cosmos DB Architecture
What Are Azure Cosmos Components
Database Account – Top-level resource that determines the public name and API.
Database – a namespace for your containers; manages users and permissions.
Container – a collection of items (similar to a table). Your API choice determines the form the container takes, i.e., table, collection, graph, etc.
Item – the atomic data structure of a container, i.e., document, row, node, or edge.
How is multi-model possible?
The database engine of Azure Cosmos DB is capable of efficiently translating and projecting the data models onto the atom-record-sequence (ARS) based data model.
By utilizing the ARS abstraction layer, Cosmos DB can offer various popular data models and translate them back to ARS. This all happens under the same database engine, efficiently and at global scale.
Available APIS
Partitions
These are the chunks in which your data is stored. These are the fundamental units of scalability and distribution.
Logical – a division of the data within a container, based on a partition key of your choice.
Physical – Physical storage of your data, with one or more logical partitions mapped to it. Azure will map the logical partitions to the physical partitions for you. As you increase physical throughput, azure will automatically create new physical partitions and remap the logical ones as it needs in order to satisfy those requests.
Partitions: Tips to keep in mind
Partition key will affect the database performance and ability to scale.
Avoid hot partitions (partitions that are not evenly distributed) by choosing keys with high cardinality and distinctness over time. In the example above, a phone's serial number is unique and hence creates an evenly distributed partitioning; model is not a great key because it would place all items in one partition instead of distributing them evenly.
Hot partitions result in rate limiting and inefficient use of the throughput that you’ve provisioned as well as potentially higher costs.
Microsoft transparently handles physical distribution – your job is to choose a partition key that is good for your application and data, along with the throughput and storage associated with it.
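The effect of key cardinality can be illustrated with a small sketch (the hash function and the four-partition count here are stand-ins, not Cosmos DB's internal scheme):

```python
# Illustrative sketch: a high-cardinality key (serial number) spreads items
# across partitions, while a low-cardinality key (model) concentrates them
# in a single hot partition.
import hashlib

def partition_for(key: str, partition_count: int = 4) -> int:
    # Deterministic hash -> partition index (illustrative only)
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % partition_count

phones = [{"serial": f"SN-{i:05d}", "model": "PhoneX"} for i in range(1000)]

partitions_by_serial = {partition_for(p["serial"]) for p in phones}
partitions_by_model = {partition_for(p["model"]) for p in phones}

print(len(partitions_by_serial))  # 4 -> items spread across all partitions
print(len(partitions_by_model))   # 1 -> every item lands in one hot partition
```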
Request Units
Request units (RUs) normalize database operation costs and act as a uniform currency for Azure Cosmos DB throughput. A query operation requires more RUs than the other operations because it uses more system resources.
Flavors for Provisioning RUs
Provisioned – in this case, you know what you want and just provision it. You will get predictable billing because you know how many RUs you're going to be billed for. The main drawback is hitting rate limits.
Auto scale – It lets you set certain parameters and the system will scale up and down the RUs needed as necessary should you hit a higher peak of work.
Serverless – Pay only for what you consume. This option frees you from the need to pick specific parameters like auto scale or be locked into a specific one via provisioned option.
Planning Your Request Units
Two granularities – Provisioning your throughput at the database or container level or both.
Database level – The throughput you choose will be shared among all containers under that database.
Container Level – you have a specific throughput to a certain container.
Billing Hourly – no matter which method you use, you'll be billed for the highest RU/s of the hour.
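A quick sketch of that hourly billing rule (the RU/s numbers are illustrative):

```python
# Sketch of the "billed at the highest RU/s of the hour" rule.
# Each tuple is (minute within the hour, provisioned RU/s from that minute).
provisioning_changes = [(0, 400), (20, 1000), (45, 400)]

billed_rus = max(rus for _minute, rus in provisioning_changes)
print(billed_rus)  # 1000 -> the whole hour is billed at 1000 RU/s
```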
Azure Cosmos DB Access Methods
Data Explorer – A graphical data utility built straight into the Azure Portal
SDK – use your favorite language to consume Azure Cosmos DB
.Net
Java
Spring data
Node.js
Python
Go
REST APIs – manage data using HTTPS requests
Creating Resources in the Azure Portal
Let’s Create Account
Search for Azure Cosmos DB
Create Azure Cosmos DB Account
Choose the API according to your use case. I’ll go with NoSQL option for this demo.
Under create Azure cosmos DB Account page
Choose your subscription.
Choose or create a resource group.
Create the account name (make it unique).
Choose an availability zone if you want to improve your app's availability and resilience.
Choose the location of your DB according to the available data centers.
Capacity Mode enables you to define the throughput. The Provisioned option also comes with a free tier option.
Selecting Geo-Redundancy enables your database to be available in the paired region, e.g., East US and West US, or South Africa North and South Africa West. For this demo, 'South Africa West' is not included in my subscription.
Multi-region writes capability allows you to take advantage of the provisioned throughput for your databases and containers across the globe.
Under networking, your Azure Cosmos DB account can be reached either publicly, via public IP addresses or service endpoints, or privately, using a private endpoint. Choose according to your use case.
Connection Security Settings – I will go with TLS 1.2
Backup policy defines the way your backup will occur.
Periodic lets you define the interval (minutes or hours) and backup retention (How long you would like your backups to be saved) – in hours or days and Backup storage redundancy in Geo, Zone or Local.
Continuous (7 days) – Provides backup window of 7 days / 168 hours and you can restore to any point of time within the window. This mode is available for free.
Continuous (30 days) – Provides a backup window of 30 days / 720 hours and you can restore to any point of time within the window. This mode has cost impact.
Data Encryption – I will let Microsoft encrypt my account using service-managed keys. Feel free to use your customer-managed key if you have any.
I don’t need to create a tag for now, just review and create.
Let’s Create Event Database Using the Scenario below
For our scenario, we need to store data from sports events (e.g., marathon, triathlon, cycling, etc.). Users should be able to select an event and view a leaderboard. The amount of data that will be stored is estimated at 100 GB.
The schema of the data is different for various events and likely to change. As a result, this requires the database to be schema-agnostic and therefore we decided to use Azure Cosmos DB as our database.
Identify access patterns
To design an efficient data model it is important to understand how the client application will interact with Azure Cosmos DB. The most important questions are:
Is the access pattern more read-heavy or write-heavy?
What are the main queries?
What is the expected document size?
If the access pattern is read-heavy you want to choose a partition key that appears frequently as a filter in your queries. Queries can be efficiently routed to only the relevant physical partitions by including the partition key in the filter predicate.
When the access pattern is write-heavy you might want to choose item ID as the partition key. Item ID does a great job with evenly balancing partitioned throughput (RUs) and data storage since it’s a unique value. For more information, see Partitioning and horizontal scaling in Azure Cosmos DB | Microsoft Docs
Finally, we need to understand the document size. 1 kb documents are very efficient in Azure Cosmos DB. To understand the impact of large documents on RU utilization see the capacity calculator and change the item size to a larger value. As a starting point you should start with only one container and embed all values of an entity in a single JSON document. This provides the best reading performance. However, if your document size is unpredictable and can grow to hundreds of kilobytes you might want to split these in different documents within the same container. For more information, see Modeling data in Azure Cosmos DB – Azure Cosmos DB | Microsoft Docs.
Sample document structure
{
  "eventId": "unique_event_id",
  "eventName": "Marathon",
  "eventDate": "2024-05-20",
  "participants": [
    {
      "participantId": "participant1",
      "name": "Alice",
      "score": 1200
    },
    {
      "participantId": "participant2",
      "name": "Bob",
      "score": 1100
    }
    // ... more participants
  ]
}
The eventId serves as the unique identifier for each event.
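To get a feel for how the document grows as participants are embedded (relevant to the ~1 kb guidance above), the serialized size can be checked locally; this is an approximation, not the exact size Cosmos DB stores or bills for:

```python
# Sketch: estimate an event document's size as more participants are embedded.
import json

document = {
    "eventId": "unique_event_id",
    "eventName": "Marathon",
    "eventDate": "2024-05-20",
    "participants": [
        {"participantId": f"participant{i}", "name": "Runner", "score": 1000 + i}
        for i in range(50)
    ],
}

size_kb = len(json.dumps(document).encode("utf-8")) / 1024
print(f"~{size_kb:.1f} KB with 50 embedded participants")
```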
Create Container
Create a new container
Give it a unique database id
Select autoscale for automatic throughput; otherwise select manual, which can be useful for a single container with a predictable throughput. The advantage of autoscale is that it doesn't cause any downtime. For more information, see How to choose between manual and autoscale on Azure Cosmos DB.
The partition key is specified at the container level; in our case it is /eventId.
Add a document
Click data explorer
Click on Events
Expand Events2024 then items
Click New item
Let's replace the default JSON object with our data
Save a single document
Add the document
Click Save
Save many documents
Let's say you have your data saved in a JSON file, like the one below. Follow the steps below to insert that data.
[
  {
    "eventId": "event_1",
    "eventName": "Coding Competition",
    "eventDate": "2024-05-21",
    "participants": [
      {
        "participantId": "p1",
        "name": "John",
        "score": 980
      },
      {
        "participantId": "p2",
        "name": "Jane",
        "score": 890
      },
      {
        "participantId": "p3",
        "name": "Mike",
        "score": 1020
      }
    ]
  },
  {
    "eventId": "event_2",
    "eventName": "CodeFest",
    "eventDate": "2024-06-15",
    "participants": [
      {
        "participantId": "p4",
        "name": "Lily",
        "score": 950
      },
      {
        "participantId": "p5",
        "name": "Alex",
        "score": 1120
      }
    ]
  },
  {
    "eventId": "event_3",
    "eventName": "Hackathon Challenge",
    "eventDate": "2024-07-10",
    "participants": [
      {
        "participantId": "p6",
        "name": "Sarah",
        "score": 1180
      },
      {
        "participantId": "p7",
        "name": "Kevin",
        "score": 1035
      }
    ]
  },
  {
    "eventId": "event_4",
    "eventName": "Byte Battle",
    "eventDate": "2024-08-05",
    "participants": [
      {
        "participantId": "p8",
        "name": "Olivia",
        "score": 1005
      },
      {
        "participantId": "p9",
        "name": "Ethan",
        "score": 1150
      }
    ]
  },
  {
    "eventId": "event_5",
    "eventName": "Code Warriors Championship",
    "eventDate": "2024-09-20",
    "participants": [
      {
        "participantId": "p10",
        "name": "Ava",
        "score": 1085
      },
      {
        "participantId": "p11",
        "name": "Noah",
        "score": 1070
      }
    ]
  }
]
Click Upload item
Locate the file you want to upload from the file explorer then click upload.
A successful upload will show you the number of records uploaded.
Let’s Query our Database
Click on New SQL Query
Write your SQL query
Run the query
View our results – as you can see, our object has some metadata appended to it.
More Queries
Query 1: View Top Ranked Participants for a Selected Event:
Query 2: View All Events for a Selected Year a Person Has Participated In:
Query 3: View All Registered Participants per Event:
Query 4: View Total Score for a Single Participant per Event:
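For reference, here are illustrative text versions of the four queries above, written against the sample event documents (the event IDs, participant IDs, and names come from the sample data earlier; adjust them to your own). Note that Cosmos DB for NoSQL does not support ORDER BY on a property coming from a JOIN, so ranking participants in Query 1 is left to a client-side sort.

```sql
-- Query 1: participants for a selected event (rank by score on the client,
-- since ORDER BY on a JOINed property is not supported)
SELECT p.participantId, p.name, p.score
FROM c JOIN p IN c.participants
WHERE c.eventId = "event_1"

-- Query 2: all 2024 events a person has participated in
SELECT c.eventId, c.eventName, c.eventDate
FROM c JOIN p IN c.participants
WHERE p.name = "Jane" AND STARTSWITH(c.eventDate, "2024")

-- Query 3: all registered participants per event
SELECT c.eventName, p.name
FROM c JOIN p IN c.participants

-- Query 4: a single participant's score per event
-- (each participant appears once per event document, so p.score is the total)
SELECT c.eventName, p.score
FROM c JOIN p IN c.participants
WHERE p.participantId = "p1"
```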
You can also check the cost of the query; this operation consumed 2.9 RUs.
Read More
Databases, containers, and items in Azure Cosmos DB
Queries in Azure Cosmos DB for NoSQL
How to model and partition data on Azure Cosmos DB using a real-world example
Implement a data modeling and partitioning strategy for Azure Cosmos DB for NoSQL
Data modeling in Azure Cosmos DB
Server-side programming
Migrate data to Azure Cosmos DB using the desktop data migration tool
Microsoft Tech Community – Latest Blogs –Read More
Trustworthy AI: Copilot for Microsoft 365 data security and privacy commitments
Copilot for Microsoft 365—your AI assistant for work—is built on our existing Microsoft 365 commitments to data security and privacy in the enterprise, enabling you to always stay in control. Watch our video series to learn how our comprehensive approach to privacy, security, and compliance safeguards your data.
Copilot for Microsoft 365 became publicly available to Enterprise customers just over six months ago. Since release we've been on a mission to answer all of the thoughtful customer questions and discuss exciting use cases that are changing the way we work. AI at work is here; now comes the hard part. We just released new research and insights in the Work Trend Index 2024 Annual Report.
Employees want AI at work—and won’t wait for companies to catch up: AI use at work has nearly doubled in the last six months, with 75% of knowledge workers using generative AI. Employees are bringing their own AI tools to work, and leaders recognize AI as a business imperative despite lacking a clear plan for implementation. 78% of AI users are bringing their own AI to work (BYOAI).
The rise of the AI power user—and what they reveal about the future: A spectrum of AI users exists, from skeptics to power users. Power users, who are familiar with AI and use it several times a week, report significant benefits in managing workload, boosting creativity (92%), and enjoying work more.
As customers adopt and scale plans for Copilot in their organizations we receive excellent questions about how Copilot for Microsoft 365 works with confidential organizational data in the Microsoft 365 Graph. I’m thrilled to share the latest on how Copilot is revolutionizing the way we work while upholding the highest standards of data protection.
Built on a foundation of trust
A key concept of Microsoft 365 is the ‘tenant’—a secure, encrypted construct to support manageability and data privacy of your organizational data that is distinct, unique, and separate from all other Microsoft 365 tenants. Copilot is an orchestrator that integrates with your tenant, inheriting all your existing Microsoft 365 security, privacy, identity, and compliance requirements.
Watch this video to learn more about how Copilot is built upon a foundation of trust.
Defending your data
At Microsoft, we believe that your data is your business, and you should control its collection, use and distribution, as well as its location. Watch the following video to learn how data is stored, encrypted, processed, and defended.
Securely powered by Azure OpenAI Service
When you interact with Copilot, your prompts are securely processed using Azure OpenAI services, ensuring that your organizational data remains protected. For a deeper understanding of how Azure OpenAI services power Copilot while prioritizing data integrity, watch this video.
Join us in embracing the future of work with Copilot for Microsoft 365—where your data’s security is our top priority. For all the latest updates and deep dive information start at Microsoft Copilot for Microsoft 365 documentation | Microsoft Learn and aka.ms/copilotlab to learn more about how to use Copilot! Find adoption and skilling best practices at adoption.microsoft.com
Incidents and Alerts blades missing in Defender portal
Hi,
We recently found out that the incidents and alerts blades have disappeared from our Defender portal. This is true for both Global Admin and Security Administrator roles. We use A5 licenses in our tenant. Not sure what happened. Microsoft Unified support has not been very helpful in even replying to our query. Can someone please point us in the right direction? We don’t know what has happened.
Thanks in advance,
Bookings confirmation not matching my calendar
I just had a few appointments that got booked for 15 minutes and it is showing 45 minutes in my calendar. I’ve double-checked the duration and that is correct.
For example:
Confirmation came to my email 11:55am – 12:10 pm
Calendar shows: 11:40 – 12:30 for the same appointment
Conditional Formatting is not showing properly
I am trying to find the duplicate values of the serial No.
But I can’t find the duplicate value; it’s showing an error instead.
Could you please guide me on how to solve this issue?
Search engine positioning on Bing I would like
I would like to ask for information. I have positioned the site with the keyword Industrial Manipulator; in Italian it is positioned on the first page, in first position. The site does not display an image related to the product. In the search snippet inside the site I entered the title, description, and image, but the image does not appear. Do you have any suggestions for improving the search result and making the image appear in addition to the title and description?
Thank you
Multitenant collaboration – share users – can’t choose groups
Hi all, I am configuring the new multitenant collaboration now that it’s out of preview.
When I was last testing it in preview, when I clicked “Share users” I was able to select an Entra ID group of users to share. Now the behaviour is different: it only allows me to select users, not groups. Am I missing something obvious here?
Thanks!
copilot does not log
Copilot does not log in on my smartphone; it says there is a problem with my account. When I log in in another browser, it is fine.
Sharepoint sync issues
Hello
Please, I need your help with this issue.
One of our users is experiencing issues with SharePoint sync. They can see folders on the web version, but the folders are not syncing to their laptop or File Explorer.
What’s new across Azure Governance services, Microsoft Build 2024
Over the last six months there have been exciting new releases across Governance services to help you continue to manage your Azure environment with increased speed and control. We are spotlighting the public preview and general availability of highly anticipated policy features, recently released Azure Resource Graph Copilot capabilities, and some sneak peeks into what is coming soon. Stay tuned to explore what AI means for your at-scale cloud management scenarios, and make sure to check us out on X for other updates, @AzureGovernance.
Azure Resource Graph
Azure Resource Graph Copilot Capabilities
We are thrilled with the initial response to, and the major enhancements of, Azure Resource Graph (ARG) capabilities within Azure Copilot. Azure Copilot allows you to understand your resources and environment with ease by transforming natural language prompts into ARG queries. This reduces the expertise you need to run queries and shortens the time to discover answers to key questions about your environment. As we continue to enhance this capability, our goal is to let customers interact with their cloud environment in the same language they use for day-to-day work.
Try it out with some queries like:
“Show me all my VMs that have a public IP address”
“Show me all my Linux VMs along with their creation date”
Learn more about ARG Copilot capabilities here: Get resource information using Microsoft Copilot for Azure (preview) | Microsoft Learn
Generally Available: Azure Resource Graph Power BI Data Connector
We are pleased to announce the general availability of a highly anticipated release: the Azure Resource Graph Power BI Connector, a tool that gives Azure users deeper insights into their Azure resources. This powerful integration combines the strong querying capabilities of Azure Resource Graph with the interactive visualization features of Power BI, enabling users to easily explore, analyze, and visualize their inventory of Azure resources. Refer here for sample queries that you can use with the new Azure Resource Graph Power BI connector to create visualizations.
To learn more about the Azure Resource Graph Power BI Data Connector and how it can transform your Azure experience, review our official documentation and check out our brand-new YouTube tutorial that offers step-by-step guidance on how to use the Azure Resource Graph Power BI Data Connector.
Query VMSS Power State Through ARG
Now you can query virtual machine details in Virtual Machine Scale Set Uniform orchestration mode, categorized according to their power state. The ARG table "ComputeResources" contains the model view, and powerState is in the instance view properties, for virtual machines that are part of a Virtual Machine Scale Set in Uniform mode.
ComputeResources
| where type =~ 'microsoft.compute/virtualmachinescalesets/virtualmachines'
| extend powerState = properties.extended.instanceView.powerState.code
| project name, powerState, id
Refer here for sample queries that you can use with the new Azure Resource Graph ComputeResources table.
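As a variation on the query above (a sketch using the same ComputeResources table and power-state property path; verify both in your environment), you can summarize instance counts by power state instead of listing individual machines:

```kql
ComputeResources
| where type =~ 'microsoft.compute/virtualmachinescalesets/virtualmachines'
| extend powerState = tostring(properties.extended.instanceView.powerState.code)
| summarize instanceCount = count() by powerState
```

This is a quick way to spot, for example, how many instances across your scale sets are deallocated versus running.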
Coming Soon: ARG enhanced support for GET/LIST calls
ARG is introducing a new feature to support existing Azure control plane GET and LIST API calls, providing significantly higher throttling quotas (up to 10x) for large cloud-native customer workloads running in Azure. The goal is to address read throttling that could lead to issues like performance degradation, failed requests, and increased latency impacting critical cloud operations.
Customers can use this capability to get improved performance for Azure GET/LIST APIs while reducing throttling for these calls across key resource types like Compute, Network, etc. The new throttling limits offered by ARG will be aligned to the new Azure Resource Manager throttling limits applied per region, and hence offer a more scalable and performant backend for your GET/LIST calls. Stay tuned to learn more about this update!
If you have faced throttling issues in your environment or want to hear from us, you can reach out to us through the Twitter handle @AzureGovernance or fill out this form.
Azure Policy
Generally available: Selectors and Overrides for Gradual Policy Rollout
Selectors and overrides are now generally available, making it easier than ever to safely roll out your policy assignments. The resourceSelectors property on a policy assignment enables targeting resources by resource location or resource type, so you can target a subset of resources through the rollout stages. In addition, the overrides property allows you to change the effect of a policy definition without modifying the underlying policy definition, or to use a parameterized effect in the policy definition to first roll out with the audit or auditIfNotExists effect.
Check out our how-to guide to learn more on how to leverage these properties and others to safe deploy policy assignments: Safe deployment of Azure Policy assignments – Azure Policy | Microsoft Learn
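As a sketch, a policy assignment using both properties might look like the following JSON fragment. The selector name, the eastus location filter, and the definition ID placeholder are illustrative; see the safe-deployment guide linked above for the full schema:

```json
{
  "properties": {
    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/<definition-id>",
    "resourceSelectors": [
      {
        "name": "Ring0",
        "selectors": [
          {
            "kind": "resourceLocation",
            "in": [ "eastus" ]
          }
        ]
      }
    ],
    "overrides": [
      {
        "kind": "policyEffect",
        "value": "audit"
      }
    ]
  }
}
```

Here the assignment initially applies only to resources in eastus and downgrades the definition's effect to audit, so you can observe compliance results before widening the rollout or enforcing a stricter effect.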
Public preview: SSH Posture Control through Machine Configuration
We are excited to announce additional built-in capabilities for Linux management scenarios through Azure policy and Machine Configuration. Through new built-in policies, you can manage your SSH configuration settings declaratively at-scale.
SSH Posture Control enables you to use the familiar workflows of Azure Policy and Machine Configuration to:
Ensure compliance with standards in your industry or organization
Reduce attack surface of remote management features
Ensure consistent setup across your fleet for security and productivity
SSH Posture Control also provides detailed Reasons describing how compliance or non-compliance was determined. These Reasons help you to document compliance for auditors with confidence and evidence. They also enable you to take action when non-compliance is observed.
For more information, see https://aka.ms/SshPostureControl
Coming Soon: Built-in Policy Versioning and Resource Capabilities
Stay tuned to learn about upcoming releases from the governance team, including built-in policy versioning, a platform shift that will allow you to manage version changes and upgrade built-in policies on demand. To learn more and give it a try, fill out the form linked below to onboard to the private preview. Also coming up is the release of resource capabilities, which allows you to use a single Azure Policy definition to govern a common scenario across multiple resource types.
Onboard to the private previews through the following link: https://aka.ms/governance_pp
Change Analysis powered by Azure Resource Graph
Public Preview: New Change Analysis Portal Experience
Viewing changes to your Azure resources just became easier! With the new Change Analysis experience powered by Azure Resource Graph, you can now view all your resource changes across all your tenants and subscriptions in the Azure Portal. Resources are at the heart of this new experience. It also gives you an onboarding-free experience, tenant-wide querying rather than selecting subscriptions, more scalable and extensive filtering capabilities, change actor information and improved accuracy. To learn more visit: https://learn.microsoft.com/en-us/azure/governance/resource-graph/changes/view-resource-changes
To stay on top of all our latest releases and updates or if you have any questions, be sure to give us a follow on X at @AzureGovernance.
What’s new in Microsoft Intune May 2024
Innovation is in the air this spring (or fall for our friends in the Southern Hemisphere). I’m pleased to highlight some new capabilities we’re bringing to Intune this month. We’re adding features that increase secure productivity. Read on to learn what’s new and notable this month, then put these features to work for your organization.
Getting down to business
We have three major enhancements to highlight this month that help users get down to business:
Platform single sign-on (SSO) has arrived for macOS device enrollment: This capability helps users with macOS devices get to work faster, with a single sign-in and password for their device and apps. Additionally, it enables users to automatically sign in to their Microsoft 365 productivity apps. To learn more, see this article about the rest of the Mac management news.
Windows Autopilot device preparation: Built from the ground-up with an improved architecture, this new Windows Autopilot option offers faster and more configurable self-deployment capabilities. The original, existing Windows Autopilot architecture is still in place and its existing capabilities are all still available to admins. Read more about the new and improved Windows Autopilot.
Enhanced frontline worker (FLW) device management: New capabilities make FLW devices easier to use and manage. One of the biggest improvements is updates to the Managed Home Screen. Get the whole story in this blog post.
More secure and more efficient
We’re also introducing capabilities to Intune focused on making it easier to improve security and efficiency.
New security baseline
First is an update to the Microsoft Defender for Endpoint security baseline. Security baselines are one-click collections of policies that can be applied to devices (and device groups) in Intune. This latest update is a super-efficient way to apply configurations recommended by the Microsoft Defender for Endpoint team. It’s also based on the Windows unified settings platform, which brings some additional benefits like:
Quicker turnaround for updates.
Improved reporting, including per-setting status reports.
Assignment filter support.
Improved UI.
Consistent names across Intune.
We recommend updating baselines to the latest version by selecting the check box to test the baseline when it's released.
BitLocker recovery key
The second addition is to the BitLocker recovery key workflow. Traditionally, if a user gets locked out of their BitLocker-encrypted device, they call the Help Desk. With the capability we’re rolling out, end users can access their BitLocker recovery key directly from the Company Portal website, providing a more intuitive and streamlined path to recovery and reducing the burden on support teams.
Admins can disable this feature for users without admin rights and access to logs. For more information, see the documentation on Get recovery key for Windows.
Corporate identifiers
The third capability is an update to the Windows corporate identifiers feature. This can be used as part of any Windows deployment, including the new Windows Autopilot device preparation process.
This change is meant to help you and your security teams ensure that only devices that are explicitly authorized can be marked as corporate-owned. Organizations can upload a comma-separated values (.csv) list of devices, specifying manufacturer, model, and serial number (for Windows devices only). Details will be available in the documentation when this feature is released, as it’s rolling out separately from the May 2024 update.
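Based on the three fields described above (manufacturer, model, serial number), such a .csv upload might look like the following. The rows and the exact header requirements here are illustrative; check the documentation once it's published for the required format:

```csv
Manufacturer,Model,SerialNumber
Microsoft Corporation,Surface Pro 9,0F1W2X3Y4Z
Dell Inc.,Latitude 5440,ABC1234XYZ
```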
Enrollment time grouping
You know that device groups are powerful tools for managing lots of devices. Before the introduction of this new capability, enrollment time grouping, new Windows devices would get policies only once the device’s properties were discovered and group memberships were evaluated, resulting in unpredictable wait times before devices were ready to use. The enrollment time grouping feature accelerates group assignment and shortens the time to productivity for end users by skipping the inventory discovery and dynamic membership evaluation phases. Enrollment time grouping is currently available as part of Windows Autopilot device preparation, which is being released at the end of May 2024, and will be expanded to other enrollment methods and platforms in the months ahead. To learn more, read this article on enrollment time grouping and Windows Autopilot device preparation.
Stay up to date! Bookmark the Microsoft Intune Blog and follow us on LinkedIn or @MSIntune on X to continue the conversation.