Month: September 2024
Loop – Some recipients won't be able to view or edit this
We have been having some issues with Loop in Teams when people create agendas.
Organisers were receiving an error that some recipients won't be able to view or edit this when creating notes/agendas with a Loop component in the Teams meeting invite. After talking with MSFT, the common thread was that the affected users did not show as user type 'Member' in Entra ID; once their accounts were updated to 'Member', the problem went away.
I am now seeing this even with the entire org set to user type 'Member'. I have a meeting with 17 people: 15 were auto-added and 2 were not (everyone on the invite list is user type 'Member').
Is anyone else experiencing this? Any advice or ideas are welcome on how to ensure that when you add notes/agendas to a meeting, everyone on the invite list is included and added by default.
Thanks in advance! 😀
Multi-Stakeholder Booking
Hello All – I have a scenario and want to know how I can use the Bookings app.
We have to set up meetings between a panel of 4 senior leadership members and 30 different teams. The panel would talk to each team individually. I am looking at getting schedules that are commonly available for all 4 SLT members and publishing them in the Bookings app so that the teams can book the available time slots. Once a slot is booked, it should appear on the calendars of the user and the panel, based on the subject chosen by the user.
Once a slot is taken by Team 1, only 29 slots would remain open for the other teams, and so on. How can I achieve this?
Decommissioning Exchange Server 2016 with Azure AD Connect Enabled
Long story short, we have migrated all our mailboxes to the cloud and all email is flowing in the cloud.
We have Azure AD Connect enabled to sync AD users' passwords and other attributes, including Exchange attributes. We need to decommission our Exchange server, and no guide or KB article covers this scenario.
Please help and point us in the right direction on how to decommission our Exchange Server 2016.
How to Increase C Drive Space in Windows 11 without Formatting
Due to a previous mistake, I only allocated 80GB of storage for the Windows OS. Now there is only 10GB of free space left on my C drive, as Windows 11 eats up more space than Windows 10. In fact, there is plenty of free space available on the D drive (nearly 100GB). Is there any way to increase C drive space in Windows 11 without formatting the drive?
I don't want to reinstall Windows 11, as it is a really time-consuming task. Kindly share your experience if you know how to add more space to the C drive in Windows 11 without formatting.
P.S. There is no option to extend the C drive in Windows 11 when opening the Disk Management app.
MS Teams for Education – MS surveys as assignments
Hello guys
So far we can attach an MS quiz as an assignment in a classroom, but we cannot attach a survey.
What we want is to get students' self-evaluation of their competencies, or just to gather some feedback.
Having this as an assignment is nice: it can be scheduled, evaluated, etc.
Do you know if it's on the backlog, and if there is any workaround?
Thanks!
Office 365 v2408 issue with DocuSign
I'm facing an issue when using Office 365 version 2408 to send a document to DocuSign. The error is "unable to convert to pdf". The same issue does not occur if I use an older Office 365 version, 2308 or earlier.
Watch FY25 START – What's New in FY25 for Microsoft Partners
Welcome to watch the recording of Microsoft's FY25 START event for partners, held in Espoo on 12 September 2024. In the video you will hear greetings from Microsoft's leadership, get a walkthrough of the latest updates by solution area (Azure, Security, Microsoft 365, and Dynamics 365), and celebrate the 2024 Partner of the Year winners.
The recording is available in Microsoft's Cloud Champion service. By registering, you also gain access to other content, such as Kumppanitunti (Partner Hour) and Akatemia (Academy).
Watch the recording here: FY25 START – Mitä uutta FY25 tuo Microsoftin kumppaneille
Get-Mailbox Versus Get-ExoMailbox
Modernized Get-Mailbox Cmdlet Versus REST Get-ExoMailbox Cmdlet
In November 2019, Microsoft introduced a set of REST-based cmdlets designed to improve the performance and stability of the most frequently used PowerShell actions run against Exchange Online. The new set didn't use Remote PowerShell and incorporated functionality like pagination (similar to Graph API requests). Given its use in many scenarios, the Get-ExoMailbox cmdlet is possibly the poster child for the REST cmdlets. Many tests were run, usually successfully, to validate its performance advantages over the older Get-Mailbox cmdlet.
Get-Mailbox Still in Active Use
Five years on, I still see people use Get-Mailbox in their scripts, and I was recently quizzed about the enduring nature of the older cmdlet. It's a good question. Despite my advice, many chose to leave Get-Mailbox untouched in their scripts on the basis that if something isn't broken, it shouldn't be touched. Get-ExoMailbox behaves differently, especially in how it fetches mailbox properties. In a nutshell, Get-ExoMailbox fetches just fifteen of the hundreds of available mailbox properties, so if you want a property like InPlaceHolds or ArchiveStatus, you must request it:
[array]$Mbx = Get-ExoMailbox -Properties Office, InPlaceHolds, ArchiveStatus
It’s all too easy to forget to request a property. I can appreciate that perspective because I’ve fallen into the unrequested property hole myself.
Another reason why people stick with Get-Mailbox is that Microsoft has modernized the older cmdlets to remove dependencies like basic authentication and remote PowerShell. I’ve heard the feeling expressed that if Microsoft puts time and effort into upgrading a cmdlet, it must be a good sign that the cmdlet can safely be used. And yes, Get-Mailbox is very safe to use.
The question then is when to use Get-Mailbox and when to opt for its turbo-charged version. I propose a simple guideline:
When you're working interactively with fewer than five mailboxes, use Get-Mailbox. The cmdlet fetches all available mailbox properties, but that's OK because relatively few objects are involved. In addition, requests don't need to page to find more data, and the chances of timeouts or other known problems are small when fetching a small number of mailboxes.
Any other time, use Get-ExoMailbox. That means all scripts and Azure Automation runbooks should use Get-ExoMailbox. Scripts should include the best possible code, and that means using the best possible cmdlets. The issue of requesting the correct set of properties shouldn't arise because testing of the script will highlight any problems in this area.
The same rule of thumb applies to the other REST cmdlets like Get-ExoMailboxStatistics, Get-ExoMailboxFolderStatistics, and so on. I have a lingering suspicion that Microsoft will dedicate more tender loving care to the REST cmdlets than their older counterparts. It’s probably not true, but stranger things have happened.
The Importance of Server-Side Filtering When Fetching Mailboxes
While I'm at it, let me advance another golden rule for use with either Get-Mailbox or Get-ExoMailbox: never use a client-side filter when a server-side filter is available. The reason is that a server-side filter is always faster than retrieving all possibly matching data over the network and then applying a client-side filter.
I review many articles, and it's surprising how often a submitted code example abuses the server-side principle. For example, this server-side filtered command:
[array]$Mbx = Get-ExoMailbox -Filter {Office -eq 'New York'} -Properties Office
is much faster than:
[array]$Mbx = Get-ExoMailbox -Properties Office -ResultSize 1000 | Where-Object {$_.Office -eq 'New York'}
The exact performance advantage depends on the number of objects that are retrieved, but I have never seen a case when a client-side filter wins. Use the Measure-Command cmdlet to measure the speed advantage by running commands against mailboxes. This article has more information about using filters with Get-ExoMailbox.
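As a minimal sketch of such a measurement (assuming mailboxes with the Office property populated; absolute timings will vary with tenant size):
$ServerSide = Measure-Command {
   [array]$Mbx = Get-ExoMailbox -Filter {Office -eq 'New York'} -Properties Office
}
$ClientSide = Measure-Command {
   [array]$Mbx = Get-ExoMailbox -Properties Office | Where-Object {$_.Office -eq 'New York'}
}
# Compare the elapsed time for the two approaches
Write-Host ("Server-side: {0:N1}s Client-side: {1:N1}s" -f $ServerSide.TotalSeconds, $ClientSide.TotalSeconds)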
A PowerShell Principle
The principle of using server-side filters extends anywhere PowerShell fetches data from a server, including using Microsoft Graph PowerShell SDK cmdlets. If you see the Where-Object cmdlet being used to extract a set of objects from a larger set, ask if the larger set could have been reduced with a server-side filter. In many cases, it can, and if a server-side filter can be applied, your scripts will run faster, no matter if you use Get-Mailbox or Get-ExoMailbox (but use the latter).
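For instance, a sketch with the Microsoft Graph PowerShell SDK (assuming accounts with the Department attribute populated):
# Server-side: the service returns only matching accounts
[array]$Users = Get-MgUser -Filter "department eq 'Sales'" -All
# Client-side: every account crosses the network before being filtered
[array]$Users = Get-MgUser -All | Where-Object {$_.Department -eq 'Sales'}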
Learn how to exploit the data available to Microsoft 365 tenant administrators through the Office 365 for IT Pros eBook. We love figuring out how things work.
Managed Instance intermittently logs me in as DBO
Intermittently, Azure SQL Managed Instance logs me in as dbo when I log in using Entra ID. I will wake up the next day to find the problem gone.
This is an issue because when I try to create a new transactional replication publication, I get:
Msg 15007, Level 16, State 1, Procedure sys.sp_grant_publication_access, Line 112 [Batch Start Line 17]
'dbo' is not a valid login or you do not have permission.
Can anyone help me understand why Managed Instance sometimes logs me in as 'dbo', or should I raise a ticket?
FYI – I am in the Entra admin group on the server.
DevOps Templates
Hello,
Is it possible to define templates that include the complete tree: Epics, Features, Stories?
Regards,
JFM_12
Only services and no staff
We want to use Bookings to offer bookable physical resources. We have five different resources that we want people to be able to reserve for different time slots. These resources should not be connected to any staff calendar or other calendar, but should have their own unique resource calendars. How can we set this up?
How Do I Convert WebP to PNG on Windows 11?
Hi,
I really need some help here, as I just upgraded my PC from Windows 10 to Windows 11. I have more than 10 .webp images downloaded from the web and am currently looking for a way to bulk convert WebP to PNG so I can edit and share them with others.
Does Windows 11 come with a WebP to PNG converter? If yes, could you kindly let me know? In addition, it would be better to keep the image quality after conversion.
Thank you
Is it possible to configure the file repository of a Teams group conversation?
Hello:
I need to be able to configure a specific file repository where people in a Teams group conversation can save files, for that conversation only. Let me describe the scenario:
I need to create, dynamically, different group conversations depending on the theme being discussed, and I will add different members to each conversation.
Each time, I will create a new specific folder in a Teams channel where only the conversation members and channel members have full permissions.
Is it possible to configure the file repository for each conversation so that it points to the folder created and shared in the Teams channel? (See image attached.)
If it's possible, I need to know which commands or functions permit that.
I would like to use them in a Power Apps program.
Thank you very much.
Optimizing Models: Fine-Tuning, RAG and Application Strategies
Before diving in, let's take a moment to review the key resources and foundational concepts that will guide us through this blog and ensure we're well equipped to follow along. This brief review provides a strong starting point for exploring the main topics ahead.
Microsoft Azure: Microsoft's cloud computing platform and suite of cloud services. It provides a wide range of cloud-based services and solutions that enable organizations to build, deploy, and manage applications and services through Microsoft's global network of data centers.
AI Studio: a platform that helps you evaluate model responses and orchestrate prompt application components with prompt flow for better performance. The platform makes it easy to scale proofs of concept into full production, while continuous monitoring and refinement support long-term success.
Fine-tuning: the process of retraining pretrained models on specific datasets, typically to improve model performance on specific tasks or to introduce information that wasn't well represented when the base model was originally trained.
Retrieval Augmented Generation (RAG): a pattern that works with pretrained large language models (LLMs) and your own data to generate responses. In Azure Machine Learning, you can implement RAG in a prompt flow.
Our hands-on project is an AI-based solution that helps the user extract financial information and insights from the investment/finance books and newspapers in our database.
The process is divided into three main parts:
Fine-tune a base model with financial data to help the model provide more specific responses, grounded in data related to finance and investment.
Implement RAG so that responses are based not only on the data the model was fine-tuned with but also on other data sources (the user's input in our case).
Integrate the deployed model into a web app so that it can be used through a user interface.
1- Setup:
Create a resource group, which is defined as a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group.
You need to specify your subscription, a unique resource group name, and the region.
Create an Azure OpenAI resource: Azure OpenAI Service provides REST API access to OpenAI's powerful language models, including the GPT-4o, GPT-4 Turbo with Vision, GPT-4, GPT-3.5-Turbo, and Embeddings model series. These models can easily be adapted to your specific task, including but not limited to content generation, summarization, image understanding, semantic search, and natural language to code translation.
– Create a text embedding model: an embedding is an information-dense representation of the semantic meaning of a piece of text. Each embedding is a vector of floating-point numbers, such that the distance between two embeddings in the vector space is correlated with the semantic similarity between the two inputs in their original format.
Create an AI Search resource: Azure AI Search (previously "Azure Cognitive Search") provides secure information retrieval at scale over user-owned content in traditional and generative AI search applications. Information retrieval is foundational to any app that surfaces text and vectors. Common scenarios include data exploration and, increasingly, feeding query results into prompts based on your proprietary grounding data for conversational search, as we will do in our example.
Create a storage account: it contains all your Azure Storage data objects: blobs, files, queues, and tables. The storage account provides a unique namespace for your Azure Storage data that is accessible from anywhere in the world over HTTP or HTTPS.
– Create a blob container: Blob Storage is Microsoft's object storage solution, optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data. The container will be used to store your data.
Navigate to your storage resource -> click the Storage browser tab on the left -> click Blob containers -> click + Add container, then upload your data. Our data consisted of PDF files (books and newspapers) and CSV files from Kaggle, all related to finance and investment.
Create a search index: the index is your searchable content, available to the search engine for indexing, full-text search, vector search, hybrid search, and filtered queries. Check that the status of your AI Search service is "Running".
– Import and vectorize data: integrated vectorization is an extension of the indexing and query pipelines in Azure AI Search. It adds the following capabilities: data chunking (splitting the data into smaller, manageable pieces) during indexing, and text-to-vector conversion during indexing.
Navigate to your AI Search service -> click the Indexes tab on the left -> click "Import and vectorize data" -> select the text embedding model you deployed previously. A quick query against the new index (sketched below) confirms that documents were chunked and indexed.
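Here's a minimal sketch of that sanity check, with some assumptions on my part: the azure-search-documents package is installed, the index is named "myindex" (as used later in this blog), and the import wizard created its default "title" and "chunk" fields.
import os
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Connect to the index built by the "Import and vectorize data" wizard
search_client = SearchClient(
    endpoint=os.getenv("AZURE_SEARCH_ENDPOINT"),
    index_name="myindex",
    credential=AzureKeyCredential(os.getenv("AZURE_SEARCH_KEY"))
)

# A simple keyword query: if documents come back, chunking and indexing worked
results = search_client.search(search_text="dollar cost averaging", top=3)
for doc in results:
    print(doc.get("title"), "-", str(doc.get("chunk", ""))[:80])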
2- Fine-Tune a Base Model
Preparing the dataset: before starting fine-tuning, we need to prepare our training and validation data, which could be collected manually or generated from text files or tabular data. However, it is important that the data conform to the following template:
{"messages": [{"role": "system", "content": "content goes here"}, {"role": "user", "content": "query goes here?"}, {"role": "assistant", "content": "response goes here."}]}
To satisfy this, we prepared our two data sets training_data.jsonl and val_data.jsonl for training and validation, respectively.
Below is an example of our training_data.jsonl:
{"messages": [{"role": "system", "content": "You are a helper in finance and an advisor in investment"}, {"role": "user", "content": "What makes a successful startup?"}, {"role": "assistant", "content": "Luck, hard work, and maybe skills... mostly luck, hard work, and consistency."}]}
Both data files are attached to this blog. They were collected manually from a set of examples.
Evaluate the data to ensure its quality; check the number of tokens and their distribution.
import json
import tiktoken
import numpy as np

encoding = tiktoken.get_encoding("cl100k_base")

def num_tokens_from_messages(messages, tokens_per_message=3, tokens_per_name=1):
    # Count the tokens a chat example consumes across all its messages
    num_tokens = 0
    for message in messages:
        if not isinstance(message, dict):
            print(f"Unexpected message format: {message}")
            continue
        num_tokens += tokens_per_message
        for key, value in message.items():
            if not isinstance(value, str):
                print(f"Unexpected value type for key '{key}': {value}")
                continue
            num_tokens += len(encoding.encode(value))
            if key == "name":
                num_tokens += tokens_per_name
    num_tokens += 3  # every reply is primed with an assistant message
    return num_tokens

def num_assistant_tokens_from_messages(messages):
    # Count only the tokens in assistant responses
    num_tokens = 0
    for message in messages:
        if not isinstance(message, dict):
            print(f"Unexpected message format: {message}")
            continue
        if message.get("role") == "assistant":
            content = message.get("content", "")
            if not isinstance(content, str):
                print(f"Unexpected content type: {content}")
                continue
            num_tokens += len(encoding.encode(content))
    return num_tokens

def print_distribution(values, name):
    if values:
        print(f"\n#### Distribution of {name}:")
        print(f"min / max: {min(values)}, {max(values)}")
        print(f"mean / median: {np.mean(values)}, {np.median(values)}")
        print(f"p5 / p95: {np.quantile(values, 0.05)}, {np.quantile(values, 0.95)}")
    else:
        print(f"No values to display for {name}")

files = [
    r'train_data.jsonl',
    r'val_data.jsonl'
]

for file in files:
    print(f"Processing file: {file}")
    try:
        with open(file, 'r', encoding='utf-8') as f:
            total_tokens = []
            assistant_tokens = []
            for line in f:
                try:
                    ex = json.loads(line)
                    messages = ex.get("messages", [])
                    if not isinstance(messages, list):
                        raise ValueError("The 'messages' field should be a list.")
                    total_tokens.append(num_tokens_from_messages(messages))
                    assistant_tokens.append(num_assistant_tokens_from_messages(messages))
                except json.JSONDecodeError:
                    print(f"Error decoding JSON line: {line}")
                except ValueError as ve:
                    print(f"ValueError: {ve} - line: {line}")
                except Exception as e:
                    print(f"Unexpected error processing line: {e} - line: {line}")
        if total_tokens and assistant_tokens:
            print_distribution(total_tokens, "total tokens")
            print_distribution(assistant_tokens, "assistant tokens")
        else:
            print("No valid data to process.")
        print('*' * 50)
    except FileNotFoundError:
        print(f"File not found: {file}")
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
Log in to AI Studio
Navigate to the Fine-tuning tab
Check the available models for fine-tuning in your region
Upload your training and validation data
Since we have our data locally, we uploaded it. If you prefer to store your data in the cloud and use a URL instead of the "Uploading files" option, you can use the SDK and follow this code:
from openai import AzureOpenAI

# Initialize the AzureOpenAI client (endpoint, key, and API version for your resource)
client = AzureOpenAI(
    azure_endpoint=azure_oai_endpoint,
    api_key=azure_oai_key,
    api_version=version  # Ensure this API version is correct
)

training_file_name = r'path'
validation_file_name = r'path'

try:
    # Upload the training dataset file
    with open(training_file_name, "rb") as file:
        training_response = client.files.create(
            file=file, purpose="fine-tune"
        )
    training_file_id = training_response.id
    print("Training file ID:", training_file_id)
except Exception as e:
    print(f"Error uploading training file: {e}")

try:
    # Upload the validation dataset file
    with open(validation_file_name, "rb") as file:
        validation_response = client.files.create(
            file=file, purpose="fine-tune"
        )
    validation_file_id = validation_response.id
    print("Validation file ID:", validation_file_id)
except Exception as e:
    print(f"Error uploading validation file: {e}")
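If you continue with the SDK rather than the portal, submitting and polling the fine-tuning job might look roughly like the sketch below. The base model name and hyperparameters are assumptions; check what is available for fine-tuning in your region.
import time

# Submit a fine-tuning job that references the uploaded files
job = client.fine_tuning.jobs.create(
    model="gpt-35-turbo-0613",  # assumed base model; adjust to your region
    training_file=training_file_id,
    validation_file=validation_file_id,
    hyperparameters={"n_epochs": 2}  # optional; defaults also work
)
print("Job ID:", job.id)

# Poll until the job finishes (queued -> running -> succeeded)
while True:
    job = client.fine_tuning.jobs.retrieve(job.id)
    print("Status:", job.status)
    if job.status in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(60)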
You can specify hyperparameters such as batch size, or leave them at their default values.
Review the settings before submitting.
Check the status of the fine-tuning job in your dashboard as it changes from Queued to Running to Completed.
Once completed, your fine-tuned model is ready to be deployed. Click 'Deploy'.
After successful deployment, you can go back to Azure OpenAI and find your fine-tuned model deployed alongside your previously deployed text embedding model.
3- Integration into Web App
The concept here is to rely on the model's knowledge plus the user's documents. We have two options, and both provide high precision for responses:
Look for the answer in the documents and, if it is not found, return a response based on the model's internal knowledge.
Combine the two responses from the retriever and the model, which is the option we choose here.
For integration, there are two routes we may follow: use the Azure OpenAI user interface and deploy to an Azure static web app, or develop your own web app and use the Azure SDK to integrate the model.
1- Deploying into Azure static web app
Click "Open in Playground" below your deployments list in Azure OpenAI
Click "Add your data"
Choose your Azure Blob Storage as the data source -> choose the index name "myindex"
Customize the system message to "You are a financial advisor and an expert in investment. You have access to a wide variety of documents. Use your own knowledge to answer the question and verify it or supplement it using the relevant documents when possible." This system message enables the model to rely not only on the documents but also on its internal knowledge.
Complete the setup and click "Apply changes"
Deploy to a new web app and configure the web app name, subscription, resource group, location, and pricing plan.
2- Develop your own web app and use the Azure SDK
Prepare your environment
import os
from dotenv import load_dotenv

# Load the endpoint, keys, and deployment names from a .env file
load_dotenv()
azure_oai_endpoint = os.getenv("AZURE_OAI_FINETUNE_ENDPOINT2")
azure_oai_key = os.getenv("AZURE_OAI_FINETUNE_KEY2")
azure_oai_deployment = os.getenv("AZURE_OAI_FINETUNE_DEPLOYMENT2")
azure_search_endpoint = os.getenv("AZURE_SEARCH_ENDPOINT")
azure_search_key = os.getenv("AZURE_SEARCH_KEY")
azure_search_index = os.getenv("AZURE_SEARCH_INDEX")
Initialize your AzureOpenAI client
from openai import AzureOpenAI

# The /extensions path routes requests through the Azure OpenAI "on your data" API
client = AzureOpenAI(
    base_url=f"{azure_oai_endpoint}/openai/deployments/{azure_oai_deployment}/extensions",
    api_key=azure_oai_key,
    api_version="2023-09-01-preview"
)
Configure your data source for Azure AI Search. This is what retrieves responses from our stored files.
extension_config = dict(
    dataSources=[
        {
            "type": "AzureCognitiveSearch",
            "parameters": {
                "endpoint": azure_search_endpoint,
                "key": azure_search_key,
                "indexName": azure_search_index,
            }
        }
    ]
)
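To show how these pieces connect, here is a minimal sketch of the retrieval-augmented chat call. An assumption on my side: with the 2023-09-01-preview extensions endpoint, the data source configuration travels in the request body, which the openai Python package (v1.x) supports via extra_body; newer API versions replace this with a data_sources parameter and drop the /extensions path.
# Ask a question; the service consults the search index before answering
response = client.chat.completions.create(
    model=azure_oai_deployment,
    messages=[
        {"role": "system", "content": "You are a financial advisor and an expert in investment."},
        {"role": "user", "content": "What does dollar cost averaging mean?"}
    ],
    extra_body=extension_config
)
print(response.choices[0].message.content)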
RAG is used to enhance a model’s capabilities by adding more grounded information, not to eliminate the model’s internal knowledge.
Some issues you may face during development:
Issue 1: make sure to verify the OpenAI package version. You can pin the version to openai==0.28 or upgrade it and follow the migration steps.
Issue 2: you may run out of quota and be asked to wait 24 hours until the next try. Make sure to always have enough quota in your subscription.
Next, you can look at real-time injection so that you can personalize responses even more. Try working out how to tie together your web app, the user's input/output, the search index, and the LLM.
Keywords: LangChain, Databricks
Resources:
what-is-azure-used-for
What is Azure AI Studio? – Azure AI Studio | Microsoft Learn
Fine-tuning in Azure AI Studio – Azure AI Studio | Microsoft Learn
machine-learning/concept-retrieval-augmented-generation
Manage resource groups – Azure portal – Azure Resource Manager | Microsoft Learn
What is Azure OpenAI Service? – Azure AI services | Microsoft Learn
Introduction to Azure AI Search – Azure AI Search | Microsoft Learn
storage-account-create
Introduction to Blob (object) Storage – Azure Storage | Microsoft Learn
How to generate embeddings with Azure OpenAI Service – Azure OpenAI | Microsoft Learn
Azure OpenAI Service models – Azure OpenAI | Microsoft Learn
Search index overview – Azure AI Search | Microsoft Learn
Integrated vectorization – Azure AI Search | Microsoft Learn
Easy Guide to Transitioning from OpenAI to Azure OpenAI: Step-by-Step Process
LangChain on Azure Databricks for LLM development – Azure Databricks | Microsoft Learn
Build a RAG-based copilot solution with your own data using Azure AI Studio – Training | Microsoft Learn
RAG and generative AI – Azure AI Search | Microsoft Learn
Retrieval augmented generation in Azure AI Studio – Azure AI Studio | Microsoft Learn
Retrieval Augmented Generation using Azure Machine Learning prompt flow (preview) – Azure Machine Learning | Microsoft Learn
Retrieval-Augmented Generation (RAG) with Azure AI Document Intelligence – Azure AI services | Microsoft Learn
Get Certified with Learn Live GitHub Universe in Spanish
GitHub Universe is coming! Microsoft and GitHub have teamed up to offer a special new Learn Live series in English and Spanish: GitHub Universe 2024. From October 10 to 24, you will learn how to get the most out of GitHub Copilot, automate with GitHub Actions to build websites and APIs, and much more. You will also receive a discount coupon to take a GitHub certification for $35 USD (the regular price is $99 USD). Register now!
*Offer valid for 48 hours after a session. Limit of one GitHub discount coupon per person. This offer is non-transferable and cannot be combined with any other offer. This offer ends 48 hours after a session and cannot be redeemed for cash. Taxes, if any, are the sole responsibility of the recipient. Microsoft reserves the right to cancel, change, or suspend this offer at any time without notice.
Learn Live GitHub Universe
Whether you are just getting started or looking to sharpen your skills, this is a must-attend event for anyone interested in growing their career in tech. All sessions will be held on Mexico City time (GMT-6). REGISTER HERE: Learn Live GitHub Universe 2024!
October 10, 5:00 pm GMT-6 (Mexico City time)
Create Impressive READMEs with Markdown
Learn how to use Markdown, a super useful markup language that will let you create striking content and make your repository stand out on GitHub.
October 17, 5:00 pm GMT-6 (Mexico City time)
Build a Website with GitHub Copilot
Build a web API with Python using the latest GitHub technologies such as Codespaces and GitHub Copilot. Create a sample repository that you can use as part of the portfolio on your account, following expert guides and examples.
October 24, 5:00 pm GMT-6 (Mexico City time)
Automate Your Repository with GitHub Actions
Use GitHub Actions to build automation and avoid manual or repetitive tasks in your project! In this session, you will gain the skills needed to implement automation with GitHub Actions in a code repository.
If you would like to join the series in English, please visit our Microsoft Reactor website and register now!
Exploring GitHub Certifications
Earning a GitHub certification is a powerful affirmation of your skills, credibility, trustworthiness, and expertise in the technologies and developer tools used by more than 100 million developers worldwide. GitHub currently offers four certifications, and in October a fifth certification focused on GitHub Copilot will launch.
GitHub Foundations: highlights your understanding of the foundational topics and concepts of collaborating, contributing, and working on GitHub. This exam covers collaboration, GitHub products, Git basics, and working within GitHub repositories.
GitHub Actions: certifies your proficiency in automating workflows and accelerating development with GitHub Actions. Test your skills in streamlining workflows, automating tasks, and optimizing software pipelines, including CI/CD, within fully customizable workflows.
GitHub Advanced Security: highlights your code security knowledge with the GitHub Advanced Security certification. Validate your expertise in identifying vulnerabilities, securing workflows, and implementing robust security, raising the bar for software integrity.
GitHub Administration: certifies your ability to optimize and manage a healthy GitHub environment with the GitHub Admin exam. Highlight your expertise in managing repositories, optimizing workflows, and collaborating efficiently to support successful projects on GitHub.
Join us this October for Learn Live GitHub Universe and get a special discount coupon for a GitHub certification.
Defender >> Endpoint Security Reports: Summary vs EDR Onboarding status
I have a query related to MDE >> Endpoint security >> Endpoint detection and response.
To check how many devices are onboarded, I notice two menus: Summary and EDR Onboarding status.
What is the difference between Summary and EDR Onboarding status?
Which menu's value should be used to identify the total number of devices successfully onboarded to MDE in a tenant?
Treat DKIM-signed mail from our own domain, sent via an external relay host to Exchange Online, as internal
Hi Folks,
we use some third-party web applications that send mail to Exchange Online via a relay service. These mails carry a valid DKIM signature for our primary domain (d=…). The sender address is our accepted (primary) mail domain.
Exchange Online marks these mails as coming from outside the organisation, as they are authenticated neither via sending IP nor via certificate. As the relay service is shared, we do not want to whitelist its IP or SSL certificate, because others use this system as well.
Is there a way to use the DKIM signature to treat these mails as internal?
Thank you.
Siegmar
Azure Backup-SAP HANA DB Backup Delivers More Value at Lower TCO with Reduced Protected Instance Fee
Azure Backup for SAP HANA Database Delivers More Value at Lower TCO with Reduced Protected Instance Fees, Starting 1 September 2024
At Azure, our commitment to providing superior value to our customers is unwavering. We are thrilled to announce a significant update that will bring enhanced cost efficiency to our SAP HANA Database Backup service. Starting September 1, 2024, we are reducing the Protected Instance (PI) fees for “Azure Backup for SAP HANA on Azure VM.” This change is designed to deliver more value at a lower cost, making it easier for enterprises to protect their critical data without compromising on quality or performance.
New Pricing Structure: More Value, Lower Cost
With the new Protected Instance (PI) fee structure effective 1st September 2024, both SAP HANA Streaming/Backint-based backups and SAP HANA Snapshot-based backups will see reduced costs. Here’s how the new pricing breaks down:
HANA Backint/Streaming Backup (Protected Instance fee, East US2):
| DB Size | Old Pricing | New Pricing | PI Cost Savings % |
| 500 GB | $80 | $80 | No Change |
| 1 TB | $160 | $80 | 50% |
| 5 TB | $800 | $80 | 90% |
| 10 TB | $1,600 | $80 | 95% |

HANA Snapshot Backup (Protected Instance fee, East US2):
| DB Size | Old Pricing | New Pricing | PI Cost Savings % |
| 1 TB | $160 | $80 | 50% |
| 10 TB | $1,600 | $160 | 90% |
| 20 TB | $3,200 | $320 | 90% |
| 30 TB | $4,800 | $480 | 90% |
HANA Streaming Backup: A flat rate of $80 (East US2) per instance, with standard regional uplift, regardless of the HANA database size.
For example, if you are protecting a 1.2 TB HANA database in one instance running in the East US2 region, the new PI cost would be a flat $80 per month. Previously, the cost would have been $240 plus storage consumed.
HANA Snapshot Backup: $80 (East US2) per 5 TB increment, with standard regional uplift.
For example, if you have a 10 TB HANA database in one instance running in the East US2 region, the new PI cost would be $160 (two 5 TB increments) plus storage consumed; previously, the cost would have been $1,600 plus storage consumed. Following the SAP recommendation, if you opt for a weekly full streaming backup in addition to snapshot backups, we apply the single PI fee applicable to HANA snapshot backup.
For more details on the new pricing structure, visit Pricing – Cloud Backup | Microsoft Azure and Pricing Calculator | Microsoft Azure.
Real-World Impact: Customer Scenarios
To better illustrate the impact of this pricing change, let's consider two typical customer scenarios. The new pricing model, combined with compression, significantly reduces overall backup costs across all the instance configurations shown: customers gain TCO savings ranging from 45% to 62%, indicating substantial cost efficiency under the new pricing. For large databases, we have implemented a snapshot backup technology that is both fast and cost-effective, using a forever-incremental snapshot approach. This improves the speed of backup and restore operations while also reducing the storage required for backups.
Contoso (small-to-medium enterprise, manufacturing and retail): running 5 small HANA instances (600 GB each) and 2 large HANA instances (4 TB each) in the East US2 region, with a backup policy of weekly full plus daily incremental backups; daily backups retained for 7 days, weekly backups for 4 weeks, and monthly backups for 3 months, with ZRS resiliency.
Northwind Traders (large enterprise, FMCG): running 20 small HANA instances (1 TB each) and 5 large HANA instances (10 TB each) in the East US2 region, with a backup policy of weekly full plus daily incremental backups; daily backups retained for 7 days, weekly backups for 4 weeks, and monthly backups for 6 months, with ZRS resiliency.
Contoso – SME:
| Configuration | Cost Component | Old Pricing | New Pricing + Compression | Overall TCO Savings |
| 600 GB x 5 HANA instances | PI | $800 | $400 | |
| | Storage | $573 | $352 | |
| | PI + Storage | $1,373 | $752 | 45% |
| 4 TB x 2 HANA instances | PI | $1,440 | $160 | |
| | Storage | $1,567 | $962 | |
| | PI + Storage | $3,007 | $1,122 | 62% |

Northwind Traders – Large Enterprise Customer:
| Configuration | Cost Component | Old Pricing | New Pricing + Compression | Overall TCO Savings |
| 20 x 1 TB HANA instances | PI | $4,800 | $1,600 | |
| | Storage | $5,053 | $3,094 | |
| | PI + Storage | $9,853 | $4,694 | 52% |
| 5 x 10 TB HANA instances | PI | $8,400 | $400 | |
| | Storage | $12,633 | $7,735 | |
| | PI + Storage | $21,033 | $8,135 | 61% |
*The backup calculations above use backint/streaming-based backup.
Why Azure Backup for SAP HANA?
Azure Backup for SAP HANA DB offers native backup support that seamlessly integrates with SAP HANA’s backint APIs. This solution allows businesses to efficiently back up and restore SAP HANA databases running on Azure VMs while taking advantage of the enterprise management capabilities that Azure Backup provides.
Here’s why Azure Backup stands out as a top choice for enterprises:
1. High-Performance Backups with a 15-Minute RPO for Rapid Recovery
Time is of the essence when it comes to data recovery, especially during a critical system failure or cyberattack. Azure Backup delivers an impressive 15-minute RPO (Recovery Point Objective), allowing you to recover essential data in under 15 minutes. This drastically reduces downtime, ensuring that your business can bounce back with minimal impact. For environments with heavy workloads, the speed of backup operations is crucial. Azure Backup achieves impressive performance, with speeds ranging from 1.2-1.5 GBps, thanks to multi-streaming support. This ensures that even large databases can be backed up efficiently, minimizing the time taken to secure critical data.
2. One-Click, Point-in-Time Restores with Continuous Protection
In case of an outage or a ransomware attack, Azure Backup offers point-in-time restores that leverage the power of log backups. With just one click, you can restore production databases to a specific point in time on alternative HANA servers, all while Azure manages backup chains behind the scenes. This is particularly crucial for environments using HANA System Replication (HSR), as it ensures the availability and continuity of critical data, even in complex setups.
3. Versatile Recovery Options for Maximum Flexibility
Whether it’s an accidental deletion, corruption, or disaster recovery, Azure Backup offers a variety of flexible recovery options:
Alternate Location Restore (ALR): Restore your database to a new target virtual machine (VM), giving you the ability to recover data in a different environment if the original one is compromised.
Original Location Restore (OLR): For in-place recovery, OLR allows you to restore data back to its original location, minimizing changes to your infrastructure.
Cross Region Restore (CRR): For enhanced disaster recovery capabilities, you can restore backup items to a secondary, Azure-paired region, ensuring that you can access your data even if one region becomes unavailable.
These versatile recovery options make it easy to meet your specific recovery needs, from minor system refreshes to major disaster recovery operations.
4. Long-Term Retention (LTR) with Cost Savings
Many industries require long-term data retention to meet compliance and auditing requirements. Azure Backup supports Long-Term Retention (LTR), enabling businesses to store backups for years. Plus, Azure’s archive tier allows you to move older recovery points to cheaper storage options, reducing costs while maintaining compliance.
5. Unmatched Ransomware Protection
Ransomware attacks are a growing threat, and organizations need to ensure that their backups are protected from such malicious actions. Azure Backup’s immutability feature ensures that backed-up data cannot be modified or deleted during a specified retention period, even if attackers gain access to the system. Additionally, with Soft Delete, deleted backups are retained for 14 days, ensuring they can be recovered even if accidentally or maliciously deleted.
These security measures, coupled with Multi-User Authorization, Private Endpoints, and Encryption, ensure that your backup data is protected from both internal and external threats.
6. Centralized Backup Management
Managing multiple backup and disaster recovery processes can be complex, but Azure Backup simplifies this with a single pane of glass for monitoring and managing backups across your environment. Whether you’re using the Azure portal, Azure CLI, Terraform, or SDK, Azure Backup provides a unified experience for managing all your backup needs at scale.
7. Native Compression for Storage Optimization
With HANA’s native compression feature, Azure Backup reduces storage consumption by approximately 30-50%, allowing businesses to optimize storage costs while maintaining robust backup processes. This native compression helps ensure that you are storing data more efficiently, without compromising on availability or performance.
Maximizing Storage Cost Efficiency
In addition to the reduced PI fees, there are several strategies you can employ to further optimize your storage costs:
Enable native HANA compression: reduce backup storage consumption by approximately 30-50%. Enable compression via the backup policy or when using the "Backup Now" option.
Adjust Backup Policy: Switch from a “daily full” backup to a “weekly full” with “daily incremental” backups to save on storage costs.
Utilize Snapshot Backups: For large databases (> 10TB), snapshot backups are ideal as they start with a full backup and are forever incremental.
Opt for Reserved Pricing: Commit to a one- or three-year reservation to benefit from discounted Azure Backup Storage Reserved Capacity pricing.
Use Archive Tier: Leverage the archive tier for long-term retention of recovery points.
These changes underscore our dedication to helping you achieve greater cost efficiency and value in your SAP HANA database backups on Azure. We encourage you to review your backup strategies and take advantage of the new pricing and optimization options available.
Keep pushing the boundaries. A journey with Parkinson’s
Meet Somesh Pathak, a Security MVP from the Netherlands, whose journey took an unexpected turn when he was diagnosed with early-onset Parkinson’s in 2020. From battling the stigma at work to finding strength in his family, Somesh’s story is one of resilience and determination. We asked him how this challenge has reshaped his journey as a professional, community leader, and father.
How has being diagnosed with early-onset Parkinson’s affected both your personal and professional life?
I was diagnosed with young-onset Parkinson's disease (YOPD) back in 2020, but early signs of hypokinetic rigid syndrome started in 2016–2017. I was living in India at the time, and the doctors were not able to find the cause of my symptoms. I relocated to Stockholm in 2019 and underwent extensive tests at Karolinska Hospital, where I was diagnosed with idiopathic Parkinson's disease.
The transformation from a healthy life to a life with a hidden disability has seriously affected my personal and professional life. One of the most difficult challenges I faced professionally was the lack of support from my previous manager at a former employer. At one point, I felt ashamed of having YOPD, primarily because of my symptoms and the stigma at work. Instead of receiving the assistance I needed, I was asked to work extended hours, which only added stress to the situation.
Managing the illness has also been quite challenging. One of the most difficult things I've experienced is feeling a lot of guilt, mostly related to my family and my 5-year-old son. I feel like I am not giving my all, as even sports and lighthearted activities have suddenly become challenging. Often depressed and dissatisfied, I feel as though I do not always reciprocate 100% of the efforts of those who look after me.
Socially, my self-esteem and my behavior around others have been affected, because I fear my symptoms will show up or change randomly. The symptoms are always there, influencing my voice, sleep, diet, and general physical condition. Notwithstanding these difficulties, I try to keep as much normalcy as I can by balancing work and my personal life.
What has been the most challenging aspect of your Parkinson’s journey, and how have you managed to stay motivated throughout?
The psychological and emotional toll has been among the toughest aspects of my Parkinson's journey. Though the physical suffering was difficult, the heaviest toll on me was the feeling of frustration and helplessness. Dealing with YOPD had already left the left side of my body weak, and after surgery to repair my ACL, my right side became significantly affected as well. This, coupled with the knowledge that healing could take up to nine months, often made me feel as though I was losing the battle.
The unflinching support of my family, especially my wife and son, kept me going through these challenging times. My wife was my rock; she shared inspirational stories of people who had overcome far more difficult obstacles in their lives, motivating me to keep on. Her support allowed me to concentrate on recovery and bounce back with full confidence. My son was another inspiration, reminding me of the need to be present and resilient for him.
Apart from my family, my colleagues have played a huge role in keeping me motivated. They encouraged me to keep pushing my boundaries, both physically and mentally, and gave me the confidence to face each day with a renewed sense of determination.
Though it has not been simple, with the emotional support of my loved ones and the strength I get from their belief in me, I have been motivated through one of the toughest obstacles of my life.
What does being a Microsoft MVP and contributing to the tech community mean to you?
Being a Microsoft MVP is about more than a title or recognition; it is about a dedication to contributing to the tech community in meaningful ways. From blogging to speaking at events to helping others solve their tech challenges, each contribution is an opportunity to make a positive impact. For me, it is about sharing knowledge, inspiring others, and being part of a larger community that thrives on innovation and collaboration.
Particularly on the tough days when other aspects of my life—such as dealing with health challenges—have been overwhelming, the MVP program has been a major source of encouragement. Knowing that my efforts are valued and that I can make a difference in someone’s learning journey motivates me to keep pushing forward. It gives me a sense of purpose and direction, even when times are difficult.
The community's outpouring of support when I shared my personal story at a monthly Nordics & Benelux MVP call was quite inspiring.
My peers' encouragement and solidarity reminded me of the power of a strong, supportive group and helped drive me forward. It is this sense of belonging, coupled with the opportunity to give back, that makes being a Microsoft MVP so special to me.
Based on your personal experiences and insights, what advice would you offer to someone going through a personal, health, or professional crisis?
Life's challenges, be they physical injuries or health conditions, test our limits. But they also reveal our strength, resilience, and capacity to adapt. Through my journey of recovery and community contribution, I have learned that while the path may be tough, the destination is worth every struggle. There will be more challenges, more hurdles to overcome, but with every step, you grow stronger.
Keep pushing the boundaries. Stay strong, stay motivated, and continue making a difference. We are all in this together, and together, we can achieve greatness. There is no limit to what we can achieve and what we can accomplish when we refuse to give up.
To everyone out there facing their own battles, remember this: Keep pushing. Your spirit is stronger than you think. Use your challenges as a springboard to greater heights. Together, we can achieve incredible things, one step at a time.