Month: September 2024
Windows Does Not Allow Copy Functionality
Clearly, individuals seeking the requested version often wish to duplicate it for the purpose of sharing it with the original requester.
Term picker not working when opened from Teams tab
Hi!
There seems to be a problem with the term picker for SharePoint forms when opened from a Teams tab. It’s not loading the terms as shown below:
[Screenshots: the term picker inside Teams (new client, both desktop and web) vs. directly in SharePoint/Lists]
Looking in the F12-console there’s an error when opening from Teams:
Uncaught SecurityError: Failed to read a named property 'termPickerDialog' from 'Window':
Blocked a frame with origin "https://[tenant].sharepoint.com" from accessing a cross-origin frame.
I have tried this in a couple of tenants and the behavior is the same: when using a Lists/SharePoint tab in Teams, the term picker is not working. It has been this way since the beginning of August.
Anyone else experiencing this?
Regards,
Johan
How to Disable Sleep Mode on Windows 11
To configure power settings on Windows 11, you can use the following commands:
```
POWERCFG -SETACVALUEINDEX SCHEME_CURRENT SUB_VIDEO VIDEOAC 0
POWERCFG -SETDCVALUEINDEX SCHEME_CURRENT SUB_VIDEO VIDEODC 0
POWERCFG -SETACVALUEINDEX SCHEME_CURRENT SUB_DISK DISKAC 0
POWERCFG -SETDCVALUEINDEX SCHEME_CURRENT SUB_DISK DISKDC 0
POWERCFG -SETACVALUEINDEX SCHEME_CURRENT SUB_SLEEP STANDBYAC 0
POWERCFG -SETDCVALUEINDEX SCHEME_CURRENT SUB_SLEEP STANDBYDC 0
POWERCFG -SETACVALUEINDEX SCHEME_CURRENT SUB_SLEEP HIBERNATEAC 0
POWERCFG -SETDCVALUEINDEX SCHEME_CURRENT SUB_SLEEP HIBERNATEDC 0
POWERCFG -H OFF
```
These commands set the specified timeouts to 0 on both AC and DC power sources. If the 20-minute hard disk timeout still does not change after running them, you may need to adjust that specific setting through the advanced power settings in the Control Panel or the Settings app.
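That said, on many Windows 10/11 builds `powercfg /change` also accepts friendly aliases for the common timeouts, including the disk timeout. Offered as a hedged sketch worth trying before falling back to the Control Panel; values are in minutes and behavior can vary by build:

```shell
:: Hedged sketch using powercfg's friendly aliases (run as Administrator).
:: Values are in minutes; 0 disables the timeout.
powercfg /change disk-timeout-ac 0
powercfg /change disk-timeout-dc 0
powercfg /change monitor-timeout-ac 0
powercfg /change standby-timeout-ac 0
powercfg /change hibernate-timeout-ac 0
```

These apply to the currently active power scheme, the same one `SCHEME_CURRENT` targets above.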
How do I find Windows 10 product key without cmd?
I recently needed to find my Windows 10 product key, but I don't want to use the command prompt (cmd) method. Are there other, simpler ways, such as checking directly in the system settings or using third-party software? I didn't record the key before, and now I want to reinstall the system, so I'm a little anxious. Has anyone encountered a similar situation and can recommend some practical methods? Thank you!
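One common cmd-free approach, offered as a hedged sketch: if the machine shipped with Windows preinstalled, the OEM key is usually embedded in the firmware, and PowerShell (which is separate from cmd) can read it. The query returns blank on retail or volume-licensed installs:

```shell
# Run inside PowerShell; prints the firmware-embedded OEM product key, if any.
(Get-CimInstance -ClassName SoftwareLicensingService).OA3xOriginalProductKey
```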
How to use third party mail security gateway to scan internal/inter-domain mails
Hi All,
In my mail server environment, there’s a requirement for internal emails (e.g., a mail sent from email address removed for privacy reasons to email address removed for privacy reasons) to be scanned by a third-party email security gateway that the company recently purchased. However, from what I understand, this might be impossible because all internal emails use the implicit Send connector named the intra-organization Send connector.
I would like to know if there is any way to edit or configure the intra-organization Send connector so that, instead of using the intra-organization Send connector, the Exchange On-Premise Server will use my custom/recently created connector. This way, all internal emails will be sent to the third-party email security gateway first, scanned, and have all policies applied before the gateway sends the scanned emails to the recipients within the same domain.
Alternatively, if there is another way to achieve my main goal—using a third-party email security gateway to scan internal emails instead of directly sending them and relying solely on the security of the Exchange Server On-Premise for internal mail protection—please let me know.
Thank you.
Building RAG on Phi-3 locally using embeddings on VS Code AI Toolkit
In the previous tutorial we created embeddings and added them to the open-source vector database ChromaDB. This is one of the prerequisites for creating any retrieval-augmented generation (RAG) application. If you want to follow the steps for creating embeddings, please take a look at the earlier part to follow along.
Since the database was already created in the earlier tutorial, let us now connect it to Phi-3 using the AI Toolkit. The AI Toolkit lets us create an endpoint, which makes API calls easier. We can utilize the model on our local machine, completely offline. This uses a concept called port forwarding, which an earlier blog in this series covered.
Small language models (SLMs) are language models with a smaller computational and memory footprint and a smaller parameter count, and they typically have lower response latency than LLMs. They can perform efficient on-device processing (on mobile phones and edge devices, for example), are easier to train and adapt to specialized domains, and are popular where sensitive data must be handled and privacy and security are paramount concerns. Phi is a family of open AI SLMs developed by Microsoft. You can learn more about the Phi-3 model in detail using this excellent cookbook.
All the code for this tutorial is available in the Azure Samples Repository.
We will develop a basic chat application that enables the Phi-3 SLM to communicate with the vector DB and answer the user's questions. This will be done in two steps:
Create the basic application workflow
Use Streamlit to convert it into a web app.
Basic python knowledge would be needed to understand the code flow. Let’s begin by importing the required libraries.
import streamlit as st
from langchain_openai import ChatOpenAI
from langchain_community.vectorstores import Chroma
from langchain_community.embeddings import SentenceTransformerEmbeddings
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
1. Streamlit
Module: Streamlit
Purpose: Streamlit is a Python library used to create interactive web applications. It is particularly popular for data science and machine learning applications.
Usage: The st alias is used to access Streamlit’s functions.
ChatOpenAI:
Module: langchain_openai
Purpose: ChatOpenAI is a class that provides an interface to interact with OpenAI’s language models. It allows you to send queries to the model and receive responses.
Usage: Used to initialize and configure the OpenAI model for generating responses based on user input.
Chroma:
Module: langchain_community.vectorstores
Purpose: Chroma is a vector store that allows you to store and retrieve high-dimensional vectors. It is used to store embeddings of documents or text and retrieve them based on similarity searches.
Usage: Typically used in applications that require efficient similarity searches, such as document retrieval or question-answering systems.
SentenceTransformerEmbeddings:
Module: langchain_community.embeddings
Purpose: SentenceTransformerEmbeddings provides a way to generate embeddings using models from the Sentence Transformers library. These embeddings are numerical representations of text that capture semantic meaning.
Usage: Used to convert text into embeddings that can be stored in a vector store like Chroma for similarity searches.
StrOutputParser:
Module: langchain_core.output_parsers
Purpose: StrOutputParser is a class used to parse the output from the language model into a string format.
Usage: Used to convert the raw output from the language model into a more usable string format for display or further processing.
ChatPromptTemplate:
Module: langchain_core.prompts
Purpose: ChatPromptTemplate is a class used to create and manage prompt templates for interacting with the language model. It allows you to define the structure and content of the prompts sent to the model.
Usage: Used to create a consistent and structured prompt for querying the language model.
RunnableParallel and RunnablePassthrough:
Module: langchain_core.runnables
Purpose:
RunnableParallel: A class used to run multiple tasks in parallel. It is useful for performing concurrent operations, such as retrieving multiple pieces of information simultaneously.
RunnablePassthrough: A class that simply passes the input through without any modification. It can be used as a placeholder or in situations where no processing is needed.
Usage: Used to manage and execute multiple tasks concurrently or to pass data through a pipeline without modification.
Once we have the libraries it’s time to initialize the embedding model and SLM.
Let's first initialize the embedding model, which is necessary to convert text into numerical embeddings. As of now the AI Toolkit does not offer embedding models; once it does, we will be able to use one directly. For now we can use Hugging Face embeddings or Sentence Transformers embeddings. Since we used Hugging Face embeddings in the previous blog, let's now try Sentence Transformers embeddings.
embeddings = SentenceTransformerEmbeddings(model_name='all-MiniLM-L6-v2')
Module: langchain_community.embeddings
Purpose: This line initializes an embedding model using the SentenceTransformerEmbeddings class. The model specified is all-MiniLM-L6-v2, which is a pre-trained model from the Sentence Transformers library.
Usage: The embeddings object will be used to convert text into numerical embeddings. These embeddings capture the semantic meaning of the text and can be used for various tasks such as similarity searches, clustering, or feeding into other machine learning models.
ChatOpenAI: Initializes a ChatOpenAI model with specific parameters, including a base URL for the API, an API key, a custom model name, and a temperature setting. This model is used to generate responses based on user queries.
model = ChatOpenAI(
    base_url="http://127.0.0.1:5272/v1/",
    api_key="ai-toolkit",
    model="Phi-3-mini-128k-directml-int4-awq-block-128-onnx",
    temperature=0.7
)
Parameters:
base_url="http://127.0.0.1:5272/v1/": Specifies the base URL for the OpenAI-compatible API. In this case, it points to a local server running on 127.0.0.1 (localhost) at port 5272.
api_key="ai-toolkit": The API key used to authenticate requests. The local AI Toolkit endpoint does not validate it, so any placeholder value works.
model="Phi-3-mini-128k-directml-int4-awq-block-128-onnx": Specifies the model to be used.
temperature=0.7: Sets the temperature parameter for the model, which controls the randomness of the output. A higher temperature results in more random responses, while a lower temperature makes the output more deterministic.
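The effect of the temperature parameter can be sketched in plain Python: temperature divides the model's raw scores (logits) before the softmax, so low values sharpen the distribution and high values flatten it. This is an illustrative sketch with made-up scores, not the AI Toolkit's actual sampling code:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by the temperature, then apply a numerically stable softmax."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max so exp() never overflows
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                      # made-up scores for three candidate tokens
cool = softmax_with_temperature(logits, 0.2)  # low temperature: near-deterministic
warm = softmax_with_temperature(logits, 2.0)  # high temperature: flatter, more random
print(cool)
print(warm)
```

The top token's probability is much higher at temperature 0.2 than at 2.0, which is why lower temperatures make the output more deterministic.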
Retriever is a component of generative AI systems that enhances the quality and accuracy of responses by retrieving relevant information from a vast knowledge base.
Benefits of using a RAG retriever:
Improved accuracy: By providing relevant information, the retriever helps the AI model generate more accurate and informative responses.
Enhanced relevance: The retrieved context ensures that the generated responses are directly related to the user’s query.
Factual correctness: The retriever can help prevent the AI model from generating incorrect or misleading information.
In essence, a RAG retriever acts as a bridge between the AI model and the world’s knowledge, ensuring that the generated responses are both informative and relevant.
Now let’s initialize the vector database and also create a retriever object which will enable the app to search and query in the database.
load_db = Chroma(persist_directory='./ai-toolkit', embedding_function=embeddings)
retriever = load_db.as_retriever(search_kwargs={'k': 3})
Initialize Chroma Vector Store:
Module: langchain_community.vectorstores
Purpose: This line initializes a Chroma vector store by loading it from the specified directory.
Parameters:
persist_directory='./ai-toolkit': Specifies the directory where the vector store is saved. This should match the directory used when the vector store was initially created and saved.
embedding_function=embeddings: The embedding model used to generate embeddings for the text. This should be the same embedding model used when the vector store was created.
Usage: The load_db object represents the loaded vector store, which contains the document embeddings and allows for efficient similarity searches.
Convert to Retriever:
Purpose: Converts the Chroma vector store into a retriever object.
Parameters:
search_kwargs={'k': 3}: Specifies the search parameters for the retriever. In this case, k=3 means that the retriever will return the top 3 most similar documents for any given query.
Usage: The retriever object can be used to perform similarity searches on the vector store, retrieving the most relevant documents based on the query.
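Under the hood, a similarity search like the retriever's top-k lookup boils down to comparing the query embedding against stored document embeddings, typically by cosine similarity. A minimal pure-Python sketch follows; the tiny 2-D vectors are made up for illustration, while Chroma does this at scale over real embeddings (all-MiniLM-L6-v2 produces 384-dimensional vectors):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, doc_vecs, k=3):
    """Return the indices of the k document vectors most similar to the query."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]

# Tiny made-up 2-D "embeddings" standing in for real document vectors
docs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.5, 0.5]]
print(top_k([1.0, 0.0], docs, k=3))
```

Here the query vector is closest to documents 0, 1, and 3, so those three indices come back, mirroring what the retriever returns with k=3.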
Once we have done this, it's time to define the system message.
A system message, or metaprompt, serves as the initial instruction that guides the model's response. It provides the context and sets the direction for the subsequent conversation.
Key components of a system message:
Task or Instruction: Clearly defines the desired outcome or action. For example, “Summarize the article on climate change.”
Context: Provides relevant background information or context to help the model understand the query.
Constraints or Limitations: Specifies any specific requirements or restrictions on the response. For instance, “Keep the response concise and informative.”
Since our database contains AI Toolkit documentation, the template is designed to guide the AI assistant's responses, ensuring they are relevant, professional, and focused on the Microsoft Visual Studio Code AI Toolkit. It provides a structured format for the AI to follow when generating responses.
template = """You are a specialized AI assistant for the Microsoft Visual Studio Code AI Toolkit.
Your responses should be strictly relevant to this product and the user's query.
Avoid providing information that is not directly related to the toolkit.
Maintain a professional tone and ensure your responses are accurate and helpful.
Strictly adhere to the user's question and provide relevant information.
If you do not know the answer, respond with "I don't know". Do not refer to your knowledge base.
{context}

Question:
{question}
"""
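The placeholder mechanics can be sketched with plain `str.format`. The context and question strings below are made-up illustrative values; LangChain's ChatPromptTemplate performs this substitution, plus message-role handling, for you:

```python
# Minimal sketch of placeholder substitution; the context and question strings
# are made-up illustrative values, not output from the real retriever.
template = """Answer using only the context below.

{context}

Question:
{question}
"""

filled = template.format(
    context="The AI Toolkit lets you download and run models locally in VS Code.",
    question="What does the AI Toolkit do?",
)
print(filled)
```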
The above prompt covers the following aspects
Introduction: Sets the context for the AI assistant.
Relevance: Ensures responses are relevant to the toolkit and the user’s query.
Avoid Irrelevant Information: Instructs the AI to avoid unrelated information.
Professional Tone: Ensures responses are professional, accurate, and helpful.
Adhere to User’s Question: Instructs the AI to focus on the user’s question.
Unknown Answers: Provides guidance on how to respond if the AI does not know the answer.
Besides this, we have the following parameters.
Context Placeholder:
Purpose: A placeholder for the context that will be provided when the template is used. This context will be dynamically filled in during execution.
Question Placeholder:
Purpose: A placeholder for the user’s question that will be provided when the template is used. This question will be dynamically filled in during execution.
By using this structure, we ensure that the SLM's response is relevant, informative, and aligned with the specific task or context you've provided. This kind of template is a common component of LangChain.
It’s now time to pass the prompt template into respective parser.
prompt = ChatPromptTemplate.from_template(template)
output_parser = StrOutputParser()
ChatPromptTemplate: The ChatPromptTemplate.from_template(template) method creates a prompt template object from the provided template string. This object can be used to format the template with specific context and questions.
StrOutputParser: The StrOutputParser object is initialized to parse the output from the AI model into a string format. This ensures that the raw output from the model is converted into a usable string format for display or further processing.
setup_and_retrieval = RunnableParallel(
{“context”: retriever, “question”: RunnablePassthrough()}
)
chain = setup_and_retrieval | prompt | model | output_parser
RunnableParallel: The RunnableParallel object is created to run multiple tasks in parallel. It retrieves relevant context using the retriever and passes the question through without modification using RunnablePassthrough.
Processing Chain: The chain object is created by combining the setup_and_retrieval, prompt, model, and output_parser components using the | operator. This represents the entire processing pipeline, where each component processes the input and passes the result to the next component.
These components work together to create a robust and efficient pipeline for processing user queries, retrieving relevant context, generating responses using the AI model, and parsing the output into a usable format.
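To make the `|` composition concrete, here is a minimal self-contained sketch of how such a pipeline could be wired together in plain Python. The `Runnable` class and the stand-in steps are illustrative only; they are not LangChain's actual implementation:

```python
class Runnable:
    """Minimal stand-in for LangChain's pipe-composable components."""
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # a | b yields a new Runnable that applies a, then feeds the result to b
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# Stand-ins for the real retriever, prompt template, and model
retrieve = Runnable(lambda q: {"context": f"docs about {q}", "question": q})
prompt = Runnable(lambda d: f"Context: {d['context']}\nQuestion: {d['question']}")
model = Runnable(lambda p: p.upper())  # pretend "model" that shouts the prompt back

chain = retrieve | prompt | model
print(chain.invoke("fine tuning"))
```

Each `|` simply chains the invoke calls, which is essentially what LangChain's Runnable protocol formalizes.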
That's it; we have now completed the hard part. Let's test it out!
resp = chain.invoke("What is Fine tuning")
print(resp)
Invoke the Chain: The chain.invoke method is used to process the query “What is Fine tuning” through the entire chain, which includes context retrieval, prompt formatting, model response generation, and output parsing.
The response from the AI model is printed to the console.
App development with Streamlit:
As we now want to use this as a chat interface web app, we will use the Streamlit framework to implement it.
Let's start with the title:
st.title("AI Toolkit Chatbot")
st.write("Ask me anything about the Microsoft Visual Studio Code AI Toolkit.")
Set the Title: The st.title function sets the title of the Streamlit web application to “AI Toolkit Chatbot”, making it clear to users what the app is about.
Write a Description: The st.write function provides a brief description or introductory text, informing users that they can ask questions about the Microsoft Visual Studio Code AI Toolkit.
if 'messages' not in st.session_state:
    st.session_state.messages = []

for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])
Initialize User Session: Checks if the messages key exists in the Streamlit session state. If not, it initializes it as an empty list. This ensures that chat messages persist across user interactions.
Display Chat Messages: Iterates over the messages stored in the session state and displays each message in the chat interface using Markdown formatting. The role of the message (e.g., “user” or “assistant”) is used to differentiate between user and assistant messages.
These steps help to create a persistent and interactive chat interface within the Streamlit web application, allowing users to see the history of their interactions with the AI assistant.
Finally, create the input section:
if user_input := st.chat_input("Your question:"):
    st.session_state.messages.append({"role": "user", "content": user_input})
    with st.chat_message("user"):
        st.markdown(user_input)

    response = chain.invoke(user_input)
    st.session_state.messages.append({"role": "assistant", "content": response})
    with st.chat_message("assistant"):
        st.markdown(response)
Capture User Input: Captures user input from a chat input box and stores it in the user_input variable.
Append User Message to Session State: Appends the user’s message to the messages list in the Streamlit session state.
Display User Message: Displays the user’s message in the chat interface using Markdown formatting.
Invoke the Processing Chain: Processes the user’s input through the entire chain and stores the AI model’s response in the response variable.
Append Assistant Message to Session State: Appends the assistant’s response to the messages list in the Streamlit session state.
Display Assistant Message: Displays the assistant’s response in the chat interface using Markdown formatting.
These steps allow users to ask questions and receive responses from the AI assistant.
Now we can run the app using the following command:
streamlit run <filename>.py
The app will open in your browser, served on Streamlit's default port.
In the upcoming series we will explore more types of RAG implementations with AI toolkit.
The RAG Hack series linked below in the resources talks of different kinds of RAG.
Resources
Github Repo – RAGHack: Let’s build RAG applications together
Learn Module: Implement Retrieval Augmented Generation (RAG) with Azure OpenAI Service
Github Repo – Supporting source code for this blog
Learn Module: AI Toolkit
Learn more about Streamlit
Github Repo: Learn more about and play with Phi-3 SLMs using Phi-3CookBook
Unlock the Power of AI with GitHub Models: A Hands-On Guide
Hi, this is Zil-e-huma, a Beta Student Ambassador, and today I am back with another interesting article for my curious tech fellows. Ever wondered if there's a way to seamlessly integrate AI models into your projects without the heavy lifting? Enter GitHub Models: a game-changing feature that brings the power of AI right to your fingertips. Whether you're an AI enthusiast, a passionate developer, or just looking to make your applications smarter, this guide will show you how to harness the full strength of GitHub Models in a few simple steps.
Discovering GitHub Models: Your Gateway to AI Magic
Think of you having a collection of powerful AI models at your disposal—models that can chat, generate code, and much more with just a few tweaks. That’s what GitHub Models have for you. To get started, head over to the Marketplace on GitHub and select Models. Here, you’ll see many options, from the versatile Lama to the innovative Meta and beyond. Imagine this as your AI toolkit, ready to be explored and experimented with!
Once you've chosen a model, you'll see its page. Here is what the layout contains:
README: The go-to guide for everything you need to know about the model.
Evaluation: A handy comparison tool to see how this model stacks up against others.
Transparency: Get all the nitty-gritty details about the model’s inner workings.
License: Check out the usage rights and restrictions.
Ready to take your first leap? Click the Playground button, and the fun begins!
Your First AI Adventure: Playing with GitHub Models
The Playground is where the magic happens. Here, you can ask questions, change parameters, and see the model respond in real time. You can tailor responses by adjusting settings like max tokens and temperature to see how different configurations affect the output.
Now, let’s take it up a notch. Click the Get Started button, and you’ll be greeted with a user-friendly overlay. You can choose the programming language and SDK that suits your needs. Then, it’s time to generate your own personal access token. Don’t worry—it’s easier than it sounds. Simply follow these steps:
Go to Personal Access Token.
Select the Beta option.
Log in with your GitHub credentials.
Set an expiration date and name your token.
Click Generate Token and copy it.
You’re now equipped with the key to the GitHub Models kingdom! Export the token to your environment, and you’re all set to start coding.
Bringing AI to Life: Integrating GitHub Models into Your Projects
Quick and Easy Integration
Want to see how easy it is to integrate a model into your project? Let’s use a simple Python example. This code will have you up and running in no time:
import os
from openai import OpenAI

token = os.environ["GITHUB_TOKEN"]
endpoint = "https://models.inference.ai.azure.com"
model_name = "gpt-4o"

client = OpenAI(
    base_url=endpoint,
    api_key=token,
)

response = client.chat.completions.create(
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant.",
        },
        {
            "role": "user",
            "content": "What is the capital of France?",
        }
    ],
    model=model_name,
    temperature=1.0,
    max_tokens=1000,
    top_p=1.0
)

print(response.choices[0].message.content)
Before running this file, you need to let the system know about your GitHub token by exporting it as the GITHUB_TOKEN environment variable in the terminal. Treat the token like a password: never commit it to source control or share it publicly.
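One possible way to do this in a bash-style terminal; the token value below is a placeholder, not a real token, and the script filename is assumed:

```shell
# Export the token so the script can read it from the environment.
# Replace the placeholder with your own personal access token.
export GITHUB_TOKEN="<paste-your-token-here>"

# Then run the sample (assuming you saved it as sample.py):
# python sample.py
```

On Windows PowerShell, the equivalent is `$Env:GITHUB_TOKEN = "<paste-your-token-here>"`.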
Advanced Integration with Custom Tools
But why stop there? Imagine adding custom functionality to your AI. In the provided example we are getting flight information between two cities. Here’s how you can supercharge your model with a custom tool:
import os
import json
from openai import OpenAI

token = os.environ["GITHUB_TOKEN"]
endpoint = "https://models.inference.ai.azure.com"
model_name = "gpt-4o"

# Define a function that returns flight information between two cities (mock implementation)
def get_flight_info(origin_city: str, destination_city: str):
    if origin_city == "Seattle" and destination_city == "Miami":
        return json.dumps({
            "airline": "Delta",
            "flight_number": "DL123",
            "flight_date": "May 7th, 2024",
            "flight_time": "10:00AM"})
    return json.dumps({"error": "No flights found between the cities"})

# Define a function tool that the model can ask to invoke in order to retrieve flight information
tool = {
    "type": "function",
    "function": {
        "name": "get_flight_info",
        "description": """Returns information about the next flight between two cities.
            This includes the name of the airline, flight number and the date and time
            of the next flight""",
        "parameters": {
            "type": "object",
            "properties": {
                "origin_city": {
                    "type": "string",
                    "description": "The name of the city where the flight originates",
                },
                "destination_city": {
                    "type": "string",
                    "description": "The flight destination city",
                },
            },
            "required": [
                "origin_city",
                "destination_city"
            ],
        },
    },
}

client = OpenAI(
    base_url=endpoint,
    api_key=token,
)

messages = [
    {"role": "system", "content": "You are an assistant that helps users find flight information."},
    {"role": "user", "content": "I'm interested in going to Miami. What is the next flight there from Seattle?"},
]

response = client.chat.completions.create(
    messages=messages,
    tools=[tool],
    model=model_name,
)

# We expect the model to ask for a tool call
if response.choices[0].finish_reason == "tool_calls":

    # Append the model response to the chat history
    messages.append(response.choices[0].message)

    # We expect a single tool call
    if response.choices[0].message.tool_calls and len(response.choices[0].message.tool_calls) == 1:
        tool_call = response.choices[0].message.tool_calls[0]

        # We expect the tool to be a function call
        if tool_call.type == "function":

            # Parse the function call arguments and call the function
            function_args = json.loads(tool_call.function.arguments.replace("'", '"'))
            print(f"Calling function `{tool_call.function.name}` with arguments {function_args}")
            callable_func = locals()[tool_call.function.name]
            function_return = callable_func(**function_args)
            print(f"Function returned = {function_return}")

            # Append the function call result to the chat history
            messages.append(
                {
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": tool_call.function.name,
                    "content": function_return,
                }
            )

            # Get another response from the model
            response = client.chat.completions.create(
                messages=messages,
                tools=[tool],
                model=model_name,
            )
            print(f"Model response = {response.choices[0].message.content}")
This code lets your AI model not just answer questions, but actively perform tasks—like finding the next flight from Seattle to Miami. The possibilities are endless!
Supercharge Your Workflow with GitHub Codespaces
Want an even smoother experience? GitHub Codespaces lets you run models in a fully-configured cloud environment. Here’s how:
Go to the Playground.
Click Get Started, then select Run Codespace.
A virtual environment with all dependencies pre-installed will launch, so you can start coding immediately.
No more configuration headaches—just you and your code.
Pricing and Limitations: What You Need to Know
While GitHub Models are powerful, they do come with rate limits. To use them effectively, you’ll need an Azure AI account and a personalized Azure token. Pricing details are available on the Azure AI portal, so you can choose a plan that fits your needs.
FAQs: Your Burning Questions Answered
Q: Can GitHub Models replace Hugging Face?
A: Not yet. Most of the models on GitHub are closed-source and link back to Azure AI. While GitHub Models provide a convenient way to use Azure AI, they don’t currently offer open model weights like Hugging Face. However, they do make using Azure AI models incredibly simple with a GitHub Personal Token.
Ready to Dive In?
GitHub Models are a fantastic way to integrate AI into your applications effortlessly. From simple queries to complex integrations, the possibilities are endless. So why wait? Head over to GitHub, explore the models, and let your creativity soar!
Happy coding! 🚀
Microsoft Learn modules for further learning
Introduction to prompt engineering with GitHub Copilot – Training | Microsoft Learn
Build a Web App with Refreshable Machine Learning Models – Training | Microsoft Learn
Introduction to GitHub – Training | Microsoft Learn
How to Add Contacts to User Mailboxes From a CSV File
Amend a PowerShell Script to Import Contacts from a CSV File
Being a creature of habit, my practice is to write about shorter topics here and keep long-form articles that address complex topics for Practical365.com. Those topics often include discussion about using PowerShell to automate operations and involve a script that I publish in the Office365ITPros GitHub repository. The idea in writing scripts is to illustrate the principles of the topic under discussion, not to deliver a complete solution. I expect people to take the code and change it to meet their needs.
All of which brings me to an article where I cover how to read data from a list in a SharePoint Online site and use the information to create personal contacts in user mailboxes. I like the idea because I think a list is a good way to maintain information. Others obviously disagree, because soon after publication, I received a note saying that keeping stuff in a list is too complex (from the scripting perspective) and could they have a version that reads the input from a CSV file?
The Joy of Publishing PowerShell Scripts
Authors who write about PowerShell and include code snippets or complete scripts in their text often find that people reach out to ask for changes to be made. It’s as if you’re an online script generation service. For instance, after writing about how to create a licensing report for Microsoft 365 accounts, I received multiple requests for enhancements. Most of the ideas were very good and I was happy to incorporate the changes, which is why the script is now at version 1.94.
In some respects, generative AI has taken over as the go-to place to get advice about writing PowerShell. In any case, because generative AI depends on knowledge captured in its LLM from previous scripts, articles, and blog posts, it has a nasty habit of getting things wrong. Copilot seems prone to creating cmdlets that don’t exist and recommending their use to solve a problem. However, people do like the fact that it’s often easier to ask AI about a script than to track down an author.
Getting back to the original point, when an author receives a request to change code that they’ve published, they can either ignore the message (email, tweet, or whatever platform was used to reach them) or respond. If you want an author to help, you can prepare the ground by attempting to make the change yourself and explaining exactly why you think the change will be valuable. The desired outcome is more likely if you demonstrate that you’ve tried to understand and amend the code, and that good logic underpins the request.
Script Change to Import Contacts from a CSV File
In this instance, lines 20-46 of the script are where the input data is fetched from the SharePoint Online list. If you want to use a CSV file instead, you can throw away those lines and add something like this:
$ImportFile = 'c:\temp\Contacts.csv'
[array]$ItemData = Import-Csv $ImportFile
These lines import data from a CSV file to the array used to populate contacts in user mailboxes. If the input CSV has the correct columns, then that’s all you need to do. The script will run and add the contacts to the target mailboxes.
Figure 1 shows an example of a CSV file in Excel. The column names are those expected by the script. If you don’t include the column headings or use different names, the script won’t know how to map properties from the CSV file to the contact records and it won’t be possible to import contacts.
Figure 1: CSV file containing contacts to import
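Because the script silently fails to map properties when the headings are wrong, a quick sanity check on the CSV headers before running it can save time. The sketch below is illustrative only; the column names used here are hypothetical stand-ins for whatever headings the script actually expects:

```python
import csv
import io

# Hypothetical column names -- substitute the headings the script expects;
# these are illustrative stand-ins only.
EXPECTED_COLUMNS = {"DisplayName", "EmailAddress", "Company"}

def missing_csv_headers(csv_text: str) -> set[str]:
    """Return the set of expected columns absent from the CSV header row."""
    reader = csv.DictReader(io.StringIO(csv_text))
    found = set(reader.fieldnames or [])
    return EXPECTED_COLUMNS - found

sample = "DisplayName,EmailAddress,Company\nJane Doe,jane@example.com,Contoso\n"
print(missing_csv_headers(sample))  # an empty set means all headings are present
```

A non-empty result tells you which headings to fix before the import runs.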
A Quick Change to Switch Source of Import Contacts from SharePoint Site to a CSV File
Making the change took me about five minutes, three of which were to fix a bug where the hash table used by the script to detect if a contact already exists didn’t handle duplicate contacts with the same name. I’d created the duplicates to test how well the new Outlook suppresses duplicate contacts and forgot to remove them. It just shows how developing PowerShell scripts can be an iterative process.
Stay updated with developments across the Microsoft 365 ecosystem by subscribing to the Office 365 for IT Pros eBook. We do the research to make sure that our readers understand the technology. The Office 365 book package now includes the Automating Microsoft 365 with PowerShell eBook.
Missing filled form
Hello,
I’m facing a problem with my student account, but my university never granted me permission to post on this forum, so I’m using my personal account instead.
I filled out a Microsoft Form to submit my assignment, and I am certain that I submitted it, but my lecturer says he never received it. I didn’t save the form response, so I have no evidence to show him. Is there any way to check the history of forms submitted from my student account?
Special type of meeting
Hello, I need to set up a type of meeting where the attendance of one team member is mandatory, and from the rest of the team, any of them can attend. Could this be arranged?
The client does not receive the invitation.
Hello, I have set up a one-on-one meeting type where the client can schedule the meeting without any issues, and it gets added to the salesperson’s calendar, but the invitation does not reach the client. How could I solve this? Thank you.
Colour date cell older than 1 year.
Hello Community,
Can you colour a date cell if it’s older than one year from today?
I can find ‘Last Week’ and ‘Last Month’, but nothing for older than one year from today’s date (or ‘Last Year’). Thank you.
Export to TSV file
Hello, in a shared meeting I am using the option to export recent data to a TSV file, however, the document does not include all the scheduled appointments. Is it always like this?
Could the additional fields added to the invitation be included in this file?
Replication Snapshot not getting generated in MSSQL 2022 Standard Edition
Hi,
I am trying to set up replication of an MSSQL database hosted on Windows (master) to a database hosted on Ubuntu (slave). After configuring the publisher agent on the master, when I run the agent to generate a snapshot and check its status, it shows “Agent is running”. It keeps running endlessly, even though my test database is very small.
I have checked that the master server is not running any heavy processes. Has anyone faced a similar issue and resolved it? If so, please guide me.
Thanks & Regards.
Aditya.
Certificate not received
I passed the MS-102 – Microsoft 365 Administrator – English (ENU) examination, scheduled on Friday at 7:15 pm IST, but I have not yet received the certificate for it.
calculate volume by language and region
Hi,
I have four columns: month, language, region, and files delivered. I want to calculate volume by language and region. Please help.
Sales figures by day from multiple tabs averaged in to a summary?
Hi there,
I analyse ticket sales data and want to be able to see, from historical sales-by-day data, the average % of total sales on any given number of days prior to the show.
I’ve put the sales data into one workbook as different sheets, formatted identically. Say column I is the ‘days prior’ and column J is the % of total sales. E.g. I150 (value 130) corresponds to J150 (value 5.0%).
On a summary page I want to see the avg % of total ticket sales (averaged from every sheet) for that specific ‘day prior’.
Later I’ll want to see by year, by location, etc. but for now that’s where I’m stuck.
Advice appreciated.
Formula not working
I have this Excel formula that gives me the most common value in a range.
The problem I have is that if a cell in the range is blank, the result becomes #N/A. This is the formula: =@INDEX(K5:AF5,MODE(MATCH(K5:AF5,K5:AF5,0)))
What I need is for the formula to work even if there is a cell with no value in it. If possible, it would also be preferable if it only considered text fields, not number fields, but I expect I would need to select each cell rather than use a range?
HELP! How to export from OneNote for Windows 10
I cannot find where I can export my notes from OneNote for Windows 10. I want to upload them to another account.
(I tried the web version as well; it doesn’t work either.)
How do I file a complaint with PhonePe?
For a refund from PhonePe for an incorrect transaction, you should contact: 86608↑47056/– (available 24/7) and report the problem.