Category Archives: Microsoft
How to add an “Open with other application” feature for files in OneDrive
Hi Team
I am developing an electronic signature application. We now want to integrate it with OneDrive, and the concept is outlined below:
Users utilize OneDrive to manage their documents. They should be able to right-click or select multiple files and then choose “Open with other application,” which should support opening and navigating to my application. (You can refer to the SignNow application with Google Drive for an example.)
Based on my research, I need to follow these steps to achieve the idea:
Publish the application to Microsoft AppSource.
Users can then install the add-in in OneDrive to use it as a feature for opening files with other applications.
I am not entirely sure about our research or which steps need to be taken to achieve the goal. It would be helpful if you could assist me in verifying the following:
The necessary steps to be taken
Whether it is possible to implement the add-in feature to open files with other applications
Thank you so much
How to create Catalina boot USB installer on Windows PC?
I’m a technical writer and web designer, so I have a good handle on software and tools. However, I’m more familiar with macOS environments and usually work with a MacBook M1. I recently ran into a situation where I need to reinstall macOS Catalina on an older Mac, but I only have access to a Windows machine right now.
Can anyone provide a step-by-step guide or point me to reliable resources for creating a Catalina boot USB installer on Windows? The official createinstallmedia command does not work on a Windows PC!
Microsoft Azure Backup Server
I have an instance of SQL Server 2022 on Windows Server 2022 that I want to back up via Microsoft Azure Backup Server; it sees the VM and disks but does not see the SQL instance (error attached).
Super fast: creating a React app with Microsoft Entra on an Azure Web App
TOC
Steps
References
Steps:
1. Create an App registration in Microsoft Entra.
2. After creation, note down the Application (client) ID and Directory (tenant) ID, which will be used later.
3. Create a Linux web app in the Azure Portal and choose Node.js as the runtime.
4. After creation, note down the App Name, which will be used later.
5. Download the sample code to your development environment:
git clone https://github.com/Azure-Samples/ms-identity-docs-code-javascript.git
6. Open the react-spa subfolder, for example in VS Code.
7. Edit src/authConfig.js and make the corresponding modifications (see the sketch after these steps):
clientId
Specify the Application (client) ID obtained in step 2.
authority
Fill in the Directory (tenant) ID obtained in step 2 in the specified format. (https://login.microsoftonline.com/<TENANT_ID>)
redirectUri
Fill in the App Name obtained in step 4 in the specified format. (https://<APP_NAME>.azurewebsites.net/.auth/login/aad/callback)
8. Run the following command to install the required packages:
npm install
9. Run the following command to build the project. After it completes, a build subfolder will be created.
npm run build
10. Create or modify build/.deployment with the following content:
[config]
SCM_DO_BUILD_DURING_DEPLOYMENT=false
11. Publish your project, specifying the build subfolder during the publishing process.
12. After publishing, go back to the App registration in Microsoft Entra, open Authentication, add a platform of type Single-page application, and fill in the redirectUri value from step 7.
13. Go back to the web app, navigate to Configuration, and specify the Startup Command as follows:
pm2 serve /home/site/wwwroot --no-daemon --spa
14. After restarting the web app, you can visit your page in a browser.
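For reference, after the edits in step 7, src/authConfig.js might look roughly like the sketch below. The IDs are placeholders, and the exact shape of the file in the sample repository may differ slightly; only the three values described above need to change.

// src/authConfig.js (sketch with placeholder values; the sample's actual file may differ)
export const msalConfig = {
  auth: {
    clientId: "00000000-0000-0000-0000-000000000000", // Application (client) ID from step 2
    authority: "https://login.microsoftonline.com/<TENANT_ID>", // Directory (tenant) ID from step 2
    redirectUri: "https://<APP_NAME>.azurewebsites.net/.auth/login/aad/callback", // App Name from step 4
  },
  cache: {
    cacheLocation: "sessionStorage", // assumption: default used by typical MSAL React samples
  },
};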
References:
Quickstart: Sign in to a SPA & call an API – React – Microsoft identity platform | Microsoft Learn
Enhancing Microsoft Build with MVP Insights and Excitement
It’s hard to believe, but it has already been two and a half months since this year’s Microsoft Build, where we highlighted the contributions of Microsoft MVPs in various roles in the blog post Microsoft Build 2024 with MVP Communities – Microsoft Community Hub. Some people have taken the initiative to test new AI features themselves, while others have relied on community content to understand the latest technological advancements and consider how to apply them to their own businesses.
MVPs have been actively catching up with the newly announced information, writing blog posts and sharing insights on social media. They added a layer of excitement to the official releases by delivering valuable information to the community in their own words.
Microsoft Build 2024 Watch Party in the heart of New York City, United States
Peter Smulovics, a Developer Technologies MVP, hosted the Microsoft Build 2024 Watch Party in the heart of New York City, where community members gathered to deepen their knowledge together by watching the live conference.
This event allowed attendees to experience the excitement of watching the keynote address from Microsoft Build and learn about the latest announcements. They also enjoyed networking opportunities and engaged in many conversations about the newly unveiled technologies.
For those who couldn’t attend Microsoft Build in person, a Watch Party like this offers the chance not only to share in the spirit and excitement of this milestone event, but also to connect with other tech enthusiasts locally. Check out Peter’s blog post Watching Microsoft Build in Good Company | Dotneteers.net for a recap of the event.
Microsoft Build: Developer Learning Day in London, the United Kingdom
Microsoft Build: Developer Learning Day was co-hosted with NVIDIA with over 300 attendees. The focus of the event was to share post-Build insights and viewpoints from the community and Microsoft employees.
Participants had the opportunity to deepen their knowledge of Dev Tools, Cloud Platform, AI Development, Copilot, and Azure Data & Analytics through technical sessions. Four out of the five sessions were co-presented by MVPs including Sue Bayes, Callum Whyte, Daniel Roe, and Alpa Buddhabhatti. Additionally, in the Lab space, there were two 60-minute Lab sessions offering practical learning experiences on Deep Learning and Transformer-Based Natural Language Processing. Furthermore, at the Ask the Expert Booth, MVPs interacted with participants and engaged in discussions based on their expertise.
Read the event recap posted on LinkedIn by Marcel Lupo, a Developer Technologies and Microsoft Azure MVP, who was one of the experts at the Ask the Expert Booth, here.
2024 Microsoft Build After Party & Microsoft Build 2024 – After Party with MVP in Seoul, Korea
At the Microsoft Korea office in Seoul, Korean MVPs hosted two community-led post-Build events over two days, helping enthusiastic participants learn about the latest AI technologies to enhance their knowledge.
On the first day, at the 2024 Microsoft Build After Party, five MVP/RDs shared insights on the latest Azure AI technologies with over 100 attendees through hands-on sessions, and panel discussions. They covered Copilot Studios, multi-modal generative artificial intelligence, responsible AI, low-code development, and cloud platform. On the second day, the Microsoft Build 2024 – After Party with MVP featured diverse speakers, including Microsoft Learn Student Ambassadors and community leaders, who provided new insights on emerging technologies, such as Power Platform and Microsoft 365, to the participants.
One of the MVP hosts of the Microsoft Build 2024 After Party with MVP, Inhee Lee, expressed what they were able to convey to the participants through this event, “we shared insights of the benefit of GenAI which is a disruption and democratisation of conventional technology. Anybody can access to utilise technology easily with affordable costs.”
Post Microsoft Build and AI Day in Shanghai, Beijing, and Greater Bay Area (Shenzhen and Hong Kong)
In collaboration with Microsoft Asia and Microsoft Reactor, and 16 technology communities, the Post-Microsoft Build and AI Day event series was held in June 2024. The three events in Shanghai, Beijing, and the Greater Bay Area (Shenzhen and Hong Kong) attracted significant interest from those eager to learn about the latest AI solutions. Over 400 participants attended the events in person, and online streaming reached more than 140,000 viewers.
This series of hybrid events offered opportunities to learn about the latest technology topics announced at Microsoft Build, including Azure AI, .NET, Copilot, Power Platform, Azure OpenAI, LLM, SLM, Data Analytics and Integration, and edge computing. 11 MVPs participated as speakers, providing technical insights and engaging in discussions with attendees at the booths, enhancing the event experience for both in-person and online participants.
The event in Shanghai coincided with Children’s Day, leading to a special “Hour of Code · Children’s Day Edition.” Hao Hu, an AI Platform MVP, conducted an AI hands-on workshop themed around marine conservation for children. This initiative demonstrated how AI can be utilized across different generations.
Microsoft Build Japan 2024 – Virtual event from Tokyo, Japan
As an opportunity for Japanese-speaking users to learn the latest technologies announced at Microsoft Build, Microsoft Japan held Microsoft Build Japan 2024. Eight MVPs contributed to this two-day virtual event as speakers. On Day 1, four MVPs shared important updates on Azure AI and developer-focused topics. On Day 2, another four MVPs covered Microsoft 365, including Copilot, as well as the latest devices such as Copilot+ PC and Surface.
Tomokazu Kizawa participated as a speaker at a Microsoft-hosted event for the first time. Reflecting on his experience and the key messages he wanted to convey to the participants, he shared his thoughts: “From my perspective as a user outside of Microsoft, I focused on specifically conveying how new technologies like Copilot+ PC can positively impact daily life and work. Additionally, I aimed to highlight the positioning of Surface and the technological trends of Copilot+ PC not only from Microsoft’s standpoint but also within the broader PC industry, in order to generate more interest in Windows Devices.”
You can watch the session videos from this event, including the presentations by the Microsoft MVPs, at the following link.
Microsoft Build Japan 2024 – YouTube
– MVP session Day 1: Microsoft MVP による “推し最新技術情報” Azure/AI + Dev 編 (youtube.com)
– MVP session Day 2: Microsoft MVP による “推し最新技術情報” Microsoft 365 + Windows Device 編 (youtube.com)
Starting November 19, another major Microsoft global conference, Microsoft Ignite, is set to take place both in Chicago and online. It’s exciting to think about the innovative technology announcements that will be revealed in just three months. If you’d like to stay updated on the latest conference information, we recommend signing up for email notifications via the [Join the email list] link on the official page (note: this is not for event registration).
Also, be sure to look forward to learning opportunities provided by the Microsoft MVP community during and after the Microsoft Ignite event!
——
To learn more about Microsoft Build, please visit the following websites.
Microsoft Build (Official website)
Microsoft Build 2024 Book of News (Announcements)
Build on Microsoft Learn (Microsoft Learn)
Microsoft Build 2024 (Playlist on Microsoft Developer YouTube)
The top 5 Microsoft MVP demos presented at this year’s event are now available on the Microsoft Developer YouTube channel.
– Extend your Copilot with plugins using Copilot Studio by M365 MVP Manpreet Singh
– Create a fully functional support bot in less than 10 minutes by Business Applications MVP Em D’Arcy
– Golden path with GitHub Actions by Microsoft Azure MVP/Regional Director Michel Hubert
– Optimize Azure Infrastructure as Code Deployments with VS Code by Windows and Devices, Microsoft Azure MVP
– AI-Powered Personalized Learning by AI Platform MVP Noelle Russell
$filter by multiple properties
Hi all,
Unfortunately, I can’t manage to filter according to several properties.
I’m currently filtering for a specific value, but I would like to filter using one or two “or” operators or other properties:
$filter=assignmentState+eq+’Delivered’
e.g.:
filter where assignmentState is ‘Delivered’ or ‘Delivering’ (or further values)
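To make it concrete, this is roughly what I am aiming for; the resource path and access token below are placeholders, only the $filter expression matters:

// Sketch: combining several assignmentState values with "or" (resource path and token are placeholders)
async function getAssignments(accessToken) {
  const filter = "assignmentState eq 'Delivered' or assignmentState eq 'Delivering'";
  const url = "https://graph.microsoft.com/v1.0/<resource>?$filter=" + encodeURIComponent(filter);
  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  return response.json();
}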
When I follow the documentation I run into errors.
https://learn.microsoft.com/de-de/graph/filter-query-parameter?tabs=http
Does anyone have experience with multiple filters?
Regards
Trouble adding Personal Calendar to Work Calendar, Both MS 365 accounts
I am an accountant with my own company and have a long-term contracting relationship with another company. In addition to my own company’s MS365 account, the other company has issued me an MS365 account for my work on their behalf. The client is implementing Bookings to allow clients to self-book my services and assistance. This requires Bookings to receive information from both MS365 calendars to accurately convey my availability.
Both companies use Microsoft 365 for software services. When I try to add the calendar from the other company as a personal calendar, I receive an error message. I can add it as a shared calendar, but Bookings doesn’t consider the availability of the shared calendar. It also doesn’t work when I try to set up a shared Booking page and add the other email as staff.
I can’t imagine I am the first person to need this integration. Please let me know what you suggest.
Download Spotify Podcasts/Free Podcasts to MP3
Here is a detailed guide on how to download Spotify podcasts to MP3 without Spotify Premium via Tidabie Music Go:
STEP 1 Select Spotify as the Downloading Source
With Tidabie Music Go, you can download Spotify podcasts to MP3 format from your Spotify library, no matter if you are using a Spotify Free or Premium account. Once you start Tidabie on your computer, please select “Spotify” as the audio source. And then log in to your Spotify account to access your library.
Note: When choosing the Spotify source, you have the choice to capture Spotify podcasts from either the Spotify app or the Spotify web player. You can toggle between the two options by clicking on the switching icon. If you opt to record podcasts from the Spotify app, the operation will function at speeds of up to 10 times faster while preserving the best audio quality.
STEP 2 Customize the Output Settings of Downloaded Spotify Podcasts
When you finish choosing “Spotify” as the downloading audio source, you will see the “Music” interface like the picture below. Simply select the output format as “MP3” under the “Convert Settings” module in this interface. If needed, you can also adjust parameters like the bit rate and sample rate. Additional settings such as the output folder path and file naming can be customized in the full settings pop-up window, which is accessible by clicking the “More settings” button.
STEP 3 Find Spotify Podcasts to Download
Go back to the Spotify app or the Spotify web player after choosing the output settings. Then find the Spotify podcasts you want to download on Spotify. Once you locate the podcast page, you will see a blue “Click to add” button in the lower right corner. Just tap on it to start parsing the Spotify podcast episodes.
As the podcasts are processed, all the downloadable items will be listed on a small pop-up window. What you need to do next is to choose the episodes you want to download by ticking the square box next to the episode title. When comparing downloading podcasts using Spotify and Tidabie, the advantages of using Tidabie are more noticeable. Tidabie enables you to download Spotify podcasts in batches, while on Spotify, you need to click on the download icon to get episodes one by one.
STEP 4 Start Downloading Spotify Podcasts
Before downloading, you have the option to add more podcasts to download by hitting the “Add More” icon plus the chance to modify the output settings again by tapping on the settings icon on this interface. If everything is all set, just click on the “Convert” button to start downloading. Then Tidabie will run up to 10x faster to get your favorite Spotify podcasts downloaded in MP3 format to the local PC. All you need to do now is to wait patiently.
STEP 5 Check the Downloaded Spotify Podcasts on Your Local PC
The output folder that keeps the downloaded Spotify podcasts will pop up by default when the downloading is completed. You can check the downloaded podcasts in the pop-up folder. Or go to the specific podcast files by hitting the folder icon near each song under the “Converted” module.
What is the best spotify music converter for Windows PC?
I enjoy listening to Spotify music offline on various devices, but I don’t have a Spotify Premium account. As a technical writer and web designer, I often deal with different software and tools, and I’m familiar with various conversion processes.
I’ve heard about several Spotify music converters, but I’m looking for one that stands out in terms of quality, ease of use, and reliability. Ideally, it should support saving Spotify tracks as MP3 files without compromising on sound quality. I’m also interested in any additional features that might enhance the overall experience.
Making private meeting in shared mailbox calendar
Hello,
In the calendar of a shared mailbox, we can’t set a meeting to private. The lock tile is greyed out. In the user’s own calendar, the tile is available. Is there a setting for shared mailboxes that makes it possible to set meetings to private?
Kind regards,
Arjan
Building HyDE powered RAG chatbots using Microsoft Azure AI Models & Dataloop
Customer service is undergoing an AI revolution, driven by the demand for smarter, more efficient solutions. HyDE-powered RAG chatbots offer a breakthrough technology that combines vast knowledge bases with real-time data retrieval and hypothetical document embeddings (HyDE) to deliver superior accuracy and context-specific responses. Yet, building and managing these complex systems remains a significant challenge due to the intricate integration of diverse AI components, real-time processing requirements, and the need for specialized expertise in AI and data engineering.
Simplifying GenAI solutions with Microsoft and Dataloop
The Microsoft-Dataloop partnership abstracts away the complexity of deploying powerful chatbot applications. By integrating Microsoft’s PHI-3-MINI foundation model with Dataloop’s data platform, we’ve made HyDE-powered RAG chatbots accessible to a wider developer community with minimal coding. Developers can leave the documentation behind and start utilizing these capabilities instantly, accelerating time to value.
This announcement follows our successful integration with Microsoft Azure AI Model as a Service and Azure AI Video Indexer, further enhancing our ability to deliver advanced AI solutions. These integrations enable developers to seamlessly incorporate state-of-the-art AI models into their workflows, significantly accelerating development cycles.
About Dataloop AI development platform
Dataloop is an enterprise-grade end-to-end AI development platform designed to streamline the creation and deployment of powerful GenAI applications. The platform offers a comprehensive suite of tools and services, enabling efficient AI model development and management.
Key features include:
Orchestration: Dataloop provides seamless pipeline management, access to a marketplace for AI models, and a serverless architecture to simplify deployment and scalability.
Data Management: The Dataloop platform supports extensive dataset exploration, allowing users to query, visualize, and curate data efficiently.
Human Knowledge: Dataloop facilitates knowledge-based ground truth creation through tools for annotation, review, and monitoring, ensuring high-quality data labeling.
MLOps: With reliable model management capabilities, Dataloop ensures efficient inference, training, and evaluation of AI models.
Dataloop is also available on Azure Marketplace.
About Azure AI Models as a Service
Azure AI Models as a Service (MaaS) offers developers and businesses access to a robust ecosystem of powerful AI models. This service includes a wide range of models, from pre-trained and custom models to foundation models, covering tasks such as natural language processing, computer vision, and more. The service is backed by Azure’s stringent data privacy and security commitments, ensuring that all data, including prompts and responses, remains private and secure.
Figure: HyDE-powered RAG Chatbot Workflow – This pipeline, created using the Dataloop platform, demonstrates the process of transforming user queries into hypothetical answers, generating embeddings, and retrieving relevant documents from a vector store. This internal Slack chatbot is optimizing information retrieval to ensure that users receive accurate and contextually relevant responses, enhancing the chatbot’s ability to search for answers in the documentation.
This is how we do it!
Powering Efficient AI Inference at Scale: Microsoft’s AI tools build upon a powerful foundation of inference engines like Azure Machine Learning and ONNX Runtime. This robust toolkit ensures smooth, high-performance AI inferencing at scale. These tools specifically fine-tune neural networks for exceptional speed and efficiency, making them ideal for demanding applications like large language models (LLMs). This translates to rapid inference and scalable AI deployment across various environments.
End-to-End AI Development with Drag-and-Drop Ease: Dataloop empowers users to build and manage advanced AI capabilities entirely within its intuitive no-code interface. Simply drag and drop models provided or developed by Microsoft through our marketplace to seamlessly integrate them into your workflows. Pre-built pipeline templates specifically designed for RAG chatbots further streamline development. This eliminates the need for additional tools, making Dataloop your one-stop shop for building next-generation RAG-based chatbots.
A Node-by-Node Look at a RAG-based Document Assistant Chatbot with Microsoft and Dataloop
This section takes you behind the scenes of our RAG-based document assistant chatbot creation, utilizing Microsoft’s AI tools and the Dataloop platform. This breakdown will help you understand each component’s role and how they work together to deliver efficient and accurate responses. Below is a detailed node-by-node explanation of the system.
Node 1: Slack (or Messaging App) – Prompt Entry Point
Description: This node acts as the interface between users and the chatbot system. It integrates with a messaging platform like Slack and receives user interactions (messages, queries, commands) and starts the pipeline.
Functionality: It captures and processes the user input to be forwarded to the predictive model.
Configuration:
Integration:
Specify the target messaging platform (e.g., Slack API token, login credentials for other messaging apps).
Define event types to handle (e.g., messages, direct mentions, specific commands).
Message Handling:
Define how to pre-process messages (e.g., removing emojis, formatting, language detection).
Configure how to identify user intent and extract relevant information from the message.
Node 2 – PHI-3-MINI – Predict Model
Description: This node utilizes a generative prediction model, PHI-3-MINI, optimized with Microsoft’s AI tools.
Functionality: The node takes input from the Slack node and generates hypothetical responses. Research in Zero-Shot Learning suggests that this approach, leveraging contextual understanding and broad knowledge, can often outperform traditional methods.
Configuration:
Model Selection: Choose any LLM optimized using Microsoft’s AI tools. In our chatbot, we leverage PHI-3-MINI, specifically optimized for efficient resource usage.
System Prompt Configuration: A system prompt guides the AI’s behavior by setting tone, style, and content rules, ensuring consistent, relevant, and appropriate responses. For our case, we configure the LLM to give a hypothetical and concise answer.
Parameters: Set parameters for the model (e.g., beam search size, temperature for sampling).
Node 3 – Embed Item
Description: This node is responsible for embedding items, transforming text or data into a format that can be easily used for further processing or retrieval.
Functionality: It generates vector embeddings from the text. These embeddings represent the text in a high-dimensional space, allowing for efficient similarity searches in the next node.
Configuration:
Embedding Model: Choose the model for generating vector embeddings from text (e.g., pre-trained Word2Vec, Sentence Transformers). You can also utilize Microsoft’s embedding tools. Each embedding model comes with its own dimensionality of the vectors.
Normalization: Specify the normalization technique for the embeddings (e.g., L2 normalization).
Node 4 – Retriever Prompt (Search)
Description: This node acts as a retrieval mechanism, responsible for fetching relevant information or context based on the embedded item.
Functionality: It uses the embeddings to search a database or knowledge base, retrieving information that is relevant to the query or input provided by the user. It could use various retrieval techniques, including vector searches, to find the best matching results.
Configuration:
Dataset: Specify your dataset, with all the existing chunks and embeddings.
Similarity Metric: Define the metric for measuring similarity between the query embedding and candidate items (e.g., cosine similarity, dot product; see the sketch after this node’s configuration).
Retrieval Strategy: Choose the retrieval strategy. In our case, we used our feature store based on SingleStore, a database optimized for fast searches. This allows for efficient vector-based search to quickly retrieve the most relevant information.
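As a quick reference for the cosine-similarity option above, the metric is just a normalized dot product; a minimal, dependency-free sketch:

// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

On embeddings that are already L2-normalized (one of the Node 3 options), a plain dot product produces the same ranking.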
Node 5 – PHI-3-MINI – (Refine)
Description: Similar to the earlier PHI-3-MINI node, this node also involves a predictive model, another instance of the PHI-3-MINI model optimized by Microsoft.
Functionality: Processes the retrieved information using the predictive model to generate a response or further refine the data, ensuring a contextually accurate output for the user.
Configuration:
Model Selection: Specify another instance of the PHI-3-MINI model optimized with Microsoft’s AI tools.
Task Definition: Instruct the model to take all chunks of documentation and reply accurately to the user’s question.
System Prompt Configuration: Instruct the chatbot on how to respond. In our case, we configured it to respond kindly, act as a helpful documentation assistant, clearly state when it doesn’t know an answer, and avoid inventing information.
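To make the node-by-node flow concrete, here is a minimal sketch of the HyDE pattern the five nodes implement. This is not the Dataloop pipeline itself: the endpoint, token, and model name are illustrative placeholders, and the embedding and vector-search helpers are stubs standing in for the Dataloop-managed Node 3 and Node 4.

import ModelClient from "@azure-rest/ai-inference";
import { AzureKeyCredential } from "@azure/core-auth";

// Illustrative placeholders - not the production Dataloop / Azure Model-as-a-Service configuration.
const client = new ModelClient(
  "https://models.inference.ai.azure.com",
  new AzureKeyCredential(process.env["GITHUB_TOKEN"])
);
const modelName = "Phi-3-small-8k-instruct"; // stand-in for the PHI-3-MINI deployment

async function chat(systemPrompt, userContent) {
  const response = await client.path("/chat/completions").post({
    body: {
      messages: [
        { role: "system", content: systemPrompt },
        { role: "user", content: userContent },
      ],
      model: modelName,
    },
  });
  if (response.status !== "200") throw response.body.error;
  return response.body.choices[0].message.content;
}

// Stubs standing in for the Dataloop-managed nodes.
async function embed(text) {
  // Node 3: a real implementation would call an embedding model and return its vector.
  return Array.from(text).map((c) => c.charCodeAt(0) / 255);
}
async function vectorSearch(vector, topK) {
  // Node 4: a real implementation would query the SingleStore-backed feature store.
  return [];
}

export async function answer(userQuery) {
  // Node 2: generate a hypothetical, concise answer (HyDE).
  const hypothetical = await chat("Give a hypothetical, concise answer.", userQuery);
  // Node 3: embed the hypothetical answer rather than the raw query.
  const queryVector = await embed(hypothetical);
  // Node 4: retrieve the closest documentation chunks.
  const chunks = await vectorSearch(queryVector, 5);
  // Node 5: refine - answer strictly from the retrieved context.
  return chat(
    "You are a helpful documentation assistant. Answer only from the provided context; say clearly when you do not know.",
    "Context:\n" + chunks.join("\n---\n") + "\n\nQuestion: " + userQuery
  );
}

In the actual pipeline these steps are configured as drag-and-drop nodes on the Dataloop platform rather than hand-written code.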
Accelerate AI Development with Dataloop’s Integration of Microsoft Foundation Models
Discover a vast ecosystem of pre-built solutions, models, and datasets tailored to your specific needs. Easily filter options by provider, media type, and compatibility to find the perfect fit. Build and customize AI workflows with easy-to-use pipeline tools and out-of-the-box end-to-end AI and GenAI workflows. We are incredibly excited to see what you can create with your new capabilities!
GitHub Model Catalog – Getting Started
Welcome to GitHub Models! We’ve got everything fired up and ready for you to explore AI models hosted on Azure AI. As a student developer, you already have access to amazing GitHub resources like Codespaces and Copilot through http://education.github.com; now you can get started developing with generative AI and language models using the Model Catalog.
For more information about the Models available on GitHub Models, check out the GitHub Model Marketplace
Each model has a dedicated playground and sample code available in a dedicated codespaces environment.
There are a few basic examples that are ready for you to run. You can find them in the samples directory within the codespaces environment.
If you want to jump straight to your favorite language, you can find the examples in the following Languages:
Python
JavaScript
cURL
The dedicated Codespaces Environment is an excellent way to get started running the samples and models.
Below are example code snippets for a few use cases. For additional information about Azure AI Inference SDK, see full documentation and samples.
Create a personal access token. You do not need to give any permissions to the token. Note that the token will be sent to a Microsoft service.
To use the code snippets below, create an environment variable to set your token as the key for the client code.
If you’re using bash:
export GITHUB_TOKEN="<your-github-token-goes-here>"
Install the Azure AI Inference SDK using pip (Requires: Python >=3.8):
pip install azure-ai-inference
This sample demonstrates a basic call to the chat completion API. It is leveraging the GitHub AI model inference endpoint and your GitHub token. The call is synchronous.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

endpoint = "https://models.inference.ai.azure.com"
# Replace Model_Name
model_name = "Phi-3-small-8k-instruct"
token = os.environ["GITHUB_TOKEN"]

client = ChatCompletionsClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(token),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="What is the capital of France?"),
    ],
    model=model_name,
    temperature=1.,
    max_tokens=1000,
    top_p=1.
)

print(response.choices[0].message.content)
This sample demonstrates a multi-turn conversation with the chat completion API. When using the model for a chat application, you’ll need to manage the history of that conversation and send the latest messages to the model.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import AssistantMessage, SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

token = os.environ["GITHUB_TOKEN"]
endpoint = "https://models.inference.ai.azure.com"
# Replace Model_Name
model_name = "Phi-3-small-8k-instruct"

client = ChatCompletionsClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(token),
)

messages = [
    SystemMessage(content="You are a helpful assistant."),
    UserMessage(content="What is the capital of France?"),
    AssistantMessage(content="The capital of France is Paris."),
    UserMessage(content="What about Spain?"),
]

response = client.complete(messages=messages, model=model_name)

print(response.choices[0].message.content)
For a better user experience, you will want to stream the response of the model so that the first token shows up early and you avoid waiting for long responses.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

token = os.environ["GITHUB_TOKEN"]
endpoint = "https://models.inference.ai.azure.com"
# Replace Model_Name
model_name = "Phi-3-small-8k-instruct"

client = ChatCompletionsClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(token),
)

response = client.complete(
    stream=True,
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Give me 5 good reasons why I should exercise every day."),
    ],
    model=model_name,
)

for update in response:
    if update.choices:
        print(update.choices[0].delta.content or "", end="")

client.close()
Install Node.js.
Copy the following lines of text and save them as a file package.json inside your folder.
{
  "type": "module",
  "dependencies": {
    "@azure-rest/ai-inference": "latest",
    "@azure/core-auth": "latest",
    "@azure/core-sse": "latest"
  }
}
Note: @azure/core-sse is only needed when you stream the chat completions response.
Open a terminal window in this folder and run npm install.
For each of the code snippets below, copy the content into a file sample.js and run with node sample.js.
This sample demonstrates a basic call to the chat completion API. It is leveraging the GitHub AI model inference endpoint and your GitHub token. The call is synchronous.
import ModelClient from "@azure-rest/ai-inference";
import { AzureKeyCredential } from "@azure/core-auth";

const token = process.env["GITHUB_TOKEN"];
const endpoint = "https://models.inference.ai.azure.com";
// Update your modelname
const modelName = "Phi-3-small-8k-instruct";

export async function main() {

  const client = new ModelClient(endpoint, new AzureKeyCredential(token));

  const response = await client.path("/chat/completions").post({
    body: {
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "What is the capital of France?" }
      ],
      model: modelName,
      temperature: 1.,
      max_tokens: 1000,
      top_p: 1.
    }
  });

  if (response.status !== "200") {
    throw response.body.error;
  }
  console.log(response.body.choices[0].message.content);
}

main().catch((err) => {
  console.error("The sample encountered an error:", err);
});
This sample demonstrates a multi-turn conversation with the chat completion API. When using the model for a chat application, you’ll need to manage the history of that conversation and send the latest messages to the model.
import ModelClient from "@azure-rest/ai-inference";
import { AzureKeyCredential } from "@azure/core-auth";

const token = process.env["GITHUB_TOKEN"];
const endpoint = "https://models.inference.ai.azure.com";
// Update your modelname
const modelName = "Phi-3-small-8k-instruct";

export async function main() {

  const client = new ModelClient(endpoint, new AzureKeyCredential(token));

  const response = await client.path("/chat/completions").post({
    body: {
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "What is the capital of France?" },
        { role: "assistant", content: "The capital of France is Paris." },
        { role: "user", content: "What about Spain?" },
      ],
      model: modelName,
    }
  });

  if (response.status !== "200") {
    throw response.body.error;
  }

  for (const choice of response.body.choices) {
    console.log(choice.message.content);
  }
}

main().catch((err) => {
  console.error("The sample encountered an error:", err);
});
For a better user experience, you will want to stream the response of the model so that the first token shows up early and you avoid waiting for long responses.
import ModelClient from "@azure-rest/ai-inference";
import { AzureKeyCredential } from "@azure/core-auth";
import { createSseStream } from "@azure/core-sse";

const token = process.env["GITHUB_TOKEN"];
const endpoint = "https://models.inference.ai.azure.com";
// Update your modelname
const modelName = "Phi-3-small-8k-instruct";

export async function main() {

  const client = new ModelClient(endpoint, new AzureKeyCredential(token));

  const response = await client.path("/chat/completions").post({
    body: {
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "Give me 5 good reasons why I should exercise every day." },
      ],
      model: modelName,
      stream: true
    }
  }).asNodeStream();

  const stream = response.body;
  if (!stream) {
    throw new Error("The response stream is undefined");
  }

  if (response.status !== "200") {
    stream.destroy();
    throw new Error(`Failed to get chat completions, http operation failed with ${response.status} code`);
  }

  const sseStream = createSseStream(stream);

  for await (const event of sseStream) {
    if (event.data === "[DONE]") {
      return;
    }
    for (const choice of (JSON.parse(event.data)).choices) {
      process.stdout.write(choice.delta?.content ?? "");
    }
  }
}

main().catch((err) => {
  console.error("The sample encountered an error:", err);
});
Paste the following into a shell:
curl -X POST "https://models.inference.ai.azure.com/chat/completions" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $GITHUB_TOKEN" \
    -d '{
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ],
        "model": "Phi-3-small-8k-instruct"
    }'
Call the chat completion API and pass the chat history:
curl -X POST "https://models.inference.ai.azure.com/chat/completions" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $GITHUB_TOKEN" \
    -d '{
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "What is the capital of France?"
            },
            {
                "role": "assistant",
                "content": "The capital of France is Paris."
            },
            {
                "role": "user",
                "content": "What about Spain?"
            }
        ],
        "model": "Phi-3-small-8k-instruct"
    }'
This is an example of calling the endpoint and streaming the response.
curl -X POST "https://models.inference.ai.azure.com/chat/completions" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $GITHUB_TOKEN" \
    -d '{
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "Give me 5 good reasons why I should exercise every day."
            }
        ],
        "stream": true,
        "model": "Phi-3-small-8k-instruct"
    }'
The rate limits for the playground and free API usage are intended to help you experiment with models and prototype your AI application. For use beyond those limits, and to bring your application to scale, you must provision resources from an Azure account, and authenticate from there instead of your GitHub personal access token. You don’t need to change anything else in your code. Use this link to discover how to go beyond the free tier limits in Azure AI.
How to copy a built-in Data connector in Global region to China region?
A lot of Sentinel solutions/data connectors are not available in China. For example: Dynamics 365 connector. Is it possible to get the source code of a built-in data connector (like Dynamics 365 connector) and create custom data connector in China?
Business Central Post Deployment Offer
Hi all, 
I would like to know, for this new Business Central deployment offer: does the partner need to have the customer subscribe to all licenses in the very first go, or is the ACR calculated based on the subscriptions in the first year? In other words, are the ACR incentives earned only when all the licenses are subscribed at the outset, or is it the sum total over the first year?
Selecting Tables to Sync from finance and operations | programmatically
Dear all, I have some important questions, please:
Is there any option to programmatically select Tables to Sync from finance and operations? Currently it is a manual and time-consuming process. We’re open to any tool: PowerShell, Python, Power Automate, etc.
Is there any option to force the sync to start without waiting for the interval settings in the advanced configuration?
Choose finance and operations data in Azure Synapse Link for Dataverse – Power Apps | Microsoft Learn
Issue with Autoruns v14.11 – Offline System Registry Hives Not Unmounted
Using the Analyze Offline System option leaves registry hives mounted, risking system corruption.
Steps to Reproduce:
Open Autoruns v14.11.
Use File > Analyze Offline System.
Close Autoruns.
Observe that registry hives remain mounted after the process has terminated (Regedit.exe > HKLM > autoruns.software / autoruns.system / autoruns.user).
Impact:
Can render the offline system unbootable.
Prevents you from using Analyze Offline System again, as the HKLM\autoruns.* mount points are already in use.
Workaround: Use v13.100, which works correctly.
‘Reply-To’ Header Being Stripped in Office 365 Emails
Hi everyone, we are experiencing an issue where the “Reply-To” header in emails sent to our Exchange-hosted email account is being stripped. This behaviour started on the 7th of August, for reasons we haven’t been able to isolate.
Our current setup is that we have one primary email address that we receive all of our customer enquiries, orders, and emails through for 15+ websites. This email address is hosted through Exchange, and we have set up SMTP. Our website is WordPress based, and we use WP Mail SMTP to connect our Exchange account to this plugin. Then, we filter this email account through MailGuard so that we mitigate 99% of the spam sent to that address.
Originally, we thought the problem was to do with this plugin, so we rolled back a version of the plugin and the issue was still not rectified. We also reached out to MailGuard asking them if they would strip Reply-To headers before they sent the email(s) back to us, and their reply from support was:
“We will add details into the headers of an email, but that will be in regards to recording the Hops of the email, whether it has passed SPF/DKIM/DMARC checks and specific logging regarding tour processing of the email.
That all being said, MailGuard’s systems do not remove content from emails.
If the emails do not have a reply-to in them, that is how they are when we have received them.”
As mentioned, we have 15+ other websites, but only 2 of them run through an OAuth connection with Exchange through the WP Mail SMTP plugin. The other websites use Brevo (SendInBlue) as their SMTP provider.
Thinking it was a plugin issue, we tested the enquiries being sent from those websites to our email address, to see if they were also getting their Reply-To headers stripped, however, none of them were having this issue. We use the free SMTP service through Brevo for these smaller sites, and would exceed their limit if we switched our main sites to Brevo in the meantime.
We believe we have isolated the issue down to Exchange/Office365, but admittedly, are finding it a bit of a challenge given the intricate settings and options available throughout the account.
Below is a screenshot of two enquiries sent through to our email address 24 hours apart. The left image indicates that the Reply-To header is present as normal, but the right image indicates a missing Reply-To header.
To add a note, it is interesting that the emails were reported as not signed. We definitely have DMARC/DKIM DNS records present, so I’m unsure why they would be delivered as unsigned.
We have not changed any SMTP settings, any policy settings, mail rules or anything similar in our Exchange account. It seemingly appears to be an issue that randomly appeared overnight.
Has anyone experienced similar issues with “Reply-To” headers being stripped in Office 365? Could there be specific settings or policies in Exchange Online or Azure that might affect this behaviour? Any advice or troubleshooting tips would be greatly appreciated.
Thank you in advance for your help.
Exchange hybrid writeback with cloud sync is enabled, but I still can’t edit attributes from 365…
Hi,
I’m very familiar with Exchange hybrid mode; I have done a lot of hybrid migrations, and I have been waiting a very long time for the Exchange Hybrid Writeback feature so that I can edit hybrid Exchange mailbox settings from the Exchange Online admin center instead of having to connect to the on-prem hybrid server or to the “Exchange Recipient Admin Center”. I configured it with Cloud Sync and enabled the Exchange Hybrid Writeback option as described in the following article (Exchange hybrid writeback with cloud sync – Microsoft Entra ID | Microsoft Learn), but I still can’t edit any mailbox attributes that come from the on-prem AD, such as email addresses (aliases). I still receive the same old error message: “Unable to update the specified properties for on-premises mastered Directory Sync objects or objects currently undergoing migration. DualWrite (Graph) RequestId: 4c2d42fa-9c6e-4749-8d00-79e9f8041787 The issue may be transient and please retry a couple of minutes later. If issue persists, please see exception members for more information.”
Am I misunderstanding how it is supposed to work? I have searched a lot on the web, and even though it seems to be a very valuable feature, not many people talk about it, and those who do don’t demonstrate how to use it once it is configured.
Thanks!
Unable to delete contacts in People
Hello,
I am trying to delete contacts from People. I checked the checkboxes next to the contacts I want to delete from People.
I also tried deleting them one at a time.
With the former method, the trash can icon is greyed out.
With the latter method, the trash can appears; however, some particular contacts show an error message saying there is an issue and they cannot be deleted. A screenshot of the error message is as follows.
It appeared when I deleted the contact in People.
It says that a problem happened and, due to the problem, the contact could not be deleted, with Cancel highlighted in blue.
Please advise how to delete contacts in People (https://people.live.com).
Need Fans Blasting Cold Air to Sleep peacefully.
I am seeking assistance to troubleshoot an issue I am experiencing with my computer. I have enabled S3 sleep mode in the BIOS and restored all settings to factory defaults. Additionally, I have disabled modern standby, hibernate, and fast startup settings using various methods such as Group Policy Editor, Registry Editor, and Power Options. Despite this, the problem persists. I strongly believe it is a software-related issue. Can someone provide guidance on how to resolve this?