Tag Archives: microsoft
How do I fix QᴜɪᴄᴋBᴏᴏᴋs Error Code 6210?
QᴜɪᴄᴋBᴏᴏᴋs Error Code 6210 indicates a multi-user mode issue. To resolve it, ensure all users are logged out of QᴜɪᴄᴋBᴏᴏᴋs. Restart the server hosting the company file. Open QᴜɪᴄᴋBᴏᴏᴋs and go to “File,” then “Utilities,” and ensure “Host Multi-User Access” is enabled on the server but disabled on workstations. Update QᴜɪᴄᴋBᴏᴏᴋs to the latest version and run the QᴜɪᴄᴋBᴏᴏᴋs File Doctor tool. If the problem persists, check firewall settings to ensure QᴜɪᴄᴋBᴏᴏᴋs processes are not blocked.
QᴜɪᴄᴋBᴏᴏᴋs Error Code 6210 indicates a multi-user mode issue. To resolve it, ensure all users are logged out of QᴜɪᴄᴋBᴏᴏᴋs. Restart the server hosting the company file. Open QᴜɪᴄᴋBᴏᴏᴋs and go to “File,” then “Utilities,” and ensure “Host Multi-User Access” is enabled on the server but disabled on workstations. Update QᴜɪᴄᴋBᴏᴏᴋs to the latest version and run the QᴜɪᴄᴋBᴏᴏᴋs File Doctor tool. If the problem persists, check firewall settings to ensure QᴜɪᴄᴋBᴏᴏᴋs processes are not blocked. Read More
Azure Video Indexer & Phi-3 introduce Textual Video Summary on Edge: Better Together story
Azure AI Video Indexer collaborated with the Phi-3 team to introduce a Textual Video Summary capability on Edge.
This collaboration showcases the utilization of the SLM, Phi-3 model, enabling the Azure AI Video Indexer team to extend the same LLM based summarization capabilities that are available for the cloud, to also be available on the Edge.
This comes following Build’s 2024 announcements of the integration of Azure AI Video Indexer with language models to generate textual summaries of videos and the expansion of the Phi-3 models family. The feature is accessible both in the cloud, utilizing Azure Open AI, and at the Edge via the Phi-3-mini-4k-instruct model.
Powered by Phi-3, the Edge video summarization generates summaries for videos and audio files of any length, processing all data locally.
These summaries are accessible through the Azure AI Video Indexer portal or via the Azure AI Video Indexer API. Users have the flexibility to customize the length and style of the summaries to meet their specific requirements, ranging from brief and concise to extensive and formal.
In this blog, we’ll discuss how both teams collaborated to integrate the Phi-3 Language Model into an Edge environment, offering high-quality video summarization in Azure AI Video Indexer enabled by ARC. We’ll cover the main challenges, the work done to achieve high quality results, and our commitment to maintaining high responsible AI standards.
Background
Azure AI Video Indexer is a one-stop-shop for video analytics and insights, with video summarization being a key component for quickly understanding content without watching the entire video. It also helps in searching and maintaining archives by providing the right level of detail. Given the rapid increase in video content, efficient summarization is essential.
At the same time, concerns about data privacy, residency, and regulations are growing among organizations, law enforcement, and private users. They may also wish to leverage their existing computing resources. Therefore, utilizing Edge infrastructure becomes vital, especially for companies facing legal, security, or privacy challenges, making an Edge solution necessary for video analytics and summarization.
Phi-3-Mini-4K
Creating a summary on Edge requires balancing many requirements: summary quality (see the Summarization section below for more details), runtime, costs, and various aspects of responsible AI. In our experiments with several small language models, Phi-3-Mini-4K provided the best balance between these factors.
Phi-3 is the latest Small Language Model that was released by Microsoft under the Phi-3 family. The Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties. The model belongs to the Phi-3 family with the Mini version in two variants of context size (tokens): a 4K and a 128K variant (see quality benchmarks). For the summarization task we selected the 4K variant, which is high-quality, comparatively light, and can run on Edge, enabling customers to retain their data locally. In addition, the Phi-3 official model’s training included a safety alignment phase, which makes it compliant with responsible-AI considerations (trained to avoid harmful content, XPIA, etc.). All these make Phi-3 an ideal choice for the task of summarization on Edge.
Summarization
When summarizing a video, it is important to note the multi-modality of the input, as opposed to summarization of a text such as a textbook. We consume multiple products of Azure AI Video Indexer’s other insights to be used for summarization. When evaluating (both manually and semi-automatically) and scoring the summaries, we specifically note these aspects of the summary:
Conciseness: the summary is short and to the point. It doesn’t repeat itself.
Coherence: the structure is logical and easy to follow.
Objectivity: the summary is stated in an unbiased manner.
Accurate: the summary is factually accurate with respect to the video (also known as groundness).
Completeness: the summary contains all the main points of the video.
Multi-Modality Summarization and Sectioning
Providing a good summary of a video can be challenging. It must be able to correctly weigh the various elements that describe a video: Transcript, OCR, Audio Effects, Visual Labels, Detected Objects, and more. This work explored different methods to achieve that and produce a high-quality video summarization. Due to the limitations of context size (4K) we had to employ smart-sectioning to split the video into context-sized sections (this work has been previously discussed here). But in order to get a unified summary that is based of the entire video and not just a single section, all the sections must be somehow aggregated.
To facilitate this aggregation, we made a “running” summary window: in the beginning only the first section is summarized, but the summary of the second section would include, as part of the input, the result from the first:
In this manner each iteration produces a summary that is based on the previous ones, carrying the main points from start to finish. Our summarization solution adapted prompt-engineering to coalesce the multi-modality inputs into a cohesive video summary, without any need to fine-tune or apply additional training to the Phi-3 base–model to adapt it for the summarization task.
Prompt Engineering
We used the suggested chat-format for Phi-3: “system” (aka meta-prompt), “user”, and “assistant” roles. Our system prompt had to cover the following aspects:
Summarization instructions.
Guidelines for excluding harmful content in the output summary: avoid hate speech, violence, self-harm, etc.
Our prompt includes instructions to protect the model against indirect prompt injection attacks (XPIA).
Groundness to ensure that the summary only includes information discussed in the video and does not introduce external knowledge or fabricate facts.
Instructions to adhere to the meta-prompt and to avoid modifying the instructions, as well as to suppress the instructions from the summary output (this is also a quality concern, as internal instructions should not appear in the output).
Summary styles, such as “formal”, “casual”, “short”, or “long” that can be customized by the users to fit their preferences.
We analyzed 50 videos of different types, lengths, and domains to reflect the typical range of content indexed on Azure AI Video Indexer. These videos were manually assessed for summarization quality (conciseness, coherence, objectivity, etc.) through iterative experimentation with the aforementioned prompt aspects, until achieving satisfactory results across all criteria.
Adjusting the prompt to enhance one aspect can theoretically affect others and may necessitate costly and time-consuming human review. However, Phi-3 demonstrated considerable robustness, effectively following instructions without compromising the quality of the resulting summary. It adapted well to specific prompt changes tailored to our responsible-AI needs, ensuring the output met our requirements.
Evaluations
After finalizing the prompts, we implemented an external review cycle with independent reviewers to ensure the system consistently meets quality and safety standards. The reviewers rated the summaries for a set of videos not used during development on a scale from 1 (bad) to 10 (perfect). The average scores for the manually labeled videos were: GPT-3.5Turbo at 6.9, Phi-3 at 7.5, and GPT-4 at 8.5. This shows that Phi-3, despite being a much smaller language model, achieved comparable scores to the GPT models in the summarization task.
Conclusions
This article presents a case study of embedding Phi-3, a new Small Language Model from Microsoft, as an Edge solution for video summarization using Azure AI Video Indexer. Video summarization is a powerful feature that enables users to quickly grasp the content of a video without watching it entirely. It can also help in searching and maintaining an archive, giving just the right level of detail. Summarizing a video requires combining various modalities, extracted by Azure AI Video Indexer such as transcript, OCR, audio effects, visual labels, detected objects, and more, and weighing them accordingly. The article discusses the data science aspects of creating a high-quality video summarization, such as what makes a good summary, how to section videos, and the challenges of maintaining high-quality summary, while addressing responsible AI consideration, such as avoiding harmful content, ensuring data privacy, and complying with regulations. The article showcases the advantages of using Phi-3 as an Edge solution for video summarization, such as its high quality, lightweight, state-of-the-art performance, and its ability to run on Edge devices, powered by ARC.
Note: Azure AI Video Indexer enabled by Arc is an Azure Arc extension enabled service that runs video analysis, audio analysis, and generative AI on Edge devices. The solution is designed to run on Azure Arc enabled Kubernetes and supports many video formats. To leverage the summarization capability on Edge, you must sign up using this form to approve your subscription-id.
Read More
About the feature
Video summarization on Edge: Public feature documentation
Video summarization on Cloud: Public feature documentation
Video Summarization (YouTube)
Prompt content: Video-to-text API
Transparency note
About Phi-3 model
Phi-3 Open Models – Small Language Models
About Azure AI Video Indexer
Use Azure AI Video Indexer website to access product website
Get started with Azure AI Video Indexer, Enabled by Arc by following this Arc Jumpstart scenario
Visit Azure AI Video Indexer Developer Portal to learn about our APIs
Search the Azure Video Indexer GitHub repository
Review our product documentation.
Get to know the recent features using Azure AI Video Indexer release notes
To report an issue with Azure AI Video Indexer, go to Azure portal Help + support. Create a new support request. Your request will be tracked within SLA.
For any other question, contact our support distribution list at visupport@microsoft.com
Microsoft Tech Community – Latest Blogs –Read More
Use WebGPU + ONNX Runtime Web + Transformer.js to build RAG applications by Phi-3-mini
Phi-3-mini is deployed in different edge devices, such as iPhone/Android, AIPC/Copilot+PC, as well as cloud and IoT, citing the cross-platform and flexibility of SLM. If you want to follow these deployment methods, you can follow the content of the Phi-3 Cookbook. In model reference, computing power is essential. Through the quantized model, SLM can be deployed and run on a GPU or a traditional CPU. In this topic, we will focus on the model reference of WebGPU.
What’s WebGPU?
“WebGPU is a JavaScript API provided by a web browser that enables webpage scripts to efficiently utilize a device’s graphics processing unit. This is achieved with the underlying Vulkan, Metal, or Direct3D 12 system APIs. On relevant devices, WebGPU is intended to supersede the older WebGL standard.” – Wikipedia
WebGPU allows developers to leverage the power of modern GPUs to implement web-based graphics and general computing applications on all platforms and devices, including desktops, mobile devices, and VR/AR headsets. WebGPU not only has rich prospects in front-end applications, but is also an important scenario in the field of machine learning. For example, the familiar tensorflow.js uses WebGPU to run machine learning/deep learning acceleration.
Required environment
Support Google Chrome 113+, Microsoft Edge 113+, Safari 18 (macOS 15), Firefox Nightly
Enable WebGPU
Perform the following operations in the Chrome / Microsoft Edge address bar
The chrome://flags/#enable-unsafe-webgpu flag must be enabled (not enable-webgpu-developer-features). Linux experimental support also requires launching the browser with –enable-features=Vulkan.
Safari 18 (macOS 15) is enabled by default
Firefox Nightly Enter about:config in the address bar and set dom.webgpu.enabled to true
Use js script to check whether WebGPU is supported
if (!navigator.gpu) {
throw new Error(“WebGPU not supported on this browser.”);
}
Why should Phi-3-mini run on WebGPU
We hope that the application scenarios are cross-platform, not just running on a single terminal. For example, the browser, as a cross-platform Internet access tool, can quickly expand our application scenarios. For Phi-3-mini, a quantized ONNX-enabled WebGPU model has been released, which can quickly build WebApp applications through NodeJS and ONNX Rutime Web. By combining WebGPU we can build Copilot applications very simply.
Learn about ONNX Runtime Web
ONNX Runtime Web enables you to run and deploy machine learning models in your web application using JavaScript APIs and libraries. This page outlines the general flow through the development process. You can also integrate machine learning into the server side of your web application with ONNX Runtime using other language libraries, depending on your application development environment.
Starting with ONNX Runtime 1.17, ONNX Runtime Web supports WebGPU acceleration, combining the quantized Phi-3-mini-4k-instruct-onnx-web model and Tranformer.js to build a Web-based Copilot application.
Transformer.js
Transformers.js is designed to be functionally equivalent to Hugging Face’s transformers python library, meaning you can run the same pretrained models using a very similar API. These models support common tasks in different modalities, such as:
Natural Language Processing: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation.
Computer Vision: image classification, object detection, and segmentation.
Audio: automatic speech recognition and audio classification.
Multimodal: zero-shot image classification.
Transformers.js uses ONNX Runtime to run models in the browser. The best part about it, is that you can easily convert your pretrained PyTorch, TensorFlow, or JAX models to ONNX using Optimum.
Transformers.js has supported numerous models across Natural Language Processing, Vision, Audio, Tabular and Multimodal domains.
Build Phi-3-mini-4k-instruct-onnx-web RAG WebApp application
RAG applications are the most popular scenarios for generative artificial intelligence. This example hopes to integrate Phi-3-mini-4k-instruct-onnx-web and jina-embeddings-v2-base-en vector models to build WebApp applications to build solutions in multiple terminals plan.
A. Create the Phi3SLM class
Using ONNX Runtime Web as the backend of Phi-3-mini-4k-instruct-onnx-web, I built phi3_slm.js with reference to llm.js. If you want to know the complete code, please visit https://github.com/microsoft/Phi-3CookBook/tree/main/code/08.RAG/rag_webgpu_chat. The following are some relevant points.
What is set here is the location of the model when Transformer.js calls the model, and whether access to the remote model is allowed.
constructor() {
env.localModelPath = ‘models’;
env.allowRemoteModels = 0; // disable remote models
env.allowLocalModels = 1; // enable local models
}
ONNX Runtime Web Setting
The standard ONNX Runtime Web library includes the following WebAssembly binary files:
SIMD: whether the Single Instruction, Multiple Data (SIMD) feature is supported.
Multi-threading: whether the WebAssembly multi-threading feature is supported.
JSEP: whether the JavaScript Execution Provider (JSEP) feature is enabled. This feature powers the WebGPU and WebNN execution providers.
Training: whether the training feature is enabled.
When using WebGPU or WebNN execution provider, the ort-wasm-simd-threaded.jsep.wasm file is used.
So add the following content to phi3_slm.js
ort.env.wasm.numThreads = 1;
ort.env.wasm.simd = true;
ort.env.wasm.wasmPaths = document.location.pathname.replace(‘index.html’, ”) + ‘dist/’;
And set it in webpack.config.js
plugins: [
// Copy .wasm files to dist folder
new CopyWebpackPlugin({
patterns: [
{
from: ‘node_modules/onnxruntime-web/dist/*.jsep.*’,
to: ‘dist/[name][ext]’
},
],
})
],
To use WebGPU we need to set it in the ORT session
like
const session = await ort.InferenceSession.create(modelPath, { …, executionProviders: [‘webgpu’] });
For other text generation, please refer to async generate(tokens, callback, options)
B. Create RAG class
Calling the jina-embeddings-v2-base-en model through Transformer.js is consistent with Python use, but there are a few things to note.
jina-embeddings-v2-base-en It is recommended to use the model of https://huggingface.co/Xenova/jina-embeddings-v2-base-en, which will have better performance after adjustment.
Because a vector database is not used, the vector similarity calculation method is used directly to complete the embeding work. This is also the most original method.
async getEmbeddings(query,kbContents) {
const question = query;
let sim_result = [];
for(const content of kbContents) {
const output = await this.extractor([question, content], { pooling: ‘mean’ });
const sim = cos_sim(output[0].data, output[1].data);
sim_result.push({ content, sim });
}
sim_result.sort((a, b) => b.sim – a.sim);
var answer = ”;
console.log(sim_result);
answer = sim_result[0].content;
return answer;
}
Please place jina-embeddings-v2-base-en in models and phi-3 mini in the directory of models
C. Running
This application implements the RAG function by uploading markdown documents. We can see that it has good performance and effects in content generation.
If you wish to run the example you can visit this link Sample Code
Resources
Learning Phi-3-mini-4k-instruct-onnx-web https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx-web
Learning ONNX Runtime Web https://onnxruntime.ai/docs/tutorials/web/
Learning WebGPU https://www.w3.org/TR/webgpu/
Reading Enjoy the Power of Phi-3 with ONNX Runtime on your device https://huggingface.co/blog/Emma-N/enjoy-the-power-of-phi-3-with-onnx-runtime
Official E2E samples https://github.com/microsoft/onnxruntime-inference-examples/tree/main/js/chat
Microsoft Tech Community – Latest Blogs –Read More
Can’t edit PWA Home Page – Edit Page Removed
I tried to edit the Home Page in PWA to update information for our group and after navigating to the Settings Gear I found that Edit Page is removed. Was this an error on an update pushed out to Project Online sites?
Thanks,
Paul Blake
I tried to edit the Home Page in PWA to update information for our group and after navigating to the Settings Gear I found that Edit Page is removed. Was this an error on an update pushed out to Project Online sites?Thanks,Paul Blake Read More
Booking – how to add Team members to calendar invite
Hello…
Need assistance –
Once a customer has booked an event/appointment with me, how do I add other team members to the calendar appointment that shows up on my calendar?
I appreciate your assistance and insight!
Jill
Hello…Need assistance – Once a customer has booked an event/appointment with me, how do I add other team members to the calendar appointment that shows up on my calendar? I appreciate your assistance and insight!Jill Read More
No Audio Output Device is installed, No audio device is installed !!!
I updated my windows to Windows 11 at the very beginning of its first release. That was completely fine; my laptop run smoothly. But a few months ago, after a Windows update, my audio stopped working. It raises an “No audio device is installed” error.
I updated my windows to Windows 11 at the very beginning of its first release. That was completely fine; my laptop run smoothly. But a few months ago, after a Windows update, my audio stopped working. It raises an “No audio device is installed” error. No audio device is installedNo output devices foundESAuDriver Device@TDM ModeHD Audio Driver for Display Audio Not FoundSelect the device driver you want to install for this hardware.Windows encountered a problem installing the drivers for your deviceError HD Audio Driver for Display Audio Read More
Resource redirection based on location
Is there any way to accomplish this today with Cloud PC? Admins can also control the user actions by enabling or disabling clipboard access, printers, client drive mapping and so on, based on the user’s network location.
Is there any way to accomplish this today with Cloud PC? Admins can also control the user actions by enabling or disabling clipboard access, printers, client drive mapping and so on, based on the user’s network location. Read More
How do I restore a QᴜɪᴄᴋBᴏᴏᴋs backup?
To restore a QᴜɪᴄᴋBᴏᴏᴋs backup, open QᴜɪᴄᴋBᴏᴏᴋs and go to the “File” menu. Select “Open or Restore Company” and choose “Restore a backup copy.” Click “Next,” then select “Local Backup” and click “Next” again. Browse to the location of your backup file (with a .qbb extension), select it, and click “Open.” Choose the location where you want to restore the file, click “Save,” and follow any on-screen instructions to complete the process.
To restore a QᴜɪᴄᴋBᴏᴏᴋs backup, open QᴜɪᴄᴋBᴏᴏᴋs and go to the “File” menu. Select “Open or Restore Company” and choose “Restore a backup copy.” Click “Next,” then select “Local Backup” and click “Next” again. Browse to the location of your backup file (with a .qbb extension), select it, and click “Open.” Choose the location where you want to restore the file, click “Save,” and follow any on-screen instructions to complete the process. Read More
Patch Fail KB5039227
Patch Fail KB5039227
Sfc is success with no error.
DISM /Online /Cleanup-Image /CheckHealth : Component Store corruption was detected.
DISM repair : Success
Server is having OS 2022.
Any help would be highly appreciated.
Patch Fail KB5039227 Sfc is success with no error.DISM /Online /Cleanup-Image /CheckHealth : Component Store corruption was detected.DISM repair : SuccessServer is having OS 2022. Any help would be highly appreciated. Read More
Power Automate latest SP List appended text update
Good evening,
I am having problems with the appended text field in SharePoint Lists and using this in PowerBI.
Is there a way using Power Automate to capture the latest appended text update and duplicate that to a separate list column for this to be used in PowerBI reporting and show the latest update?
Currently I have problems where the appended text update shows as blank in PowerBI when an alternative column is updated as it assumes the lack of an appended text update is the latest (i.e blank)
Hopeful this is a common issue and is solvable somehow, thank you 🙂
Good evening, I am having problems with the appended text field in SharePoint Lists and using this in PowerBI. Is there a way using Power Automate to capture the latest appended text update and duplicate that to a separate list column for this to be used in PowerBI reporting and show the latest update? Currently I have problems where the appended text update shows as blank in PowerBI when an alternative column is updated as it assumes the lack of an appended text update is the latest (i.e blank) Hopeful this is a common issue and is solvable somehow, thank you 🙂 Read More
How do I fix QᴜɪᴄᴋBᴏᴏᴋs Error 6175, 0?
QᴜɪᴄᴋBᴏᴏᴋs Error 6175, 0 occurs when QᴜɪᴄᴋBᴏᴏᴋs is unable to start the database service. To fix it, ensure the QᴜɪᴄᴋBᴏᴏᴋs Database Server Manager is running. Open QᴜɪᴄᴋBᴏᴏᴋs, go to “File,” then “Utilities,” and select “Host Multi-User Access.” If the error persists, check the server hosting the company file to ensure it’s not in sleep mode. Additionally, configure your firewall and antivirus settings to allow QᴜɪᴄᴋBᴏᴏᴋs processes, and restart both the server and workstation.
QᴜɪᴄᴋBᴏᴏᴋs Error 6175, 0 occurs when QᴜɪᴄᴋBᴏᴏᴋs is unable to start the database service. To fix it, ensure the QᴜɪᴄᴋBᴏᴏᴋs Database Server Manager is running. Open QᴜɪᴄᴋBᴏᴏᴋs, go to “File,” then “Utilities,” and select “Host Multi-User Access.” If the error persists, check the server hosting the company file to ensure it’s not in sleep mode. Additionally, configure your firewall and antivirus settings to allow QᴜɪᴄᴋBᴏᴏᴋs processes, and restart both the server and workstation. Read More
Draft with Copilot in Word on a selection of text, a list, or a table
If you are a document creator, you know how challenging it can be to produce high-quality, engaging, and informative content. You must research, write, edit, format, and proofread your work while keeping your audience and purpose in mind. Sometimes, you may need to generate new content from existing sources, update your content to suit different contexts, organize your content more clearly, or transform your content into different formats or languages.
To support these workflows, it’s often easier to reference or build upon existing content. That’s why we’ve added the ability to draft with Copilot based on a specific selection in your Word doc. This will help to:
Refine, rewrite, or paraphrase an existing selection
Elaborate on, expand upon, or explain your selected content in more detail
Enhance your content with statistics and additional information
Here are some use case examples of how Draft on Selection can help you as a document creator. All you need to do is select the content you want to work with and choose the “Make changes” option that appears.
Transform content into different formats or languages
Sometimes, you may need to transform your content into different formats or languages, such as converting a paragraph into a bullet list, or translating a paragraph into another language. For example, you may need to take a list and expand it to a paragraph with some added context or take notes and turn them into well-formed thoughts with references and statistics.
To do this, simply select the content you want to transform, and type in the prompt that describes the nature of the proposed change. Draft will use natural language generation to transform your content accordingly.
Update content to suit different contexts
Sometimes, you may need to update your content to suit different contexts, such as different audiences, regions, or purposes. For example, you may need to update your content to reflect a US audience’s needs, such as changing the currency, spelling, or cultural references. Instead of manually editing your content or using a generic find-and-replace tool, you can use Draft to update your content contextually.
To do this, simply select the content you want to update, and type “update this selection to reflect a US audience” in the prompt. Draft will use natural language understanding and cultural awareness to update your content accordingly. For example, if you select the text “The programme costs £50 per month and includes access to a variety of online resources. You can cancel anytime without any hassle. Please note that this offer is only valid for UK residents.”
Draft with Copilot may update it to: “The program costs $65 per month and includes access to a variety of online resources. You can cancel anytime without any hassle. Please note that this offer is only valid for US residents.”
Organize content more clearly
Sometimes, you may need to organize your content more clearly, such as adding headings, bullet points, or paragraphs. For example, you may need to take the text in a table and turn it into sections of content with section headers. Instead of rearranging and formatting your content manually, you can use Draft to organize your content automatically.
To do this, simply select the content you want to organize, and type “take the text in this selected table and turn it into sections of content with section headers” in the prompt. Draft will use natural language processing and document structure analysis to organize your content logically and coherently. The example below illustrates transforming a table outlining various figures of speech into clear paragraphs structures with headings.
Try Draft with Copilot on selected content today
Using Draft with Copilot on selected text, a list, or a table can help you with your content creation tasks. Whether you need to generate new content, update, organize, or transform your content, Draft with Copilot can do it for you with ease. You can use this to create high-quality, engaging, and informative content for any audience, purpose, or context, and save time, improve quality, enhance creativity, and increase productivity.
This is our first iteration on draft on selected text, a list, or a table and you can expect continued enhancements over the coming months that will include:
An affordance to more quickly and directly replace the original selected content with the new output
The ability to attach and reference additional content from other files in this workflow; as you can already do using Draft with Copilot in blank document or new line scenarios.
We are also exploring additional functionality to preserve the formatting of original selected content or direct Copilot to apply formatting to newly generated content and as other modalities and capabilities become available with Copilot these will become available for Draft with Copilot as well.
Microsoft Tech Community – Latest Blogs –Read More
Modify formula to return false values too
Q5:Q1494 contain yes and no values and are custom formatted where a 1 returns “yes” and 0 returns “no”
The following formula is inserted into cell Q1495 and filled down into many cells
=IF(ISERROR(VLOOKUP(G1528,$G$5:$Q$1494,11,0)),””,IF(VLOOKUP(G1528,$G$5:$Q$1494,11,0)=0,””,VLOOKUP(G1528,$G$5:$Q$1494,11,0)))
The problem with this formula only 1 “yes” is returned, 0 “no” are ignored
What needs changed in the formula so that 0 “no” are returned also?
Thank you if you can help!
Q5:Q1494 contain yes and no values and are custom formatted where a 1 returns “yes” and 0 returns “no” The following formula is inserted into cell Q1495 and filled down into many cells =IF(ISERROR(VLOOKUP(G1528,$G$5:$Q$1494,11,0)),””,IF(VLOOKUP(G1528,$G$5:$Q$1494,11,0)=0,””,VLOOKUP(G1528,$G$5:$Q$1494,11,0))) The problem with this formula only 1 “yes” is returned, 0 “no” are ignored What needs changed in the formula so that 0 “no” are returned also? Thank you if you can help! Read More
Issues with Quick__Books Clean Install Tool – Need Help!
Hi everyone,
I’m having trouble using the Quick__B00ks Clean Install Tool. I followed the steps, but my Quick__B00ks still isn’t working properly. Has anyone else experienced issues with this tool? Are there any additional steps or troubleshooting tips I should know about? Any advice or guidance would be greatly appreciated!
Thanks in advance!
Hi everyone,I’m having trouble using the Quick__B00ks Clean Install Tool. I followed the steps, but my Quick__B00ks still isn’t working properly. Has anyone else experienced issues with this tool? Are there any additional steps or troubleshooting tips I should know about? Any advice or guidance would be greatly appreciated!Thanks in advance! Read More
Accidently deleted email
I was composing an email and I was trying to add an attachment, I think I clicked onto the calendar invite and accidently pressed delete, the email has not gone to my deleted, archive or draft outbox, can I get it back as it was a really important email
I was composing an email and I was trying to add an attachment, I think I clicked onto the calendar invite and accidently pressed delete, the email has not gone to my deleted, archive or draft outbox, can I get it back as it was a really important email Read More
Nᴇᴇᴅ Hᴇʟᴘ﹖ Cᴏɴᴛᴀᴄᴛ Sᴜᴘᴘᴏʀᴛ ꜰᴏʀ Qᴜɪᴄᴋ_B000ᴋs Rᴇʙᴜɪʟᴅ Eʀʀᴏʀ 213
Update Internet Explorer: Ensure that you have the latest version of Internet Explorer installed.Add Intuit URL to Trusted Sites: Go to Internet Explorer settings, add *.intuit.com and *.Qᴜɪᴄᴋ_B000ᴋs .com to the Trusted Sites.Check Digital Signature: Verify that the digital signature is enabled for the Intuit content.Update Qᴜɪᴄᴋ_B000ᴋs : Ensure that your Qᴜɪᴄᴋ_B000ᴋs software is up-to-date.Disable Antivirus/Firewall: Temporarily disable any antivirus or firewall that might be blocking the update.
For further assistance, contact Qᴜɪᴄᴋ_B000ᴋs Expert Support for personalized help.
Update Internet Explorer: Ensure that you have the latest version of Internet Explorer installed.Add Intuit URL to Trusted Sites: Go to Internet Explorer settings, add *.intuit.com and *.Qᴜɪᴄᴋ_B000ᴋs .com to the Trusted Sites.Check Digital Signature: Verify that the digital signature is enabled for the Intuit content.Update Qᴜɪᴄᴋ_B000ᴋs : Ensure that your Qᴜɪᴄᴋ_B000ᴋs software is up-to-date.Disable Antivirus/Firewall: Temporarily disable any antivirus or firewall that might be blocking the update.For further assistance, contact Qᴜɪᴄᴋ_B000ᴋs Expert Support for personalized help. Read More
How to Create a Journal Entry in Quick–B00ks?
I’m new to Quick–B00ks and need help creating a journal entry. I have some transactions that don’t fit the standard forms, and I’ve been told a journal entry is the way to go. Can someone walk me through the steps to create one? Also, what are the best practices for ensuring my entries are accurate and don’t mess up my accounts? Any tips or common mistakes to avoid would be greatly appreciated!
I’m new to Quick–B00ks and need help creating a journal entry. I have some transactions that don’t fit the standard forms, and I’ve been told a journal entry is the way to go. Can someone walk me through the steps to create one? Also, what are the best practices for ensuring my entries are accurate and don’t mess up my accounts? Any tips or common mistakes to avoid would be greatly appreciated! Read More
Information protection: Auto labelling policy vs Information protection: Label Policy
Hello,
What is the difference between Information protection: Auto labelling policy vs Information protection: Label Policy
Hello,What is the difference between Information protection: Auto labelling policy vs Information protection: Label Policy Read More
Index Match only blank cells
I have a dataset which contains only partial data in one column (SAP below). I have the data in another table, but I only want to INDEX/MATCH the blank fields in the SAP column below. Any thoughts how to accomplish this?
LocationCodeSAPTexas101 Texas101
TX
New Mexico102 New Mexico102NM
I have a dataset which contains only partial data in one column (SAP below). I have the data in another table, but I only want to INDEX/MATCH the blank fields in the SAP column below. Any thoughts how to accomplish this? LocationCodeSAPTexas101 Texas101TXNew Mexico102 New Mexico102NM Read More
Patch & DISM fails at 97.2%
The below patch fails
DISM fails at 97.2%
Windows Version: Windows Server 2019 Standard x64 (10.0.17763.4737)
Tried DISM with NW and Restore.
Tried to install manually : Failed
CBS logs error:
2024-07-11 11:02:47, Info CSI 00005d75 [SR] Cannot repair member file [l:12]’services.lnk’ of Microsoft-Windows-ServicesSnapIn, version 10.0.17763.1, arch amd64, nonSxS, pkt {l:8 b:31bf3856ad364e35} in the store, hash mismatch 2024-07-11 11:02:47, Info CSI 00005d78 [SR] Cannot repair member file [l:12]’services.lnk’ of Microsoft-Windows-ServicesSnapIn, version 10.0.17763.1, arch amd64, nonSxS, pkt {l:8 b:31bf3856ad364e35} in the store, hash mismatch 2024-07-11 11:02:47, Info CSI 00005d79 [SR] This component was referenced by [l:132]’Microsoft-Windows-Server-Gui-Mgmt-Package-admin~31bf3856ad364e35~amd64~~10.0.17763.1.Microsoft-Windows-Server-Gui-Mgmt-Package-admin’ 2024-07-11 11:02:47, Info CSI 00005d7a [SR] This component was referenced by [l:116]’Microsoft-Windows-Server-Features-Package01613~31bf3856ad364e35~amd64~~10.0.17763.1.3a3f5d9a1e7ba91a7cc08e1f5d07f00a’ 2024-07-11 11:02:47, Info CSI 00005d7d [SR] Could not reproject corrupted file ??C:ProgramDataMicrosoftWindowsStart MenuProgramsAdministrative Tools\services.lnk; source file in store is also corrupted 2024-07-11 11:02:47, Info CSI 00005d7f [SR] Repair complete
Any help would be highly appreciated!
The below patch fails DISM fails at 97.2%Windows Version: Windows Server 2019 Standard x64 (10.0.17763.4737) Tried DISM with NW and Restore.Tried to install manually : Failed CBS logs error: 2024-07-11 11:02:47, Info CSI 00005d75 [SR] Cannot repair member file [l:12]’services.lnk’ of Microsoft-Windows-ServicesSnapIn, version 10.0.17763.1, arch amd64, nonSxS, pkt {l:8 b:31bf3856ad364e35} in the store, hash mismatch 2024-07-11 11:02:47, Info CSI 00005d78 [SR] Cannot repair member file [l:12]’services.lnk’ of Microsoft-Windows-ServicesSnapIn, version 10.0.17763.1, arch amd64, nonSxS, pkt {l:8 b:31bf3856ad364e35} in the store, hash mismatch 2024-07-11 11:02:47, Info CSI 00005d79 [SR] This component was referenced by [l:132]’Microsoft-Windows-Server-Gui-Mgmt-Package-admin~31bf3856ad364e35~amd64~~10.0.17763.1.Microsoft-Windows-Server-Gui-Mgmt-Package-admin’ 2024-07-11 11:02:47, Info CSI 00005d7a [SR] This component was referenced by [l:116]’Microsoft-Windows-Server-Features-Package01613~31bf3856ad364e35~amd64~~10.0.17763.1.3a3f5d9a1e7ba91a7cc08e1f5d07f00a’ 2024-07-11 11:02:47, Info CSI 00005d7d [SR] Could not reproject corrupted file ??C:ProgramDataMicrosoftWindowsStart MenuProgramsAdministrative Tools\services.lnk; source file in store is also corrupted 2024-07-11 11:02:47, Info CSI 00005d7f [SR] Repair complete Any help would be highly appreciated! Read More