Category Archives: Microsoft
How to Fix Bank Error 179 in QuickBooks After a New Update?
I keep encountering bank error 179 in QuickBooks, which prevents me from accessing my bank’s online services. How can I resolve this issue? What steps should I take to troubleshoot and fix it?
Gmail Sender Does Not Receive Out-of-Office Message
Hello,
I tested the Out of Office message in two different organizations. One of them uses O365 Exchange Online; the other uses on-prem Exchange 2019 at the latest patch level.
OOF messages work except when the sender comes from a Gmail domain.
The ExO org uses SPF, DMARC, and DKIM records; the on-prem org uses SPF and DMARC only.
Can anybody explain how I can solve this issue?
The Difference Between E32-16ads_v5 and E32ads_v5
Hello!
Can you please tell me the exact difference between E32-16ads_v5 and E32ads_v5?
As far as I can see in the documentation, the E32-16ads_v5 machines are based on the original E32ads_v5 size but have the vCPU count cut in half:
https://learn.microsoft.com/en-us/azure/virtual-machines/constrained-vcpu?tabs=family-E
What is the advantage of choosing a machine with half the processors instead of the full count? Is there any benefit to this option?
Looking at the prices, both machines are at the same base price level of €1,432.25.
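If it helps to compare the two sizes side by side, here is a minimal sketch using the Azure SDK for Python (an assumption on my part: it needs the azure-identity and azure-mgmt-compute packages, and "&lt;subscription-id&gt;" is a placeholder). The constrained-vCPU page you linked describes the design: fewer active vCPUs with the memory, storage, and I/O of the parent size, which matters for per-vCPU software licensing.

```python
# pip install azure-identity azure-mgmt-compute
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# "<subscription-id>" is a placeholder for your own subscription.
client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Compare the constrained size with its parent size in one region.
for size in client.virtual_machine_sizes.list(location="westeurope"):
    if size.name in ("Standard_E32ads_v5", "Standard_E32-16ads_v5"):
        print(f"{size.name}: {size.number_of_cores} vCPUs, {size.memory_in_mb} MB RAM")
```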
Not Able to Grant Access to a SharePoint List
Hi
I am trying to approve a user under ‘Access Requests’ as shown below, but for some reason it says ‘Request approval failed’.
Can someone help me troubleshoot it? I am added as a site owner with full control.
Copilot Word Suggestion
We are having an odd issue where a user has written some content, and when we review the changes, it shows that “Microsoft Word” was the author of a particular section of the content.
Would this suggest that Copilot made this addition?
How to combine multiple MP3 files into one on a Windows 10 PC?
I have dozens of MP3 files, each 3-5 minutes long. I need to combine multiple MP3 files into one so the music plays much longer without disruption. Does anyone know a good MP3 combiner that works for this purpose?
I am currently using a Windows 10 laptop and hope it will be easy to combine MP3 files on my computer.
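Until someone suggests a dedicated tool, here is a minimal scripted sketch using the pydub library (an assumption on my part: it needs Python with pydub installed and FFmpeg on the PATH; the file names below are placeholders):

```python
from pydub import AudioSegment  # pip install pydub; also requires FFmpeg on PATH

# Concatenate the clips in the order you want them played.
files = ["track1.mp3", "track2.mp3", "track3.mp3"]  # placeholder file names
combined = AudioSegment.empty()
for name in files:
    combined += AudioSegment.from_mp3(name)

# Export the result as a single MP3 file.
combined.export("combined.mp3", format="mp3")
```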
Azure DevOps Linux Agents: Azure PowerShell Task Taking a Long Time to Log In and Out
Dear All,
I have an Azure PowerShell script that copies files from a staging directory to an ADLS location. I’m using the Azure PowerShell task in a DevOps pipeline to do the work. When I run the pipeline, it takes 13 minutes overall to complete the task: the Az context process takes 11 minutes, while the actual copy is fast.
The agent pools are Linux Red Hat servers.
Login uses a service principal for authentication to Azure services.
Can anyone please help figure out what is causing the issue?
Regards,
Shekhar.
Windows 11 version 26100.712
Hi,
I upgraded to build 26100.712, but now Task Manager has this issue if I use Dark Mode (before the update there was no issue).
I have even booted into Safe Mode, assuming it was a display driver issue, with the same results. In Light Mode the graphs show correctly.
What’s the best color picker software for Windows 11?
I’m working on a project that requires accurately capturing the RGB values of colors from various applications and websites. I would ideally want a color picker that offers features like color history, the ability to save favorite colors, and various color formats (e.g., HEX, RGB). I appreciate advice based on personal experience, thank you in advance!
Measuring benefits of implementing Copilot
Hello, how could I measure the benefit Copilot brings to our organisation? We have a small pilot group that uses Copilot for Microsoft 365, but so far we have found it difficult to accurately measure the benefit it brings these people.
What are you doing for your organisation to quantify/justify the cost of Copilot vs the value it brings to employees?
Thanks!
What runs GPT-4o and Microsoft Copilot? | Largest AI supercomputer in the cloud | Mark Russinovich
Microsoft has built the world’s largest cloud-based AI supercomputer that is already exponentially bigger than it was just 6 months ago, paving the way for a future with agentic systems.
For example, its AI infrastructure is capable of training and inferencing the most sophisticated large language models at massive scale on Azure. In parallel, Microsoft is also developing some of the most compact small language models with Phi-3, capable of running offline on your mobile phone.
Watch Azure CTO and Microsoft Technical Fellow Mark Russinovich demonstrate this hands-on and go into the mechanics of how Microsoft is able to squeeze as much performance from its AI infrastructure as possible to run AI workloads of any size efficiently on a global scale.
This includes a look at how Microsoft designs its AI systems to take a modular, vendor-agnostic approach to running the latest GPU innovations from different chip vendors; its industry-leading work to develop a common GPU interoperability layer; and its work to develop its own state-of-the-art AI-optimized hardware and software architecture to run its own commercial services like Microsoft Copilot and more.
Portable and powerful for IoT and mobile devices: get top-tier AI capabilities with fewer parameters.
Scale large language models with optimized GPU designs that ensure faster data transfer and superior performance.
Fine-tune without the cost of dedicated infrastructure: see how one base LLM on the same server cluster can be customized and shared concurrently by hundreds of tenants using Multi-LoRA.
QUICK LINKS:
00:00 — AI Supercomputer
01:51 — Azure optimized for inference
02:41 — Small Language Models (SLMs)
03:31 — Phi-3 family of SLMs
05:03 — How to choose between SLM & LLM
06:04 — Large Language Models (LLMs)
07:47 — Our work with Maia
08:52 — Liquid cooled system for AI workloads
09:48 — Sustainability commitments
10:15 — Move between GPUs without rewriting code or building custom kernels
11:22 — Run the same underlying models and code on Maia silicon
12:30 — Swap LLMs or specialized models with others.
13:38 — Fine-tune an LLM
14:15 — Wrap up
Unfamiliar with Microsoft Mechanics?
Microsoft Mechanics is Microsoft’s official video series for IT. Watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast
Keep getting this insider knowledge, join us on social:
Follow us on Twitter: https://twitter.com/MSFTMechanics
Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
Video Transcript:
-Microsoft has built the world’s largest AI supercomputer that’s already exponentially bigger than it was just six months ago, capable of training and inferencing the most sophisticated large language models at scale on Azure, including things like Microsoft Copilot and ChatGPT. And based on training innovations from Microsoft Research, we’ve also built some of the world’s most compact small language models with Phi-3 that can run locally and offline even on a mobile phone. And today we’re joined by Microsoft Technical Fellow and Azure CTO, Mark Russinovich, who’s going to help us demonstrate and unpack what makes all of this possible. So welcome back to the show.
– It’s good to be back, thanks for having me.
- And thanks for joining us again. You know, since last time you were on, about a year ago in May, we went into the mechanics of our AI supercomputer built in 2020 for OpenAI to be able to train and run GPT-3 at the time. That system comprised 10,000 networked Nvidia V100 GPUs. And it’s not an exaggeration to say that a lot has changed since then.
- Yeah, actually that system pales in comparison to the one we built in November 2023 to train OpenAI’s next generation of large models. That one was independently ranked by TOP500 as the number three supercomputer in the world and the largest cloud-based supercomputer. We secured that place with 14,400 Nvidia H100 GPUs and 561 petaflops of compute, which at the time represented just a fraction of the ultimate scale of that supercomputer. Our AI system is now orders of magnitude bigger and changing every day and every hour. Today, just six months later, we’re deploying the equivalent of five of those supercomputers every single month. Our high-speed InfiniBand cabling that connects our GPUs would be long enough to wrap around the earth at least five times.
– And to me, that just kind of sounds like a cable management nightmare.
– Well, nothing like the cable management nightmare under my desk. The point here is that not only can we accelerate model training for OpenAI and our own services, but where this makes a huge difference is with inference to run these models as part of your apps. And inference is where we see the most growth in demand. In fact, we’ve optimized Azure for inference. We run our own commercial services like Microsoft Copilot, which is used by 60% of the Fortune 500, along with copilot experiences in Azure and GitHub, all at massive scale and high performance. And with our model as a service option in Azure, you can use our infrastructure to access and run the most sophisticated AI models such as GPT-3.5 Turbo, GPT-4, Meta’s Llama, Mistral, and many more.
– This makes a lot of sense because most organizations are probably going to be using existing models with their own apps versus building out and training their own large language models. So it’s really great to see the diversity of large language models that we have now. At the same time though, there’s this world of smaller small language models which some people see really as the future of generative AI. So how are we looking at that area?
- Well, this has been a focus of ours to try to get models to be as efficient as possible, and we have now achieved getting a small model to be equivalent in reasoning capability to ones five to 10 times its size. We recently announced the Phi-3 family of small language models, or SLMs, based on the work of Microsoft Research. Those have fewer parameters because they’re trained on filtered web content, high-quality data and synthetic data. Depending on the scenario, these SLMs have similar capabilities to those found in large language models and require less compute. They can use the ONNX Runtime for inference, which makes them portable, and they can even run on your device’s local NPU. And they’re a great option when you have limited to no connectivity, like with IoT devices or on a mobile device. In fact, I’ve got Phi-3 Mini running right here on this iPhone.
– Wow.
– I’ll start by putting it in airplane mode and I’ll make sure WiFi is also disabled, so it’s running offline and there’s no data being sent to the cloud. Now I’ll open my app and when I move into its settings, you can see that the model is Phi-3 Mini-4K and it’s a standard Hugging Face format. I can also see some of the other settings for prompt format and prediction options, I’ll close those out. Now I’ll paste my prompt to give me a chocolate chip cookie recipe with lots of sarcasm in the tone, which should be humorous. And now you can see that it’s starting out pretty good, chocolate chip concoction. It’s listing out the ingredients with jokes that only a professional greeting card writer would love. And this is impressive because it’s done reasoning to merge baking instructions with sarcasm and it’s also running pretty fast on this phone and those look like legitimate baking temperatures. Then there are proper instructions for mixing everything and baking. And at the end it seems to know that I’m actually not going to bake them. The real test would be trying it out but I’m not a baker in my spare time, I actually prefer to draw.
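For readers who want to try a similar experiment on a PC, here is a minimal sketch using the Hugging Face transformers library and the public Phi-3 Mini 4K instruct checkpoint (the phone demo in the video runs an ONNX build instead; this desktop variant is my own assumption):

```python
# pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # public Hugging Face checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code was required when the checkpoint first shipped.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Give me a chocolate chip cookie recipe with lots of sarcasm in the tone."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```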
– Yeah, I’ve seen that, I actually was a big fan of your Grogu sketch during the pandemic.
– Yeah, that was a really popular one. It kind of goes with the theme here, small packages having lots of power. And by the way, these SLMs still contain billions of parameters ranging from 3.8 billion for the Phi-3 Mini model to 14 billion for Phi-3 Medium, but they’re still significantly smaller than a large language model like Meta’s Llama 3, which is up to 70 billion parameters, and GPT-3, which has even more, with 175 billion.
– And those just keep getting a lot bigger. So how do you make sure that you make the right choice then between the different small language models and maybe using a larger language model?
– Well, like we just said, a small language model won’t have the same amount of inherent knowledge contained within it. For example, GPT-4 knows the detailed history of ancient Babylon, it knows chemistry, it knows philosophy. It’s been trained on significantly more information and can understand more nuanced language patterns to generate more contextually accurate responses. Small language models just simply can’t get to that kind of knowledge. And so the choice will be task-specific based on the level of sophistication and reasoning you need, the amount of knowledge you need the model to inherently have in it. For example, for general chat, you want it to know about all those things and it’ll also be resource and latency-specific. Like if you need to run it on a phone, it can’t be a very large language model.
- Right, so the SLMs then have a lot more specificity in terms of what they’re bringing to the table. They’ve got a different level of quality and efficiency as well. It’s going to be interesting, I think, to see their impact then on AI PCs with more scoped generative AI experiences. But why don’t we move back to large language models, because to use the sophisticated reasoning that they do provide, how do you even begin to use them efficiently and at scale?
– Well, this is where our experience in developing these systems over the last few years really pays off. A single server can cost several hundred thousand dollars like the price of a house basically. So we want to make sure that we aren’t wasting resources. As I mentioned, Microsoft runs inference at massive scale. There are aspects of inferencing that benefit more from high bandwidth memory versus pure compute power, and that helps with faster data transfer, better performance, and more efficient data access. And we’ve been working with our hardware partners to evolve their GPU design. For example, we partnered closely with AMD as they designed their MI300X GPU. That’s optimized for AI with 192 gigabytes of high bandwidth memory. And we were the first cloud provider to offer VMs with MI300X GPUs. But in parallel, we worked with Nvidia on their GPU design for high bandwidth memory. Their H200 chips will have 141 gigabytes based on our work with OpenAI. And their Blackwell architecture, which is coming after that, will increase that up to 384 gigabytes.
– And that’s really a lot because just to put that into perspective, given that we just saw high bandwidth memory being 80 gigabytes just a year ago, and at the time, that was more than respectable.
- Well, yeah, the speed of innovation we’re seeing in AI hardware is like nothing we’ve seen before, it’s a really unique moment in time. The newer NVIDIA Quantum InfiniBand switches can connect networked GPUs at 800 gigabits per second, so the port speeds have already doubled compared to when we talked last year. And to take advantage of the best cost performance, our systems support a modular approach to deploy whichever GPU demand calls for. We can already use AMD and NVIDIA GPUs on the same InfiniBand network.
– So we’ve heard you refer to this as the AI system, which refers to the specialized hardware and software stack behind our AI supercomputer. So beyond those individual hardware components, what are some of the things that we’re doing at the AI system level?
- Well, so there’s the stack we built with AMD and Nvidia, but then there’s our own silicon innovation. We’ve taken a step back to think about the ideal hardware and software architecture and what we’d build if we had no preexisting dependencies or constraints. And that’s where the work on Maia comes in. Maia represents our next generation hardware and software reference architecture designed for one purpose alone: to run large scale AI workloads like Microsoft Copilot more efficiently. Maia vertically integrates what we’ve learned across every layer of the stack: from the silicon with our Maia 100 AI accelerator, to the Maia kernel library and API that lets us squeeze as much performance as possible from the infrastructure while running AI workloads, to the custom backend network that is deeply integrated into the chip. Maia uses an ethernet-based network protocol, as opposed to InfiniBand, for high speed transfer to connect with other Maia accelerators on the network.
- So does this work also impact our data center design and physical components?
- It actually does, this is brand new technology we’re landing in our data centers. One of the areas of data center design that we’re evolving is cooling. For example, when you’re running GPU clusters at this level, they produce a tremendous amount of heat. Not only do you have to cool the data center environment itself to keep ambient temperatures as low as possible, but GPUs like NVIDIA’s H100 use air cooling, so you need a lot of fans to keep the GPUs operating within their target ranges. That also means more power consumption. So we’ve instead taken the approach to design the Maia system with liquid cooling for more efficient heat transfer. Maia’s our first liquid-cooled system for AI workloads. We’ve also built a dedicated liquid cooling tower as a sidekick to the Maia server. It matches the thermal profile of the Maia chip. This is a rack-level, closed-loop liquid cooling system for higher efficiency, and we expect to see liquid cooling incorporated into the GPU designs of our hardware partners in the near future.
– Right, but this does beg the question though, as we build these bigger and more powerful systems, how’s this going to impact our sustainability commitments?
– Well, as we design these, we’re still committed to meeting our goals including being carbon neutral by 2030. Our Maia architecture, for example, has been developed to meet our zero waste commitment and by design we’re optimizing for running Maia servers within our existing data center footprints.
– Right, just to be clear here, you know Maia is being used for Microsoft services initially but is it possible then to have maybe the software stack and resource manager that abstracts the silicon models for people to be able to pick the workload and kind of the compute they need without changing any code?
- Exactly, that’s exactly what we’re working on: making it so that code can run across different GPU architectures without you having to change your code each time. Let me break down how this works. At the top of the stack, you’ve got your models and applications you need to run. Under that are your AI frameworks like PyTorch or the ONNX Runtime. Those will often communicate directly with a GPU or accelerator kernel library and SDK. And this is where each manufacturer has their own. Nvidia has CUDA, AMD has ROCm, and we’re using the Maia API, and these interact with the GPUs directly. Now typically, you’ll need deep knowledge of the underlying GPU architecture to write custom kernels for your app to be portable. So to solve for this, we partnered with OpenAI to build a Python-based interoperability layer called Triton to work across Nvidia, AMD and Maia silicon. Triton will make it possible to move between different GPUs without the need to rewrite your code or to build custom kernels.
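To illustrate what writing against Triton looks like, here is a minimal vector-add kernel in the open-source Triton language (a generic sketch of the technique, not Microsoft’s production code); the same Python source is compiled for the target accelerator rather than hand-written per vendor:

```python
# pip install triton  (requires a supported GPU backend)
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

n = 1024
x = torch.rand(n, device="cuda")
y = torch.rand(n, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(n, 128),)   # one program per 128-element block
add_kernel[grid](x, y, out, n, BLOCK_SIZE=128)
assert torch.allclose(out, x + y)
```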
– So do we have a running example then where maybe we’ve built something for one set of GPUs where we want to bring it to another stack?
– Well, so as a proof of concept, we’ve taken the model underneath GitHub Copilot and ported it to the Maia accelerator. Let me show you that running. I’ve got my desktop set up with three windows, on the left is Visual Studio Code to interact with Copilot. On the top right is network traffic from our Maia machine. On the bottom right is our command line to look at the accelerator topology and you can see there are four devices running with inferencing. I will start on the code and write a comment to create a Python dictionary of six countries and their capitals. And just based on that code comment, GitHub Copilot goes ahead and writes the code using the model on Maia. You can see the network traffic spiked on the right as the orchestrator sent that traffic to the model and returned our code. Now I’ll clear that example and start a second one. This time I’ll say, write bubble sort in Python. You’ll see that the network spike lasts a bit longer because it wrote more code this time. So it’s possible to run the same underlying models and code on Maia silicon and there’s no noticeable trade off on speed and accuracy. And once we have Triton running, you’ll be able to just run your code on different GPUs without porting the model.
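For reference, the second prompt in the demo would return something like the following (my own sketch of a standard bubble sort, not the model’s verbatim output):

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(items)
    for i in range(n):
        for j in range(n - i - 1):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```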
– So is it possible then to also swap out the underlying large language models to more specialized ones in Azure?
– Yeah, that’s actually something we’ve had for a while in Azure AI services where you can deploy the models you want first and then switch between them. And once you have a few running, you can either select the model you want for your app in the Playground or the same works in code. It’s just a matter of changing the endpoint to the model you want to run.
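As a concrete illustration, here is a minimal sketch with the openai Python package against Azure OpenAI (the endpoint, key, API version, and deployment names are all placeholders); switching models really is just a matter of naming a different deployment:

```python
# pip install openai
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    api_key="<your-api-key>",                                    # placeholder
    api_version="2024-02-01",
)

# Swapping the underlying model is just naming a different deployment.
for deployment in ["my-gpt-35-turbo", "my-gpt-4"]:  # placeholder deployments
    response = client.chat.completions.create(
        model=deployment,
        messages=[{"role": "user", "content": "Summarize LoRA in one sentence."}],
    )
    print(deployment, "->", response.choices[0].message.content)
```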
– Great, so this means it’s really easy then, effectively, to spin up the model that you want. And something else that we’ve had in Azure AI Studio is model as a service for pay as you go inference and fine tuning models with your own data. So are we having to spin up then different models and different compute every time somebody fine tunes their model? That seems expensive.
- Yeah, well spinning up your own model instance and infrastructure would be too expensive. Last time I introduced the concept of Low-Rank Adaptation fine-tuning, or LoRA, where you can add new skills by fine-tuning a small set of parameters instead of retraining the entire model with a targeted dataset. So you’re only adding just 100 megabytes of data to a base model that’s several hundred gigabytes in size, for example.
– Right, we kind of compared it to Neo on “The Matrix” learning a new skill like kung fu.
- Right, so now imagine experts being able to teach Neo multiple new skills simultaneously. For fine-tuning an LLM, we can achieve this using a multi-serve model instance. With an approach called Multi-LoRA, where we can share one base LLM on the same server cluster, we can let different customers fine-tune the base model specific to their needs and have it be isolated and used only by them. We’re able to attach hundreds or thousands of fine-tuned models as adapters that run simultaneously and isolated from each other on the base model. This gives you a secure way to fine-tune an LLM with additional skills without having to spin up your own compute-intensive infrastructure, which is a massive cost savings.
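To make the 100-megabytes-versus-hundreds-of-gigabytes arithmetic concrete, here is a minimal LoRA setup using the open-source peft library (a generic sketch of the technique on a small placeholder base model, not Azure’s serving stack):

```python
# pip install transformers peft
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # small placeholder base model

# Train small low-rank adapter matrices instead of all base weights;
# "c_attn" is the attention projection module name in GPT-2.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], lora_dropout=0.05)
model = get_peft_model(base, config)

# Only a fraction of a percent of the parameters are trainable, so the
# per-tenant artifact is just the small adapter, not a copy of the base model.
model.print_trainable_parameters()
```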
– Right, so now we’ve covered all the major updates since the last time you were on the show, now if we were to look at one or two years from now, given how available AI is now becoming, what do you think the future looks like?
– Well, I think two things we’re seeing happen is agentic systems are going to evolve where you’ve got a high reasoning LLM kind of as the core brain talking to lots of other LLMs and SLMs that are task specific, including multimodal models each performing their own tasks as a part of a larger workflow. The other thing you’re going to see is Azure just continue to always offer the best, latest and greatest frontier models, as well as small models, as well as open models and closed models on infrastructure that is continuously improving in efficiency.
- Really great to hear the vision from the man himself, Mark Russinovich. Always great to have you on the show. Hopefully, next time you’re on, a year from now or so, we’ll have even more momentum to share with everyone watching. So until then, keep watching Microsoft Mechanics for all the latest AI updates. Thanks for watching and we’ll see you soon.
Who is the Product Manager in charge of the new features for Windows 11?
I have a very serious question. Why on earth did you decide that the rename feature when right-clicking a file has to be selected by clicking on “Show more options”?
It is by FAR one of the most commonly used options, and now I have to do a TWO-step process when it was ONE click before???
Did you do absolutely no UI/UX user research or feedback studies? Who were the people you tested on? Were ANY of them people who accidentally mistype things, like NORMAL HUMANS??
Please, for the love of god, try to emulate some of Apple’s design principles, where the experience is as easy as it can be, with the option to get more technical as needed.
AND PLEASE ADD BACK THE RENAME OPTION TO THE DEFAULT OPTION LIST WHEN RIGHT-CLICKING. I HIGHLY DOUBT (AND HOPE) IT’S NO MORE THAN LIKE A 15-LINE CODE CHANGE.
Locking down external sharing
I’ve inherited a SharePoint instance that is too open externally for my liking.
I plan to make the following changes:
1. Change SharePoint and OneDrive sharing from “New and existing guests” to “Only people in your organisation”, and
2. Enable “Limit external sharing by domain”.
I’m trying to find out: if I make these changes, will anything already shared become unavailable, or will existing shares remain, with the changes applying only from that point onward?
Azure SQL Auditing
I am trying to fill some gaps in my understanding of auditing for Azure SQL.
1) Why enable server auditing? I see no value proposition (Azure SQL).
2) When I enable database auditing, does it automatically start auditing everything?
3) If 2 is a yes, can I create an audit specification to audit only select tables before enabling it, to avoid log ingestion I don’t require?
4) Does enabling audit logging for a database automatically start filling logs, or does it merely create the connection to the storage?
Thanks for any answers
Peter
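On question 3, here is a hedged sketch of scoping a database auditing policy to specific tables with the Azure SDK for Python (an assumption on my part: it uses the azure-identity and azure-mgmt-sql packages, and every resource name below is a placeholder):

```python
# pip install azure-identity azure-mgmt-sql
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import DatabaseBlobAuditingPolicy

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Scope the audit to specific statements on specific tables instead of the
# default BATCH_COMPLETED_GROUP, which captures everything.
policy = DatabaseBlobAuditingPolicy(
    state="Enabled",
    storage_endpoint="https://<storage-account>.blob.core.windows.net/",  # placeholder
    storage_account_access_key="<access-key>",                            # placeholder
    audit_actions_and_groups=[
        "SELECT ON dbo.Orders BY public",   # placeholder table names
        "UPDATE ON dbo.Orders BY public",
    ],
)

client.database_blob_auditing_policies.create_or_update(
    "<resource-group>", "<server-name>", "<database-name>", policy
)
```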
Pivot table date formatting not working
I want my pivot table to show the date format that exists in my source data (mm/dd/yyyy). The pivot table automatically groups the date into Year and Quarter, and after deleting these from the sidebar field list, the Date field shows only the month (mm) and refuses to allow the desired formatting. Does anyone know if this known MS Excel pivot table problem was ever solved?
Problem with 2FA Authenticator
Hello, I am the Microsoft 365 administrator. I have encountered a problem: I changed my phone and forgot to enable the 2FA feature in the Authenticator on the new phone. Now, I can’t access my admin page. Is there a solution to this? Please help me.
I cannot receive external emails after migrating from Exchange Server 2013 to 2019.
Hello everyone,
I just followed the step-by-step guide from the following link: Migrating from Exchange 2013 to Exchange 2019 – A Step-by-Step Guide to migrate my Exchange 2013 server to the 2019 version. I completed all the steps mentioned, including creating the connectors, databases, etc. Currently, I have both servers coexisting.
I used a test mailbox, which I migrated to one of the databases on the new 2019 server. The problem I’m encountering is that when sending an email from this mailbox to outside the organization, the emails are received. However, when I reply to that email from outside, the replies are not received in the Exchange 2019 mailbox.
I used the Microsoft Remote Connectivity Analyzer tool, and it didn’t show any errors. When I check the queue on the 2013 server, I can see that all the emails sent from outside are in it. The following error is indicated:
Identity: UHPEX2013Unreachable309869005505088
Subject: RE: Test
Internet Message ID: <email address removed for privacy reasons>
From Address: email address removed for privacy reasons
Status: Ready
Size (KB): 413
Message Source Name: SMTP:Default UHPEX2013
Source IP: 200.123.132.100
SCL: 0
Date Received: 23/05/2024 02:26:42 PM
Expiration Time: 25/05/2024 02:26:42 PM
Last Error: There is currently no route to the mailbox database.
Queue ID: UHPEX2013Unreachable
Recipients: email address removed for privacy reasons;2;3;There is currently no route to the mailbox database.;2;ExternosUHP2EX2019;0.
Could you please help me resolve the issue without causing impact or disruption to my productive 2013 server?
New skilling snack: Windows security for developers
Developers! Get this week’s collection of valuable tricks and tools to help you safeguard your code and protect your users from the worst threats the internet has to offer. In under an hour, read through the many resources on VBS, passkeys, Zero Trust, MSIX Packaging Tool, and more! There’s also a quick link to bookmark any of your favorite on demand recordings from Microsoft Build. Stay three steps ahead in the race against malware at Skilling snack: Windows security for developers.
Unfamiliar with Windows skilling snacks?
There is a sea of technical information out there, so here’s a new way to dive in without getting overwhelmed!
Every two weeks, there will be a handpicked snack-size selection of essential articles, demos, deep dives, and learning modules on a specific topic—all of which can be consumed in less than two hours. Skill up over a long lunch break, the weekend, or a slow day at the office! Break the routine with hands-off video or use your browser to turn an article into a read-aloud experience while you get up and stretch, take a walk, or multitask.
To catch up and view the whole menu, see Windows skilling snacks: bite-sized learning for IT pros.
Bon appétit!
How Do I Fix QuickBooks Error 12157 When a Payroll Update Fails?
QuickBooks is showing Error 12157 when trying to update payroll. Each attempt ends in failure with an error message. Has anyone else experienced this problem? What can I do to fix it? Any advice would be greatly appreciated!
What to Do When the QuickBooks Connection Has Been Lost on Windows 10
I’m having trouble with a “QuickBooks connection has been lost” error on Windows 10; I encounter errors, or the process doesn’t seem to work. How can I resolve this issue and regain access to my QuickBooks account?