Month: April 2024
Project Task Type
Hi,
Is there a way to set “task type” to Fixed Work in the new Project online version?
I need the “duration” to adjust automatically when I change the number of employees working on a task.
How to create a lists based modern site that I can reproduce across tenants?
Good day,
I wish to have a SharePoint site with many lists, views on those lists, and (modern) SharePoint pages using those lists, all depending on each other.
I need to be able to deploy/copy the complete site (all information, lists, pages, etc.) to a new SharePoint site. I would even like to copy this to a different tenant.
How could I get started with this? Does a Microsoft solution exist to copy this without breaking links between the lists? (My research says no.)
Should I design all of this as a SharePoint app? (Can apps do this? Could apps handle future schema updates on the lists?)
What technology/solutions should I be looking at to achieve my goal?
KQL leftanti join query
I need to verify whether my devices have the security tools installed. One approach I am considering is running a KQL query on BehaviorAnalytics logs to extract the list of users who signed in during the last 24 hours and compare it with the user list in the CommonSecurityLog table.
In the comparison output I need to list the usernames that are not found in CommonSecurityLog. Those are the users who do not have the tool installed on their systems.
From my understanding a leftanti join is helpful here, but I am stuck on it.
In the query below, I want the comparison to be done between UserName from the BehaviorAnalytics table and UserName_CS from the CommonSecurityLog table, returning only the non-matching entries from BehaviorAnalytics.
Looking for suggestions on how to proceed further.
BehaviorAnalytics
| where TimeGenerated >= ago(1d)
| where DevicesInsights !has "zscaler" and ActionType == "Sign-in"
| summarize count() by UserName
| join kind=leftanti (
    CommonSecurityLog
    | where TimeGenerated >= ago(1d)
    | summarize count() by UserName_CS
) on $left.UserName == $right.UserName_CS
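A leftanti join returns the left-side rows that have no match on the right. As a quick sanity check of that logic, here is a plain-Python analogue (illustrative usernames only, not real log data):

```python
# Left side: users seen signing in (BehaviorAnalytics analogue).
signed_in = {"alice", "bob", "carol"}

# Right side: users present in the security tool's log (CommonSecurityLog analogue).
reporting = {"alice", "carol"}

# leftanti join == set difference: left rows with no match on the right.
missing_tool = signed_in - reporting
print(sorted(missing_tool))  # prints ['bob'] -- the user likely missing the tool
```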
How to Fix QuickBooks Payroll Error PS077 on Windows 10 after an update
I’m experiencing QuickBooks Payroll Error PS077 on Windows 10. How can I troubleshoot and fix this issue?
Sensitivity labels integration not working after latest update on Android & iOS
Hi!
Since the two latest updates for Outlook on Android and iOS (versions 4.2413.1 and 4.2414.0 respectively), integration with Purview sensitivity labels has stopped working. There is no icon/button to add a sensitivity label when you create a new email, nor is the user prompted to apply a label (for policies that require it). Devices are enrolled and managed in Intune.
Is this a bug or did we miss something?
How to Fix QuickBooks Payroll Error PS077 on Windows 11 after an update
I’m encountering QuickBooks Payroll Error PS077 on Windows 11. How can I resolve this issue quickly?
Removing Licenses from Entra ID Accounts When a Replacement License Exists
License management is a core competence for Microsoft 365 tenant administrators. This article explains how to use PowerShell to remove licenses from accounts when an equivalent service plan is available from another license. It’s the kind of fix-up operation that tenant administrators need to do on an ongoing basis.
https://office365itpros.com/2024/04/19/license-management-switch/
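The core idea behind such a fix-up, removing a license whose service plans are all delivered by another assigned license, can be sketched in a few lines. This is a plain-Python illustration with hypothetical SKU and service-plan names, not the article's actual PowerShell:

```python
# Hypothetical license assignments: SKU -> set of service plans it delivers.
licenses = {
    "OFFICE_E1": {"EXCHANGE", "SHAREPOINT"},
    "OFFICE_E3": {"EXCHANGE", "SHAREPOINT", "TEAMS", "INTUNE"},
}

def redundant_skus(licenses):
    """Return SKUs whose every service plan is also provided by another assigned SKU."""
    redundant = []
    for sku, plans in licenses.items():
        # Union of the service plans delivered by all *other* licenses.
        covered = set().union(*(p for s, p in licenses.items() if s != sku))
        if plans <= covered:
            redundant.append(sku)
    return sorted(redundant)

print(redundant_skus(licenses))  # prints ['OFFICE_E1'] -- fully covered by OFFICE_E3
```

In a real tenant the `licenses` mapping would come from each account's license assignments, and the removal itself would be done with the Graph-based PowerShell cmdlets the article describes.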
Microsoft To Do “Build Removed”
I went to open To Do and it said the build expired. I went into test flight and it says Build Removed. How do I get it back again?
Jon
NorthWind
I connected to the Microsoft site and downloaded the scripts to create the pubs and Northwind databases for SQL Server 2017 (instpubs.sql and instnwnd.sql).
pubs was created, but Northwind was not.
instpubs.sql contains a CREATE DATABASE statement that creates the database; instnwnd.sql does not.
Can anyone help me?
Totino1956
SharePoint Online records management
In light of the fast-evolving Compliance Center and the slowly but surely devolving Workflow 2013, I wondered what will become of the in-place records management features in SharePoint Online. I know lots of RMS practitioners have invested heavily in retention policies utilizing both content types and lists/folders, and have employed workflows (2013 and Nintex, for example). I’d like to know if anyone has insight into where all that is headed.
Adam Salah
Collaboration Architect
MCT, MCSE/MCSD (Charter Member):
SharePoint 201x, App Builder, Web Applications, SQL Data and Analytics platforms
Is GlucoAlert Reviews Easy to Use for Tracking Blood Sugar?
Check If GlucoAlert Is Currently Available On The Official Website!
100% Satisfaction Guaranteed
It’s great to hear that Sugar Defender comes with a 100% money-back guarantee, offering customers peace of mind and confidence in their purchase. Here are some key points about the guarantee:
Duration: The guarantee extends for 60 full days from the original purchase date, giving customers a generous window of time to evaluate the product and its effects.
Easy Process: Customers can request a refund by contacting Sugar Defender’s toll-free number or sending an email. The straightforward process ensures that customers can initiate the refund procedure without hassle.
Full Refund: Sugar Defender promises a full refund within 48 hours of the product being returned, including both unopened and empty bottles. This demonstrates the company’s commitment to customer satisfaction and confidence in the product’s effectiveness.
Announcing the Top Three Teams of the 2024 Imagine Cup!
Today marks a pivotal moment in the 2024 Imagine Cup as we reveal the top three teams selected to progress from the semifinals to the highly anticipated Imagine Cup World Championship, live at Microsoft Build!
The Imagine Cup, the premier student technology startup competition, has attracted thousands of visionary student entrepreneurs worldwide. Each team has developed an AI-driven solution to tackle pressing challenges including accessibility, sustainability, productivity, and healthcare.
This year’s semifinalists have demonstrated exceptional innovation with Azure AI services and OpenAI, showcasing their creativity, grit, and ability to make a positive impact through entrepreneurship. Congratulations to all the semifinalists for their remarkable achievements!
However, only three teams have been selected to progress to the World Championship, where they will be live on the global stage as they vie for the Imagine Cup Trophy, USD 100,000, and a mentorship session with Microsoft Chairman and CEO Satya Nadella! You can watch these startups live at Microsoft Build on May 21 to see who wins.
Drumroll, please, as we unveil FROM YOUR EYES, JRE, and PlanRoadmap! These startups represent the pinnacle of creativity and resilience, embodying the spirit of innovation that defines Imagine Cup.
Meet the Teams! Listed in alphabetical order.
FROM YOUR EYES | Turkey
About: Using Azure Computer Vision and Text Translator, FROM YOUR EYES has built a mobile application that offers both fast and qualified visual explanations to visually impaired users.
In their own words…
Who/what inspires you? “After being selected as one of Microsoft’s leading women in technology in 2020, I was invited to join the experience team of Microsoft’s Seeing AI program. It was there that I took on responsibilities and crossed paths with visually impaired developers worldwide who held significant roles. They encouraged me to delve into coding. In addition, Onur Koç, Microsoft Turkey’s CTO, also greatly inspired us, he addressed all student ambassadors saying, ‘Software is magic. You can change the life of someone you’ve never met on the other side of the world.’ We were deeply moved by this, and with this motivation, we worked to reach people…with our developed technology, and we succeeded.”
How do you want to make an impact with AI? “The issue of blindness directly affects 330 million people worldwide and indirectly impacts over a billion individuals. For a visually impaired person, using image processing solutions means freedom. With the technology we have developed, our goal is to enable visually impaired individuals to live freely, remove barriers to their dreams, and solve the problem of blindness through technology. This competition will provide us with the opportunity to promote our technology to millions of visually impaired individuals worldwide. They do not have time to waste. We also want to quickly deliver our technology to those in need.”
JRE | United Kingdom
About: Using Azure Machine Learning, Microsoft Fabric, and Copilot, JRE has built a slag detection system used in the continuous casting process of steel. Accurately detecting slag optimizes yield while improving quality.
In their own words…
Who/what inspires you? Jorge: “I learned how to code out of necessity. Even though I took courses as an undergrad, I never really liked the type of projects we did because they were primarily simulations about atomic interactions and molecular optimizations. I found these problems beautiful but very abstract. After college, many people wanted to create businesses around apps, and I learned how to code front and back-end applications to sell these apps. Later on, when I started working in the steel industry, I was frustrated by the lack of automation and unsafe and repetitive processes, so I started creating more complex integrated systems in this space.”
How do you want to make an impact with AI? “Our aim is to redefine manufacturing for the 21st century—making it smarter, more efficient, and sustainable. The Imagine Cup represents a unique opportunity to showcase our solution to a global audience, garnering support and resources necessary to scale our impact. We’re driven by the challenge of solving real-world problems and believe that through this competition, we can take a significant step towards achieving our vision.”
PlanRoadmap | United States
About: Using Azure OpenAI Service, PlanRoadmap has built an AI-powered productivity coach to help people with ADHD who are struggling with task paralysis get their tasks done. Their coach asks questions to identify the user’s obstacles, suggests strategies, and teaches the user about their work style.
In their own words…
Who/what inspires you? Aaliya: “One of my biggest inspirations has been my father… as he helped guide my direction within computer science. He has always been an advocate for women in STEM, and at a young age, that was incredibly powerful to be supported on. It enabled me to overcome feelings of imposter syndrome and have confidence in myself. He has always painted a vision of who I could be before I really believed in myself, and he inspires me to be dedicated, passionate, and ambitious.”
Clay: “At a young age, I was diagnosed with dysgraphia, a condition that impairs writing ability and fine motor skills. Even if not explicitly stated, when everything in school is handwriting, you are at a pretty severe disadvantage when you struggle to even write a few sentences.”
Ever: “Some of my biggest inspiration in pursuing computer science and engineering has been from cinema. I didn’t really have many people in my life who were in the tech field growing up, so I got a lot of inspiration from seeing tech in movies. In cinema you can see tech exactly as the artist imagined it, without the restrictions of the real world.”
How do you want to make an impact with AI? Clay: “As I became increasingly proficient in programming, I realized that not only did I want to do something big, but that I had the potential to make it happen. We are unified under the mission to help people with ADHD achieve their dreams. Our customer discovery efforts have revealed that despite significant increases in technological tools, there are still millions of people facing barriers caused by their ADHD symptoms and related executive function deficits. We want to change that. The mentorship from Microsoft will help us with the technical innovation and provide that frictionless experience to provide a novel approach towards supporting neurodivergent people.”
Up Next…
These top three teams will be live on the global stage at Microsoft Build on May 21 for the Imagine Cup World Championship, showcasing the depth and promise of their startups. Follow the journey on Instagram and X to stay up to date with all the competition action – and join us live to find out who is crowned champion!
Microsoft Tech Community – Latest Blogs – Read More
Build Great AI Apps using Azure SQL DB Hyperscale | Data Exposed
Discover the power of Azure SQL Database Hyperscale for creating AI-ready applications of any scale. In this episode we’ll showcase the seamless auto-scaling capabilities of Azure SQL Database Hyperscale Serverless, ensuring peak performance across all system sizes. Explore the dynamic storage scaling that accommodates apps ranging from compact to expansive. Additionally, get acquainted with the innovative Copilot feature in Azure Data Studio, designed to enhance your daily TSQL tasks and beyond, enabling the development of secure, scalable, and intelligent applications.
Resources:
General availability: serverless for Hyperscale in Azure SQL Database – Microsoft Community Hub
View/share our latest episodes on Microsoft Learn and YouTube!
Apply critical update for Azure Stack HCI VMs to maintain Azure verification
Azure verification for VMs on Azure Stack HCI makes it possible for Azure-exclusive benefits to work outside of the cloud, in on-premises and edge environments. These benefits include Azure Virtual Desktop for Azure Stack HCI, Windows Server Datacenter: Azure Edition, Extended Security Updates (ESUs) for SQL and Windows Server Virtual Machines (VMs) on Azure Stack HCI, and Azure Policy guest configuration. To keep these workloads functioning, periodic updates are required to maintain their security and functionality.
Update Azure Stack HCI VMs
Recently, Microsoft rolled out critical updates for Azure verification for VMs on Azure Stack HCI, and we strongly recommend that current users apply these latest cumulative updates for continued, seamless functionality of Azure benefits on Azure Stack HCI VMs. The latest VM updates are as follows:
Azure Virtual Desktop for Azure Stack HCI
Windows 11 multi-session, 4B update (22H2+: KB5036893; 21H2: KB5036894)
Windows 10 multi-session, 4B update (KB5036892)
Windows Server Datacenter: Azure Edition
Windows Server 2022, 4B update (KB5036909)
Extended Security Updates (ESUs) on Azure Stack HCI
Windows Server 2008, 2024.2B SSU (KB5034867)
Windows Server 2008 R2, 2024.2B SSU (KB5034866)
Windows Server 2012, 2024.4B SSU (KB5037022)
Windows Server 2012 R2, 2024.4B SSU (KB5037021)
For full details, see Latest Servicing Stack Updates
Azure Stack HCI VMs can be updated using a variety of methods including Windows Update, Azure Update Manager, Windows Server Update Services, or System Center Virtual Machine Manager. Please note that not all OSes are supported with Azure Update Manager, see Azure Update Manager support matrix.
We recommend that these VM updates be completed by June 17, 2024 to avoid possible impact to your current workloads and the licensing of these Azure benefits. If you do not update your Azure Stack HCI VMs to the latest versions noted above, you may see issues with Azure verification and licensing of Azure Stack HCI VMs. Please update the VMs by that date and reach out to Microsoft Support if there are further questions or issues.
This update is not critical if you are not using the Azure benefits enabled by Azure verification of Azure Stack HCI VMs.
Update Azure Stack HCI Azure Marketplace VM Images
If you’re using Azure Marketplace VM images, we also recommend updating pre-existing Azure Marketplace VM images on your Azure Stack HCI cluster to the latest versions. The updated VM images are available now on Azure Marketplace and for update in the Azure Portal.
More Information
For full details on updates, please see Azure verification for VMs – Azure Stack HCI.
For more information regarding why this update is recommended, please review this Tech Community blog for more details on updates to Azure verification.
Three Reasons Why You Should Not Use iPerf3 on Windows
James Kehr here with the Microsoft Commercial Support – Windows Networking team. This article will explain why you should not use iPerf3 on Windows for synthetic network benchmarking and testing, followed by a brief explanation of why you should use ntttcp and ctsTraffic instead.
Reason 1 – It is Not Supported
iPerf3 is owned and maintained by an organization called ESnet (Energy Sciences Network). They do not officially support nor recommend that iPerf3 be used on Windows. Their recommendation is to use iPerf2. More on the Microsoft recommendation later.
Here are some direct quotes from the official ESnet iPerf3 FAQ, retrieved on 18 April 2024.
I’m trying to use iperf3 on Windows, but having trouble. What should I do?
iperf3 is not officially supported on Windows, but iperf2 is. We recommend you use iperf2.
And from the ESnet “Obtaining iPerf3” article, retrieved on 18 April 2024.
Primary development for iperf3 takes place on CentOS 7 Linux, FreeBSD 11, and macOS 10.12. At this time, these are the only officially supported platforms…
Microsoft does not recommend using iPerf3 for a different reason.
Reason 2 – iPerf3 is Emulated on Windows
iPerf3 does not make Windows native API calls. It only knows how to make Linux/POSIX calls.
The iPerf3 community uses Cygwin as an emulation layer to get iPerf3 working on Windows. You can read more about Cygwin in their FAQ.
The iPerf3 calls are sent to Cygwin, which translates them to Windows API calls. Only then does the Windows network stack come into play. The iPerf3 on Windows maintainers do an excellent job of making it all work together, but, ultimately, there are potential issues with this approach.
Not all the iPerf3 features will work on Windows. The basic options work well, but advanced capabilities needed for certain network testing may not be available on Windows or may behave in unexpected ways.
Emulation tends to have a performance penalty. The emulation overhead on a latency sensitive operation, such as network testing, can result in lower than expected throughput.
Finally, iPerf3 uses uncommon Windows Socket (winsock) options versus native Windows applications. For generic throughput testing this is fine. For application testing the uncommon socket options will not mimic real-world Windows-native application behavior.
Reason 3 – You Are Probably Using an Old Version of iPerf3
Go search for “iPerf3 on Windows” on the web. Go ahead, open a tab, and use your search engine of choice. Which I am certain is Bing with Copilot.
What is the top result, and thus the most likely link you will click on? I bet the site was iperf.fr.
The newest version of iPerf3 on iperf.fr is 3.1.3 from 8 June 2016. That was nearly 8 years ago at the time of writing.
The current version of iPerf3, directly from ESnet, is 3.16. A full 15 versions of missing bug fixes, features, and changes from the version people are most likely to download.
This specific copy of iPerf3, from iperf.fr, includes a version of cygwin1.dll that contains a bug which limits the socket buffer to 1MB. This will cause poor performance on high speed-high latency and high bandwidth networks because iPerf3 will not be capable of putting enough data in-flight to saturate the link, resulting in inaccurate testing.
Where should you look for iPerf3 on Windows?
From ESnet’s article, “Obtaining iPerf3” they say:
Windows: iperf3 binaries for Windows (built with Cygwin) can be found in a variety of locations, including https://files.budman.pw/ (discussion thread).
What Does Microsoft Recommend
Microsoft maintains two synthetic network benchmarking tools: ntttcp (Windows NT Test TCP) and ctsTraffic. The newest version of ntttcp is maintained on GitHub. This is a Windows native tool which utilizes Windows networking in the same way a native Windows application does.
But what about Linux?
There is a Linux version! Details can be found on the ntttcp for Linux GitHub repo. This is a separate codebase built for Linux that is compatible with ntttcp for Windows, but it is not identical to the Windows counterpart.
Ntttcp allows you to perform API native synthetic network tests between Windows and Windows, Linux and Linux, and between Windows and Linux.
ctsTraffic is Windows-to-Windows only. Where ntttcp is more iPerf3-like, ctsTraffic has a different set of options and goals. ctsTraffic focuses on end-to-end goodput scenarios, where ntttcp and iPerf3 focus more on isolating network stack throughput.
How do you use ntttcp?
The Azure team has written a great article about basic ntttcp functionality for Windows and Linux. I do not believe in reinventing the wheel, so I will simply link you to the article.
There is a known interoperability limitation when testing between Windows and Linux. Details can be found in this ntttcp for Linux wiki article on GitHub.
Testing
I built a lab while preparing this article using two Windows Server 2022 VMs. The tests used the newest versions of iPerf3 (3.16), ntttcp (5.39), and ctsTraffic (2.0.3.3).
The default iPerf3 parameters are the most common configuration I see among Microsoft support customers. So, I am tuning ntttcp and ctsTraffic to better match iPerf3’s default single connection, 128KB buffer length behavior. While this is not a perfect comparison, this does make it a better comparison.
Single stream tests are used for targeted analyses since many applications do not perform multi-threaded transfers. Bandwidth and maximum throughput testing should be multi-threaded with large buffers, but that is a topic for a different day.
Don’t forget to allow the network traffic on the Windows Defender Firewall if you wish to run your own tests.
iPerf3
iPerf3 server command:
iperf3 -s
iPerf3 client command:
iperf3 -c <IP> -t 60
The average across multiple tests was about 7.5 Gbps. The top result was 8.5 Gbps, with a low of 5.26 Gbps.
ntttcp
Ntttcp server command:
ntttcp -r -m 1,*,<IP> -t 60
Ntttcp client command:
ntttcp -s -m 1,*,<IP> -l 128K -t 60
Ntttcp averaged about 12.75 Gbps across multiple tests. The top test averaged 13.5 Gbps, with a low test of 12.5 Gbps.
Ntttcp does something called pre-posting receives, which is unique to this tool. This reduces application wait time as part of network stack isolation, allowing for quicker than normal application responses to socket messages.
-r is receiver, and -s is sender.
-m is a mapping of values that are: <num threads>, <CPU affinity>, <Target IP>. In this test we use a single thread, no CPU affinity (*), and both the -r and -s sides use the target IP address as the final value.
-t is test time, in seconds.
-l sets the buffer length. You can use K|M|G with ntttcp as shorthand for kilo-, mega-, and giga-bytes.
ctsTraffic
These commands are run in PowerShell to make reading values easier.
ctsTraffic server command:
.\ctstraffic.exe -listen:* -Buffer:"$(128KB)" -Transfer:"$(1TB)" -ServerExitLimit:1 -consoleverbosity:1 -TimeLimit:60000
ctsTraffic client command:
.\ctstraffic.exe -target:<IP> -Connections:1 -Buffer:"$(128KB)" -Transfer:"$(1TB)" -Iterations:1 -consoleverbosity:1 -TimeLimit:60000
The result: about 9.2 Gbps average. It is a little faster and far more consistent than iPerf3, but not quite as fast as ntttcp. The two primary reasons ctsTraffic is slower are that it measures end-to-end goodput rather than isolated network stack throughput, and that it does not pre-post receives like ntttcp.
-Buffer is the buffer length (ntttcp: -l).
-Transfer is the amount of data to send per iteration.
-Iterations/-ServerExitLimit is the number of times a data set will be transferred.
-Connections is the number of concurrent TCP streams that will be used.
-TimeLimit is the number of milliseconds to run the test. The test stops even if the iteration transfer has not been completed when the time limit is reached.
Thank you for reading and I hope this helps improve your understanding of synthetic network benchmarking on Windows!
Introducing Meta Llama 3 Models on Azure AI Model Catalog
In collaboration with Meta, today Microsoft is excited to introduce Meta Llama 3 models to Azure AI. Meta-Llama-3-8B-Instruct, Meta-Llama-3-70B-Instruct pretrained and instruction fine-tuned models are the next generation of Meta Llama large language models (LLMs), available now on Azure AI Model Catalog. Trained on a significant amount of pretraining data, developers building with Meta Llama 3 models on Azure can experience significant boosts to overall performance, as reported by Meta. Microsoft is delighted to be a launch partner to Meta as enterprises and developers build with Meta Llama 3 on Azure, supported with popular LLM developer tools like Azure AI prompt flow and LangChain.
What’s New in Meta Llama 3?
Meta Llama 3 includes pre-trained and instruction fine-tuned language models designed to handle a wide spectrum of use cases. As indicated by Meta, the Meta-Llama-3-8B, Meta-Llama-3-70B, Meta-Llama-3-8B-Instruct and Meta-Llama-3-70B-Instruct models showcase remarkable improvements in industry-standard benchmarks and advanced functionalities, such as improved reasoning abilities. Significant enhancements to post-training procedures have been reported by Meta, leading to a substantial reduction in false refusal rates and improvements in the alignment and diversity of model responses. According to Meta, these adjustments have notably enhanced the capabilities of Llama 3. Meta also states that Llama 3 retains its decoder-only transformer architecture but now includes a more efficient tokenizer that significantly boosts overall performance. The training of these models, as described by Meta, involved a combination of data parallelization, model parallelization, and pipeline parallelization across two specially designed 24K GPU clusters, with plans for future expansions to accommodate up to 350K H100 GPUs.
Features of Meta Llama 3:
Significantly increased training tokens (15T) allow Meta Llama 3 models to better comprehend language intricacies.
Extended context window (8K) doubles the capacity of Llama 2, enabling the model to access more information from lengthy passages for informed decision-making.
A new Tiktoken-based tokenizer is used with a vocabulary of 128K tokens, encoding more characters per token. Meta reports better performance on both English and multilingual benchmark tests, reinforcing its robust multilingual capabilities.
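As a toy illustration of why a larger vocabulary encodes more characters per token (made-up vocabularies and a crude greedy longest-match, not Llama's actual BPE tokenizer):

```python
# Two made-up vocabularies: a small one, and a larger one with longer merged entries.
small_vocab = ["he", "ll", "o", " ", "wo", "r", "ld"]
large_vocab = ["hello", " wor", "ld"]

def greedy_tokenize(text, vocab):
    """Greedy longest-match tokenization (a crude stand-in for BPE)."""
    tokens, i = [], 0
    while i < len(text):
        match = max((v for v in vocab if text.startswith(v, i)), key=len, default=None)
        if match is None:
            raise ValueError(f"untokenizable character: {text[i]!r}")
        tokens.append(match)
        i += len(match)
    return tokens

text = "hello world"
for vocab in (small_vocab, large_vocab):
    toks = greedy_tokenize(text, vocab)
    # The larger vocabulary yields fewer tokens, i.e. more characters per token.
    print(len(toks), len(text) / len(toks))
```

Fewer tokens for the same text means more content fits in the context window and less compute per passage, which is the effect Meta reports from the 128K-entry vocabulary.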
Read more about the new Llama 3 models here.
Use Cases:
Meta-Llama-3-8B pre-trained and instruction fine-tuned models are recommended for scenarios with limited computational resources, offering faster training times and suitability for edge devices. They are appropriate for use cases like text summarization, classification, sentiment analysis, and translation.
Meta-Llama-3-70B pre-trained and instruction fine-tuned models are geared towards content creation and conversational AI, providing deeper language understanding for more nuanced tasks, like R&D and enterprise applications requiring nuanced text summarization, classification, language modeling, dialog systems, code generation and instruction following.
Expanding horizons: Meta Llama 3 now available on Azure AI Models as a Service
In November 2023, we launched Meta Llama 2 models on Azure AI, marking our inaugural foray into offering Models as a Service (MaaS). Since then, demand for our expanding MaaS offerings has grown, and numerous Azure customers across enterprise and SMC have rapidly developed, deployed, and operationalized enterprise-grade generative AI applications across their organizations.
The launch of Llama 3 reflects our strong commitment to enabling the full spectrum of open source to proprietary models with the scale of Azure AI-optimized infrastructure and to empowering organizations in building the future with generative AI.
Azure AI provides a diverse array of advanced and user-friendly models, enabling customers to select the model that best fits their use case. Over the past six months, we’ve expanded our model catalog sevenfold through partnerships with leading generative AI model providers and the release of Phi-2 from Microsoft Research. The Azure AI model catalog lets developers select from over 1,600 foundational models, including LLMs and SLMs from industry leaders like Meta, Cohere, Databricks, Deci AI, Hugging Face, Microsoft Research, Mistral AI, NVIDIA, OpenAI, and Stability AI. This extensive selection ensures that Azure customers can find the most suitable model for their unique use case.
How can you benefit from using Meta Llama 3 on Azure AI Models as a Service?
Developers using Meta Llama 3 models can work seamlessly with tools in Azure AI Studio, such as Azure AI Content Safety, Azure AI Search, and prompt flow, to enhance ethical and effective AI practices. Here are the main advantages that highlight the smooth integration and strong support Meta’s Llama 3 receives from Azure, Azure AI, and Models as a Service:
Enhanced Security and Compliance: Azure places a strong emphasis on data privacy and security, adopting Microsoft’s comprehensive security protocols to protect customer data. With Meta Llama 3 on Azure AI Studio, enterprises can operate confidently, knowing their data remains within the secure bounds of the Azure cloud, thereby enhancing privacy and operational efficiency.
Content Safety Integration: Customers can integrate Meta Llama 3 models with content safety features available through Azure AI Content Safety, enabling additional responsible AI practices. This integration facilitates the development of safer AI applications, ensuring content generated or processed is monitored for compliance and ethical standards.
Simplified Assessment of LLM Flows: Azure AI’s prompt flow offers evaluation flows, which help developers measure how well the outputs of LLMs match given standards and goals by computing metrics. This feature is useful for workflows created with Llama 3; it enables a comprehensive assessment using metrics such as groundedness, which gauges the pertinence and accuracy of the model’s responses based on the input sources when using a retrieval augmented generation (RAG) pattern.
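As a rough illustration of what a groundedness-style metric measures, the sketch below scores how much of an answer is supported by its retrieved sources. This lexical-overlap proxy is purely hypothetical; prompt flow’s actual groundedness evaluation is model-graded, not word-counting.

```python
def toy_groundedness(answer: str, sources: list[str]) -> float:
    """Toy proxy: the fraction of answer words that also appear in the
    retrieved sources. The real prompt flow metric is computed by an LLM
    grader, not by lexical overlap -- this only illustrates the idea."""
    answer_words = set(answer.lower().split())
    source_words = set(" ".join(sources).lower().split())
    if not answer_words:
        return 0.0
    return len(answer_words & source_words) / len(answer_words)

# A fully supported answer scores 1.0; an unsupported one scores near 0.
print(toy_groundedness("llama three", ["the llama model family"]))  # → 0.5
```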
Client integration: You can use the API and key with various clients. Use the provided API in Large Language Model (LLM) tools such as prompt flow, OpenAI, LangChain, LiteLLM, CLI with curl and Python web requests. Deeper integrations and further capabilities coming soon.
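For instance, a raw key-authenticated call can be made with Python’s standard library alone. In the sketch below, the endpoint URL, payload schema, and Bearer-token header are assumptions; copy the real values from the “View code” pane of your deployment.

```python
import json
import urllib.request

# Placeholder endpoint and key -- take the real ones from "View code" in AI Studio.
ENDPOINT = "https://<your-deployment>.<region>.inference.ai.azure.com/v1/chat/completions"
API_KEY = "<your-api-key>"

def build_request(prompt: str, max_tokens: int = 256) -> urllib.request.Request:
    """Assemble a key-authenticated chat request (payload shape is an assumption)."""
    body = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )

# With a real endpoint and key in place, send the request:
# with urllib.request.urlopen(build_request("Summarize Llama 3 in one line.")) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```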
Simplified Deployment and Inference: By deploying Meta models through MaaS with pay-as-you-go inference APIs, developers can take advantage of the power of Llama 3 without managing underlying infrastructure in their Azure environment. You can view the pricing on Azure Marketplace for Meta-Llama-3-8B-Instruct and Meta-Llama-3-70B-Instruct models based on input and output token consumption.
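Because billing is per input and output token, estimating spend is simple arithmetic. The per-1K-token rates below are made-up placeholders; substitute the actual prices from the Azure Marketplace listing for your model.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Pay-as-you-go spend: input and output tokens are billed at separate rates."""
    return (input_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k

# Hypothetical rates -- look up the real per-token prices on the Marketplace page.
total = estimate_cost(input_tokens=120_000, output_tokens=30_000,
                      price_in_per_1k=0.003, price_out_per_1k=0.009)
print(f"${total:.2f}")  # → $0.63
```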
These features demonstrate Azure’s commitment to offering an environment where organizations can harness the full potential of AI technologies like Llama 3 efficiently and responsibly, driving innovation while maintaining high standards of security and compliance.
Getting Started with Meta Llama 3 on MaaS
To get started with Azure AI Studio and deploy your first model, follow these clear steps:
Familiarize Yourself: If you’re new to Azure AI Studio, start by reviewing this documentation to understand the basics and set up your first project.
Access the Model Catalog: Open the model catalog in AI Studio.
Find the Model: Use the filter to select the Meta collection or click the “View models” button on the MaaS announcement card.
Select the Model: Open the Meta-Llama-3-70B text model from the list.
Deploy the Model: Click on ‘Deploy’ and choose the Pay-as-you-go (PAYG) deployment option.
Subscribe and Access: Subscribe to the offer to gain access to the model (usage charges apply), then proceed to deploy it.
Explore the Playground: After deployment, you will automatically be redirected to the Playground. Here, you can explore the model’s capabilities.
Customize Settings: Adjust the context or inference parameters to fine-tune the model’s predictions to your needs.
Access Programmatically: Click on the “View code” button to obtain the API, keys, and a code snippet. This enables you to access and integrate the model programmatically.
Integrate with Tools: Use the provided API in Large Language Model (LLM) tools such as prompt flow, Semantic Kernel, LangChain, or any other tools that support REST API with key-based authentication for making inferences.
Looking Ahead
The introduction of these latest Meta Llama 3 models reinforces our mission of scaling AI. As we continue to innovate and expand our model catalog, our commitment remains firm: to provide accessible, powerful, and efficient AI solutions that empower developers and organizations alike with the full spectrum of model options from open source to proprietary.
Build with Meta Llama 3 on Azure:
We invite you to explore and build with the new Meta Llama 3 models within the Azure AI model catalog and start integrating cutting-edge AI into your applications today. Stay tuned for more updates and developments.
FAQ
What does it cost to use Meta LLama 3 models on Azure?
You are billed based on the number of prompt and completion tokens. You can review the pricing on the Meta offer in the Marketplace offer details tab when deploying the model. You can also find the pricing on the Azure Marketplace:
Meta-Llama-3-8B-Instruct
Meta-Llama-3-70B-Instruct
Do I need GPU capacity in my Azure subscription to use Llama 3 models?
No, you do not need GPU capacity. Llama 3 models are offered as an API.
Llama 3 is listed on the Azure Marketplace. Can I purchase and use Llama 3 directly from Azure Marketplace?
Azure Marketplace enables the purchase and billing of Llama 3, but the purchase experience can only be accessed through the model catalog. Attempting to purchase Llama 3 models from the Marketplace will redirect you to Azure AI Studio.
Given that Llama 3 is billed through the Azure Marketplace, does it retire my Azure consumption commitment (aka MACC)?
Yes, Llama 3 is an “Azure benefit eligible” Marketplace offer, which indicates MACC eligibility. Learn more about MACC here: https://learn.microsoft.com/en-us/marketplace/azure-consumption-commitment-benefit
Is my inference data shared with Meta?
No, Microsoft does not share the content of any inference request or response data with Meta.
Are there rate limits for the Meta models on Azure?
Meta models come with a limit of 200k tokens per minute and 1k requests per minute. Reach out to Azure customer support if this doesn’t suffice.
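A client can stay under both caps by checking a sliding window of recent usage before each call. The sketch below is a hypothetical client-side guard, not anything the service itself provides, and the default caps simply mirror the published limits.

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window guard for the published 1k requests/min and 200k tokens/min caps."""

    def __init__(self, max_requests=1_000, max_tokens=200_000, window=60.0):
        self.max_requests = max_requests
        self.max_tokens = max_tokens
        self.window = window
        self.events = deque()  # (timestamp, tokens) pairs inside the window

    def _prune(self, now):
        # Drop events older than the window.
        while self.events and now - self.events[0][0] >= self.window:
            self.events.popleft()

    def try_acquire(self, tokens, now=None):
        """Return True (and record the spend) if a request of `tokens` fits both caps."""
        now = time.monotonic() if now is None else now
        self._prune(now)
        used_tokens = sum(t for _, t in self.events)
        if len(self.events) >= self.max_requests or used_tokens + tokens > self.max_tokens:
            return False
        self.events.append((now, tokens))
        return True
```

Before each inference call, check `try_acquire(estimated_tokens)` and back off (or queue the request) when it returns False instead of letting the service reject the call.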
Are Meta models region specific?
Meta model API endpoints can be created in AI Studio projects or Azure Machine Learning workspaces in EastUS2. If you want to use Meta models with prompt flow in projects or workspaces in other regions, you can manually add the API and key as a connection in prompt flow. Essentially, you can use the API from any Azure region once you create it in EastUS2.
Can I fine-tune Meta Llama 3 models?
Not yet, stay tuned…
Microsoft Tech Community – Latest Blogs – Read More
IAMCP Insights Event: Setting Yourself up for Success in Microsoft’s FY25 with Rob Fegan
Join us on Thursday, April 25th, 2024 at 7:00am–8:00am PST for an IAMCP Insights event!
Presenter: Rob Fegan, Venvito
Setting yourself up for Success in Microsoft’s FY25
This course is designed to guide you through the four-phase journey of selling with Microsoft.
Phase 1: Messaging
Develop a message that resonates with your Microsoft counterpart.
Understand who you are, what you do, how you do it, and who you do it for.
Identify what’s in it for Microsoft to work with you.
Phase 2: Selling
Understand the solution plays you’re running and the customer journey Microsoft takes.
Build a consistent, repeatable process to get in front of Microsoft people and stay top of mind.
Enable your sales team to use this process consistently to leverage top of funnel opportunities.
Phase 3: Partner Center and Marketplace
Understand the importance of Partner Center in the Microsoft AI Cloud Partner Program.
Learn how to operationalize Partner Center to grow your relationship with Microsoft.
Understand the role of Marketplace in facilitating the new customer buying journey.
Measure your success using KPIs and metrics across all components of Partner Center and Marketplace.
Phase 4: Programs and Incentives
Learn how to leverage the Microsoft AI Cloud Partner Program for funding and co-selling opportunities.
Understand how to create co-sell ready offers and transactable offers in the marketplace.
Learn how to take advantage of the programs and incentives within the Microsoft ecosystem.
Note: The live event is open to IAMCP members and non-members. Access to the recording and any session materials is reserved for current IAMCP members.
Register here
Contact info@iamcp.org for more information and be sure to join our IAMCP discussion board on tech community!
Partner Blog | What’s new for Microsoft partners: April 2024 edition
Over the past few months, we have continued to add benefits and resources to the Microsoft AI Cloud Partner Program to help you and your customers realize the most from our latest technology. These changes have been informed by partner feedback and developed with the diversity of the partner community in mind.
In this blog, you’ll find links to expert insights, redesigned learning materials, and updated benefits to accelerate your growth in the coming year.
Announcements
State of the Partner Ecosystem: Chief Partner Officer Nicole Dezen showcased the latest Microsoft partner business news, changes, updates, and momentum in her annual State of the Partner Ecosystem post on the Official Microsoft Blog. Learn about program updates, including new designations and certifications for partners. Find out how we are equipping partners through AI skilling, and read about partners delivering AI solutions around the world.
New benefits packages: In January, we launched three new benefits packages designed to help partners at various stages of growth to develop their business. Find out which package is right for you by reading more on the partner blog.
Realigning global licensing for Microsoft 365: Last year Microsoft updated the way Microsoft 365, Office 365, and Teams were licensed in the European Economic Area (EEA) and Switzerland. We have recently announced our plan to extend that approach worldwide to ensure globally consistent licensing. Learn more.
Continue reading here