Category: News
Introducing SDAIA and Their Latest Arabic LLM on Azure AI Model Catalog
SDAIA Overview
The Saudi Data & AI Authority (SDAIA) plays a central role in the Kingdom of Saudi Arabia's ambitious AI strategy. Established by Royal Order, SDAIA focuses on organizing, developing, and advancing the use of big data and AI technologies across sectors. Their mission aligns with Vision 2030's goals — transforming Saudi Arabia into a data-driven economy and a global leader in AI innovation. Through continuous research and technological advancement, SDAIA aims to unlock the value of data to drive national and global impact.
Partnership with Microsoft
SDAIA and Microsoft have forged a strong partnership to push the boundaries of AI capabilities, fostering innovation through collaboration. By leveraging Azure’s cloud platform, SDAIA can deliver cutting-edge AI models with enhanced reliability and performance. This partnership supports SDAIA’s strategic vision to provide a robust infrastructure for AI development while ensuring seamless access to essential tools for enterprises, researchers, and developers.
ALLaM-2-7B: the latest Arabic LLM on Azure
SDAIA’s National Center for Artificial Intelligence (NCAI) has developed ALLaM-2-7B, a transformative model that enhances Arabic Language Technology (ALT). This autoregressive transformer-based model is designed to facilitate natural language understanding in both Arabic and English. With 7 billion parameters, ALLaM-2-7B aims to serve as a critical tool for industries requiring advanced language processing capabilities.
Why Choose Azure AI?
Azure’s secure, scalable infrastructure ensures optimal performance and flexibility for every AI workload. By building their solutions on Azure, developers gain access to state-of-the-art tools such as Azure AI Studio, offering seamless integration, deployment, and management of a wide portfolio of models. Furthermore, Azure’s robust compliance framework supports SDAIA’s commitment to secure and ethical AI practices, ensuring that enterprises can trust their data in a reliable cloud environment.
Using ALLaM-2-7B on Azure
To use ALLaM-2-7B, users can deploy it through Azure AI Studio. The intuitive platform supports easy API integrations, allowing developers to quickly fine-tune and experiment with the model. Users can also take advantage of Azure’s flexible pricing model to scale resources according to their needs.
Steps to get started:
Visit Azure AI Studio and navigate to the Model Catalog.
Search for ALLaM-2-7B and select the model.
Deploy the model in your preferred region.
Use the provided API to integrate the model into your application.
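Once deployed, the model is called over HTTPS. The sketch below is an illustration only: the chat-completions payload shape, the placeholder endpoint URL, and the placeholder key are assumptions, so substitute the exact contract shown on your deployment's details page in Azure AI Studio.

```python
import json

def build_chat_request(endpoint: str, api_key: str, prompt: str,
                       max_tokens: int = 256) -> tuple[str, dict, str]:
    """Build an HTTP request for a chat-completions style endpoint.

    The URL path and payload shape follow the common OpenAI-compatible
    schema that serverless endpoints typically expose; check your own
    deployment's documentation for the exact contract.
    """
    url = f"{endpoint.rstrip('/')}/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    body = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    })
    return url, headers, body

# Example with placeholder endpoint/key -- substitute your deployment's values:
url, headers, body = build_chat_request(
    "https://my-allam-deployment.example.inference.ai.azure.com",  # hypothetical
    "<YOUR-API-KEY>",
    "ما هي عاصمة المملكة العربية السعودية؟",  # "What is the capital of Saudi Arabia?"
)
# An actual call would then be, e.g.:
#   import urllib.request
#   req = urllib.request.Request(url, data=body.encode(), headers=headers)
#   print(urllib.request.urlopen(req).read())
```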
FAQ: Frequently Asked Questions
What languages does ALLaM-2-7B support? ALLaM-2-7B is designed to support both Arabic and English.
What is the primary use of ALLaM-2-7B? It’s intended for advancing Arabic Language Technology (ALT), especially for research, development, and enterprise use cases requiring robust language models.
How do I fine-tune ALLaM-2-7B on Azure? Developers can fine-tune the model through Azure AI Studio using the model’s API, which allows integration with custom datasets.
Is ALLaM-2-7B secure to use? Yes, Azure ensures high levels of security and privacy, making it suitable for enterprise-level deployment.
What are the ethical considerations for using ALLaM-2-7B? ALLaM-2-7B, like all generative models, comes with certain biases inherent to its training data. Developers must conduct safety checks and employ content filtering mechanisms when deploying the model.
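To illustrate the kind of client-side safety check alluded to above (Azure's built-in content filtering is the primary mechanism; a blocklist like this is only a hypothetical, minimal supplement):

```python
def passes_basic_filter(text: str, blocklist: set[str]) -> bool:
    """Return False if any blocklisted term appears in the model output.

    A real deployment would rely on the platform's content filters and a
    proper moderation service; a keyword check only catches exact
    substrings and is shown purely for illustration.
    """
    lowered = text.lower()
    return not any(term in lowered for term in blocklist)

# Only surface model output that passes the check:
output = "Riyadh is the capital of Saudi Arabia."
if passes_basic_filter(output, {"badword"}):
    print(output)
```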
Conclusion
SDAIA’s ALLaM-2-7B on Azure AI Model Catalog is a significant step forward in advancing Arabic Language Technology. This partnership between SDAIA and Microsoft ensures that enterprises, researchers, and developers have access to a powerful tool for language processing, backed by Azure’s secure, scalable infrastructure. The future of AI innovation in the Kingdom is bright, and ALLaM-2-7B is set to play a crucial role in it.
Microsoft Tech Community – Latest Blogs –Read More
Machine learning onramp, module 3.3, task 3
I am stuck at Module 3.3, Task 3. This line of code is not working. Kindly help out.
data = readall(preprocds)
plot(data.Time,data.Y)
machine learning onramp, module 3.3, task 3, model, machine learning MATLAB Answers — New Questions
interpreting xcorr for time series plots
I'm using xcorr to work out the lag between two time series.
When I plot the results of xcorr and ask it to return the maximum lag, the result I get is -58.3.
This is the maximum positive c value, but the spike at 0 into negative c values is the largest overall. Is this the result I should be using instead?
xcorr, correlation, lag MATLAB Answers — New Questions
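For context on the question above: MATLAB's xcorr returns the correlation sequence c and a lag vector lags, and the estimated delay is conventionally the lag where the correlation magnitude |c| peaks, which would pick up a large negative spike as well. A rough NumPy sketch of that convention (a hedged illustration, not an answer for the specific dataset):

```python
import numpy as np

def estimate_lag(x, y):
    """Estimate the lag of y relative to x via full cross-correlation,
    picking the lag where |c| peaks -- analogous to MATLAB's
    [c, lags] = xcorr(x, y); [~, i] = max(abs(c)); lags(i)."""
    x = np.asarray(x, float) - np.mean(x)  # demean to avoid a DC spike
    y = np.asarray(y, float) - np.mean(y)
    c = np.correlate(x, y, mode="full")
    lags = np.arange(-(len(y) - 1), len(x))  # lag axis for 'full' mode
    return lags[np.argmax(np.abs(c))]
```

Using max(c) instead of max(abs(c)) can return a smaller positive peak while ignoring a dominant anti-correlated spike, which may explain the -58.3 result.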
What is the duration of Mathworks Matlab Online student license?
Hello, I use MATLAB Online for programming. I have run out of my 20 hours per month limit and would like to purchase a basic student version, which is USD 115. Please advise me on the duration of this license – month, year, or perpetual? And does it have a limit on the number of hours per month, or is it unlimited?
mathworks, matlab online, license duration MATLAB Answers — New Questions
Match Columns While Making Space for Duplicates
Hi all, I have two sheets with 10,000 rows that I need to match/sort; however, the second sheet has multiples that each need to be matched with a single value from the first list. Please see the example I have attached below. Hopefully this makes sense!
Managing Googlebot/Bingbot Exclusions in Security JavaScript without Impacting SEO
I need to add an important security-related JavaScript to my HTML pages that detects a few signals, like the presence of Selenium variables in window/document objects. Once something is detected, a request is sent to my backend to capture this data.
Googlebot/Bingbot may also emit some of these signals (I am tracking 20+ signals), and these bots make thousands of visits to my various webpages, so I do not want to execute the script for these bots.
1. If I use the user agent – either on the backend to exclude this script entirely for Googlebot, or on the frontend to not execute it – will it be safe for my SEO? Can Googlebot penalize this, assuming the script is treated as cloaking?
2. How do bot detection companies like HUMAN Security (PerimeterX) manage this? Do they track even Googlebot activity?
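On question 1 above: both Google and Bing publish a crawler verification procedure (reverse-DNS the requesting IP, check the hostname suffix, then forward-resolve to confirm it maps back to the same IP), precisely because user-agent strings are trivially spoofed. A minimal Python sketch of the suffix check, assuming the reverse/forward DNS resolution has already been done:

```python
def is_verified_search_bot(hostname: str) -> bool:
    """Check whether a reverse-DNS hostname belongs to Google or Bing
    crawler infrastructure.

    Per both engines' published guidance, Googlebot hostnames end in
    googlebot.com or google.com, and Bingbot hostnames end in
    search.msn.com. This sketch only checks the suffix; a full
    implementation would resolve the IP with socket.gethostbyaddr and
    confirm the forward lookup returns the same IP.
    """
    allowed_suffixes = (".googlebot.com", ".google.com", ".search.msn.com")
    host = hostname.rstrip(".").lower()
    return host.endswith(allowed_suffixes)
```

A backend gate based on this check (rather than the raw user agent) avoids serving different behavior to anyone merely claiming to be Googlebot.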
Really slow indexing of sharepoint site
Has anyone experienced this issue with PnP Search v4?
I have created a new SharePoint Online communication site with about 20K files.
Using the PnP Search web part, I built a search page for users. Once I was finished, I added the users as visitors on the SharePoint site. But users only see a fraction of the results. The search totals are increasing, which means the site is being indexed, but it's indexing about 100 files per day, which is really slow.
Is anyone else experiencing this?
M365 Exchange: Contact Deletion Issue
Hello,
We created a guest account to share a file, but we had a problem. Two Exchange contacts were created, and I can't delete them. Mail can't be delivered to this external contact, with this error:
Reason: [{LED=420 4.2.0 Transient Failure during recipients lookup AmbiguousRecipientTransientException, Exception of type ‘Microsoft.Exchange.Transport.Categorizer.AmbiguousRecipientTransientException’ was thrown.};{MSG=};{FQDN=};{IP=};{LRT=}]
I was able to delete the guest account in Entra ID, and I did, but I still have these two Exchange mail contact objects.
I used the graphical interface and the PowerShell command:
Remove-Mailcontact -Identity “rholdXXXXXXXX”
Write-ErrorMessage : Ex6F9304|Microsoft.Exchange.Configuration.Tasks.ManagementObjectNotFoundException|The operation couldn't be performed because object 'rhoXXXXXXX' couldn't be found
on 'DB7PR03A07DC001.EURPR03A007.PROD.OUTLOOK.COM'.
At C:\Users\uptr0001\AppData\Local\Temp\tmpEXO_0uvzubhp.rep\tmpEXO_0uvzubhp.rep.psm1:1205 char:13
+ Write-ErrorMessage $ErrorObject
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Remove-MailContact], ManagementObjectNotFoundException
+ FullyQualifiedErrorId : [Server=DB9PR03MB7723,RequestId=35acbab2-9fae-f969-e098-bc9aba96ee31,TimeStamp=Mon, 09 Sep 2024 05:59:44 GMT],Write-ErrorMessage
To be precise, I'm a Global Admin. What can I do to delete these contacts?
Regards
Set a reminder flow in SharePoint Document library is currently not working
A customer pointed out to me last week that it is not possible to set a reminder flow in document libraries. I checked this today in our tenant's existing and new team sites – it is true.
When you click Automate > Set a reminder > [name of column], nothing happens.
The column “Review” here is definitely of type Date and time.
This worked exactly this way weeks ago; I noted the steps in my user guides.
Can someone confirm, please?
RSVPs show up incorrectly
Hi everyone,
Does anyone else have inconsistent RSVP displays?
We've been experiencing this quite randomly for a few weeks: a member of the team RSVP'd a certain way (Yes/No), but Outlook shows everyone that they “have not responded.”
See the example of my own RSVP (Yes, I will attend), which shows as Not Responded.
Is this a bug?
Index non-default views and Reindex list buttons are disabled and grayed out in SharePoint
Hello,
Could you do me a favor?
Other lists are fine. Only one list has this issue.
The “Index non-default views” and “Reindex list” buttons are disabled and grayed out.
My permission level is “owner,” so I don't know why. Thank you.
where are the linux drivers?
I recall having to traverse https://msedgewebdriverstorage.z22.web.core.windows.net/, clicking around 100 times and wasting lots of time looking for it, so it was great when you added it to the dashboard above.
Hybrid Join Process – Question
Hello all,
I’m looking for information regarding Hybrid Join process because it is not clear for me, this is what I have:
Entra Connect syncs what I have under the OU I have specified in its configuration.
I have joined a new device to on-prem AD, outside of that OU; therefore Entra Connect will not sync the device.
The device can reach the Microsoft endpoints (network connectivity requirements accomplished).
What happens when Entra Connect does not sync the device, but the Automatic Device Join task is triggered? Will it become hybrid joined even though Entra Connect never synced it?
I have read this:
Hybrid join is a process initiated from the device itself and Azure AD. Hybrid Join does not depend on, nor is able to be achieved from Azure AD Connect, though AAD Connect does stage the device in Azure, allowing policies to be more immediately applied and AAD Connect
Is this correct?
So, when Entra Connect syncs the device, the purpose is only to, let's say, provision the device in Entra ID?
If Entra Connect does not sync the device, will Hybrid Join happen no matter what?
The process is documented here: How Microsoft Entra device registration works – Microsoft Entra ID | Microsoft Learn but I still have doubts 😞
Many thanks!
Best regards,
Ivo Duarte
FAQ: How to control messaging on decommissioned VM Offers?
Q: The offers we have on the marketplace are a two-part offer – a VM Image and then an Application Template that controls the deployment. The VM Image is hidden, and the application is the part the user interacts with.
In the old account, it is easy enough to stop distribution of the Application template – Which we did so users will not launch the older offers.
My question is about the VM. If we delete the two offers by stopping distribution, we are worried about the messaging to users who have already launched the images. Because the new offer is not in the same account, I can't use the in-place tooling to say, “This one replaces the other.”
Just trying to see what is the right way to decommission the offers and how to control the messaging to customers so they don’t think the offers went away, etc.?
A: Please refer to the VM deprecation documentation provided here: https://learn.microsoft.com/en-us/partner-center/marketplace-offers/deprecate-vm.
Microsoft MVPs x Microsoft Zero to Hero Community
We previously introduced the Microsoft Learn Learning Room in our blog post: Learn Azure Together in Microsoft Learn Learning Room.
This time, we interviewed Kerry Herger (Worldwide Microsoft Learn Marketing Community Lead) and Microsoft MVPs Saeid Dahl & Hamid Saleh about the Microsoft Zero to Hero Community. We also got to learn more about the Microsoft Backyard and NextGen Heroes.
Tell us about Microsoft Zero To Hero.
Microsoft Zero To Hero is a vibrant community within the Microsoft Learn ecosystem that's approaching its first anniversary, and it is led by Saeid Dahl, Hamid Sadeghpour Saleh, and Mohsen Akhavan. We started this initiative with a clear goal: to inspire individuals to become active participants in the Microsoft community. From the very beginning, our mission has been to create a space where people can share their knowledge, learn from one another, and grow together. We kicked off with non-stop live sessions every Saturday, and due to the overwhelming success, we've now expanded to cover weekdays and multiple time zones. This has allowed us to reach and inspire even more people across the globe. In less than a year, we've hosted over 80 live sessions, featuring speakers from around the globe. Our community has grown to include 6,000 members, and our content has generated over 1,500 hours of watch time on YouTube. These numbers aren't just statistics—they reflect the deep engagement and passion within the Microsoft Zero To Hero and Microsoft Learn communities.
What is the concept of Learning Rooms within the Microsoft Learn community?
Microsoft Learn Community is a collection of experiences offering learners a variety of ways to connect and engage in a dynamic and inclusive space. Whether you’re just getting started or aiming to advance your skills, learners will find the support and guidance they need on their technical learning journey. Our mission is to meet learners where they are, offering connections and resources in their preferred language and learning style, catering to all skill levels. A key feature of our community is the Learning Rooms, which offers a free, engaging space for anyone seeking to expand their technical knowledge. These rooms, led by our worldwide network of Microsoft Learn experts, provide a collaborative environment for learners to develop their skills and connect with peers. You can join multiple rooms based on your interests, and each room is focused on achieving learning goals through up-to-date technical content. Learning Rooms offer:
Cohort Learning: Guided discussions and office hours with Learn Experts.
Peer Connection: Hands-on learning, practical demos, and exam prep in a collaborative setting.
Engaging Community: A fun, interactive space where you can connect with like-minded learners and thrive in your learning journey.
Together, we explore, engage, and grow to achieve skilling and career goals. Join us today and unlock the full potential of your technical journey with the support of our community and Learning Rooms!
Tell us a bit more about Microsoft Backyard and Microsoft NextGen Heroes?
Microsoft Backyard and Microsoft NextGen Heroes are two exciting programs we’ve developed under the Microsoft Zero To Hero name. Microsoft Backyard is unique because it’s not a technical session. Instead, it focuses on the personal stories of Microsoft employees, Regional Directors (RDs), and Most Valuable Professionals (MVPs). We dive into their success stories, the lessons they’ve learned, and even some of the challenges they’ve faced. The feedback after our first episode was incredible—people from all around the world told us how much they enjoyed hearing these personal journeys.
On the other hand, Microsoft NextGen Heroes is a program that’s very close to my heart. It’s designed specifically for Microsoft Learn Student Ambassadors (Student Ambassadors) and individuals who are new to the Microsoft world. The idea was born from the immense energy and talent we saw in the Student Ambassador community. We wanted to give these young leaders a platform to share their experiences and knowledge. Our first live session was a huge success, with 70 registrants and nearly 60 live audience members, and the feedback we received was truly inspiring.
We are very proud to have three Student Ambassadors leading this program: Nicklas Olsen, Bojan Ivanovski, and Marko Atanasov.
What is the purpose of the Microsoft NextGen Heroes program?
The primary purpose of Microsoft NextGen Heroes is to empower the next generation of leaders within the Microsoft community. We want to create a space where young individuals—especially those who are just starting out in their careers—can connect, share their experiences, and learn from each other. This program is all about fostering a sense of community and belonging, where new voices are encouraged and celebrated.
How does it differ from other Microsoft initiatives like the Student Ambassadors or the Zero To Hero community?
Great question! While Student Ambassadors and Microsoft Zero To Hero are both about learning and growth, Microsoft NextGen Heroes takes a slightly different approach. The Student Ambassadors program is designed to recognize and support students who are passionate about technology and community leadership. Microsoft Zero To Hero is broader, focusing on inspiring individuals at all stages of their journey within the Microsoft ecosystem.
NextGen Heroes, however, is specifically tailored for those who are new to the Microsoft world or early in their careers. It’s more focused on peer-to-peer learning and leadership development. We want to give these new members of our community a platform to shine, share their experiences, and learn from each other in a supportive environment.
What are the best ways to stay informed about the latest news and updates related to Microsoft NextGen Heroes?
To stay updated, I recommend following our official channels on social media and subscribing to our newsletter. We regularly post updates, event announcements, and highlights from past sessions. You can also join the Microsoft Zero To Hero community (https://aka.ms/joinzerotohero) on Microsoft Learn, where we share all the latest news about both Microsoft Zero To Hero and Microsoft NextGen Heroes. Lastly, following Microsoft Zero To Hero YouTube channel is a great way to catch up on any sessions you might have missed.
Hierarchy of team members.
LinkedIn page: https://www.linkedin.com/company/microsofthero
YouTube Channel: https://www.youtube.com/@MicrosoftHero
Website: https://microsofthero.com
X: https://x.com/MicrosoftHero
Microsoft announces the best performing logical qubits on record and will provide priority access to reliable quantum hardware in Azure Quantum
At Microsoft, we’re ushering in a new era of computing on the path to unlocking scientific advantage and tackling some of the world’s most pressing challenges. This is why we’re building Azure Quantum — to create the first platform for reliable quantum computing and achieve the vision of quantum at scale.
In April, we announced we’re entering the next phase for solving meaningful problems with reliable quantum computers by demonstrating the most reliable logical qubits with an error rate 800x better than physical qubits. The main issue with today’s noisy intermediate-scale quantum (NISQ) machines is that their physical qubits are too noisy and error-prone, making the machines impractical for real-world applications. That’s why we must transition to using reliable logical qubits that combine multiple physical qubits together to protect against noise and to maintain coherence for long-running computations.
But quantum computing doesn’t exist in isolation. It requires deep integration with the power of the cloud. We must leverage the best of computing to unlock a new generation of hybrid quantum applications that could solve some of our most pressing challenges — from pioneering more sustainable energy solutions to transforming how we treat disease with the next generation of life-saving therapeutics.
We designed the Azure Quantum compute platform to provide quantum computing across a variety of hardware architectures, enabling the most advanced hybrid quantum applications in the industry — all in a secure, unified and scalable cloud environment — to tackle classically intractable problems. This is our vision for Azure Quantum. Today, we continue to make advances that bring us closer to achieving it with our industry-leading partners, Quantinuum and Atom Computing. With both companies, we want to bring best-in-class solutions to the Azure Quantum platform, and collectively advance and scale resilient quantum capabilities.
In collaboration with Quantinuum, we applied our improved qubit-virtualization system to create and entangle 12 highly reliable logical qubits. This represents the largest number of entangled logical qubits, with the highest fidelity, on record. These results scale logical qubit computation — on ion-trap hardware — within our Azure Quantum compute platform. In addition, advancing toward scalable quantum computing necessitates not only reaching significant hardware milestones, but also proving these improvements can address practical and real-world challenges.
This is why we demonstrated the first end-to-end chemistry simulation that combines reliable logical quantum computation with cloud high-performance computing (HPC) and AI. Today’s announcements would not have been possible without Quantinuum’s leading quantum machines. This paves the way toward practical solutions at the intersection of these technologies, especially in the domains of chemistry, physics and life sciences.
Lastly, as we expand our Azure Quantum compute platform, we are excited to announce that Microsoft and Atom Computing are coming together to ultimately build the world’s most powerful quantum machine. Through this collaboration, we’re bringing a new generation of reliable quantum hardware to customers by integrating and advancing Atom Computing’s neutral-atom hardware into our Azure Quantum compute platform. With it, we are bringing the best-in-class from Microsoft and our partner ecosystem to provide the commercial offering of a reliable quantum machine.
Combining the capabilities of this reliable quantum hardware with our platform for Science, Azure Elements, we are providing a comprehensive discovery suite to achieve scientific quantum advantage.
Creating a new generation of hybrid quantum applications
At Microsoft, we’re pioneering a new computing paradigm by bringing the power of the cloud and AI together with quantum. Our Azure Quantum compute platform enables the seamless execution of quantum applications that leverage hardware across a variety of qubit architectures and chips, while offering integration with cloud HPC and AI. Over this past year, we’ve continued to announce new breakthroughs and collaborations in pursuit of this platform mission, including offering Generative Chemistry and Accelerated DFT and advancing the industry to reliable quantum computing by demonstrating highly reliable logical qubits.
We are bringing these technologies together in a purpose-built cloud platform that leverages the complementary strengths of both AI for large-scale data processing and quantum for complex calculations and unprecedented accuracy. This strong compute foundation offers a secure, unified and scalable hybrid computing environment that enables innovators to develop best-in-class solutions for tackling problems that are difficult or even intractable on classical computers. We are integrating quantum hardware architectures from our ecosystem partners with our quantum control, processing and error correction software — in addition to capabilities for copilot-assisted workflows, developer tools, classical supercomputing and multi-modal AI models. This differentiated computing stack will pave the way for this new generation of hybrid applications. AI co-reasoning will help articulate problems and translate them into workflows, using both classical and scaled quantum tools at the right stages to drive impactful insights in an iterative loop to compress R&D and time-to-solution into days, not years.
Continuing to implement reliable quantum computing with Quantinuum
Today, in collaboration with Quantinuum, we’re proud to announce the demonstration of the best performing logical qubits on record, achieving the largest number of entangled logical qubits. We created 12 logical qubits by improving and optimizing our qubit-virtualization system for Quantinuum’s 56-physical-qubit H2 machine.
This progress speaks to the world-class error correction expertise at Microsoft. In less than six months, our improved qubit-virtualization system tripled reliable logical qubit counts. Furthermore, when we entangled all 12 logical qubits in a complex state required for ‘deeper’ quantum computation, they exhibited a 22X circuit error rate improvement over the corresponding physical qubits.
The ability of our systems to triple the number of logical qubits while less than doubling our physical qubits from 30 to 56 physical qubits is a testament to the high fidelities and all-to-all connectivity of our H-Series trapped-ion hardware. Our current H2-1 hardware combined with Microsoft’s qubit-virtualization system is bringing us and our customers fully into Level 2 resilient quantum computing. This powerful collaboration will unlock even greater advancements when combined with the cutting-edge AI and HPC tools delivered through Azure Quantum.
— Rajeeb Hazra, CEO of Quantinuum
With our improved error correction code and qubit-virtualization system, we’ve demonstrated a 22X improvement between physical and logical circuit error rates when entangled.
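The reported figures imply simple ratios that can be checked directly. A minimal arithmetic sketch — the baseline of 4 logical qubits is inferred from "tripled", and the physical error rate used at the end is a hypothetical value chosen only to illustrate what a 22X improvement means:

```python
# Figures reported in the Microsoft/Quantinuum announcement.
physical_prev, physical_now = 30, 56   # physical qubits, before and now
logical_prev, logical_now = 4, 12      # 12 logical qubits, tripled from 4

# Logical qubit count tripled...
assert logical_now / logical_prev == 3

# ...while the physical qubit count less than doubled.
assert physical_now / physical_prev < 2

# So the encoding rate (logical qubits per physical qubit) improved.
rate_prev = logical_prev / physical_prev
rate_now = logical_now / physical_now
print(f"encoding rate: {rate_prev:.3f} -> {rate_now:.3f}")  # 0.133 -> 0.214

# A 22X circuit error-rate improvement means the logical circuit error
# rate is the physical rate divided by 22 (physical rate is hypothetical).
physical_error_rate = 0.011
logical_error_rate = physical_error_rate / 22
print(f"logical circuit error rate: {logical_error_rate:.5f}")
```

This is why the result matters: more logical qubits were obtained from proportionally fewer physical qubits, while the entangled logical circuits also became more reliable than the underlying physical ones.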
As we continue to strive toward scientific and industrial breakthroughs with quantum computers, noise remains our biggest barrier. In a previous post, I highlighted how increasing the number of physical qubits alone is not enough to make robust quantum error correction possible. As part of the quantum ecosystem, we must remain focused on improving both logical qubit counts and fidelity to have a solid foundation for producing meaningful results. This will be possible through hardware and software advancements that together enable running longer and more reliable quantum applications. Today’s announcement demonstrates that it is possible to realize these fundamental capabilities on the path to large-scale quantum computing.
A true computing paradigm shift also requires a focus on practical and commercially relevant applications. Earlier, we successfully completed a chemistry simulation in the first end-to-end workflow that combined HPC, AI and logical qubit computation to predict the ground state energy for a specific catalyst problem. This demonstration marked a critical step toward ushering in a new generation of hybrid applications that will become increasingly impactful as quantum technologies scale. Quantum and AI will have the earliest significant impact on scientific discovery, and researchers at Microsoft have demonstrated the breakthrough potential of this integration. This work was only possible thanks to our long-standing and close collaboration with Quantinuum, a company that remains at the forefront of quantum computing.
You can learn more about today’s improved logical qubits and the technical details about this chemistry simulation in our blog Microsoft and Quantinuum create 12 logical qubits and demonstrate a hybrid, end-to-end chemistry simulation.
Announcing a new commercial offering with Atom Computing
Lastly, in collaboration with Atom Computing, we are excited to bring a new generation of reliable quantum hardware to customers. Bringing together Microsoft’s enhanced qubit-virtualization system with Atom Computing’s neutral-atom hardware, we’ve jointly generated logical qubits and are optimizing the system to enable reliable quantum computation. Together, we believe this new commercial offering will be the world’s most powerful quantum machine on record and will scale to scientific advantage and beyond.
Atom Computing’s hardware uniquely combines capabilities essential for expanding quantum error correction, including large numbers of high-fidelity qubits, all-to-all qubit connectivity, long coherence times and mid-circuit measurements with qubit reset and reuse. The company is building 2nd generation systems with over 1,200 physical qubits and plans to increase the physical qubit count tenfold with each new hardware generation. By applying Microsoft’s state-of-the-art fault-tolerance protocols on a different qubit architecture, our Azure Quantum compute platform can offer a spectrum of best-in-class logical qubits across multiple hardware platforms, providing flexibility and future proofing our customers’ investments.
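The stated roadmap (over 1,200 physical qubits in current second-generation systems, with a tenfold increase each generation) can be projected forward. A minimal sketch — the number of generations shown is chosen arbitrarily for illustration, not taken from any announced timeline:

```python
# Project physical qubit counts under the stated 10x-per-generation plan.
base_qubits = 1200     # current 2nd-generation systems (from the announcement)
growth_per_gen = 10    # stated tenfold increase per hardware generation

# Four generations shown for illustration only.
for step in range(4):
    count = base_qubits * growth_per_gen ** step
    print(f"generation {step + 2}: {count:,} physical qubits")
```

Under this assumption, the million-physical-qubit scale often cited as necessary for large-scale fault tolerance would be reached within roughly three further hardware generations.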
Microsoft and Atom Computing team up to enhance the Azure Quantum compute platform with neutral-atom hardware and tailored qubit virtualization, enabling a commercial discovery suite with continuous upgrade capabilities for additional logical qubits.
Our collaboration with Atom Computing aims to integrate these capabilities with Azure Quantum Elements, our purpose-built cloud platform offering differentiated computing scale, state-of-the-art AI models for chemistry and materials science simulations and Copilot. Our goal is to empower governments and organizations to tackle scientifically and commercially relevant problems with today’s most advanced computational solutions, including designing and predicting properties of chemicals and materials, exploring molecular interactions and simulating complex chemical reactions. Additionally, we want to help galvanize a quantum-ready ecosystem, providing the critical tools necessary for commercial adoption of these technologies, building quantum expertise and creating demand for new jobs.
We are excited to accelerate Atom Computing’s quantum capabilities with Microsoft as our partner. We believe that this collaboration uniquely positions us to scale and be first to reach scientific quantum advantage. Our neutral-atom technology is an ideal foundation for Microsoft’s leading qubit-virtualization capabilities, and we look forward to enabling fault tolerant, cutting-edge quantum applications for global innovators to use the best platform in the world.
— Ben Bloom, PhD, Founder and CEO of Atom Computing
Empowering customers with the best of quantum and AI
At Microsoft, we want to enable practitioners to unlock a new generation of applications that harness the complementary strengths of quantum, classical supercomputing and AI, all connected in the Azure cloud.
We remain committed to achieving quantum at scale so we can solve commercially significant problems that are far too complex for classical computers. As a platform company, it’s critical that we continue investing in the quantum ecosystem and collaborating with industry leaders such as Quantinuum, Atom Computing, Photonic and others to advance and scale quantum capabilities. Alongside our industry collaborations, we’re also focused on our own innovation with a topological qubit-based approach.
This approach continues to offer a unique path to scaling up, with fast clock speeds, digital control and more. Furthermore, a topological quantum computer could control over one million physical qubits on a single chip, with the ability to process information faster than other types of qubits. Our Azure Quantum team previously demonstrated the feasibility of this approach, and we look forward to scaling this to the level of quantum supercomputing.
Azure is the place where all this innovation comes together. For more information about today’s announcements:
Read the technical blog Microsoft and Quantinuum create 12 logical qubits and demonstrate a hybrid end-to-end chemistry simulation.
Register for the upcoming Microsoft Quantum Innovator Series on how quantum and AI can unlock a new generation of hybrid applications for science.
Get the latest news and announcements from Azure Quantum.
The post Microsoft announces the best performing logical qubits on record and will provide priority access to reliable quantum hardware in Azure Quantum appeared first on The Official Microsoft Blog.
How do I manually configure my network license file for the Network License Manager?
MATLAB Answers — New Questions
Model_X has multiple sample times error
I am getting the following error:
‘MyTopLevelModel/SubsystemA/Model’ has multiple sample times. Only constant (inf) or inherited (-1) sample times are allowed in function call subsystem ‘MyTopLevelModel/SubsystemA’.
More info on my setup:
SubsystemA has a function-call trigger port connected to a port set as "output function call" with a sample time of 0.001. All my other blocks have sample time -1 (inherited).
If I go to the Model Explorer, the compiled sample times are not all [-1 0]; some of them are set to [0.001 0].
Any idea what could be causing this error?
error, time MATLAB Answers — New Questions
Error displayed when starting MATLAB on Linux systems using NVIDIA OR AMD graphics hardware
I’m using a Dell Precision 7510 with preinstalled Ubuntu 14.04 (with the NVIDIA driver installed). I just downloaded MATLAB R2017a and ran into the following error. I searched and found I’m not the only one who has this problem. I wonder if this problem is fixable? To put it another way, can I use all the features provided by MATLAB despite this error? Additionally, the error also occurs when I use a Linux Ubuntu 17.10 or 18.04 system that has AMD graphics hardware.
Thank you in advance; attached is the error information:
com.jogamp.opengl.GLException: X11GLXDrawableFactory – Could not initialize shared resources for X11GraphicsDevice[type .x11, connection :0, unitID 0, handle 0x0, owner false, ResourceToolkitLock[obj 0x413b9df1, isOwner false, [count 0, qsz 0, owner ]]]
at jogamp.opengl.x11.glx.X11GLXDrawableFactory$SharedResourceImplementation.createSharedResource(X11GLXDrawableFactory.java:326)
at jogamp.opengl.SharedResourceRunner.run(SharedResourceRunner.java:297)
at java.lang.Thread.run(Unknown Source)
Caused by: com.jogamp.opengl.GLException: glXGetConfig(0x1) failed: error code Unknown error code 6
at jogamp.opengl.x11.glx.X11GLXGraphicsConfiguration.glXGetConfig(X11GLXGraphicsConfiguration.java:570)
at jogamp.opengl.x11.glx.X11GLXGraphicsConfiguration.XVisualInfo2GLCapabilities(X11GLXGraphicsConfiguration.java:500)
at jogamp.opengl.x11.glx.X11GLXGraphicsConfigurationFactory.chooseGraphicsConfigurationXVisual(X11GLXGraphicsConfigurationFactory.java:434)
at jogamp.opengl.x11.glx.X11GLXGraphicsConfigurationFactory.chooseGraphicsConfigurationStatic(X11GLXGraphicsConfigurationFactory.java:240)
at jogamp.opengl.x11.glx.X11GLXDrawableFactory.createMutableSurfaceImpl(X11GLXDrawableFactory.java:524)
at jogamp.opengl.x11.glx.X11GLXDrawableFactory.createDummySurfaceImpl(X11GLXDrawableFactory.java:535)
at jogamp.opengl.x11.glx.X11GLXDrawableFactory$SharedResourceImplementation.createSharedResource(X11GLXDrawableFactory.java:283)
… 2 more
linux, ubuntu, amd, graphics, hardware, nvidia MATLAB Answers — New Questions
Change format of field contour labels that have been manually added in a tiled layout
Hi,
I’m trying to manually add contour labels over a filled contour plot covering a narrow range of values.
MATLAB returns the error:
Error using hgconvertunits
The reference object is invalid.
Any ideas how I can manually add labels in a tiledlayout?
I also assume there is no workaround for preventing the contours from striking through the labels when they are added manually?
Thank you
a = 0.98;
b = 1.02;
Z = round(((b-a).*rand(99,99) + a),2);
XY = -0.375:.125:.375;
no_levels = 8;
npoints = [99 99]; % define improved resolution
Xfine = linspace(XY(1),XY(end),npoints(1)); % x coords
Yfine = linspace(XY(1),XY(end),npoints(2)); % y coords
[Xfine, Yfine] = meshgrid(Xfine,Yfine); % create mesh
tiledlayout(1,2)
for i = 1:2
    nexttile(1)
    [C,H] = contourf(Xfine,Yfine,Z,no_levels,'-');
    % clabel(C,H,'Interpreter','latex','FontSize',14);
    % H.LevelList = round(H.LevelList,1); % round levels to 1 decimal place
    clabel(C,H,'manual','FontSize',22)
    hold on
    ylim([-0.5 0.5]);
    y = ylim;
    yticks(y(1):.25:y(2))
    xticks(-.5:.25:.5)
    xlim([-.55 .55])
    axis square
end
contour labels, tiledlayout MATLAB Answers — New Questions