Month: October 2024
Skilling snack: Windows compliance reports and analytics
One of the most important questions IT admins ask is, “How many of our devices are up to date?” You can answer it easily using Microsoft Intune, Windows Autopatch, or Windows Update for Business reports. Learn more about the tools and reports you can use to identify exactly which devices are up to date, which might need attention, and other insights.
Feel free to choose the resources best suited to your needs and interests.
Time to learn: 126 minutes
WATCH
The latest on managing Windows updates in Microsoft Intune
For a broader overview of Windows updates in Microsoft Intune, walk through built-in functionalities, including the Windows update distribution report for all your Intune enrolled devices. And learn more about this report in the resources below.
(30 mins)
Intune + Quality updates + Drivers + Windows 11 + Windows 10 + Policies
READ
Use Windows Update for Business reports for Windows Updates in Microsoft Intune
If you use policies to update your Windows 10/11 devices, there are reports for them! Get a summary of update success, in-progress updates, device update status, and failure alerts along with remediation recommendations. Most importantly, learn about the Windows update distribution report and Windows Update for Business (WUfB) reports: these are the reports you need to determine which devices are up to date. Find more information in the resources below.
(36 mins)
Intune + Policies + Rings + WUfB reports + Quality + Feature + Device + Azure
Windows update distribution report in Microsoft Intune
If you use Microsoft Intune, start with the Windows update distribution report, also known as the Quality update distribution report. This report provides status for all devices enrolled in Microsoft Intune, regardless of whether they are assigned to any update policies.
Windows update distribution report (the latest on managing Windows updates in Microsoft Intune) (5 mins)
Starting at the 14:50 mark, learn about the Windows update distribution report in Intune. Watch the 5-minute demo to walk through the interface, the structure, and the details available in the report.
Windows update distribution report (6 mins)
Among the more comprehensive documentation on reports, read the section on the Windows update distribution report. How many and what devices are on each Windows feature version and quality update? Get the high-level summary and drill down with this Microsoft Intune report.
Intune + Rings + Co-management + Feature version + Device version + Update type + Device activity
Windows quality update reports in Windows Autopatch
Windows Autopatch provides additional insights, such as a historical view of updates. Here are four useful tips about reporting to try.
Generate reports from Windows Autopatch (2 mins)
Getting started with Windows Autopatch reports? We recommend starting with this short demo of how you can generate update status and update history reports.
Windows quality update summary dashboard (2 mins)
Want a comprehensive overview of the current update status for all devices managed by Windows Autopatch? Learn how to generate and interpret the Summary dashboard in this quick guide.
Quality update status report (6 mins)
For a per-device view of the current update status for your devices, explore Windows Autopatch > Windows quality updates > Reports > Quality update status. This report includes device-specific information such as build numbers, readiness status, and alerts.
Quality update trending report (1 min)
This report graphs trends over the last 90 days. Learn about the historical trends by update status or deployment ring.
Autopatch + Update + Status + Ring + History + Build + Alerts
Windows Update for Business reports tools
If you want to query the data or build custom dashboards, access the same data using Windows Update for Business reports. See for yourself.
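For example, once Windows Update for Business reports is enabled and feeding a Log Analytics workspace, you can query the underlying tables directly. Here is a minimal sketch using the Azure Monitor Query SDK for Python (the UCClient table is part of WUfB reports; the workspace ID is a placeholder):

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Count devices by OS version and build to see who is (and isn't) on the latest update.
query = """
UCClient
| summarize Devices = dcount(AzureADDeviceId) by OSVersion, OSBuild
| order by Devices desc
"""

response = client.query_workspace("<workspace-id>", query, timespan=timedelta(days=7))
for table in response.tables:
    for row in table.rows:
        print(row)
```

The same query works in the Log Analytics portal, or as the data source for a custom workbook or Power BI dashboard.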
Use the workbook for Windows Update for Business reports (16 mins)
If you’re new to Windows Update for Business reports, start here. This documentation walks you through what’s inside, how to understand it, and how to customize and use various reports.
Get the most out of expedited Windows quality updates (13 mins)
Whenever you expedite Windows security or non-security updates, you’ll get a failure report. Use it to troubleshoot and remediate common issues marked as alerts. This article introduces you to common alerts, resolutions, and best practices.
Tailor Windows Update for Business reports with Power BI (9 mins)
Use the Power BI integration in Windows Update for Business reports to create custom visualizations. Better understand the device landscape, identify trends, issues, and areas for improvement. Walk through the visualizations of the Windows 11 migration scenario and update deployment monitoring. Let us show you how to turn these reports into actionable insights.
WUfB + Azure + Quality + Feature + Driver + DO + Update state + Power BI + Alerts + Resolution
Which tool, or tools, best fit your needs? Share your thoughts in the comments below!
Check out related resources to keep building your skills:
Skilling snack: Windows Update for Business reports
Skilling snack: Windows Autopatch
Automate updates with Windows Autopatch – Tackling Tech
Skilling snack: Managing Windows 11 updates
We’re turning our beloved skilling snacks into a monthly series! With our library of 45 learning bites, we invite you to review what you’ve missed and come back once a month to keep your skills sharp and memory fresh.
Continue the conversation. Find best practices. Bookmark the Windows Tech Community, then follow us @MSWindowsITPro on X and on LinkedIn. Looking for support? Visit Windows on Microsoft Q&A.
Microsoft Product Placemat for CMMC – October 2024 Update
Microsoft CMMC Acceleration
We are actively developing CMMC acceleration resources for both partners and Defense Industrial Base (DIB) companies to leverage in their Cybersecurity Maturity Model Certification (CMMC) journey. These tools cannot guarantee a positive CMMC adjudication, but they may assist Organizations Seeking Certification (OSC) by improving their CMMC posture going into a formal CMMC assessment in accordance with DoD and Cyber Accreditation Body (Cyber-AB) standards.
For more information, please see Notices later in this article.
Here is a summary of the most recent resources to help get you started.
Home Page for CMMC
Want to start your CMMC compliance journey on the right foot? We have a home page for CMMC at https://aka.ms/cmmc. Found on the Microsoft Federal site, the home page includes an outline of available resources, including references to our Microsoft Cloud service offerings and an up-to-date list of blogs and documentation we release. Please bookmark the site and leverage it as your launching point for all things Microsoft and CMMC.
While you are there on the Microsoft Federal site, also browse around and check out our Federal Segment on Defense and the Solutions we have for DoD Zero Trust Strategy and the Cybersecurity Executive Order.
Microsoft Product Placemat for CMMC
Microsoft Product Placemat for CMMC is an interactive view representing how we believe Microsoft cloud products and services satisfy requirements for CMMC practices. The user interface resembles a periodic table of CMMC practice families. The default view illustrates the practices with Microsoft Coverage that are inherited from the underlying cloud platform. It also depicts practices with Shared Coverage, where the underlying cloud platform contributes coverage for specific practices but additional customer configuration is required to satisfy requirements for full coverage. For each practice that aligns with Microsoft Coverage or Shared Coverage, customer implementation guidance and practice implementation details are documented. This enables you to drill down into each practice and discover details on inheritance, along with prescriptive guidance on the actions customers should take to meet practice requirements in the shared scope of responsibility for compliance with CMMC.
In addition to the default view, you may select and include products, features, and suite SKUs to adjust how each cloud product maps to CMMC. For example, you may select the Microsoft 365 E5 SKU or “Select All” for maximum coverage of CMMC. You may also use the blue-colored cell on the top left to filter the Placemat from a drop-down menu. You may choose between three options:
Level 1 – Foundational: This option will display the practices associated with CMMC Level 1.
Note: there are 17 practices in this release; this will be updated soon to reflect the Final Rule’s trim to 15 practices.
Level 2 – Advanced: This filter will display 110 practices associated with CMMC Level 2.
Note: these align with the controls in NIST SP 800-171.
Level 3 – Expert: This filter displays the additional CMMC Level 3 practices that align with NIST SP 800-172.
The Microsoft Product Placemat for CMMC is currently in public preview. It has been updated to include support for CMMC Level 3 and usability improvements based on public preview feedback. In addition, the public preview release has been updated to include implementation guidance for every practice in alignment with the Technical Reference Guide.
Note: This release was issued earlier this month (October 2024), prior to publication of the final CMMC rule. We are diligently working on a refresh for the final rule.
You may download a copy at:
https://aka.ms/cmmc/productplacemat
Please share feedback at https://aka.ms/cmmc/productplacematfeedback.
Microsoft Technical Reference Guide for CMMC
We are excited to update this significant artifact of CMMC Acceleration! The Microsoft Technical Reference Guide for CMMC includes implementation statements for an organization pursuing CMMC while leveraging relevant Microsoft services. This includes brief descriptions of relevant Microsoft cloud services and products, and links to further implementation documentation. The guide focuses on CMMC Level 2 (L2) and Level 3 (L3) for this release.
If you think of the Microsoft Product Placemat for CMMC as being a level 100 document, the guide is level 200 and more.
The guide is organized in sections for each of the domains of CMMC, beginning with Access Control:
AC.L1-3.1.1
Control Summary Information
NIST SP 800-53 Mapping: AC-2, AC-3, AC-17
Practice: Limit information system access to authorized users, processes acting on behalf of authorized users or devices (including other information systems).
Assessment Objectives:
[a] authorized users are identified;
[b] processes acting on behalf of authorized users are identified;
[c] devices (and other systems) authorized to connect to the system are identified;
[d] system access is limited to authorized users;
[e] system access is limited to processes acting on behalf of authorized users; and
[f] system access is limited to authorized devices (including other systems).
| Primary Services | Secondary Services |
| --- | --- |
| Microsoft Entra ID | Azure RBAC |
| Intune/Intune Suite | Microsoft Information Protection |
| Conditional Access | Customer Lockbox |
| Privileged Identity Management (PIM) | Microsoft 365 Web Apps |
| M365 Groups | Microsoft Entra ID Multi-Factor Authentication |
You may notice the guide has the same outline of Primary and Secondary Services as the Microsoft Product Placemat for CMMC. However, this document format lets us go into much more depth on the implementation statements than the Placemat spreadsheet.
The Microsoft Technical Reference Guide for CMMC is currently in public preview.
Note: This release was issued earlier this month (October 2024), prior to publication of the final CMMC rule. We are diligently working on a refresh for the final rule.
You may download a copy at:
https://aka.ms/cmmc/techrefguide
Please share feedback at https://aka.ms/cmmc/techrefguidefeedback.
Notices
Microsoft CMMC Acceleration provides customers and partners with resources to pursue CMMC compliance while leveraging Microsoft products and services. It does not address security practices occurring outside of Microsoft products and services.
Please further note that the CMMC compliance standard has yet to be officially rolled out. As a result, there may be additional nuance or complexity associated with CMMC compliance that will only materialize through the practical application of the standard by the DoD and Cyber-AB. Consequently, the information herein, including all Microsoft CMMC-related offerings, is provisional and may be enhanced to align with future guidance.
Microsoft does not guarantee nor imply any ultimate compliance outcome or determination based on one’s consumption of this article or the resources linked from it — all CMMC certification requirements and decisions are governed by the DoD and Cyber-AB, and Microsoft has no direct or indirect insight into or bearing over compliance determinations. The associations between compliance domains, practices, and Microsoft CMMC Acceleration may change at any time.
Customers must individually determine the necessary steps required to ensure their organization fully satisfies each recommended CMMC compliance practice, in addition to or in place of what is described in program resources. This responsibility spans all Microsoft (Azure, Microsoft 365, etc.) consumption decisions, including, among other things, which Microsoft offerings to procure, as well as all configuration decisions associated with such use and consumption.
Appendix
Please follow me here and on LinkedIn. Here are my additional blog articles:
| Blog Title | Aka Link |
| --- | --- |
| Microsoft Collaboration Framework | https://aka.ms/ND-ISAC/CollabFramework |
| ND-ISAC MSCloud – Reference Identity Architectures for the US Defense Industrial Base | https://aka.ms/ND-ISAC/IdentityWP |
| Microsoft CMMC Acceleration Update | https://aka.ms/CMMC/Acceleration |
| History of Microsoft Cloud Service Offerings leading to the US Sovereign Cloud for Government | https://aka.ms/USSovereignCloud |
| The Microsoft 365 Government (GCC High) Conundrum – DIB Data Enclave vs Going All In | |
| Microsoft US Sovereign Cloud Myth Busters – A Global Address List (GAL) Can Span Multiple Tenants | |
| Microsoft US Sovereign Cloud Myth Busters – A Single Domain Should Not Span Multiple Tenants | |
| Microsoft US Sovereign Cloud Myth Busters – Active Directory Does Not Require Restructuring | |
| Microsoft US Sovereign Cloud Myth Busters – CUI Effectively Requires Data Sovereignty | |
| Microsoft expands qualification of contractors for government cloud offerings | https://aka.ms/GovCloudEligibility |
Exploring SUSE Enterprise Linux on Azure
In today’s cloud-centric world, leveraging robust and reliable operating systems is crucial for businesses. One such powerful combination is SUSE Enterprise Linux on Azure. This blog delves into the various aspects of using SUSE Enterprise Linux, particularly the High Availability (HA) extension, on Azure.
SUSE Distributions Supported on Azure
Azure supports several SUSE distributions, including:
SUSE Enterprise Linux Server
SUSE Enterprise Linux Server for SAP
SUSE Enterprise Linux Server for HPC
For more details: click here.
Pricing and Licensing Models
SUSE has three offerings in Azure:
BYOS (Bring Your Own Subscription)
SUSE image with patching support
SUSE image with patching support plus 24×7 support
When it comes to pricing, the Azure pricing calculator provides various options. For instance, for a B2as machine with SUSE Linux Enterprise + 24/7 Support (PAYG), the cost is approximately $47.45 per month as of this writing.
You can select the desired image during the installation, and the Azure marketplace provides detailed information on what is offered under each SUSE image deployed through the marketplace.
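If you prefer to script this discovery, here is a minimal sketch using the Azure SDK for Python to enumerate the SUSE marketplace offers and SKUs available in a region (the subscription ID is a placeholder; the region and output will vary):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")
location = "eastus"

# List the marketplace offers published by SUSE in the chosen region...
for offer in compute.virtual_machine_images.list_offers(location, "SUSE"):
    print(offer.name)
    # ...and the SKUs available under each offer.
    for sku in compute.virtual_machine_images.list_skus(location, "SUSE", offer.name):
        print("  ", sku.name)
```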
High Availability Extension
The High Availability extension is a critical component for businesses that require continuous uptime and disaster recovery solutions. If you purchase the SUSE Linux Enterprise Server for SAP, the High Availability extension is included. However, if you opt for the standard SUSE Linux Enterprise Server through the Azure marketplace, you cannot add the High Availability extension. In such cases, you will need to use the BYOS (Bring Your Own Subscription) model.
Key Features of SUSE Enterprise Linux for SAP
SUSE Enterprise Linux for SAP comes with several features designed to enhance performance and reliability, including:
A full High Availability / Disaster Recovery solution
SAP HANA System Replication automation agents
SAP HANA Firewall with automated setup
A KMIP-compliant key server for remote storage
SAP configuration and tuning packages
Automated configuration and installation of SAP HANA clusters
A clustered SAP HANA software automated update wizard (tech preview)
Considerations
When using SLES Enterprise HA on Azure, it’s essential to be aware of certain considerations. For example, only specific SLES marketplace images support Generation 2 VMs. Refer to Azure support for Generation 2 VMs – Azure Virtual Machines | Microsoft Learn for more details.
Conclusion
SUSE Enterprise Linux on Azure provides a robust and scalable solution for businesses looking to leverage the power of the cloud. With various distributions, pricing models, and the critical High Availability extension, SUSE on Azure is a compelling choice for enterprises.
Changing Client Push Service Account question
Hello, we have not changed the password for our service account that does the Client Push since the initial setup. According to the Microsoft documentation for “Accounts used in Configuration Manager”, it is recommended that you create a new account and assign it the Client Push role. Give it some time to propagate, then remove the original account.
Is there a way to make the new account be the primary account that is used? I don’t really understand how the client will know that there are 2 accounts being used and if one does not exist to use the other.
Thank you,
Steve
Multi line text column with append changes managed property
I’m trying to get data from a multi line text field with append changes on it to appear on a custom search page built using the PnP Search web parts. I haven’t had any luck mapping it successfully to a managed property after trying for about 2 weeks. I’ve tried using the RefinableString properties, and two text managed properties that have Searchable, Queryable, Retrievable and one with Allow multiple values enabled (Allow multiple values: Allow multiple values of the same type in this managed property. For example, if this is the “author” managed property, and a document has multiple authors, each author name will be stored as a separate value in this managed property.) All return null. There is content in the fields, the list has been reindexed a few times and it’s been over 48 hours.
I’m beginning to think it isn’t possible seeing as you have to jump through a few hoops to get the values with REST calls. Does anyone know if it is possible?
Identity forensics with Copilot for Security Identity Analyst Plugin
Overview
This is a step-by-step walkthrough of how to use a custom KQL Copilot for Security plugin for Identity SOC and forensics use cases, and how it helps implement a consistent security policy for every user, employee, frontline worker, customer, and partner, as well as apps, devices, and workloads across multi-cloud and hybrid environments.
Use case summary
Monitoring and governing Identities using Copilot for Security custom Identity Analyst Plugin:
User Risk Assessment: Monitor user risk levels based on their activities. This could include sign-in attempts from unfamiliar locations, repeated failed sign-in attempts, or other suspicious behavior.
Sign-in Monitoring: Track user sign-in activities. This includes successful sign-ins, failed attempts, and the location and device used for sign-in. Unusual sign-in activity could be a sign of a potential security threat.
Admin Activity Monitoring: Admin accounts have high-level access and can be a prime target for attackers. Monitor admin activities, especially those involving changes to security settings, user privileges, or access controls.
Application Usage Monitoring: Keep an eye on the usage of applications within your organization. Unusual application activity, such as a high number of downloads or an increase in usage outside of normal business hours, could indicate a potential security issue.
Privileged Identity Management: Monitor the lifecycle of privileged identities within your organization. This includes the creation, modification, and deletion of privileged accounts.
Access Review: Regularly review user access to various resources within your organization. This can help ensure that users only have access to the resources they need for their job functions, reducing the risk of insider threats.
In this guide, we provide high-level steps to get started with the new tooling. We will start by adding the custom plugin; we recommend organizations test this in their dev environment first.
Installation
Use the following steps to obtain and install the custom Identity Analyst Plugin for Copilot for Security:
1. Go to securitycopilot.microsoft.com.
2. Download the IdentitySecurityAnalyst.yml file from here.
3. Select the plugins icon in the lower-left corner.
4. Under Custom upload, select Upload plugin.
5. Select the Copilot for Security plugin and upload the IdentitySecurityAnalyst.yml file.
6. Click Add.
7. Under Custom, you will now see the plug-in. Ensure it is enabled.
The custom package contains the prompts covered in the sections below.
Let us get started with more use cases leveraging Copilot for Security capabilities:
User Risk Assessment
Fetches the user risk levels based on their activities. This could include sign-in attempts from unfamiliar locations, repeated failed sign-in attempts, or other suspicious behavior.
In Copilot for Security, you can either directly invoke the plugin by selecting the relevant skill under prompts > system capabilities, or type ‘/IdentityGetUserRiskAssesment’ as shown below:
A sample result will be:
User Sign-In Activities
Fetches user sign-in activities. This includes successful sign-ins, failed attempts, and the location and device used for sign-in. Unusual sign-in activity could be a sign of a potential security threat.
In Copilot for Security, you can either directly invoke the plugin by selecting the relevant skill under prompts > system capabilities, type ‘/IdentityGetSignInMonitoring’, or prompt with ‘Get users sign-in activities using the Identity analyst plugin’.
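For context, a skill like this typically wraps a KQL query over the Entra ID sign-in logs. Here is a minimal sketch of running a comparable query yourself with the Azure Monitor Query SDK for Python (SigninLogs is a standard Log Analytics table; the exact query inside the plugin may differ, and the workspace ID is a placeholder):

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Failed sign-ins by user and location over the last day (illustrative query).
query = """
SigninLogs
| where ResultType != "0"
| summarize FailedAttempts = count() by UserPrincipalName, Location
| order by FailedAttempts desc
"""

response = client.query_workspace("<workspace-id>", query, timespan=timedelta(days=1))
for table in response.tables:
    for row in table.rows:
        print(row)
```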
Admin Activities Monitoring
Fetches Admin Activity Monitoring logs. Admin accounts have high-level access and can be a prime target for attackers. Monitor all admin activities, especially those involving changes to security settings, user privileges, or access controls.
In Copilot for Security, you can either directly invoke the plugin by selecting the relevant skill under prompts > system capabilities, type ‘/IdentityGetAdminActivityMonitoring’, or prompt with ‘Get admin activities monitoring using the Identity analyst plugin’.
Applications Usage Monitoring
Fetches Application Usage Monitoring logs to keep an eye on the usage of applications within your organization. Unusual application activity, such as a high number of downloads or an increase in usage outside of normal business hours, could indicate a potential security issue.
In Copilot for Security, you can either directly invoke the plugin by selecting the relevant skill under prompts > system capabilities, type ‘/IdentityGetApplicationUsageMonitoring’, or prompt with ‘Get application usage monitoring using the Identity analyst plugin’.
Privileged Identity Management (PIM) Monitoring
Fetches Privileged Identity Management logs to monitor the lifecycle of privileged identities within your organization. This includes the creation, modification, and deletion of privileged accounts.
In Copilot for Security, you can either directly invoke the plugin by selecting the relevant skill under prompts > system capabilities, type ‘/IdentityPIMMonitoring’, or prompt with ‘Get Privileged Identity Management monitoring using the Identity analyst plugin’.
Access Review Monitoring
Fetches Access Review logs to regularly review user access to various resources within your organization. This can help ensure that users only have access to the resources they need for their job functions, reducing the risk of insider threats.
In Copilot for Security, you can either directly invoke the plugin by selecting the relevant skill under prompts > system capabilities, type ‘/IdentityAccessReviewMonitoring’, or prompt with ‘Get Access Review monitoring using the Identity analyst plugin’.
Conclusion
This plugin is based on KQL, which offers a relatively simple and scalable way to leverage the existing repositories of proven KQL queries within the Microsoft security ecosystem. One suggestion: customize the custom KQL plugin YML file to make the time range an input parameter from Copilot for Security instead of a hard-coded value. These queries can then serve as a basis for bringing AI enrichment to security data already present within Microsoft identity systems. For more details on Microsoft Copilot for Security custom plugins via KQL, please visit https://learn.microsoft.com/en-us/copilot/security/plugin-kql. Give it a go and give us your feedback so we can continuously improve the product for your benefit.
Enrollment for additional business location fails – support website
Hi there,
we are trying to enroll our US business location for CSP indirect reseller (for our DE location we are successfully registered and enrolled).
I created an Entra tenant and used the enrollment form, but I fail when completing the form to kick everything off. I receive the below error message:
We have one central website, but it won’t accept the entry. What can I provide to make this work?
I cannot even open a support request, because I end up in a closed form when I follow the red link:
Any recommendations and ideas are really welcome.
Thanks
Ann
Patient Tracker and Package Tracker
Hi,
I have two sheets: one tracks patient attendance, including which therapist attended to the patient; the other shows the type of package each patient has bought.
Every day I need to calculate the revenue generated by each therapist. I have attached both sheets to show the type of data generated by the system.
Patient Attendance Tracker
| Patient Name | Patient ID | Therapist | Department | Date |
| --- | --- | --- | --- | --- |
| Shyam Hani | 153 | Ryaan | Occupational Therapy | 02/10/2024 |
| Shyam Hani | 153 | Ryaan | Occupational Therapy | 04/09/2024 |
| Shyam Hani | 153 | Ryaan | Occupational Therapy | 06/09/2024 |
| Shyam Hani | 153 | Sanju | Speech Therapy | 02/10/2024 |
| Shyam Hani | 153 | Sanju | Speech Therapy | 04/09/2024 |
| Shyam Hani | 153 | Sanju | Speech Therapy | 05/10/2024 |
| Shyam Hani | 153 | Sanju | Speech Therapy | 06/09/2024 |
| Shyam Hani | 153 | Sanju | Speech Therapy | 07/09/2024 |
| Meera Hasan | 152 | Sanju | Speech Therapy | 09/10/2024 |
| Meera Hasan | 152 | Sanju | Speech Therapy | 09/10/2024 |
| Meera Hasan | 152 | Sanju | Speech Therapy | 10/08/2024 |
| Meera Hasan | 152 | Sanju | Speech Therapy | 11/09/2024 |
| Meera Hasan | 152 | Sanju | Speech Therapy | 11/09/2024 |
| Meera Hasan | 152 | Sanju | Speech Therapy | 11/10/2024 |
| Meera Hasan | 152 | Sanju | Speech Therapy | 11/10/2024 |
| Meera Hasan | 152 | Sanju | Speech Therapy | 12/10/2024 |
| Dev Mani | 112 | Sanju | Occupational Therapy | 01/10/2024 |
| Dev Mani | 112 | Sanju | Occupational Therapy | 02/10/2024 |
| Dev Mani | 112 | Sanju | Occupational Therapy | 04/10/2024 |
| Dev Mani | 112 | Sanju | Occupational Therapy | 08/10/2024 |
| Dev Mani | 112 | Sanju | Occupational Therapy | 09/10/2024 |
| Dev Mani | 112 | Sanju | Occupational Therapy | 10/09/2024 |
| Dev Mani | 112 | Sanju | Occupational Therapy | 10/10/2024 |
| Dev Mani | 112 | Ryaan | Occupational Therapy | 11/09/2024 |
| Dev Mani | 112 | Ryaan | Occupational Therapy | 11/10/2024 |
| Dev Mani | 112 | Ryaan | Occupational Therapy | 12/09/2024 |
| Dev Mani | 112 | Ryaan | Occupational Therapy | 01/10/2024 |
| Dev Mani | 112 | Ryaan | Occupational Therapy | 04/10/2024 |
| Dev Mani | 112 | Ryaan | Occupational Therapy | 08/10/2024 |
| Dev Mani | 112 | Ryaan | Occupational Therapy | 10/10/2024 |
Patient Price Tracker
| Patient Name | Therapist | Patient ID | Package From | Package To | Package Price | Package |
| --- | --- | --- | --- | --- | --- | --- |
| Shyam Hani | Sanju | 153 | Wednesday, 2 October 2024 | Wednesday, 4 September 2024 | 100 | Speech Therapy |
| Shyam Hani | Ryaan | 153 | Wednesday, 2 October 2024 | | 0 | Occupational Therapy |
| Meera Hasan | Sanju | 152 | Wednesday, 9 October 2024 | Saturday, 12 October 2024 | 200 | Occupational Therapy |
| Dev Mani | Sanju | 112 | Tuesday, 1 October 2024 | Tuesday, 8 October 2024 | 300 | Occupational Therapy |
| Dev Mani | Ryaan | 112 | Saturday, 27 July 2024 | Tuesday, 27 August 2024 | 400 | Occupational Therapy |
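One possible approach, sketched in Python with pandas under two stated assumptions (an attendance row belongs to the package whose date window contains it, and a package’s price is spread evenly across the sessions attended under it; the file names are placeholders):

```python
import pandas as pd

# File names are placeholders; columns match the tables above.
attendance = pd.read_excel("attendance.xlsx")
packages = pd.read_excel("packages.xlsx")

# Normalize dates (attendance dates are day-first, e.g. 02/10/2024 = 2 October).
attendance["Date"] = pd.to_datetime(attendance["Date"], dayfirst=True)
packages["Package From"] = pd.to_datetime(packages["Package From"])
packages["Package To"] = pd.to_datetime(packages["Package To"])

# Join each attendance row to its package by patient and therapist,
# keeping only visits that fall inside the package window.
merged = attendance.merge(packages, on=["Patient ID", "Therapist"])
merged = merged[(merged["Date"] >= merged["Package From"]) &
                (merged["Date"] <= merged["Package To"])]

# Spread each package's price evenly over the sessions attended under it,
# then total the per-session revenue by therapist.
sessions = merged.groupby(["Patient ID", "Therapist", "Package From"])["Date"].transform("count")
merged["Revenue"] = merged["Package Price"] / sessions
print(merged.groupby("Therapist")["Revenue"].sum())
```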
Surface 10 Pro Business – Driver controller sata
I have to format a Surface 10 Pro Business without using a recovery image but using the Windows 11 key. The SSD disk is not recognized because the Sata controller driver is missing. Can anyone tell me the model or the driver download link? Thank you
New Field in log
How can I get the “department” field in the AD log? I already have AD integrated with Wazuh! But the data from this field is not coming through!
thanks
win 10 build 19045.2787
Hello,
I am on Windows 10 22H2, build 19045.2787.
What are the steps to upgrade it? It is stuck.
Thanks
Planner Patch ETag Issue
Getting below error for planner task update operation
{"error":{"code":"","message":"The If-Match header contains an invalid value.","innerError":{"date":"2024-10-24T15:32:02","request-id":"b976210d-9970-4997-9e64-bef1c6c8e9d5","client-request-id":"b976210d-9970-4997-9e64-bef1c6c8e9d5"}}}
```csharp
string currentETag = "W/\"JzEtVGFzayAgQEBAQEBAQEBAQEBAQEBARCc=\"";
httpClient.DefaultRequestHeaders.Add("If-Match", currentETag);
```
I need help finding the right combination for passing the correct ETag. I have tried removing the backslash, adding double quotes, and other combinations given across articles.
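For reference, the underlying HTTP exchange looks like the following minimal Python sketch (using the requests library against Microsoft Graph; the token and task ID are placeholders). The key point is that If-Match must carry the @odata.etag value verbatim from a prior GET, including the W/ prefix and the embedded quotes:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
task_id = "<task-id>"      # hypothetical placeholder
token = "<access-token>"   # hypothetical placeholder

# Read the task first; Graph returns the current ETag in @odata.etag.
task = requests.get(f"{GRAPH}/planner/tasks/{task_id}",
                    headers={"Authorization": f"Bearer {token}"}).json()
etag = task["@odata.etag"]  # e.g. 'W/"JzEtVGFzayAg...Cc="'

# Send the update with the ETag verbatim in If-Match.
resp = requests.patch(
    f"{GRAPH}/planner/tasks/{task_id}",
    headers={
        "Authorization": f"Bearer {token}",
        "If-Match": etag,  # do not strip W/ or the inner quotes
        "Content-Type": "application/json",
    },
    json={"title": "Updated title"},
)
print(resp.status_code)
```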
The Future of AI: Deploying your LoRA Fine-tuned Llama 3.1 8B on Azure AI, why it’s a breeze!
The Future of AI: Distillation Just Got Easier
Part 3 – Deploying your LoRA Fine-tuned Llama 3.1 8B model, why it’s a breeze!
Learn how Azure AI makes it effortless to deploy your LoRA fine-tuned models using Azure AI. (🚀🔥 Github recipe repo).
By Cedric Vidal, Principal AI Advocate, Microsoft
Part of the Future of AI 🚀 series initiated by Marco Casalaina with his Exploring Multi-Agent AI Systems blog post.
A Llama on a rocket launched in space, generated using Azure OpenAI DALL-E 3
Welcome back to our series on leveraging Azure AI Studio to accelerate your AI development journey. In our previous posts, we’ve explored synthetic dataset generation and the process of fine-tuning models. Today, we’re diving into the crucial step that turns your hard work into actionable insights: deploying your fine-tuned model. In this installment, we’ll guide you through deploying your model using Azure AI Studio and the Python SDK, ensuring a seamless transition from development to production.
Why Deploying GPU Accelerated Inference Workloads is Hard
Deploying GPU-accelerated inference workloads comes with a unique set of challenges that make the process significantly more complex compared to standard CPU workloads. Below are some of the primary difficulties encountered:
GPU Resource Allocation: GPUs are specialized and limited resources, requiring precise allocation to avoid wastage and ensure efficiency. Unlike CPUs that can be easily provisioned in larger numbers, the specialized nature of GPUs means that effective allocation strategies are crucial to optimize performance.
GPU Scaling: Scaling GPU workloads is inherently more challenging due to the high cost and limited availability of GPU resources. It requires careful planning to balance cost efficiency with workload demands, unlike more straightforward CPU resource scaling.
Load Balancing for GPU Instances: Implementing load balancing for GPU-based tasks is complex due to the necessity of evenly distributing tasks across available GPU instances. This step is vital to prevent bottlenecks, avoid overload in certain instances, and ensure optimal performance of each GPU unit.
Model Partitioning and Sharding: Large models that cannot fit into a single GPU memory require partitioning and sharding. This process involves splitting the model across multiple GPUs, which introduces additional layers of complexity in terms of load distribution and resource management.
Containerization and Orchestration: While containerization simplifies the deployment process by packaging models and dependencies, managing GPU resources within containers and orchestrating them across nodes adds another layer of complexity. Effective orchestration setups need to be fine-tuned to handle the subtle dynamics of GPU resource utilization and management.
LoRA Adapter Integration: LoRA, which stands for Low-Rank Adaptation, is a powerful optimization technique that reduces the number of trainable parameters by learning low-rank update matrices on top of the frozen base weights (see the formula sketch after this list). This makes it efficient for fine-tuning large models with fewer resources. However, integrating LoRA adapters into deployment pipelines involves additional steps to efficiently store, load, and merge the lightweight adapters with the base model and serve the final model, which increases the complexity of the deployment process.
Monitoring GPU Inference Endpoints: Monitoring GPU inference endpoints is complex due to the need for specialized metrics to capture GPU utilization, memory bandwidth, and thermal limits, not to mention model specific metrics such as token counts or request counts. These metrics are vital for understanding performance bottlenecks and ensuring efficient operation but require intricate tools and expertise to collect and analyze accurately.
Model Specific Considerations: It’s important to acknowledge that the deployment process is often specific to the base model architecture you are working with. Each new version of a model or a different model vendor will require a fair amount of adaptations in your deployment pipeline. This could include changes in preprocessing steps, modifications in environment configurations, or adjustments in the integration or versions of third-party libraries. Therefore, it’s crucial to stay updated with the model documentation and vendor-specific deployment guidelines to ensure a smooth and efficient deployment process.
Model Versioning Complexity: Keeping track of multiple versions of a model can be intricate. Each version may exhibit distinct behaviors and performance metrics, necessitating thorough evaluation to manage updates, rollbacks, and compatibility with other systems. We’ll cover the subject of model evaluation more thoroughly in the next blog post. Another difficulty with versioning is storing the weights of the different LoRA adapters and keeping track of the versions of the base models they must be adapted onto.
Cost Planning: Planning the costs for GPU inference workloads is challenging due to the variable nature of GPU usage and the higher costs associated with GPU resources. Predicting the precise amount of GPU time required for inference under different workloads can be difficult, leading to unexpected expenses.
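As a brief aside on the LoRA adapter point above: in the standard LoRA formulation (notation from the original LoRA paper, recalled here for context), a frozen weight matrix is augmented with a trainable low-rank product, so only the small matrices A and B are trained and shipped as the adapter:

```latex
W' = W_0 + \Delta W = W_0 + \frac{\alpha}{r} B A,
\qquad B \in \mathbb{R}^{d \times r},\ A \in \mathbb{R}^{r \times k},\ r \ll \min(d, k)
```

For example, a 4096 × 4096 projection with rank r = 16 stores about 131K adapter parameters instead of roughly 16.8M, which is why adapters are cheap to store and swap but still require a merge (or on-the-fly application) step at serving time.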
Understanding and addressing these difficulties is crucial for successfully deploying GPU-accelerated inference workloads, ensuring that the full potential of GPU capabilities is harnessed.
Azure AI Serverless: A Game Changer
Azure AI Serverless is a game changer because it effectively addresses a lot of challenges with deploying GPU-accelerated inference workloads. By leveraging the serverless architecture, it abstracts away the complexities associated with GPU resource allocation, model specific deployment considerations, and API management. This means you can deploy your models without worrying about the underlying infrastructure management, allowing you to focus on your application’s needs. Additionally, Azure AI Serverless supports a diverse collection of models and abstracts away the choice and provisioning of GPU hardware accelerators, ensuring efficient and fast inference times. The platform’s integration with managed services enables robust container orchestration, simplifying the deployment process even further and enhancing overall operational efficiency.
Attractive pay as you go cost model
One of the standout features of Azure AI Serverless is its token-based cost model, which greatly simplifies cost planning. With token-based billing, you are charged based on the number of tokens processed by your model, making it easy to predict costs based on expected usage patterns. This model is particularly beneficial for applications with variable loads, as you only pay for what you use.
Because the managed infrastructure needs to keep LoRA adapters in memory and swap them on demand, fine-tuned serverless endpoints incur an additional per-hour cost, billed only while the endpoint is in use. This makes it easy to plan future bills based on your expected usage profile.
The hourly cost is also trending down: it has already dropped dramatically, from $3.09/hour for a Llama 2 7B based model to $0.74/hour for a Llama 3.1 8B based model.
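To make the cost planning concrete, here is a minimal back-of-the-envelope sketch. The per-token price below is an assumed illustrative value (actual token pricing varies by model and region), while the hourly figure is the Llama 3.1 8B rate cited above:

```python
def estimate_monthly_cost(
    tokens_per_month: int,
    active_hours_per_month: float,
    price_per_1k_tokens: float = 0.0003,  # assumed illustrative rate
    hourly_endpoint_cost: float = 0.74,   # Llama 3.1 8B figure cited above
) -> float:
    """Back-of-the-envelope cost estimate for a fine-tuned serverless endpoint."""
    token_cost = tokens_per_month / 1000 * price_per_1k_tokens
    hosting_cost = active_hours_per_month * hourly_endpoint_cost
    return token_cost + hosting_cost

# Example: 50M tokens/month with the endpoint active 200 hours/month.
print(f"${estimate_monthly_cost(50_000_000, 200):,.2f} per month")
```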
By paying attention to these critical factors, you can ensure that your model deployment is robust, secure, and capable of meeting the demands of your application.
Region Availability
When deploying your Llama 3.1 fine-tuned model, it’s important to consider the geographical regions where the model can be deployed. As of now, Azure AI Studio supports the deployment of Llama 3.1 fine-tuned models in the following regions: East US, East US 2, North Central US, South Central US, West US, and West US 3. Choosing a region that’s closer to your end-users can help reduce latency and improve performance. Ensure you select the appropriate region based on your target audience for optimal results.
For the most up-to-date information on region availability for other models, please refer to this guide on deploying models serverlessly.
Let’s get coding with Azure AI Studio and the Python SDK
Before proceeding to deployment, you’ll need a model that you have previously fine-tuned. One way is to use the process described in the two preceding installments of this fine-tuning blog post series: the first one covers synthetic dataset generation using RAFT and the second one covers fine-tuning. This ensures that you can fully benefit from the deployment steps using Azure AI Studio.
Note: All code samples that follow have been extracted from the 3_deploy.ipynb notebook of the raft-recipe GitHub repository. The snippets have been simplified and some intermediary steps left aside for ease of reading. You can either head over there, clone the repo and start experimenting right away or stick with me here for an overview.
Step 1: Set Up Your Environment
First, ensure you have the necessary libraries installed. You’ll need the Azure Machine Learning SDK for Python. You can install it using pip:
```bash
pip install azure-ai-ml
```
Next, you’ll need to import the required modules and authenticate your Azure ML workspace. This is standard, the MLClient is the gateway to the ML Workspace which gives you access to everything AI and ML on Azure.
```python
from azure.ai.ml import MLClient
from azure.identity import (
    DefaultAzureCredential,
    InteractiveBrowserCredential,
)
from azure.ai.ml.entities import MarketplaceSubscription, ServerlessEndpoint

try:
    credential = DefaultAzureCredential()
    credential.get_token("https://management.azure.com/.default")
except Exception as ex:
    credential = InteractiveBrowserCredential()

try:
    client = MLClient.from_config(credential=credential)
except:
    print("Please create a workspace configuration file in the current directory.")

# Get AzureML workspace object.
workspace = client._workspaces.get(client.workspace_name)
workspace_id = workspace._workspace_id
```
Step 2: Resolving the previously registered fine-tuned model
Before deploying, you need to resolve your fine-tuned model in the Azure ML workspace.
Since the fine-tuning job might still be running, you may want to wait for the model to be registered, here’s a simple helper function you can use.
```python
def wait_for_model(client, model_name):
    """Wait for the model to be available, typically waiting for a finetuning job to complete."""
    import time

    attempts = 0
    while True:
        try:
            model = client.models.get(model_name, label="latest")
            return model
        except:
            print(f"Model not found yet #{attempts}")
            attempts += 1
            time.sleep(30)
```
The above function is basic but will make sure your deployment can proceed as soon as your model becomes available.
```python
print(f"Waiting for fine tuned model {FINETUNED_MODEL_NAME} to complete training...")
model = wait_for_model(client, FINETUNED_MODEL_NAME)
print(f"Model {FINETUNED_MODEL_NAME} is ready")
```
Step 3: Subscribe to the model provider
Before deploying a model fine-tuned using a base model from a third-party non-Microsoft source, you need to subscribe to the model provider’s marketplace offering. This subscription allows you to access and use the model within Azure ML.
```python
print(f"Deploying model asset id {model_asset_id}")

from azure.core.exceptions import ResourceExistsError

marketplace_subscription = MarketplaceSubscription(
    model_id=base_model_id,
    name=subscription_name,
)

try:
    marketplace_subscription = client.marketplace_subscriptions.begin_create_or_update(
        marketplace_subscription
    ).result()
except ResourceExistsError as ex:
    print(f"Marketplace subscription {subscription_name} already exists for model {base_model_id}")
```
Details on how to construct the base_model_id and subscription_name are available in the 3_deploy.ipynb notebook.
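For a rough idea of their shape, here is a purely illustrative sketch; the exact values are constructed in the notebook, and the IDs below are hypothetical placeholders following the AzureML registry asset format:

```python
# Hypothetical example values -- see the notebook for the real construction.
base_model_id = "azureml://registries/azureml-meta/models/Meta-Llama-3.1-8B-Instruct"
subscription_name = "meta-llama-3-1-8b-instruct"  # assumed naming convention
```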
Step 4: Deploy the model as a serverless endpoint
This section manages the deployment of a serverless endpoint for your fine-tuned model using the Azure ML client. It checks for an existing endpoint and creates one if it doesn’t exist, then proceeds with the deployment.
```python
from azure.core.exceptions import ResourceNotFoundError

try:
    serverless_endpoint = client.serverless_endpoints.get(endpoint_name)
    print(f"Found existing endpoint {endpoint_name}")
except ResourceNotFoundError as ex:
    serverless_endpoint = ServerlessEndpoint(name=endpoint_name, model_id=model_asset_id)
    print("Waiting for deployment to complete...")
    serverless_endpoint = client.serverless_endpoints.begin_create_or_update(serverless_endpoint).result()
    print("Deployment complete")
```
Step 5: Check that the endpoint is correctly deployed
As part of a deployment pipeline, it is a good practice to include integration tests that check that the model is correctly deployed and fails fast instead of waiting for steps down the line to fail without context.
```python
import requests

url = f"{endpoint.scoring_uri}/v1/chat/completions"
prompt = "What do you know?"
payload = {
    "messages": [{"role": "user", "content": prompt}],
    "max_tokens": 1024,
}
headers = {"Content-Type": "application/json", "Authorization": endpoint_keys.primary_key}

response = requests.post(url, json=payload, headers=headers)
response.json()
```
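If you are wondering where endpoint and endpoint_keys come from, both are fetched with the SDK; a minimal sketch (get_keys is part of the azure-ai-ml serverless endpoint operations):

```python
endpoint = client.serverless_endpoints.get(endpoint_name)
endpoint_keys = client.serverless_endpoints.get_keys(endpoint_name)
print(f"Scoring URI: {endpoint.scoring_uri}")
```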
This code assumes that the deployed model is a chat model for simplicity. The code available in the 3_deploy.ipynb notebook is more generic and will cover both completion and chat models.
Conclusion
Deploying your fine-tuned model with Azure AI Studio and the Python SDK not only simplifies the process but also empowers you with unparalleled control, ensuring you have a robust and reliable platform for your deployment needs.
Stay tuned for our next blog post, in two weeks we will delve into assessing the performance of your deployed model through rigorous evaluation methodologies. Until then, head out to the Github repo and happy coding!
Question on Consolidation
Hello, could you please tell me: if I have the following data in multiple worksheets, how can I consolidate the data by priority category? Thank you
Official Exchange 2019 Training Course Inquiry
Salam
I have a question regarding official training for Exchange Server 2019. I’m aware that Microsoft offers various training materials for its products, but I couldn’t find any official course specifically designed for Exchange 2019 in the catalog.
Could someone confirm if there was ever an official training course or certification for Exchange Server 2019? I’ve seen training for previous versions like Exchange 2010, but it seems like there wasn’t anything equivalent for 2019. Any clarification would be appreciated.
Toward a Distributed AI Platform for 6G RAN
by Ganesh Ananthanarayanan, Xenofon Foukas, Bozidar Radunovic, Yongguang Zhang
Introduction to the Evolution of RAN
The development of Cellular Radio Access Networks (RAN) has reached a critical point with the transition to 5G and beyond. This shift is motivated by the need for telecommunications operators to lower their high capital and operating costs while also finding new ways to generate revenue. The introduction of 5G has transformed traditional, monolithic base stations by breaking them down into separate, virtualized components that can be deployed on standard, off-the-shelf hardware in various locations. This approach makes it easier to manage the network’s lifecycle and accelerates the release of new features. Additionally, 5G has promoted the use of open and programmable interfaces and introduced advanced technologies that expand network capacity and support a wide range of applications.
As we enter the era of 5G Advanced and 6G networks, the goal is to maximize the network’s potential by solving the complex issues brought by the added complexity of 5G and introducing new applications that offer unique value. In this emerging landscape, AI stands out as a critical component, with advances in generative AI drawing significant interest from the telecommunications sector. AI’s proficiency in pattern recognition, traffic prediction, and solving intractable problems like scheduling makes it an ideal solution for these and many other longstanding RAN challenges. There is a growing consensus that future mobile networks should be AI-native, with both industry and academia offering support for this trend. However, practical hurdles like data collection from distributed sources and handling the diverse characteristics of AI RAN applications remain obstacles to be overcome.
The Indispensable Role of AI in RAN
The need for AI in RAN is underscored by AI’s ability to optimize and enhance critical RAN functions like network performance, spectrum utilization, and compute resource management. AI serves as an alternative to traditional optimization methods, which struggle to cope with the explosion of search space due to complex scheduling, power control, and antenna assignments. With the infrastructure optimization problems introduced by 5G (e.g. server failures, software bugs), AI shows promise through predictive maintenance and energy efficiency management, presenting solutions to these challenges that were previously unattainable. Moreover, AI can leverage the open interfaces exposed by RAN functions, enabling third-party applications to tap into valuable RAN data, enhancing capabilities for additional use cases like user localization and security.
Distributed Edge Infrastructure and AI Deployment
As AI becomes increasingly integrated into RAN, choosing the optimal deployment location is crucial for performance. The deployment of AI applications in RAN depends on where the RAN infrastructure is located, ranging from the far edge to the cloud. Each location offers different computing power and has its own trade-offs in resource availability, bandwidth, latency, and privacy. These factors are important when deciding the best place to deploy AI applications, as they directly affect performance and responsiveness. For example, while the cloud provides more computing resources, it may also cause higher latency, which can be problematic for applications that need real-time data processing or quick decision-making.
Addressing the Challenges of Deploying AI in RAN
Deploying AI in RAN involves overcoming various challenges, particularly in the areas of data collection and application orchestration. The heterogeneity of AI applications’ input features makes data collection a complex task. Exposing raw data from all potential sources isn’t practical, as it would result in an overwhelming volume of data to be processed and transmitted. The current industry approach of utilizing standardized APIs for data collection is not always conducive to the development of AI-native applications. The standard set of coarse-grained data sources exposed through these APIs often fail to meet the nuanced requirements of AI-driven RAN solutions. This limitation forces developers to adapt their AI applications to the available data rather than collecting the data that would best serve the application’s needs.
The challenge of orchestrating AI RAN applications is equally daunting. The dispersed nature of the RAN infrastructure raises questions about where the various components of an AI application should reside. These questions require a careful assessment of the application’s compute requirements, response latency, privacy constraints, and the varied compute capabilities of the infrastructure. The complexity is further amplified by the need to accommodate multiple AI applications, each vying for the same infrastructure resources. Developers are often required to manually distribute these applications across the RAN, a process that is not scalable and hinders widespread deployment in production environments.
A Vision for a Distributed AI-Native RAN Platform
To address these challenges, we propose a vision for a distributed AI-native RAN platform that is designed to streamline the deployment of AI applications. This platform is built on the principles of flexibility and scalability, with a high-level architecture that includes dynamic data collection probes, AI processor runtimes, and an orchestrator that coordinates the platform’s operations. The proposed platform introduces programmable probes that can be injected at various points in the platform and RAN network functions to collect data tailored to the AI application’s requirements. This approach minimizes data volume and avoids delays associated with standardization processes.
The AI processor runtime is a pivotal component that allows for the flexible and seamless deployment of AI applications across the infrastructure. It abstracts the underlying compute resources and provides an environment for data ingestion, data exchange, execution, and lifecycle management. The runtime is designed to be deployed at any location, from the far edge to the cloud, and to handle both AI RAN and non-RAN AI applications.
The orchestrator is the component that brings all this together, managing the placement and migration of AI applications across various runtimes. It also considers the developer’s requirements and the infrastructure’s capabilities to optimize the overall utility of the platform. The orchestrator is dynamic, capable of adapting to changes in resource availability and application demands, and can incorporate various policies that balance compute and network load across the infrastructure.
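As a rough illustration of such a placement decision, the minimal Python sketch below scores candidate runtimes against an application’s latency and compute requirements. The greedy scoring rule and every name in it are assumptions made for this example, not part of the platform’s specification.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Runtime:
    name: str            # e.g. "far-edge-1", "regional-cloud"
    latency_ms: float    # round-trip latency to the RAN function
    free_cpus: int       # currently available compute

@dataclass
class AppRequirements:
    max_latency_ms: float
    cpus: int

def place(app: AppRequirements, runtimes: list[Runtime]) -> Optional[Runtime]:
    # Keep only runtimes that satisfy the latency and compute constraints.
    feasible = [r for r in runtimes
                if r.latency_ms <= app.max_latency_ms and r.free_cpus >= app.cpus]
    if not feasible:
        return None
    # Greedy utility: prefer the most residual compute, then lower latency.
    # A real orchestrator would also weigh privacy constraints, migration
    # cost, and contention between competing applications.
    return max(feasible, key=lambda r: (r.free_cpus - app.cpus, -r.latency_ms))

runtimes = [
    Runtime("far-edge-1", latency_ms=1.0, free_cpus=2),
    Runtime("regional-cloud", latency_ms=15.0, free_cpus=64),
]
app = AppRequirements(max_latency_ms=10.0, cpus=2)
print(place(app, runtimes))  # far-edge-1: the only runtime meeting the latency bound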
In articulating the vision for a distributed AI-native RAN platform, it is important to clarify that the proposed framework does not impose a specific architectural implementation. Instead, it defines high-level APIs and constructs that form the backbone of the platform’s functionality. These include a data ingestion API that facilitates the capture and input of data from various sources, a data exchange API that allows for the communication and transfer of data between different components of the platform, and a lifecycle management API that oversees the deployment, updating, and decommissioning of AI applications. The execution environment within the platform is designed to be flexible, promoting innovation and compatibility with major hardware architectures such as CPUs and GPUs. This flexibility ensures that the platform can support a wide range of AI applications and adapt to the evolving landscape of hardware technologies.
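One hypothetical way to picture those three constructs is as a set of abstract interfaces. The Python sketch below uses method names of our own invention, since the proposal defines the APIs only at a high level.

from abc import ABC, abstractmethod
from typing import Any, Callable

class DataIngestionAPI(ABC):
    """Captures data from probes and other sources into a runtime."""
    @abstractmethod
    def ingest(self, source_id: str, record: Any) -> None: ...

class DataExchangeAPI(ABC):
    """Moves data between AI applications and platform components."""
    @abstractmethod
    def publish(self, topic: str, payload: Any) -> None: ...

    @abstractmethod
    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None: ...

class LifecycleManagementAPI(ABC):
    """Deploys, updates, and decommissions AI applications."""
    @abstractmethod
    def deploy(self, app_id: str, artifact: bytes) -> None: ...

    @abstractmethod
    def update(self, app_id: str, artifact: bytes) -> None: ...

    @abstractmethod
    def decommission(self, app_id: str) -> None: ...

Keeping the contracts abstract is what lets a far-edge runtime and a cloud runtime implement them differently while remaining interchangeable to the orchestrator.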
Moreover, to demonstrate the feasibility and potential of the proposed platform, we have internally prototyped a specialized and efficient implementation of the AI processor targeted at the far edge. This prototype is designed to run with limited CPU resources, optimizing resource use while maintaining high performance. It demonstrates that the AI processor runtime principles can be implemented effectively to meet the specific needs of the far edge, where resources are limited and real-time processing is crucial. This specialized implementation exemplifies the targeted innovation that the platform emphasizes, showcasing how the flexible execution environment can be tailored to address specific challenges within the RAN ecosystem.
Balancing Open and Closed Architectures in RAN Integration
The proposed AI platform is adaptable, capable of fitting into open architectures that adhere to O-RAN standards as well as proprietary designs controlled by RAN vendors. This flexibility allows for a range of deployment scenarios: a fully O-RAN-compliant implementation that encourages third-party development, a fully proprietary model, or a hybrid that balances vendor control and innovation. In each scenario, the distributed AI platform can be customized to suit the specific needs of the infrastructure provider or adhere to the guidelines of standardization bodies.
Concluding Thoughts on AI’s Future in 6G RAN
The integration of AI into the RAN is central to the 6G vision, with the potential to transform network management, performance optimization, and application support. While deploying AI solutions in RAN presents challenges, a distributed AI-native platform offers a pathway to overcome these obstacles. By fostering discussions around the architecture of a 6G AI platform, we can guide standards bodies and vendors in exploring opportunities for AI integration. The proposed platform is intentionally flexible, allowing for customization to meet the diverse needs and constraints of different operators and vendors.
The future of RAN will depend on its ability to dynamically adapt to changing conditions and demands. AI is essential to this transformation, providing the intelligence and adaptability needed to manage the complexity of next-generation networks. As the industry progresses towards AI-native 6G networks, embracing both the challenges and opportunities that AI brings will be crucial. The proposed distributed AI platform marks a significant step forward, aiming to unlock the full potential of RAN through intelligent, flexible, and scalable solutions.
Innovation in AI and the commitment to an AI-native RAN are key to ensuring the telecommunications industry and the telecommunications networks of the future are efficient, cost-effective, and capable of supporting advanced services and applications. Collaborative efforts from researchers and industry experts will be vital in refining this vision and making the potential of AI in 6G RAN a reality.
As we approach the 6G era, integrating AI into RAN architectures is not merely an option but a necessity. The distributed AI platform outlined here serves as a blueprint for the future, where AI is seamlessly integrated into RAN, driving innovation and enhancing the capabilities of cellular networks to meet the demands of next-generation users and applications.
For more details, please check the full paper.
Acknowledgements
The project is partially funded by the UK Department for Science, Innovation & Technology (DSIT) under the Open Network Ecosystem Competition (ONE) programme.
Inclusive Events: Best Practices from Korea Influencer Day
On a beautiful day in Korea, we brought together a diverse group of Microsoft MVPs (Most Valuable Professional), MLSAs (Microsoft Learn Student Ambassadors), RDs (Regional Directors), Microsoft employees, and guests from Japan to create a truly inclusive and inspiring event: Korea Influencer Day. The gathering aimed to build cross-border connections and foster collaboration while empowering communities with shared knowledge and tech trends. With a carefully crafted agenda, we succeeded in sparking meaningful conversations among university students, community leaders, and professionals.
In this post, we’ll walk through the event highlights and share best practices on how to organize inclusive in-person community events. We will also reflect on the valuable feedback received to inspire others to create impactful community gatherings.
Memorable Moments and Reflections
1. Inspiring Cross-Cultural Exchange
A defining feature of the event was the meaningful collaboration between Korean and Japanese MVPs. Kazuyuki Miyake, Japanese Microsoft Azure MVP and RD, and Ryota Nakamura, Japanese Business Applications MVP, introduced their local community trends to Korean community leaders.
Kazuyuki shared his experiences and said, “Participating in Influencer Day in Korea was a milestone. Sharing insights from Japan’s AOAI Dev Day that I successfully organized and proposing the next edition in Seoul marked great progress. I believe collaboration between Microsoft MVPs and RDs can spark a powerful movement. I was especially impressed by the proactive Korean Microsoft Learn Student Ambassadors, whose enthusiasm and curiosity promise a bright future.”
2. Networking through Speed Dating: A Surprising Success
Initially met with hesitation, the speed dating session turned out to be a highlight. It encouraged conversations between individuals from different backgrounds, leading to insights and connections that may not have otherwise emerged. MLSAs engaged with MVPs, attendees shared cultural perspectives between Korea and Japan, and discussions sparked about future collaborations.
JinSeok Kim, a Korean Developer Technologies MVP, who also played a key role as a translator between Korean and Japanese attendees, offered valuable feedback for future events: “While the format encouraged organic interaction, some feedback suggested adding conversation starters or a topic-drawing activity to make it easier for shy participants to dive into meaningful discussions.”
Atsushi Yokohama, an AI Platform MVP from Japan, visited Seoul for the first time to connect with community leaders in Korea. He shared his experience of the event, saying, “It was my first time interacting with Microsoft MVPs from Korea, but I’m grateful to have been able to engage in friendly technical discussions with all of them. This experience has definitely boosted my motivation. I now feel inspired to help strengthen community interactions across Asia.”
3. Empowering the Next Generation of Leaders
The event provided invaluable exposure for Korean MLSA students, whose energy and curiosity left a lasting impression. Many expressed their ambition to grow within the community; one MLSA student, Minseok Song, left the event with a newly formed goal of achieving GOLD MLSA status this year.
He continued his reflections and said, “At the event, I asked several questions while talking with the MVPs, and everyone was kind enough to explain things, making it a productive and rewarding experience for me. These conversations inspired me to become someone who can help others, just like you and the MVPs.” This reflection shows how inclusive events can inspire future leaders by connecting them with role models and mentors.
4. Female Tech Influencers and Expanding Community Impact
One of the most impactful sessions was the speech by female tech influencers, highlighting the importance of diversity and gender inclusiveness in the tech space. Representation matters, and hearing from these leaders not only inspired attendees but also promoted the idea that diverse voices are key to creating a thriving tech ecosystem.
The panel discussion on increasing community impact through collaboration also underscored the potential of generative AI to transform communities across Korea and Japan, opening doors for future joint initiatives.
SungHo You, Microsoft Technical Trainer, and Justin Yoo, Microsoft Cloud Advocate, who participated in the event, shared their thoughts: “The Korea Influencer Day was a pivotal event for the Korean developer community. It brought together diverse community leaders, fostering meaningful interactions, empathy, and moments of joy, especially with Japanese MVPs. I want to particularly commend the efforts to promote gender diversity within the Microsoft tech community, which was positively influenced by the collaboration between Microsoft and the SA team.”
Best Practices for Organizing Inclusive In-Person Events
Drawing on the success of Korea Influencer Day, here are some key practices to consider when planning inclusive events:
Curate a Diverse Agenda
Ensure that the schedule reflects a range of topics and speakers from various backgrounds, including professionals, students, and community leaders.
Highlight underrepresented voices, such as female tech leaders or community members from different regions or fields.
Design for Interactivity and Connection
Incorporate speed networking sessions or icebreaker activities to foster interaction among attendees from different backgrounds.
Use creative formats like Show & Tell or small-group discussions to encourage knowledge sharing.
Provide Conversation Starters or Prompts
Offer topic cards or a discussion board to spark conversations, helping participants break the ice during networking sessions.
Create personalized introductions to connect individuals based on shared interests.
Make Cross-Cultural Exchange a Priority
If attendees come from diverse regions or countries, include sessions that promote cultural understanding, such as cultural exchange talks or panels discussing shared challenges and solutions.
Support Newcomers and Aspiring Leaders
Engage with students and newcomers, offering mentorship opportunities to help them grow within the community.
Recognize and celebrate their achievements to encourage continued participation.
Balance Structure with Flexibility
While structured agendas are important, allow time for unstructured networking to enable organic connections and deeper conversations.
Gather and Act on Feedback
Ask attendees for feedback to understand what worked well and where improvements can be made.
Implement these learnings in future events to enhance inclusiveness and engagement.
From sparking creativity through stories of personal tech projects to inspiring students to become future leaders, Korea Influencer Day demonstrated the value of bringing people together across cultures, backgrounds, and interests.
By designing events that celebrate diversity, foster interaction, and empower individuals, we can create meaningful experiences that have a lasting impact on communities. Whether you’re organizing a small community meetup or a large-scale event, the lessons from Korea Influencer Day can guide you in creating an environment where everyone feels welcome, heard, and inspired to contribute.
What’s next? As one participant from Japan suggested, we can look forward to the next edition taking place in Seoul. Until then, let’s continue building bridges and sharing knowledge to shape the future together.
AWS CDK Risk: Exploiting a Missing S3 Bucket Allowed Account Takeover
In June 2024, we uncovered a security issue related to the AWS Cloud Development Kit (CDK), an open-source project. This discovery adds to the six other vulnerabilities we discovered within AWS services. The impact of this issue could, in certain scenarios (outlined in the blog), allow an attacker to gain administrative access to a target AWS account, resulting in a full account takeover.
pivot tables excel
Hello. I have a question. I subscribed to the Microsoft 365 paid plan and I want to work with Excel pivot tables, but the menu only offers me “insert tables”, not “pivot tables”. I’m working on my Samsung tablet. How can I use pivot tables? I’m paying to use that feature, but I can’t because it’s not in the Excel menu.
Microsoft Teams/Lists/Forms
Hello,
I need help with a few things.
1. When someone fills out the form (on the right), it populates into the list (on the left). When I comment on the populated row within the list, the user I am tagging does not get a notification. How can I fix this? I do not want anyone else to have access to the full list, just to my comment.
2. Is there a way to create a workflow for approvals? For example, if it’s a primary buyer, I want these 3 people to approve in order, but if it’s an investor, I want these 4 people to approve in order.