Tag Archives: microsoft
Enrollment for additional business location fails – support website
Hi there,
We are trying to enroll our US business location as a CSP indirect reseller (our DE location is already successfully registered and enrolled).
I created an Entra tenant and used the enrollment form, but I fail when completing the form to kick everything off. I receive the below error message:
We have one central website, but the form won't accept the entry. What can I provide to make this work?
I cannot even open a support request, because I end up in a closed form when I follow the red link:
Any recommendations and ideas are really welcome.
Thanks
Ann
Patient Tracker and Package Tracker
Hi,
I have two sheets: one that tracks patient attendance, including which therapist attended to the patient, and another that records the type of package each patient has bought.
Every day I need to calculate the revenue generated by each therapist. I have attached both sheets to show the type of data that is being generated from the system.
Patient Attendance Tracker
Patient Name | Patient ID | Therapist | Department | Date
Shyam Hani | 153 | Ryaan | Occupational Therapy | 02/10/2024
Shyam Hani | 153 | Ryaan | Occupational Therapy | 04/09/2024
Shyam Hani | 153 | Ryaan | Occupational Therapy | 06/09/2024
Shyam Hani | 153 | Sanju | Speech Therapy | 02/10/2024
Shyam Hani | 153 | Sanju | Speech Therapy | 04/09/2024
Shyam Hani | 153 | Sanju | Speech Therapy | 05/10/2024
Shyam Hani | 153 | Sanju | Speech Therapy | 06/09/2024
Shyam Hani | 153 | Sanju | Speech Therapy | 07/09/2024
Meera Hasan | 152 | Sanju | Speech Therapy | 09/10/2024
Meera Hasan | 152 | Sanju | Speech Therapy | 09/10/2024
Meera Hasan | 152 | Sanju | Speech Therapy | 10/08/2024
Meera Hasan | 152 | Sanju | Speech Therapy | 11/09/2024
Meera Hasan | 152 | Sanju | Speech Therapy | 11/09/2024
Meera Hasan | 152 | Sanju | Speech Therapy | 11/10/2024
Meera Hasan | 152 | Sanju | Speech Therapy | 11/10/2024
Meera Hasan | 152 | Sanju | Speech Therapy | 12/10/2024
Dev Mani | 112 | Sanju | Occupational Therapy | 01/10/2024
Dev Mani | 112 | Sanju | Occupational Therapy | 02/10/2024
Dev Mani | 112 | Sanju | Occupational Therapy | 04/10/2024
Dev Mani | 112 | Sanju | Occupational Therapy | 08/10/2024
Dev Mani | 112 | Sanju | Occupational Therapy | 09/10/2024
Dev Mani | 112 | Sanju | Occupational Therapy | 10/09/2024
Dev Mani | 112 | Sanju | Occupational Therapy | 10/10/2024
Dev Mani | 112 | Ryaan | Occupational Therapy | 11/09/2024
Dev Mani | 112 | Ryaan | Occupational Therapy | 11/10/2024
Dev Mani | 112 | Ryaan | Occupational Therapy | 12/09/2024
Dev Mani | 112 | Ryaan | Occupational Therapy | 01/10/2024
Dev Mani | 112 | Ryaan | Occupational Therapy | 04/10/2024
Dev Mani | 112 | Ryaan | Occupational Therapy | 08/10/2024
Dev Mani | 112 | Ryaan | Occupational Therapy | 10/10/2024
Patient Price Tracker
Patient Name | Therapist | Patient ID | Package From | Package To | Package Price | Package
Shyam Hani | Sanju | 153 | Wednesday, 2 October 2024 | Wednesday, 4 September 2024 | 100 | Speech Therapy
Shyam Hani | Ryaan | 153 | Wednesday, 2 October 2024 | | 0 | Occupational Therapy
Meera Hasan | Sanju | 152 | Wednesday, 9 October 2024 | Saturday, 12 October 2024 | 200 | Occupational Therapy
Dev Mani | Sanju | 112 | Tuesday, 1 October 2024 | Tuesday, 8 October 2024 | 300 | Occupational Therapy
Dev Mani | Ryaan | 112 | Saturday, 27 July 2024 | Tuesday, 27 August 2024 | 400 | Occupational Therapy
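One possible approach, sketched below in pandas, is to join each attendance row to the package that covers it and prorate the package price over the sessions attended in that window. This is only a sketch of one interpretation of revenue per therapist; the CSV file names, day-first dates, and the proration rule are assumptions, not part of the question.

# Hedged sketch: prorate each package's price over the sessions attended
# inside its window, then total revenue per therapist per day.
import pandas as pd

attendance = pd.read_csv("attendance.csv", dayfirst=True, parse_dates=["Date"])
packages = pd.read_csv("packages.csv", dayfirst=True,
                       parse_dates=["Package From", "Package To"])

# Join each session to the matching package (same patient and therapist),
# keeping only sessions that fall inside the package window.
merged = attendance.merge(packages, on=["Patient ID", "Therapist"])
merged = merged[(merged["Date"] >= merged["Package From"]) &
                (merged["Date"] <= merged["Package To"])]

# Per-session rate = package price / sessions attended within the window.
sessions_in_pkg = merged.groupby(
    ["Patient ID", "Therapist", "Package From"])["Date"].transform("count")
merged["Revenue"] = merged["Package Price"] / sessions_in_pkg

# Daily revenue by therapist.
daily_revenue = merged.groupby(["Therapist", merged["Date"].dt.date])["Revenue"].sum()
print(daily_revenue)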
Surface Pro 10 for Business – SATA controller driver
I have to format a Surface Pro 10 for Business without using a recovery image, using only the Windows 11 key. The SSD disk is not recognized because the SATA controller driver is missing. Can anyone tell me the model or the driver download link? Thank you
New Field in log
How can I get the “department” field in the AD log? I already have AD integrated with Wazuh! But the data from this field is not coming through!
thanks
win 10 build 19045.2787
Hello,
I am on Windows 10 22H2, build 19045.2787.
What are the steps to upgrade it? It is stuck on this build.
Thanks
Planner Patch ETag Issue
I am getting the below error for a Planner task update operation:
{"error":{"code":"","message":"The If-Match header contains an invalid value.","innerError":{"date":"2024-10-24T15:32:02","request-id":"b976210d-9970-4997-9e64-bef1c6c8e9d5","client-request-id":"b976210d-9970-4997-9e64-bef1c6c8e9d5"}}}
string currentETag = "W/\"JzEtVGFzayAgQEBAQEBAQEBAQEBAQEBARCc=\"";
httpClient.DefaultRequestHeaders.Add("If-Match", currentETag);
I need help finding the right way to pass the correct ETag. I have tried removing the backslash, adding double quotes, and other combinations suggested across articles.
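For reference, here is a minimal sketch of an update that Graph accepts: read the task, take its @odata.etag property verbatim (weak-validator prefix W/ and embedded double quotes included), and send it back unmodified in If-Match. The sketch uses Python with the requests library purely for illustration; in C#, the same value simply needs escaping, as in the corrected snippet above, rather than re-quoting or stripping.

# Minimal sketch, assuming a valid Graph access token and task id (placeholders below).
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
task_id = "<task-id>"       # placeholder
token = "<access-token>"    # placeholder
auth = {"Authorization": f"Bearer {token}"}

# Read the task; @odata.etag is exactly what If-Match expects,
# e.g. W/"JzEtVGFzayAgQEBAQEBAQEBAQEBAQEBARCc=" -- do not strip or add quotes.
task = requests.get(f"{GRAPH}/planner/tasks/{task_id}", headers=auth).json()
etag = task["@odata.etag"]

resp = requests.patch(
    f"{GRAPH}/planner/tasks/{task_id}",
    headers={**auth, "If-Match": etag, "Content-Type": "application/json"},
    json={"title": "Updated title"},
)
print(resp.status_code)  # 204 No Content on success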
The Future of AI: Deploying your LoRA Fine-tuned Llama 3.1 8B on Azure AI, why it’s a breeze!
The Future of AI: Distillation Just Got Easier
Part 3 – Deploying your LoRA Fine-tuned Llama 3.1 8B model, why it’s a breeze!
Learn how Azure AI makes it effortless to deploy your LoRA fine-tuned models (🚀🔥 GitHub recipe repo).
By Cedric Vidal, Principal AI Advocate, Microsoft
Part of the Future of AI 🚀 series initiated by Marco Casalaina with his Exploring Multi-Agent AI Systems blog post.
A Llama on a rocket launched in space, generated using Azure OpenAI DALL-E 3
Welcome back to our series on leveraging Azure AI Studio to accelerate your AI development journey. In our previous posts, we’ve explored synthetic dataset generation and the process of fine-tuning models. Today, we’re diving into the crucial step that turns your hard work into actionable insights: deploying your fine-tuned model. In this installment, we’ll guide you through deploying your model using Azure AI Studio and the Python SDK, ensuring a seamless transition from development to production.
Why Deploying GPU-Accelerated Inference Workloads Is Hard
Deploying GPU-accelerated inference workloads comes with a unique set of challenges that make the process significantly more complex compared to standard CPU workloads. Below are some of the primary difficulties encountered:
GPU Resource Allocation: GPUs are specialized and limited resources, requiring precise allocation to avoid wastage and ensure efficiency. Unlike CPUs that can be easily provisioned in larger numbers, the specialized nature of GPUs means that effective allocation strategies are crucial to optimize performance.
GPU Scaling: Scaling GPU workloads is inherently more challenging due to the high cost and limited availability of GPU resources. It requires careful planning to balance cost efficiency with workload demands, unlike more straightforward CPU resource scaling.
Load Balancing for GPU Instances: Implementing load balancing for GPU-based tasks is complex due to the necessity of evenly distributing tasks across available GPU instances. This step is vital to prevent bottlenecks, avoid overload in certain instances, and ensure optimal performance of each GPU unit.
Model Partitioning and Sharding: Large models that cannot fit into a single GPU memory require partitioning and sharding. This process involves splitting the model across multiple GPUs, which introduces additional layers of complexity in terms of load distribution and resource management.
Containerization and Orchestration: While containerization simplifies the deployment process by packaging models and dependencies, managing GPU resources within containers and orchestrating them across nodes adds another layer of complexity. Effective orchestration setups need to be fine-tuned to handle the subtle dynamics of GPU resource utilization and management.
LoRA Adapter Integration: LoRA, which stands for Low-Rank Adaptation, is a powerful optimization technique that reduces the number of trainable parameters by representing weight updates as products of low-rank matrices, making it efficient to fine-tune large models with fewer resources. However, integrating LoRA adapters into deployment pipelines involves additional steps to efficiently store, load, and merge the lightweight adapters with the base model and serve the final model, which increases the complexity of the deployment process. (A short numeric sketch of the LoRA idea follows this list.)
Monitoring GPU Inference Endpoints: Monitoring GPU inference endpoints is complex due to the need for specialized metrics to capture GPU utilization, memory bandwidth, and thermal limits, not to mention model-specific metrics such as token counts or request counts. These metrics are vital for understanding performance bottlenecks and ensuring efficient operation but require intricate tools and expertise to collect and analyze accurately.
Model-Specific Considerations: It's important to acknowledge that the deployment process is often specific to the base model architecture you are working with. Each new version of a model or a different model vendor will require a fair amount of adaptation in your deployment pipeline. This could include changes in preprocessing steps, modifications in environment configurations, or adjustments in the integration or versions of third-party libraries. Therefore, it's crucial to stay updated with the model documentation and vendor-specific deployment guidelines to ensure a smooth and efficient deployment process.
Model Versioning Complexity: Keeping track of multiple versions of a model can be intricate. Each version may exhibit distinct behaviors and performance metrics, necessitating thorough evaluation to manage updates, rollbacks, and compatibility with other systems. We’ll cover the subject of model evaluation more thoroughly in the next blog post. Another difficulty with versioning is storing the weights of the different LoRA adapters and keeping track of the versions of the base models they must be adapted onto.
Cost Planning: Planning the costs for GPU inference workloads is challenging due to the variable nature of GPU usage and the higher costs associated with GPU resources. Predicting the precise amount of GPU time required for inference under different workloads can be difficult, leading to unexpected expenses.
Understanding and addressing these difficulties is crucial for successfully deploying GPU-accelerated inference workloads, ensuring that the full potential of GPU capabilities is harnessed.
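Before moving on, here is the short numeric sketch of the LoRA idea referenced above (NumPy, with illustrative shapes): the frozen base weights W are adapted by a low-rank product B·A, so only the two small factors need to be trained, stored as the adapter, and merged at deployment time.

# Minimal numeric sketch of LoRA (illustrative shapes, not a training loop).
import numpy as np

d, k, r = 4096, 4096, 8           # base weight shape and LoRA rank
W = np.random.randn(d, k)         # frozen base weights
A = np.random.randn(r, k) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))              # zero-initialized so training starts from W

x = np.random.randn(k)
y = (W + B @ A) @ x               # forward pass with the adapter merged in

# The adapter is tiny compared with the base weights it adapts:
print(f"base params: {W.size:,}  adapter params: {A.size + B.size:,}")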
Azure AI Serverless: A Game Changer
Azure AI Serverless is a game changer because it effectively addresses many of the challenges of deploying GPU-accelerated inference workloads. By leveraging a serverless architecture, it abstracts away the complexities associated with GPU resource allocation, model-specific deployment considerations, and API management. This means you can deploy your models without worrying about the underlying infrastructure, allowing you to focus on your application's needs. Additionally, Azure AI Serverless supports a diverse collection of models and abstracts away the choice and provisioning of GPU hardware accelerators, ensuring efficient and fast inference times. The platform's integration with managed services enables robust container orchestration, simplifying the deployment process even further and enhancing overall operational efficiency.
Attractive pay-as-you-go cost model
One of the standout features of Azure AI Serverless is its token-based cost model, which greatly simplifies cost planning. With token-based billing, you are charged based on the number of tokens processed by your model, making it easy to predict costs based on expected usage patterns. This model is particularly beneficial for applications with variable loads, as you only pay for what you use.
Because the managed infrastructure needs to keep LoRA adapters in memory and swap them on demand, there is an additional per-hour cost associated with fine-tuned serverless endpoints, but it is billed by the hour only while the endpoint is being used. This makes it easy to plan future bills based on your expected usage profile.
The hourly cost is also trending downward: it has already dropped dramatically, from $3.09/hour for a Llama 2 7B based model to $0.74/hour for a Llama 3.1 8B based model.
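As a back-of-the-envelope illustration, the bill for a fine-tuned serverless endpoint is roughly the hourly charge for the hours it is in use plus the token charges. A toy estimate follows; only the $0.74/hour figure comes from the text above, while the per-token rates are hypothetical placeholders, so check the current marketplace pricing for real numbers.

# Toy cost estimate for a fine-tuned serverless endpoint.
HOURLY_RATE = 0.74             # $/hour while in use (Llama 3.1 8B figure quoted above)
PRICE_PER_1K_INPUT = 0.0003    # hypothetical placeholder, $/1K input tokens
PRICE_PER_1K_OUTPUT = 0.0006   # hypothetical placeholder, $/1K output tokens

def estimate_monthly_cost(hours_used, input_tokens, output_tokens):
    """Rough monthly bill: hourly endpoint charge plus token-based charges."""
    return (hours_used * HOURLY_RATE
            + input_tokens / 1000 * PRICE_PER_1K_INPUT
            + output_tokens / 1000 * PRICE_PER_1K_OUTPUT)

# Example: endpoint busy 8 hours a day for 22 business days, ~5M tokens in, ~1M out.
print(f"${estimate_monthly_cost(8 * 22, 5_000_000, 1_000_000):,.2f}")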
By paying attention to these critical factors, you can ensure that your model deployment is robust, secure, and capable of meeting the demands of your application.
Region Availability
When deploying your Llama 3.1 fine-tuned model, it’s important to consider the geographical regions where the model can be deployed. As of now, Azure AI Studio supports the deployment of Llama 3.1 fine-tuned models in the following regions: East US, East US 2, North Central US, South Central US, West US, and West US 3. Choosing a region that’s closer to your end-users can help reduce latency and improve performance. Ensure you select the appropriate region based on your target audience for optimal results.
For the most up-to-date information on region availability for other models, please refer to this guide on deploying models serverlessly.
Let’s get coding with Azure AI Studio and the Python SDK
Before proceeding to deployment, you’ll need a model that you have previously fine-tuned. One way is to use the process described in the two preceding installments of this fine-tuning blog post series: the first one covers synthetic dataset generation using RAFT and the second one covers fine-tuning. This ensures that you can fully benefit from the deployment steps using Azure AI Studio.
Note: All code samples that follow have been extracted from the 3_deploy.ipynb notebook of the raft-recipe GitHub repository. The snippets have been simplified, and some intermediate steps are omitted for readability. You can either head over there, clone the repo, and start experimenting right away, or stick with me here for an overview.
Step 1: Set Up Your Environment
First, ensure you have the necessary libraries installed. You’ll need the Azure Machine Learning SDK for Python. You can install it using pip:
pip install azure-ai-ml
Next, you'll need to import the required modules and authenticate with your Azure ML workspace. This is standard: the MLClient is the gateway to the ML workspace, which gives you access to everything AI and ML on Azure.
from azure.ai.ml import MLClient
from azure.identity import (
    DefaultAzureCredential,
    InteractiveBrowserCredential,
)
from azure.ai.ml.entities import MarketplaceSubscription, ServerlessEndpoint

try:
    credential = DefaultAzureCredential()
    credential.get_token("https://management.azure.com/.default")
except Exception:
    # Fall back to interactive login when no default credential is available.
    credential = InteractiveBrowserCredential()

try:
    client = MLClient.from_config(credential=credential)
except Exception:
    print("Please create a workspace configuration file in the current directory.")

# Get the AzureML workspace object.
workspace = client._workspaces.get(client.workspace_name)
workspace_id = workspace._workspace_id
Step 2: Resolving the previously registered fine-tuned model
Before deploying, you need to resolve your fine-tuned model in the Azure ML workspace.
Since the fine-tuning job might still be running, you may want to wait for the model to be registered. Here's a simple helper function you can use.
def wait_for_model(client, model_name):
    """Wait for the model to be available, typically waiting for a fine-tuning job to complete."""
    import time
    attempts = 0
    while True:
        try:
            model = client.models.get(model_name, label="latest")
            return model
        except Exception:
            print(f"Model not found yet #{attempts}")
            attempts += 1
            time.sleep(30)
The above function is basic but will make sure your deployment can proceed as soon as your model becomes available.
print(f"Waiting for fine-tuned model {FINETUNED_MODEL_NAME} to complete training...")
model = wait_for_model(client, FINETUNED_MODEL_NAME)
print(f"Model {FINETUNED_MODEL_NAME} is ready")
Step 3: Subscribe to the model provider
Before deploying a model fine-tuned using a base model from a third-party non-Microsoft source, you need to subscribe to the model provider’s marketplace offering. This subscription allows you to access and use the model within Azure ML.
print(f"Deploying model asset id {model_asset_id}")

from azure.core.exceptions import ResourceExistsError

marketplace_subscription = MarketplaceSubscription(
    model_id=base_model_id,
    name=subscription_name,
)

try:
    marketplace_subscription = client.marketplace_subscriptions.begin_create_or_update(
        marketplace_subscription
    ).result()
except ResourceExistsError:
    print(f"Marketplace subscription {subscription_name} already exists for model {base_model_id}")
Details on how to construct the base_model_id and subscription_name are available in the 3_deploy.ipynb notebook.
Step 4: Deploy the model as a serverless endpoint
This section manages the deployment of a serverless endpoint for your fine-tuned model using the Azure ML client. It checks for an existing endpoint and creates one if it doesn’t exist, then proceeds with the deployment.
from azure.core.exceptions import ResourceNotFoundError

try:
    serverless_endpoint = client.serverless_endpoints.get(endpoint_name)
    print(f"Found existing endpoint {endpoint_name}")
except ResourceNotFoundError:
    serverless_endpoint = ServerlessEndpoint(name=endpoint_name, model_id=model_asset_id)
    print("Waiting for deployment to complete...")
    serverless_endpoint = client.serverless_endpoints.begin_create_or_update(serverless_endpoint).result()
    print("Deployment complete")
Step 5: Check that the endpoint is correctly deployed
As part of a deployment pipeline, it is good practice to include integration tests that check that the model is correctly deployed and fail fast, instead of waiting for steps further down the line to fail without context.
import requests

# endpoint and endpoint_keys come from intermediate steps omitted here (see the
# notebook): the resolved endpoint object and its authentication keys.
url = f"{endpoint.scoring_uri}/v1/chat/completions"

prompt = "What do you know?"

payload = {
    "messages": [{"role": "user", "content": prompt}],
    "max_tokens": 1024,
}
headers = {"Content-Type": "application/json", "Authorization": endpoint_keys.primary_key}
response = requests.post(url, json=payload, headers=headers)
response.json()
This code assumes that the deployed model is a chat model for simplicity. The code available in the 3_deploy.ipynb notebook is more generic and will cover both completion and chat models.
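For completion-style (non-chat) deployments, the smoke test differs only in the route and payload shape. A hedged sketch, assuming the serverless endpoint also exposes a /v1/completions route as the notebook's generic code handles:

# Same smoke test against the completions route, for base (non-chat) models.
# Reuses prompt, endpoint, and endpoint_keys from the snippet above.
payload = {"prompt": prompt, "max_tokens": 1024}
response = requests.post(
    f"{endpoint.scoring_uri}/v1/completions",
    json=payload,
    headers={"Content-Type": "application/json", "Authorization": endpoint_keys.primary_key},
)
print(response.json())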
Conclusion
Deploying your fine-tuned model with Azure AI Studio and the Python SDK not only simplifies the process but also empowers you with unparalleled control, ensuring you have a robust and reliable platform for your deployment needs.
Stay tuned for our next blog post: in two weeks we will delve into assessing the performance of your deployed model through rigorous evaluation methodologies. Until then, head over to the GitHub repo and happy coding!
Question on Consolidation
Hello, could you please tell me: if I have the following data in multiple worksheets, how can I consolidate the data by priority category? Thank you
Official Exchange 2019 Training Course Inquiry
Salam
I have a question regarding official training for Exchange Server 2019. I’m aware that Microsoft offers various training materials for its products, but I couldn’t find any official course specifically designed for Exchange 2019 in the catalog.
Could someone confirm if there was ever an official training course or certification for Exchange Server 2019? I’ve seen training for previous versions like Exchange 2010, but it seems like there wasn’t anything equivalent for 2019. Any clarification would be appreciated.
Toward a Distributed AI Platform for 6G RAN
by Ganesh Ananthanarayanan, Xenofon Foukas, Bozidar Radunovic, Yongguang Zhang
Introduction to the Evolution of RAN
The development of Cellular Radio Access Networks (RAN) has reached a critical point with the transition to 5G and beyond. This shift is motivated by the need for telecommunications operators to lower their high capital and operating costs while also finding new ways to generate revenue. The introduction of 5G has transformed traditional, monolithic base stations by breaking them down into separate, virtualized components that can be deployed on standard, off-the-shelf hardware in various locations. This approach makes it easier to manage the network’s lifecycle and accelerates the release of new features. Additionally, 5G has promoted the use of open and programmable interfaces and introduced advanced technologies that expand network capacity and support a wide range of applications.
As we enter the era of 5G Advanced and 6G networks, the goal is to maximize the network’s potential by solving the complex issues brought by the added complexity of 5G and introducing new applications that offer unique value. In this emerging landscape, AI stands out as a critical component, with advances in generative AI drawing significant interest from the telecommunications sector. AI’s proficiency in pattern recognition, traffic prediction, and solving intractable problems like scheduling makes it an ideal solution for these and many other longstanding RAN challenges. There is a growing consensus that future mobile networks should be AI-native, with both industry and academia offering support for this trend. However, practical hurdles like data collection from distributed sources and handling the diverse characteristics of AI RAN applications remain obstacles to be overcome.
The Indispensable Role of AI in RAN
The need for AI in RAN is underscored by AI’s ability to optimize and enhance critical RAN functions like network performance, spectrum utilization, and compute resource management. AI serves as an alternative to traditional optimization methods, which struggle to cope with the explosion of search space due to complex scheduling, power control, and antenna assignments. With the infrastructure optimization problems introduced by 5G (e.g. server failures, software bugs), AI shows promise through predictive maintenance and energy efficiency management, presenting solutions to these challenges that were previously unattainable. Moreover, AI can leverage the open interfaces exposed by RAN functions, enabling third-party applications to tap into valuable RAN data, enhancing capabilities for additional use cases like user localization and security.
Distributed Edge Infrastructure and AI Deployment
As AI becomes increasingly integrated into RAN, choosing the optimal deployment location is crucial for performance. The deployment of AI applications in RAN depends on where the RAN infrastructure is located, ranging from the far edge to the cloud. Each location offers different computing power and has its own trade-offs in resource availability, bandwidth, latency, and privacy. These factors are important when deciding the best place to deploy AI applications, as they directly affect performance and responsiveness. For example, while the cloud provides more computing resources, it may also cause higher latency, which can be problematic for applications that need real-time data processing or quick decision-making.
Addressing the Challenges of Deploying AI in RAN
Deploying AI in RAN involves overcoming various challenges, particularly in the areas of data collection and application orchestration. The heterogeneity of AI applications’ input features makes data collection a complex task. Exposing raw data from all potential sources isn’t practical, as it would result in an overwhelming volume of data to be processed and transmitted. The current industry approach of utilizing standardized APIs for data collection is not always conducive to the development of AI-native applications. The standard set of coarse-grained data sources exposed through these APIs often fail to meet the nuanced requirements of AI-driven RAN solutions. This limitation forces developers to adapt their AI applications to the available data rather than collecting the data that would best serve the application’s needs.
The challenge of orchestrating AI RAN applications is equally daunting. The dispersed nature of the RAN infrastructure raises questions about where the various components of an AI application should reside. These questions require a careful assessment of the application’s compute requirements, response latency, privacy constraints, and the varied compute capabilities of the infrastructure. The complexity is further amplified by the need to accommodate multiple AI applications, each vying for the same infrastructure resources. Developers are often required to manually distribute these applications across the RAN, a process that is not scalable and hinders widespread deployment in production environments.
A Vision for a Distributed AI-Native RAN Platform
To address these challenges, we propose a vision for a distributed AI-native RAN platform that is designed to streamline the deployment of AI applications. This platform is built on the principles of flexibility and scalability, with a high-level architecture that includes dynamic data collection probes, AI processor runtimes, and an orchestrator that coordinates the platform’s operations. The proposed platform introduces programmable probes that can be injected at various points in the platform and RAN network functions to collect data tailored to the AI application’s requirements. This approach minimizes data volume and avoids delays associated with standardization processes.
The AI processor runtime is a pivotal component that allows for the flexible and seamless deployment of AI applications across the infrastructure. It abstracts the underlying compute resources and provides an environment for data ingestion, data exchange, execution, and lifecycle management. The runtime is designed to be deployed at any location, from the far edge to the cloud, and to handle both AI RAN and non-RAN AI applications.
The orchestrator is the component that brings all this together, managing the placement and migration of AI applications across various runtimes. It also considers the developer’s requirements and the infrastructure’s capabilities to optimize the overall utility of the platform. The orchestrator is dynamic, capable of adapting to changes in resource availability and application demands, and can incorporate various policies that balance compute and network load across the infrastructure.
In articulating the vision for a Distributed AI-Native RAN platform, it is important to clarify that the proposed framework does not impose a specific architectural implementation. Instead, it defines high-level APIs and constructs that form the backbone of the platform’s functionality. These include a data ingestion API that facilitates the capture and input of data from various sources, a data exchange API that allows for the communication and transfer of data between different components of the platform, and a lifecycle management API that oversees the deployment, updating, and decommissioning of AI applications. The execution environment within the platform is designed to be flexible, promoting innovation and compatibility with major hardware architectures such as CPUs and GPUs. This flexibility ensures that the platform can support a wide range of AI applications and adapt to the evolving landscape of hardware technologies.
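As a purely illustrative sketch, the constructs above might surface to an application developer as interfaces along the following lines. The names and signatures here are invented for illustration; as the paragraph above notes, the platform deliberately does not prescribe a concrete implementation.

# Illustrative interface sketch for the platform's high-level constructs.
# All names and signatures are hypothetical, not part of the proposal itself.
from abc import ABC, abstractmethod
from typing import Any, Callable

class DataIngestionAPI(ABC):
    """Capture data from RAN functions via programmable probes."""
    @abstractmethod
    def install_probe(self, target_function: str,
                      selector: Callable[[Any], bool]) -> str:
        """Inject a probe tailored to the app's input features; returns a probe id."""

class DataExchangeAPI(ABC):
    """Move data between components of a distributed AI application."""
    @abstractmethod
    def publish(self, topic: str, payload: Any) -> None: ...
    @abstractmethod
    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None: ...

class LifecycleManagementAPI(ABC):
    """Deploy, update, migrate, and decommission AI applications across runtimes."""
    @abstractmethod
    def deploy(self, app_bundle: bytes, placement_hints: dict) -> str: ...
    @abstractmethod
    def migrate(self, app_id: str, target_runtime: str) -> None: ...
    @abstractmethod
    def decommission(self, app_id: str) -> None: ...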
Moreover, to demonstrate the feasibility and potential of the proposed platform, we have internally prototyped a specialized and efficient implementation of the AI processor, particularly for the far edge. This prototype is carefully designed to work with fewer CPUs, optimizing resource use while maintaining high performance. It demonstrates that the AI processor runtime principles can be implemented effectively to meet the specific needs of the far edge, where resources are limited and real-time processing is crucial. This specialized implementation exemplifies the targeted innovation that the platform emphasizes, showcasing how the flexible execution environment can be tailored to address specific challenges within the RAN ecosystem.
Balancing Open and Closed Architectures in RAN Integration
The proposed AI platform is adaptable, capable of fitting into open architectures that adhere to O-RAN standards as well as proprietary designs controlled by RAN vendors. This flexibility allows for a range of deployment scenarios, from a fully O-RAN compliant implementation that encourages third-party development to a fully proprietary model, or to a hybrid model that offers a balance between vendor control and innovation. In each scenario, the distributed AI platform can be customized to suit the specific needs of the infrastructure provider or adhere to the guidelines of standardization bodies.
Concluding Thoughts on AI’s Future in 6G RAN
The integration of AI into the RAN is central to the 6G vision, with the potential to transform network management, performance optimization, and application support. While deploying AI solutions in RAN presents challenges, a distributed AI-native platform offers a pathway to overcome these obstacles. By fostering discussions around the architecture of a 6G AI platform, we can guide standards bodies and vendors in exploring opportunities for AI integration. The proposed platform is intentionally flexible, allowing for customization to meet the diverse needs and constraints of different operators and vendors.
The future of RAN will depend on its ability to dynamically adapt to changing conditions and demands. AI is essential to this transformation, providing the intelligence and adaptability needed to manage the complexity of next-generation networks. As the industry progresses towards AI-native 6G networks, embracing both the challenges and opportunities that AI brings will be crucial. The proposed distributed AI platform marks a significant step forward, aiming to unlock the full potential of RAN through intelligent, flexible, and scalable solutions.
Innovation in AI and the commitment to an AI-native RAN are key to ensuring the telecommunications industry and the telecommunications networks of the future are efficient, cost-effective, and capable of supporting advanced services and applications. Collaborative efforts from researchers and industry experts will be vital in refining this vision and making the potential of AI in 6G RAN a reality.
As we approach the 6G era, integrating AI into RAN architectures is not merely an option but a necessity. The distributed AI platform outlined here serves as a blueprint for the future, where AI is seamlessly integrated into RAN, driving innovation and enhancing the capabilities of cellular networks to meet the demands of next-generation users and applications.
For more details, please check the full paper.
Acknowledgements
The project is partially funded by the UK Department for Science, Innovation & Technology (DSIT) under the Open Network Ecosystem Competition (ONE) programme.
Inclusive Events: Best Practices from Korea Influencer Day
On a beautiful day in Korea, we brought together a diverse group of Microsoft MVPs (Most Valuable Professional), MLSAs (Microsoft Learn Student Ambassadors), RDs (Regional Directors), Microsoft employees, and guests from Japan to create a truly inclusive and inspiring event: Korea Influencer Day. The gathering aimed to build cross-border connections and foster collaboration while empowering communities with shared knowledge and tech trends. With a carefully crafted agenda, we succeeded in sparking meaningful conversations among university students, community leaders, and professionals.
In this post, we’ll walk through the event highlights and share best practices on how to organize inclusive in-person community events. We will also reflect on the valuable feedback received to inspire others to create impactful community gatherings.
Memorable Moments and Reflections
1. Inspiring Cross-Cultural Exchange
A defining feature of the event was the meaningful collaboration between Korean and Japanese MVPs. Kazuyuki Miyake, Japanese Microsoft Azure MVP and RD, and Ryota Nakamura, Japanese Business Applications MVP, introduced their local community trends to Korean community leaders.
Kazuyuki shared his experiences and said, “Participating in Influencer Day in Korea was a milestone. Sharing insights from Japan’s AOAI Dev Day that I successfully organized and proposing the next edition in Seoul marked great progress. I believe collaboration between Microsoft MVPs and RDs can spark a powerful movement. I was especially impressed by the proactive Korean Microsoft Learn Student Ambassadors, whose enthusiasm and curiosity promise a bright future.”
2. Networking through Speed Dating: A Surprising Success
Initially met with hesitation, the speed dating session turned out to be a highlight. It encouraged conversations between individuals from different backgrounds, leading to insights and connections that may not have otherwise emerged. MLSAs engaged with MVPs, attendees shared cultural perspectives between Korea and Japan, and discussions sparked about future collaborations.
JinSeok Kim, a Korean Developer Technologies MVP, who also played a key role as a translator between Korean and Japanese attendees, offered valuable feedback for future events: “While the format encouraged organic interaction, some feedback suggested adding conversation starters or a topic-drawing activity to make it easier for shy participants to dive into meaningful discussions.”
Atsushi Yokohama, an AI Platform MVP from Japan, visited Seoul for the first time to connect with community leaders in Korea. He shared his experience of the event, saying, “It was my first time interacting with Microsoft MVPs from Korea, but I’m grateful to have been able to engage in friendly technical discussions with all of them. This experience has definitely boosted my motivation. I now feel inspired to help strengthen community interactions across Asia.”
3. Empowering the Next Generation of Leaders
The event provided invaluable exposure for Korean MLSA students, whose energy and curiosity left a lasting impression. Many expressed their ambition to grow within the community, including one MLSA student, Minseok Song, who after attending the event set a new goal of achieving GOLD MLSA status this year.
He continued his reflections and said, “At the event, I asked several questions while talking with the MVPs, and everyone was kind enough to explain things, making it a productive and rewarding experience for me. These conversations inspired me to become someone who can help others, just like you and the MVPs.” This reflection shows how inclusive events can inspire future leaders by connecting them with role models and mentors.
4. Female Tech Influencers and Expanding Community Impact
One of the most impactful sessions was the speech by female tech influencers, highlighting the importance of diversity and gender inclusiveness in the tech space. Representation matters, and hearing from these leaders not only inspired attendees but also promoted the idea that diverse voices are key to creating a thriving tech ecosystem.
The panel discussion on increasing community impact through collaboration also underscored the potential of generative AI to transform communities across Korea and Japan, opening doors for future joint initiatives.
SungHo You, Microsoft Technical Trainer and Justin Yoo, Microsoft Cloud Advocate who participated in the event, shared their thoughts: “The Korea Influencer Day was a pivotal event for the Korean developer community. It brought together diverse community leaders, fostering meaningful interactions, empathy, and moments of joy, especially with Japanese MVPs. I want to particularly commend the efforts to promote gender diversity within the Microsoft tech community, which was positively influenced by the collaboration between Microsoft and the SA team.”
Best Practices for Organizing Inclusive In-Person Events
Drawing on the success of Korea Influencer Day, here are some key practices to consider when planning inclusive events:
Curate a Diverse Agenda
Ensure that the schedule reflects a range of topics and speakers from various backgrounds, including professionals, students, and community leaders.
Highlight underrepresented voices, such as female tech leaders or community members from different regions or fields.
Design for Interactivity and Connection
Incorporate speed networking sessions or icebreaker activities to foster interaction among attendees from different backgrounds.
Use creative formats like Show & Tell or small-group discussions to encourage knowledge sharing.
Provide Conversation Starters or Prompts
Offer topic cards or a discussion board to spark conversations, helping participants break the ice during networking sessions.
Create personalized introductions to connect individuals based on shared interests.
Make Cross-Cultural Exchange a Priority
If attendees come from diverse regions or countries, include sessions that promote cultural understanding, such as cultural exchange talks or panels discussing shared challenges and solutions.
Support Newcomers and Aspiring Leaders
Engage with students and newcomers, offering mentorship opportunities to help them grow within the community.
Recognize and celebrate their achievements to encourage continued participation.
Balance Structure with Flexibility
While structured agendas are important, allow time for unstructured networking to enable organic connections and deeper conversations.
Gather and Act on Feedback
Ask attendees for feedback to understand what worked well and where improvements can be made.
Implement these learnings in future events to enhance inclusiveness and engagement.
From sparking creativity through stories of personal tech projects to inspiring students to become future leaders, Korea Influencer Day demonstrated the value of bringing people together across cultures, backgrounds, and interests.
By designing events that celebrate diversity, foster interaction, and empower individuals, we can create meaningful experiences that have a lasting impact on communities. Whether you’re organizing a small community meetup or a large-scale event, the lessons from Korea Influencer Day can guide you in creating an environment where everyone feels welcome, heard, and inspired to contribute.
What's next? As one participant from Japan suggested, we can look forward to the next edition taking place in Seoul. Until then, let's continue building bridges and sharing knowledge to shape the future together.
pivot tables excel
Hello, I have a question. I subscribed to the Microsoft 365 paid plan and I want to work with Excel pivot tables, but the menu only offers me "Insert tables", not "Pivot tables". I'm working on my Samsung tablet. How can I use pivot tables? I'm paying for this feature, but I can't use it because it's not in the Excel menu.
Microsoft Teams/Lists/Forms
Hello,
I need help with a few things.
1. When someone fills out the form (on the right), it then populates into the list (on the left). When I comment on the populated row within the list, the user I am tagging is not getting notifications. How can I fix this? I do not want anyone else having access to the full list, just my comment.
2. Is there a way to create a workflow for approvals? I.e. if it’s a primary buyer I want these 3 people to approve in order, but if it’s an investor then I want these 4 people to approve in order?
Color coding bars in Project for the Web Timeline View?
In Project for the Web in the Timeline view, is there a way to color code the bars e.g., by assignee, team, etc?
Is Azure Arc enabled extension necessary For Purview Data Sources?
I am adding data sources to our collections in Purview so I can start running scans for our Data Catalog. I have added 5 data sources already, installed the Azure Arc extension, registered each data source as a SQL Server (not Arc-enabled, just a regular SQL Server), and run some scans on those data sources.
Recently, I added another data source, registered it as a regular SQL Server, and skipped the Azure Arc extension.
Do I have to install the extension for the scans to work? Or is the extension only needed if I want to manage the SQL Servers, which are on-prem, in the Azure environment, which I don't at this time?
Thank you
Why does Excel STILL automatically remove zeroes at the start of numbers I export into the program?
Seriously, who designed this godawful application? It’s needlessly “helpful”, just auto-formatting all my information into whatever dumb setting it wants. I have to export 500 barcodes into a sheet and organize them but of course Excel HAS to jump in and remove ALLLLLLLLL the zeroes at the start so I need to waste my time going in and formatting all the cells to “text” (otherwise they’ll just keep “helpfully” re-formatting all my data without asking me).
Oh man, and don’t even get me started on trying to type the number “102320241016” into a cell. For some reason Excel decided that should be a scientific notation and constantly changes it to “1.0232E+11”.
Honestly, why is this a feature? Who thought that we'd all want this? Why is EVERY Microsoft product I am forced to use for my job so excessively anti-user? Between Excel, Teams, and Outlook, about an hour of my day ends up being spent trying to parse my way through these bloated, poorly running applications. It's like they're all designed to try and help, but the way they "help" is by being as annoying and in the way as possible, which just ends up slowing down everyone who actually knows what they're doing.
Integrating Jira with Sentinel
Hello Community,
I am having issues integrating Jira with Sentinel. I have connected the logic app to the Jira API, but the logic app run is failing: I receive a 406 error code at one step in the logic. I made sure the project name was the same as listed in Jira. I even tried using the project key.
Can’t Add or Edit Contacts in Outlook 365
I can’t add or edit a contact in Outlook 365…whether on the web or in the app. However, I can on my iPhone. Any suggestions would be appreciated!
Open Lake and Zafin offer transactable partner solutions in Azure Marketplace
Microsoft partners like Open Lake and Zafin deliver transact-capable offers, which allow you to purchase directly from Azure Marketplace. Learn about these offers below:
Compliance Process Automation for Microsoft Teams: Designed for businesses of all sizes, Open Lake Technology’s Compliance Process Automation is an end-to-end supervision and analytics tool designed to ensure your company is compliant with financial regulations including MiFID II and Dodd-Frank. Built on Microsoft Teams, the tool includes automatic monitoring, real-time alerts, auditability, and a clear overview of your compliance status.
Compliance Recording for Microsoft Teams: This turnkey, managed solution deploys Open Lake Technology's Compliance Recording to your business environment. Compliance Recording integrates with Microsoft Teams, Open Lake's Compliance Process Automation, on-premises telephony systems, trading telephony systems, and legacy systems. With this solution, you can record chat, voice, video, and screen sharing to ensure full compliance.
Zafin Cloud: Zafin enables financial institutions to design and manage pricing, products, and packages while simplifying and modernizing core banking systems. Zafin Cloud, built on Microsoft Azure, enables financial institutions to modernize, augment, and extend their core banking technologies. The platform includes a catalog of customer-centric products and services, a personalized reward system, unified data, and much more.
Azure Adaptive Cloud Pre-Days at Microsoft Ignite 2024
As the excitement builds for Microsoft Ignite 2024, tech enthusiasts and professionals worldwide are eagerly anticipating the Azure Adaptive Cloud Pre-Days to learn more about hybrid, multicloud, and edge with Microsoft Azure. Scheduled just before the main event (on Monday November 17th), these Pre-Days offer a unique opportunity to delve deep into the innovative world of Azure Adaptive Cloud, facilitating a seamless integration of cloud and edge technologies.
Microsoft Ignite Pre-Days feature two comprehensive workshop sessions, each designed to equip attendees with practical knowledge and tools to optimize and transform their infrastructure and operations and you can still book your spot!
Optimize and Secure Hybrid Infrastructure with a Unified Control Plane
Trying to optimize deployment and management of public cloud and existing local infrastructure to accelerate innovation? Tackle the challenge of unifying and extending existing systems across cloud and edge. Learn about Azure's adaptive cloud approach, architecture patterns, and deploying cloud-native apps seamlessly using Azure Arc, AI-enhanced tools, and management services across environments. Securely store, process, and derive insights from data throughout digital and physical environments. This pre-day focuses on Azure Arc-enabled servers and other Arc-enabled infrastructure.
Transforming Industries with Azure IoT, AI, Edge & Operational Excellence
Join us in a hands-on workshop on industrial and retail transformation. Explore Azure IoT Operations, Kubernetes, and AI-driven solutions like real-time footfall inferencing, intrusion detection, and loss prevention, enhanced by Edge computing. Discover how these innovations, alongside Azure Arc and Microsoft’s adaptive cloud approach, drive operational excellence across industries. Through labs, explore AI and IoT strategies for safer, more efficient, and responsive operations.
Booking and Additional Information
To attend these insightful Pre-Days sessions at Microsoft Ignite 2024, participants must book in advance on the Microsoft Ignite website. Note that these sessions come with an additional cost, reflecting the value and depth of the knowledge and skills imparted.
Why Attend the Azure Adaptive Cloud Pre-Days?
The Azure Adaptive Cloud Pre-Days offer a unique opportunity to:
Gain In-Depth Knowledge: These sessions provide a deep dive into the latest advancements in cloud and edge technologies, offering insights that are not typically covered in regular conference sessions.
Hands-On Experience: Through interactive workshops and labs, attendees gain practical experience that can be directly applied to their own projects and operations. (Hands-on lab only for “Transforming Industries with Azure IoT, AI, Edge & Operational Excellence”)
Expert Guidance: Learn from industry experts who bring a wealth of knowledge and experience. Their insights and best practices can help you navigate the complexities of modern infrastructure management and operational excellence.
Networking Opportunities: Connect with like-minded professionals and industry leaders. These sessions provide a platform for networking and collaboration, fostering relationships that can lead to future partnerships and opportunities.
The Azure Adaptive Cloud Pre-Days at Microsoft Ignite 2024 promise to be an invaluable experience for anyone looking to optimize their infrastructure and transform their operations with the latest in cloud and edge technologies. By attending these sessions, you will be equipped with the knowledge, tools, and strategies to drive innovation and efficiency in your organization.
Don’t miss out on this opportunity to stay ahead of the curve. Book your sessions today and prepare to embrace the future of cloud and edge integration with Microsoft Azure.
More Azure Adaptive Cloud at Microsoft Ignite
The official Ignite session catalog is published. Azure Adaptive Cloud will host 4 breakout sessions, 3 theater-style demo sessions, 1 hands-on lab, and pre-day sessions for a deeper dive on scenarios and architectural patterns.
Adaptive cloud sessions include:
Breakout: Adaptive cloud: Unify hybrid, multi-cloud and edge with Azure Arc
Breakout: Simplify operations with AI: Copilot, Azure Arc, and Azure Monitor
Breakout: Scale apps and data with Azure Arc, Kubernetes, and Microsoft Fabric
Breakout: Operate infrastructure across distributed locations with Azure Arc
Demo: Fortify critical applications with Azure Business Continuity Center
Demo: Bringing the power of Azure AI to your adaptive cloud environments
Demo: Enhance cloud native troubleshooting with Azure Monitor & Chaos Studio
Theater: Explore next-gen industrial transformation architecture patterns
Hands-on-Lab: Accelerate Windows Server modernization and migration with Azure Arc
As you browse through the catalog, you will also see a range of partner and technical sessions that also highlight or relate to the adaptive cloud approach such as “How AI is transforming the Migration economic opportunity for Partners” and “Windows Server 2025: New ways to gain cloud agility and security”.
There is more to come, such as Expert Meet-ups where you can meet Azure Adaptive Cloud experts from Microsoft as well as Microsoft MVPs.
Stay tuned and see you at Microsoft Ignite 2024!