Category: News
MVP and MLSA Collaboration Benefits Tech and Community
From November to December 2023, an online session series titled “Learn Live: Build your AI portfolio with AI Kick-off Challenge Projects” was held for individuals looking to apply AI to solve real-world problems. The series kicked off with “Build a minigame console app with GitHub Copilot” as part of the Microsoft Ignite 2023 sessions, where Microsoft Cloud Advocates demonstrated how to create a game using GitHub Copilot.
Episodes 2 and 3 were both collaborations between Microsoft MVPs and Microsoft Learn Student Ambassadors. Episode 2, “Add image analysis and generation capabilities to your application,” was led by Ivana Tilca, an AI and Microsoft Azure MVP previously featured on this MVP Communities Blog (e.g. Microsoft Ignite & Microsoft MVP – Global Experience), together with Microsoft Learn Student Ambassador Rachel Irabor. Episode 3, “Build a Speech Translator App,” was presented by AI MVP Charles Elwood and Microsoft Learn Student Ambassador Vidushi Gupta.
Charles reflected on the collaboration as both fun and meaningful. “The comments from the audience were so positive on the Learn Live episode, I think we helped a lot of people that day. It was a lot of work though. Vidushi and I spent so many hours debugging the code and the documentation was nonexistent for the Power Apps connectors.”
He continued, “Vidushi would work on this until 4am, then send us emails, and then had to study for finals. So, she had to take breaks, and I was in LA visiting family to take breaks. Then, Carlotta Castelluccio (Cloud Advocate) brought in some help, and Someleze Diko (Cloud Advocate) and Olanrewaju Oyinbooke (MI/AI Researcher, COSMOS at UALR) guided us through Power Apps. It was the most interesting collaboration where AI brought this amazing group of people together to solve a problem (and we were in all corners of the world). I wish I could bottle this up.”
Vidushi noted the scarcity of female Student Ambassadors focused on Power Platform and expressed her ambition to become an MVP someday. Charles praised her potential and influence: “She has the persistence to solve the big problems, and she has such a good and friendly teaching style. I was so impressed.” He also offered advice and encouragement toward her goal of becoming an AI MVP.
The “Learn Live: Build your AI portfolio with AI Kick-off Challenge Projects” series allowed community members worldwide to embark on new journeys with AI technology. Moreover, preparing these sessions gave the MVPs and Student Ambassadors shared experiences from which they could learn extensively from each other. Charles’s mentorship might pave the way for Vidushi’s future inclusion in the MVP program, expanding the global MVP community and fostering an ideal ecosystem through new collaborations between MVPs and Student Ambassadors.
The impact of such collaborations is immeasurable. The joint sessions between Microsoft MVPs and Microsoft Learn Student Ambassadors are a prime example of this. We encourage you to explore different collaborations with people in various roles, gaining new perspectives, knowledge, and experiences.
If you are eager to learn more about the topics shared in this series, official Learn Modules are available on Microsoft Learn:
– Challenge project – Build a minigame with GitHub Copilot and Python – Training | Microsoft Learn
– Challenge project – Build a speech translator app – Training | Microsoft Learn
For those interested in learning more about the “Learn Live: Build your AI portfolio with AI Kick-off Challenge Projects” series, a comprehensive review is available on the Educator Developer Blog.
Learn Live – Build your AI portfolio with AI Kick-off Challenge Projects – Microsoft Community Hub
Microsoft Tech Community – Latest Blogs –Read More
New on Microsoft AppSource: February 1-7, 2024
We continue to expand the Microsoft AppSource ecosystem. For this volume, 154 new offers successfully met the onboarding criteria and went live. See details of the new offers below:
Get it now in our marketplace
ADDU: Botnomics offers a smart framework for IT automation that reduces incidents, streamlines operations, and increases value. Suitable for businesses and OEMs of any size, the platform provides out-of-the-box automations, self-healing remediation, and self-service capabilities. Botnomics integrates with ITSM platforms such as ServiceNow to deliver non-invasive remote troubleshooting.
Advocat AI (Per User): Advocat AI simplifies legal document management for startups and small-to-midsize businesses. With Advocat AI, users can upload, create, negotiate, e-sign, store, analyze, and track legal documents. The platform offers AI summaries and real-time negotiating insights.
Annex Cloud Loyalty Experience Platform: Annex Cloud offers a highly configurable SaaS loyalty program management platform that motivates customer behavior across the entire customer journey. Annex Cloud helps marketers gain new customers, increase customer lifetime value, perform basket analysis, and reduce churn. The platform is scalable, innovative, and designed for even the most complex global brands.
Change Warden: Change Warden is a low-cost IT change management system that helps organizations track and manage IT changes, assess risks, seek approvals, and ensure successful completion of deployments. The system features change request management templates, RACI tracking, and integration with Microsoft Azure DevOps and Dynamics 365 Customer Service.
CommuteSaver: CommuteSaver is an AI-based mobile app that tracks and reports CO2 emissions for corporate commuting and business travel by detecting a user’s transport mode. The app provides data-based recommendations to reduce emissions and costs and includes scoreboards and incentives for employees to reduce CO2 emissions.
Compliance Recording for Microsoft Teams: Open Lake’s Compliance Recording provides a turnkey, managed solution for recording all modern workplace modalities, including chat, voice, video, and screen share. The service includes conversation archiving and multiple compliance services such as case management, secure recording sharing, and legal hold. Legacy recording systems can be imported into Open Lake’s Compliance Hub for replaying interactions from a single service.
ContractMatrix: ContractMatrix streamlines contract drafting, review, and analysis with generative AI-assisted interrogation and drafting. The platform provides real-time access to gold-standard precedents and policies, built-in risk management, and governance designed by Allen & Overy lawyers. ContractMatrix saves time, cost, and risk, allowing lawyers to focus on smart, fast decisions.
ControlDoc ECM/BPM/SGDEA: Available in Spanish, ControlDoc is an electronic document management system (SGDEA) that offers efficient, comprehensive, and automated content management for businesses. The system complies with MoReq (Modelo de Requisitos) standards and supports W3C accessibility standards, portability on various devices, and HTTPS protocols for secure communication.
Exclaimer: Exclaimer is an email signature management solution that allows users to design and manage professional email signatures for all emails. It offers automation and efficiency, reduces IT workload, and allows for collaboration with multiple user logins. The solution features the ability to assign different signature templates for individual users or departments.
Footprints for Commercial Properties: Footprints AI offers a retail media platform that helps commercial property owners and managers enhance profitability and growth by utilizing data on customer behavior within physical store environments. The solution can help generate a new revenue stream from retail media and provide valuable insights into consumer shopping habits.
Footprints for Retail Media: Footprints AI is an omnichannel retail media platform that helps retailers monetize their customer data by transforming anonymous shoppers into predictive media audiences. The solution offers premium retail media services across in-store, on-site, and off-site channels, generating increased profits. Footprints AI uses AI-generated customer insights, including psychographic and socio-demographic profiling to predict and influence omnichannel purchase behaviors.
Greenomy Company Portal: Greenomy is a multi-framework environmental, social, and governance (ESG) reporting solution that centralizes the collection, analysis, and reporting of ESG data. The platform offers AI-powered features to save time and costs, while ensuring compliance with the latest regulations. Greenomy helps companies advance ESG performance by identifying improvements and tracking progress over time.
Hybrid Benefits Calculator for Azure: This dashboard displays virtual machine details for your Microsoft Azure tenant, including configuration and vCPU count. Hybrid Benefits Calculator also retrieves SQL service type and calculates necessary core licenses for hybrid benefits licensing to help you manage your licensing by automating calculations.
HyperNet Sustainable Solution: HyperNym’s HyperNet solution uses IoT technology to monitor emissions, track sustainability progress, promote energy-efficient upgrades, and navigate toward carbon neutrality. Using data insights and advanced analytics, HyperNet can help you drive substantial reductions in environmental impact, resulting in better environmental stewardship, operational efficiency, and brand trust.
HyperNet Fleet Management System: HyperNym’s HyperNet fleet management solution is an IoT-powered system that uses sensors and connected devices to gather data on vehicle location, performance, and maintenance. The solution provides real-time insights on fuel consumption, driver behavior, and vehicle utilization, allowing fleet managers to make informed decisions that optimize schedules, improve safety, and reduce costs.
IBLook for Microsoft Teams: IBLook for Microsoft Teams is an attendance management app that displays team members’ presence and status, allowing for easy tracking of their location and work status. The app features include status management, message memo registration, comment registration, and integration with Microsoft Teams.
Imperium Power Sales CPQ: Built on Imperium Starter CRM using the Microsoft Power Platform, Imperium Power Sales CPQ is a quote management solution that connects businesses and customers. Businesses can use the solution to create a product catalog and provide customers with a portal for quote requests. Imperium Power Sales CPQ reduces errors, offers competitive pricing, accelerates sales cycles, and enhances customer satisfaction.
GPT SaaS Jurisprudence: Available in Portuguese, Jurisprudence GPT is an AI-powered tool trained on public documents from Brazilian courts. The tool is designed to enable easy and natural language-based searches of legal precedents.
LearnPro LMS: LearnPro is an enterprise learning management solution that offers access to a wide range of educational content, progress tracking tools, and detailed analysis. The solution enables companies to optimize their training processes, enhance staff performance, and stay competitive in an ever-changing business environment. LearnPro offers customization capabilities, personalized course recommendations, and advanced analytics tools to evaluate the impact of learning on business performance.
Pegatron Vision AI Smart Surveillance: The Vision AI Smart Surveillance system by Pegatron is a user-friendly platform that utilizes powerful AI technology to enhance surveillance capabilities without requiring coding skills.
Rapid Platform – Developer: Automate your business management with a customizable and scalable system that integrates Microsoft 365, SharePoint, Power BI, Outlook, and Azure and provides an intuitive no-code environment for process automation and documentation.
RapidStart Sales for Microsoft Dynamics 365: RapidStart Sales is a simple yet powerful app that streamlines customer interactions and sales opportunities with features like record hashtagging and shortcuts. Built on Microsoft Power Apps, the app seamlessly integrates with Microsoft 365, Outlook, and Teams while maintaining compatibility with Dynamics 365 Sales.
Skypoint AI Platform for Long-Term Care (LTC): Skypoint’s AI platform unifies structured and unstructured data as well as public external data into an Azure Data Lakehouse. Built on Microsoft Azure, Skypoint lets users query data using conversational language via Skypoint AI Private GPT. The platform offers business intelligence and reporting, data unification, task/process automation, and privacy compliance.
SmartDX: SmartDX offers a solution for tracking and identifying individual audience members across channels and platforms, enabling companies to deliver contextual omnichannel interactions and end-to-end attribution. The platform also provides a comprehensive snapshot of campaign results and marketing ROI, with an emphasis on conversion-focused customer experiences in real time.
Team Board In/Out Status List: Team Board In/Out Status List is an app for Microsoft Teams that allows you to see your entire team’s real-time availability. Use the app to improve communication, productivity, and transparency.
Velocity Learning JSC Cares Program: Velocity Learning’s JSC Cares Program offers a comprehensive learning solution for personal and professional development. The solution provides a wide range of courses and resources to help individuals and organizations achieve their goals.
Voluptuaria: QR Support: Voluptuaria provides a dedicated service based on Microsoft Azure to support businesses using dynamic QR codes so that every scan results in seamless access to your digital offerings.
Well: Available only in Australia, the Well app integrates with Microsoft Teams to catalyze daily habits and help keep everyone at work safe by delivering engaging content directly within Teams. Well drives behavioral change while aligning with the ISO 45003 global standard for psychological safety. The app has been designed to help manage the psychosocial hazards outlined in Safe Work Australia’s Code of Practice: Managing Psychosocial Hazards at Work.
WeTransact.io: WeTransact.io simplifies the process of listing products on the Microsoft commercial marketplaces by handling technical aspects and maintenance. The WeTransact platform enables quick publishing in days and compliance with the GDPR.
Go further with workshops, proofs of concept, and implementations
Beyondsoft AI-Powered Productivity for Microsoft 365 Copilot: Implementation: Beyondsoft offers AI-powered productivity services to help organizations implement Copilot for Microsoft 365. The implementation includes business case development, end-user readiness, change management programs, and more.
Catalyst Envision and Innovation for Customer Service: Workshop: The Catalyst Envision and Innovation Workshop for Microsoft Dynamics 365 Customer Service helps businesses understand whether Dynamics 365 can meet their priorities. Attendees will find the right transformation strategy, and the workshop ends with a presentation on next steps.
Catalyst Envision and Innovation for Marketing: Workshop: The Catalyst Envision and Innovation Workshop for Microsoft Dynamics 365 Marketing helps businesses understand whether Dynamics 365 can meet their priorities and identify how to build velocity toward business transformation using Microsoft Azure, Dynamics 365, and Power Platform.
Catalyst Envision and Innovation for Sales: Workshop: The Catalyst Envision and Innovation Workshop for Microsoft Dynamics 365 Sales helps businesses understand whether Dynamics 365 can meet their priorities. Attendees will learn how to transform business by using Microsoft Azure, Dynamics 365, and Power Platform.
Continual Professional Development Solution for Higher Education: 6-Month Implementation: Crimson will deploy a professional development solution built on Microsoft Dynamics 365 and Power Platform using pre-configured accelerators. The solution delivers a seamless experience for learners, improved automation, increased efficiency, and better-informed decisions.
Copilot for Microsoft 365 – Planning for Success: Workshop: UnifiedCommunications.com will identify use cases, deliver demonstrations, and discuss best practices to ensure your technical readiness, security, and user adoption of Microsoft 365 Copilot. This offer includes a pre-workshop survey to identify your key business concerns and goals.
Copilot for Microsoft 365: Deployment: True will help you take advantage of Microsoft 365 Copilot to increase creativity, boost productivity, and enhance user skills across applications. True will assess your organization’s requirements, develop a customized deployment plan, and guide your organization through training and adoption.
CRM2GO: Available in German, InsideAX’s CRM2GO engagement helps you quickly and affordably introduce a new CRM system built on Microsoft Dynamics 365 Sales. The starter package includes analysis of your processes, guidance through implementation and integration, and licensing support.
Digital Contact Centre for Housing: 6-Month Implementation: Crimson’s Digital Contact Centre solution for the housing sector is a cloud-based service that uses Microsoft Dynamics 365 Customer Service and Power Platform to connect and engage with customers across digital messaging channels. The solution offers contextual customer identification, real-time notification, integrated communication, and agent productivity tools to improve operational performance and increase revenue.
Dynamics 365 Customer Insights – Data & Journeys: 4-Week Implementation: Axazure will help you connect all your customer data with ease by resolving customer identities with AI-driven recommendations. After ingesting data, unifying profiles, and defining rules, Axazure consultants will take live your implementation of Microsoft Dynamics 365 Customer Insights. You’ll benefit from newly created customer segments for marketing campaigns, end-user training, and more.
Dynamics Go – Low-Cost Entry to Dynamics 365 Sales: Deployment: AXON-IT’s Dynamics Go is a cloud-based CRM platform designed to streamline and automate sales processes for SMBs. The platform offers a low cost of entry to Microsoft Dynamics 365, featuring pre-built templates, dashboards, and triage support. Dynamics Go includes lead and opportunity management, quotes, orders, invoices, and email integration.
Enforced Data Security: 12-Week Implementation: BDO offers data protection measures to safeguard businesses from cyberattacks, data breaches, and data loss by using Microsoft 365 data discovery tools. BDO will build a comprehensive plan to discover and protect data where it lives, as well as drive healthy ways to maintain data integrity and monitor data access against the HIPAA regulations and the widely recognized HITRUST industry standards.
Fast Forward to Dynamics 365 Cloud: Savaco’s Fast Forward Track is a migration service for companies using Microsoft Dynamics CRM on-premises that want to transition to a cloud-based solution. The service offers expertise in migrations, rich Dynamics 365 experience, in-house tools, and applications. Benefits include seamless transition, cost-effectiveness, scalability, and data security. Deliverables include a migration plan, post-migration support, and training.
Finchloom+ for Microsoft 365 Email Security: Finchloom’s email security managed service offers protection against phishing attempts with human-powered detection and swift response. Finchloom’s security operations center monitors suspicious domains and offers user training and phishing simulations. The service includes email security setup on Microsoft 365, unlimited user phishing submissions, and monthly threat reports.
Finchloom+ for Microsoft Intune: Finchloom offers setup, monitoring, and support for compliance, patch management, and software deployment using Microsoft Intune integrated with Microsoft Defender for Endpoint. This service includes an environment assessment and onboarding, as well as monthly reporting and recurring services.
Frontline Workers: Workshop: Salt’s Frontline Solution Workshop helps organizations onboard frontline employees to be productive using Microsoft 365 apps. The workshop includes training, inclusivity, support for diversity needs, adoption planning, change management planning, physical onboarding, and a report with outcomes and recommendations. This service is designed to enhance the productivity, engagement, and satisfaction of frontline employees.
Google Workspace to Microsoft 365: Migration: Overcast offers seamless migration from Google Workspace to Microsoft 365, including migrating data from email, calendar, contacts, and Google Drive. Overcast provides training, support, and documentation. The scope and cost of the solution are customized to your needs.
Leap into Copilot: 2-Hour Workshop: Core will lead you through an exploration of the potential impact and the strategies for implementing Microsoft 365 Copilot. The session covers key features along with a demo and overview of Copilot suites, integration with corporate data, practical applications, benefits, and deployment options.
Microsoft 365 Copilot: Assessment and Implementation: Eide Bailly offers expert guidance on responsible AI usage for immediate business value built on Microsoft 365 Copilot. This service includes adjusting settings to industry standards, ensuring security, privacy, and compliance, and providing training for stakeholders and end-users.
Microsoft 365 Copilot: Implementation: Forefront’s offer includes a pilot program, phased rollout plan, updated training material, and governance model for Microsoft 365 Copilot. This service will help organizations unlock the full potential of Copilot, empowering teams to work smarter and more efficiently.
Microsoft 365 Copilot: Readiness & Deployment: ConXioNOne offers a customized solution for businesses to use Microsoft 365 Copilot as a generative AI assistant. This solution includes four phases: discovery, design, deployment, and adoption, with a focus on assessing technical readiness, identifying high-value use cases, and configuring security and compliance.
Microsoft 365 On-Premises to Tenant: Migration: Overcast can help migrate on-premises server infrastructure to Microsoft 365, including Exchange mailboxes, public folders, and SharePoint sites. This offer can be performed as a single event, phased migration, or split move. Overcast provides migration training and support, custom migration, data migration assessment, planning and timeline, documentation, and training sessions. The proposal is tailored to your needs and budget.
Microsoft Teams Phone: 4-Week Proof of Concept: Red X Carbon enhances Microsoft Teams phone systems by integrating Operator Connect, Direct Route, and native Microsoft calling plans. This proof of concept will demonstrate calling capabilities in Teams and includes system configurations, pilot testing, and deployment of Teams Rooms systems and phones.
Microsoft 365 Tenant to Tenant: Migration: Overcast can help migrate data from one Microsoft 365 tenant to another, including Exchange mailboxes, SharePoint sites, and Teams data. Overcast provides migration training and support, custom migration, and complete documentation. The service is customized to your needs and budget.
Onesec.AI: ONESEC offers a complete solution for companies to integrate AI safely and efficiently using Microsoft Entra, Microsoft Purview, and Microsoft 365 Copilot. ONESEC consultants will identify security risks, provide personalized protection strategies, and assist with adoption and process improvement.
Upgrade to Dynamics 365 Business Central (Essential): 2-Month Migration: Navision Tech offers customized technical data migration services from Microsoft Dynamics NAV to Dynamics 365 Business Central. The Essential Package includes everything in the Kick-start Package as well as extension migration, custom reports, and documentation.
Upgrade to Dynamics 365 Business Central (Kick-start): 2-Week Migration: Navision Tech offers customized data migration services from Microsoft Dynamics NAV to Dynamics 365 Business Central. The Kick-start Package includes project discovery, planning, data migration, user training, and go-live monitoring.
Upgrade to Dynamics 365 Business Central (Solid Foundation): 3-Month Migration: Navision Tech offers customized data migration services from Microsoft Dynamics NAV to Dynamics 365 Business Central. The Solid Foundation Package includes everything in the Essential Package, as well as custom data migration, data integration, customization, and custom reports built on Microsoft Power BI.
Zones Data Governance and Compliance: Zones’ Data Governance and Compliance services help organizations optimize Microsoft 365 while addressing challenges such as inconsistent data quality, regulatory requirements, and data management strategies. Zones’ holistic approach ensures data becomes a catalyst for positive change, aligning with organizational objectives and extending the use of Microsoft 365 to drive growth and innovation.
Contact our partners
Advanced Manufacturing Reports
AI Candidate Assessment – MetaOPT
AI in Housing Art of Possible: 2-Day Assessment
Aptean Beverage Advanced Online Warehouse Management for Drink-IT Edition
Cloud Migration – License Assessment
Configit Ace – Configuration Lifecycle Management Solution (DE)
Configit Ace – Configuration Lifecycle Management Solution
Continia Payment Management (IN)
Copilot Spark: Readiness Check
Digital Distribution Platform for Insurance
Dynamics 365 Customer Engagement: Assessment
EX Managed Services – Essentials (Including Copilot)
Fairly AI Red-Teaming-in-a-Box
Feat Paper (Motion PDF & Analytics)
FI Acquisition and Spend Planner
Fusion5 Pack for Power Automate
Fusion5 Vendor Bank Account Approval
Gestisoft AMP Sales Accelerator
Integrated School Affairs Support System Eduo
Kovix WMS – Streamline Your Warehouse Operations
Learning Management System (UK)
Ledger Allocation Functionality
License Assessment: Optimize Your License Management
MD.ECO – Proactive Cybersecurity
Microsoft 365 Copilot Evaluation: 1- to 2-Hour Assessment
Microsoft 365 Copilot Readiness: 4-Week Assessment
Pre-Flight Check for Microsoft 365 Copilot: 2-Week Assessment
Process Digitization with Power Platform: 7-Day Assessment
Readiness for Launching Copilot for Microsoft 365: 5-Day Assessment
Redoflow ERP Requirements: 1-Week Assessment
Seamless Communication with Teams Phone
Sparxcloud Cloud Cost Management
Text, SMS, Instant Message, Chat + Dynamics 365 CRM – TextSMS4Dynamics
Trovex.ai – AI-Driven Training and Enablement
WooCommerce Integration with Dynamics 365 Finance – HexaSync Profile
Workstatz – Employee Efficiency and Productivity Monitoring Software
This content was generated by Microsoft Azure OpenAI and then revised by human editors.
Modelling Microsoft Dynamics 365 Data Using Data Vault 2.0
This article is authored by Michael Olschimke, co-founder and CEO at Scalefree International GmbH, and co-authored with Markus Lewandowski, Senior BI Consultant, and Bibhush Nepal, Technical Solutions Specialist, from Scalefree.
The technical review was done by Ian Clarke and Naveed Hussain, GBBs (Cloud Scale Analytics) for EMEA at Microsoft.
The previous articles in this series covered individual aspects of Data Vault 2.0 on the Microsoft Azure platform. We discussed several architectures, such as the standard or base architecture, the real-time architecture, and one for self-service BI. We have also looked at individual entities, their purpose, and how to model them.
What is missing now is the big picture: we have the tools, but how do we use them in a real-world example? How do we get started on an actual project? To answer these questions, we have teamed up with consultants from Scalefree, a consulting firm specializing in Big Data projects with Data Vault 2.0, and use some of their process patterns in this article to get you started.
Introduction
For this article, we have extracted data from the Microsoft Dynamics API. The goal of this article is to come up with the Raw Data Vault model, including important hubs, links, and satellites. We will take the standard Sales process of Microsoft Dynamics and use its business objects for this purpose. Those business objects are:
Lead
Account
Contact
Opportunity
Opportunity Product
Quote
Product
The data model we are using is a selection from the Microsoft Common Data Model: we picked all objects that play a role in the standard sales process.
The sales process can be described very roughly as follows:
Lead -> Account / Contact -> Opportunity -> Quote -> Opportunity Close
This Raw Data Vault could later be used to create reports, dashboards, and other data-centric solutions.
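To make the shape of such a model concrete, the mapping from the sales objects above to Raw Data Vault entities can be sketched as plain Python data structures. All entity and key names below (e.g. `hub_lead`, `link_quote_opportunity`) are illustrative assumptions for this sketch, not the article’s actual model:

```python
# Hypothetical sketch of a Raw Data Vault model for the Dynamics 365
# sales objects listed above. Names are assumptions, not prescribed
# by the article or by the Common Data Model.

# One hub per business object, each identified by its business key.
hubs = {
    "hub_lead":        {"business_key": "lead_id"},
    "hub_account":     {"business_key": "account_id"},
    "hub_contact":     {"business_key": "contact_id"},
    "hub_opportunity": {"business_key": "opportunity_id"},
    "hub_quote":       {"business_key": "quote_id"},
    "hub_product":     {"business_key": "product_id"},
}

# Links capture relationships along the sales process
# (Lead -> Account/Contact -> Opportunity -> Quote).
links = {
    "link_opportunity_account": ["hub_opportunity", "hub_account"],
    "link_opportunity_product": ["hub_opportunity", "hub_product"],
    "link_quote_opportunity":   ["hub_quote", "hub_opportunity"],
}

# Each hub gets at least one satellite holding the descriptive
# attributes delivered by the Dynamics API for that object.
satellites = {f"sat_{h.removeprefix('hub_')}_dynamics": h for h in hubs}

print(satellites["sat_lead_dynamics"])  # -> hub_lead
```

In this convention, hubs store only business keys, links store only relationships between hubs, and all descriptive (and frequently changing) attributes live in the satellites.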
Note that, while more comprehensive than the previous articles, this article does not describe the full capabilities of Data Vault 2.0. Data Vault 2.0 shows its full strength when data from multiple data sources must be integrated. That is not the case here, as we only model data from Microsoft Dynamics CRM to keep the article simple. However, you can easily extend the model with additional data in the next iteration. Data Vault 2.0 has been designed for agile development of the data analytics platform: step by step, you would add more and more data from additional (or the same) data sources and integrate it into the platform.
A Team Effort
Building a data analytics platform is not a job for data modelers alone. A data analytics platform is a complex technical system, and building it requires expertise from both technical and business areas. It is therefore a team effort involving many roles, including data engineers, architects, testers, DevOps engineers, and more. In addition to the typical data roles, it is highly recommended to include two other types of roles: one focused on business value and the other focused on the data sources.
The business role represents the user and can explain the requirements. They know the business processes and how the business value, typically a report or dashboard, is used later on. They can also explain data quality issues and know what step in the business process causes them.
On the other side of the team are the data source specialists. They know the source system very well from a technical perspective: they know the purpose of each data attribute, can explain data quality issues from a source system perspective, and know how the data relate to each other.
For the same reasons, we teamed up with the CRM experts at Scalefree to write this article and create a more comprehensive Data Vault 2.0 model for the Microsoft Dynamics 365 CRM solution.
Getting started with the Data Vault project is always a challenge. Typically, teams face two questions (among others):
How to start the overall project?
How to start with the modeling aspects?
The next two sections present our answers to these questions.
How to start the overall project?
At Scalefree, our practice is to initiate a client engagement with a series of sprints. The first sprint, called “sprint minus one,” is all about resources and is performed by a senior member of Scalefree. Does the project have access to the right resources, including team members, business users, software and hardware, and meeting space? Will the team be able to start working as soon as they join the project? Are user credentials set up?
Once the readiness of the project has been established, the next sprint, called “sprint zero,” is used to set up the infrastructure and the initial information requirement for the first real sprint (“sprint one”). The goal is to set up the architecture for the data analytics platform, including all the required layers. As a basis, we use one of the Data Vault 2.0 architectures from our previous articles. However, in most cases we do not apply these architectures directly: the architecture often requires adjustments to the circumstances of the client, for example, pre-existing components or the existing tool stack. With that in mind, we honestly don’t know whether the architecture draft will actually work in a project: many variables are unknown, some tools are new and sometimes even untested, and the team might lack skills or experience.
The worst-case scenario would be to start developing with this architecture blueprint only to realize two years from now that the architecture choice was a bad one or doesn’t work at all. Instead of wasting two years of budget, we follow a “fail fast” strategy: if the architecture blueprint doesn’t work, you want to fail as soon as possible to limit the wasted budget. And that is the main goal of sprint zero: set up the architecture and establish the data flow from an actual data source to the actual target. This data flow doesn’t have to use Data Vault models yet; more important than the Data Vault model is to use the sprint to test assumptions and resolve unknowns.
For example, in one of our projects, the client wanted to use binary hash keys in the information marts. It was unclear if the dashboard application was able to join on these binary hash keys. Instead of waiting for this to test in the future, we used sprint zero to test the assumption that it would work. If not, we could still adjust our architecture and implementation techniques for the later sprints.
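Such an assumption can often be tested with very little code: generate hash keys the way the loading process would and confirm that the consuming tool can compare and join on them. A minimal Python sketch (the MD5-based hashing and the normalization rules shown here are common Data Vault conventions, not necessarily the client's actual setup):

```python
import hashlib

def hash_key_binary(business_key: str) -> bytes:
    """Binary (16-byte) hash key, as it would be stored in a BINARY(16) column."""
    return hashlib.md5(business_key.upper().strip().encode("utf-8")).digest()

def hash_key_hex(business_key: str) -> str:
    """Hex-encoded (32-character) hash key, as stored in a CHAR(32) column."""
    return hashlib.md5(business_key.upper().strip().encode("utf-8")).hexdigest()

# Normalization (upper-case, trim) makes the same business key hash identically
# on both sides of a join, regardless of source formatting:
hub_key = hash_key_binary("CUST-0042")
sat_key = hash_key_binary("cust-0042 ")
print(hub_key == sat_key)   # True: the join condition holds

# Binary keys are half the size of their hex representation:
print(len(hub_key), len(hash_key_hex("CUST-0042")))   # 16 32
```

Running such a probe against the actual dashboard tool in sprint zero answers the question within hours instead of years.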
The goal of sprint one, and of most subsequent sprints, is to deliver business value. However, it is not the purpose of sprint one to define the business value; this should be done in the preceding sprint. Since this is sprint one, sprint zero must define the business value with a first information requirement.
Most subsequent sprints should deliver business value or at least some progress. There might be some exceptions when technical debt piles up over time. In this case, a technical sprint might focus on reducing the technical debt and not delivering any business value. However, this is the exception, not the rule.
How to get started with the Data Vault model?
While the Data Vault model is not a concern of sprint zero, it is certainly a concern in subsequent sprints. Many less experienced Data Vault practitioners face the issue of getting started with the first Data Vault model, even if they went through proper training. A good start is always to define the client’s business model and focus on the major concepts. For example, the following diagram could present the business model of a customer-centric organization:
We call this diagram a “conceptual model” or ontology. It defines the business objects and their relationship. Typically, it is defined in a meeting with business users when we ask them early in the project to explain their business.
The next piece of knowledge to extract from the business is the set of business keys used to identify the business objects in the conceptual model. When asking business users from various departments, you will often get different answers. This is because different departments often use different operational systems, with different business key definitions, to identify the same business objects. Therefore, expect multiple business keys per business object, depending on who you ask. Add them to the conceptual model:
We have added the business keys to the above diagram using braces. At this point, we did not try to find the best business key for a given business object. Do not judge the business keys yet: email addresses might not be a good choice, given privacy requirements, but add them to the concept nevertheless. These business key candidates give us options later on, so the more (actually used) candidates, the better. With that in mind, make sure to invite all relevant business users to capture as many data sources as possible.
This model will be of great use when analyzing specific source systems, such as Microsoft Dynamics CRM. The basic idea of the business key, as discussed in article 7 of this series, is to be shared across source systems, so it can be used to integrate the separate data sets.
Identify the Concept and Type of Data
Once the organizational context of the model is known, it is time to analyze the actual dataset. The first step is to identify the concept and type of data:
Is this dataset related to one of the concepts in the conceptual diagram?
Does this dataset represent a business object (maybe in addition to the ones on the conceptual diagram) or transactional data?
What about reference data?
Is multi-active data (refer to article 8 of this series) involved?
These are some of the questions that should be asked during this phase.
The relationship between the raw data from the data source and the conceptual model defines to a good extent how to model the source dataset in the Raw Data Vault.
For example, if the source dataset is directly related to one of the business objects in the conceptual model and represents master data, we would often model the source dataset as a combination of a hub and satellite. This is the easiest way to capture changes to descriptive data attributes and forms the basis for many queries to produce dimension entities in star or snowflake schemas. Foreign keys in such data sets are often indicators for relationships with other hubs.
On the other hand, there is data that represents transactions, messages, and events. Such data is often modeled as non-historized links or dependent child links, meaning data that is processed as a stream of individual records rather than as changes to such records. This explains the name “non-historized,” but it doesn’t mean that we cannot capture changes to transactions, events, and messages if they occur. This is possible but out of the scope of this article.
Focus on the Business Keys
Business key identification is an important activity in designing the Data Vault model. In an ideal world, our consultants could analyze every single data source to identify the business key that is shared the most. However, none of our customers are willing to provide the required budget, and they are right not to. Instead, we use the conceptual model to extend our limited view beyond the specific data source we’re trying to integrate. The business key candidates provided by the business users serve as an indicator of how widely the individual attributes of a specific data source are shared.
Once the concept of the dataset has been recognized, the available business key candidates are identified: of the business keys mentioned in the conceptual model, which business keys actually exist in the dataset to be ingested? If none of these candidates exist, what other options exist to uniquely identify the records in the dataset?
Ultimately, a business key must meet some important criteria. The most important ones are uniqueness across two dimensions:
Uniquely identify the business object across the enterprise
Uniquely identify the business object over time
Both aspects are important in data analytics platforms. The first dimension matters because the stated goal is to build an enterprise-wide data platform. A key local to one tenant colliding with another tenant’s local key (e.g., business key 42 identifies different customers at different tenants) makes it impossible to identify the business object uniquely across tenants. This can easily be resolved by extending the business key with additional attributes, such as the tenant number. The combination of the local business key and the tenant ID then uniquely identifies the customer.
The second dimension is across time. In some cases, business keys are reused: for example, when a bank account number is too short, many banks reuse the account numbers of closed accounts after a while. This works in operational systems, as these systems often keep no history. However, data analytics platforms should provide historical data, so we need to uniquely identify a bank account over time. The resolution is, again, to extend the business key. Typically, some other attribute can be found to create a unique business key, such as the opening year of the account or, in the case of aviation, the flight date. The combination of the bank account number and the opening year identifies the bank account uniquely over time.
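Both extensions boil down to composing the business key from additional attributes. A short sketch (the delimiter and attribute names are illustrative choices, not taken from the article's datasets):

```python
def enterprise_wide_key(tenant_id: str, local_key: str) -> str:
    # Prefixing the tenant ID removes collisions between tenants' local keys.
    return f"{tenant_id}||{local_key}"

def time_unique_account_key(account_no: str, opening_year: int) -> str:
    # The opening year disambiguates account numbers reused after closure.
    return f"{account_no}||{opening_year}"

# Business key 42 exists at two tenants, but the composed keys differ:
assert enterprise_wide_key("T01", "42") != enterprise_wide_key("T02", "42")

# A reused account number stays unique over time:
assert time_unique_account_key("1234", 1998) != time_unique_account_key("1234", 2015)
```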
Identify Relationships
A source dataset might include a number of business keys. Consider retail transactions: a customer (identified by a customer number) walks into a store (identified by a store number) and purchases a product (identified by a product number). The transaction table includes these three business keys. Just having them included in the same source entity is an indicator of a relationship between these business keys. Since these business keys are loaded into individual hubs, a link is required to establish the relationship.
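The resulting link receives a hash key computed over all participating business keys. A sketch under common Data Vault hashing conventions (delimiter and normalization are illustrative assumptions):

```python
import hashlib

def link_hash_key(*business_keys: str) -> str:
    """Hash key for a link: one hash over all participating business keys.

    The delimiter guards against ambiguous concatenations
    (e.g. "1" + "23" vs. "12" + "3" must not collide).
    """
    normalized = "||".join(k.upper().strip() for k in business_keys)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# Retail transaction: a customer walks into a store and buys a product.
lnk = link_hash_key("CUST-0042", "STORE-7", "PROD-1001")
```

Each of the three business keys is also loaded into its own hub; the link record ties the three hub references together.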
Another important indicator of the need for links is foreign keys. If a foreign key refers to another source dataset that is providing the data for another business object, it indicates that a relationship between the two business objects exists.
Define and Split Satellites if necessary
Once business keys are identified, the remaining attributes in the source dataset are typically of descriptive character. Such descriptive attributes are loaded into satellite entities describing the hub or link. But what if multiple derived hubs and links are involved in the source dataset? In this case, it helps to answer a simple question on every data attribute: “If this is a descriptive attribute, what is it describing?” The answer can only be one of the hubs or links. Then, add the attribute to the satellite that describes the respective parent.
In addition, there might be reasons to split the satellites:
As a general rule, load a satellite only from a single data source
Split a satellite for rate of change if no table compression is available (less important with commercial database engines, such as Microsoft Synapse or Fabric)
Separate attributes by their privacy class (non-personal vs personal data attributes) to prepare a physical delete
Separate attributes by their security class (to implement column-level and row-level security)
Separate attributes for business reasons
The last reason should only be applied if the advantages exceed the disadvantages, as the split satellites might have to be joined again later on.
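The privacy-class split in particular can be expressed as a simple attribute routing rule during staging. A minimal sketch (the attribute lists are illustrative, not taken from the Dynamics CRM schema):

```python
# Attributes classified as personal data; in practice this classification
# would come from a data catalog or governance process.
PERSONAL_ATTRS = {"first_name", "last_name", "email", "phone"}

def split_satellite_payload(record: dict) -> tuple[dict, dict]:
    """Split a source record into personal and non-personal satellite payloads."""
    personal = {k: v for k, v in record.items() if k in PERSONAL_ATTRS}
    non_personal = {k: v for k, v in record.items() if k not in PERSONAL_ATTRS}
    return personal, non_personal

lead = {"first_name": "Ada", "email": "ada@example.com",
        "lead_source": "Web", "rating": "Hot"}
personal, non_personal = split_satellite_payload(lead)
# A GDPR "right to be forgotten" request now only has to purge the
# personal satellite; the non-personal history stays intact.
```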
THE LEAD MODEL
The remainder of this article describes the Raw Data Vault models derived from the Microsoft Dynamics CRM data. Each model has been derived from a source dataset. While we describe them individually, they are part of an integrated model.
The first three source entities are relatively straightforward to turn into a Raw Data Vault model. Leads, accounts, and contacts are often considered business objects and operational master data. Therefore, the base Data Vault entity would be a hub and a satellite to capture the data in the Raw Data Vault. For the lead object, the identifier is the Lead ID in the data source. Given the data set we analyzed, the Lead ID is unique across all records. The center of the diagram shows the hub for leads and its business key:
Two satellites are used to capture the descriptive data: one for personal data and the other one for non-personal data. This is required to perform physical deletes of personal data when the lead exercises the right to be forgotten, as defined in GDPR. In addition, the satellite named “EffSat Lead” is an effectivity satellite that captures the deleted timestamp to indicate hard-deleted records from the source system, which are soft-deleted in the Raw Data Vault. To recap: the personal data satellite is used to support physical deletes of personal data in the data analytics platform, while the effectivity satellite supports soft-deleting records where they have been removed from the source system.
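The effectivity satellite's soft-delete logic amounts to comparing the keys of the previous full load with the current one. A sketch of the detection step (structure names are illustrative, not the authors' actual implementation):

```python
from datetime import datetime, timezone

def detect_hard_deletes(previous_keys: set, current_keys: set,
                        load_ts: datetime) -> list[dict]:
    """Keys present in the last full load but missing now were hard-deleted
    in the source. Record a deleted timestamp in the effectivity satellite
    instead of deleting anything in the Raw Data Vault."""
    return [{"hub_key": key, "deleted_ts": load_ts}
            for key in sorted(previous_keys - current_keys)]

previous = {"LEAD-1", "LEAD-2", "LEAD-3"}
current = {"LEAD-1", "LEAD-3"}          # LEAD-2 vanished from the source
rows = detect_hard_deletes(previous, current, datetime.now(timezone.utc))
# rows -> one effectivity-satellite record flagging LEAD-2 as soft-deleted
```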
There are two links in the center of the diagram: a hierarchical link (HLink) to the left of the Hub Lead and a same-as-link (SAL) to the right. Both links are standard links, but they serve a specific purpose: the hierarchical link is used to capture parent-child hierarchies, as found in the source data with the Parent Link attribute. The attached effectivity satellite indicates if this relationship is still valid or outdated because the relationship has been changed.
The same-as-link is used for a different purpose: in many data sources there are duplicate records. This is especially true for CRM systems, but certainly not limited to them. We experience duplicate records everywhere, not only within one system but also across multiple systems. The same-as-link is used to indicate the duplicates and map them to their designated master records. In the case of Dynamics CRM, the source data set provides a Master Lead which can be used to map the records. However, in many other cases, the data source is not providing such data, or it might not be sufficient, and we discuss in one of the next articles of this series how we implement a same-as-link in the Business Vault using business logic, such as fuzzy matching or Soundex algorithms. Because these mappings can change over time, an effectivity satellite is used to capture the deleted date of outdated mappings.
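The article defers the Business Vault implementation of computed same-as-links to a later installment, but the idea behind a Soundex-based matcher is easy to illustrate. A minimal sketch (simplified Soundex, and a naive "first record seen becomes the master" policy, both assumptions for illustration only):

```python
def soundex(name: str) -> str:
    """Classic Soundex code (simplified; ignores the h/w separator rule)."""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    name = name.upper()
    result, last = name[0], codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != last:
            result += code
        last = code
    return (result + "000")[:4]

def same_as_candidates(leads: list[dict]) -> list[tuple[str, str]]:
    """Pair leads whose last names share a Soundex code: candidate
    (duplicate, master) mappings for the same-as-link."""
    by_code: dict[str, str] = {}
    pairs = []
    for lead in leads:
        code = soundex(lead["last_name"])
        if code in by_code:
            pairs.append((lead["lead_id"], by_code[code]))
        else:
            by_code[code] = lead["lead_id"]
    return pairs

leads = [{"lead_id": "L1", "last_name": "Smith"},
         {"lead_id": "L2", "last_name": "Smyth"}]
# same_as_candidates(leads) -> [("L2", "L1")]
```

In the Dynamics CRM case none of this is needed for the Raw Data Vault, because the Master Lead attribute already delivers the mapping.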
There are three referenced hubs at the bottom of the diagram: hubs for account, customer, and contact. They are introduced because of foreign keys in the source dataset, which also result in respective links and effectivity satellites to capture changes on the foreign keys. Since all these related entities play multiple roles in the lead dataset (one is the parent, and one is a direct relationship), there are multiple links, including their effectivity satellites.
THE CONTACT MODEL
The contact model is centered on a hub for contact, with the business key Contact ID, similar to the lead model:
Again, this is due to the nature of the contact business object. The source data contains personal data and therefore the descriptive data is split into two satellites on the hub to separate personal data from the non-personal data. The source dataset contains a similar hierarchy and master indicator that lead to the hierarchical link and the same-as-link for the same reasons.
The biggest difference is the lower number of foreign keys in the source dataset, which leads to fewer links.
THE ACCOUNT MODEL
The Raw Data Vault model for accounts is very similar to the lead and contact model, due to the similar nature of the business object. Again, this is a clear business object and operational master data. Such cases are often modeled as hubs and satellites. Therefore, the center of the diagram shows the hub for an account:
However, GDPR doesn’t play much of a role in this case because (we assume) the account object doesn’t hold any personal data. Corporate data is not affected by the right to be forgotten. Therefore, the descriptive attributes are not split into different satellites.
There are also fewer foreign keys, only the primary contact of the account and the originating lead. The Data Vault design follows the same principles in regard to the referenced hub, the link, and the effectivity satellite as the Lead model.
THE PRODUCT AND QUOTE MODEL
It is easy to argue that a Product is also a business object. Therefore, the Raw Data Vault model is again centered on a hub:
The source data set provides a product hierarchy which is captured by the hierarchical link to the left of the product hub. A link called “DLink Opportunity Line Item” is greyed out because it is discussed in the next section.
It could be argued that the quote object in the data source represents a business object or alternatively facts. The decision on how to model the quote data depends largely on this decision. Based on the conceptual model from the beginning of this article, the decision is made to consider the quote as a business object and therefore model it as a hub with satellites:
The design largely follows the same principles as the previous models. Since quotes might include personal data, we decided to split the attributes into two satellites for personal and non-personal data attributes. The quote dataset has a number of foreign keys, including customer, account, contact, and opportunity, which is discussed in the next section. Every foreign key is implemented as a link with an effectivity satellite to capture soft deletes.
THE OPPORTUNITY MODEL
For the opportunity dataset in Dynamics CRM, it was decided that this data represents factual data. We consider this data as non-historized as the data represents typical business transactions.
In the following diagram, the opportunity is captured as two links and a hub:
The opportunity is identified by the Opportunity ID. This business key is captured by the Hub Opportunity. There is also an effectivity satellite to keep track of deletes in the source data as soft-deletes in the data warehouse.
But most of the actual data is captured by the two links to the left: “NHLink Opportunity” and “DLink Opportunity Line Item.” In Microsoft Dynamics, an opportunity can have multiple line items. We expect that the user would like to analyze opportunities on both levels (the opportunity level and the line item level) as facts. To capture such data efficiently, we typically use a non-historized link (here: NHLink Opportunity) or dependent child link (DLink Opportunity Line Item).
Both link entities have some similarities: they should capture the data in the same granularity as they arrive at the data analytics platform. For every opportunity, there should be one record in the non-historized link, and for every line item, there should be one record in the dependent-child link.
As the name suggests, the dependent child link contains a dependent child, the Line Item Number. This is required because the opportunity could have the same product multiple times on different line items, for example, to provide different discount levels or line item descriptions. To be able to capture each individual line item, the granularity of the link is extended by the dependent child, so each line item can be described individually by the attached satellites.
The same is true for the non-historized link on the opportunity level. There might be repeating patterns of the same data, for example, retail transactions, phone call records, etc., that require a similar model to capture individual descriptions. In this specific example, there is no need for a transaction ID, which would typically be added to the non-historized link in a manner similar to the dependent child. This is because the opportunity hub provides a unique identifier for each opportunity; therefore, the hub reference in the non-historized link is already unique per opportunity.
With this in mind, one could argue that a standard link would be sufficient to capture the granularity of the opportunities. This is correct, but using a non-historized link in this example is helpful for power users and data scientists as they are looking for these link entities to deliver facts.
THE CUSTOMER MODEL
There is one interesting aspect in this dataset: both leads and contacts have a customer reference in the source data. However, there is no customer dataset. Instead, a customer is either a contact or an account (an individual or an organization), depending on the record the reference points to.
A bad practice would be to implement some conditional logic in the loading process of the Raw Data Vault to distinguish between such cases. However, conditions can change over time or break due to errors. Therefore, the recommendation is not to apply any business rules or conditions when loading the Raw Data Vault.
Instead, we use a simple trick: the data source considers the customer to be a generalized business object with no descriptions by itself. A generalized hub is used to capture this perspective from the data source. The business keys are the customer IDs from the lead and account tables. Similar references exist to opportunities.
The following diagram illustrates the hub customer:
Note that the only new entities in the overall model are the customer hub (with the customer ID as the business key) and the effectivity satellite on the hub. All other entities in the above diagram have been introduced in the previous sections already.
Therefore, the hub customer contains both lead and contact identifiers. Business rules are required to distinguish between them. We leave this to one of the next articles when we implement them in the Business Vault.
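The condition-free load of the generalized customer hub can be sketched as a plain union of the keys from both feeds (feed and column names are illustrative, not the actual Dynamics CRM attribute names):

```python
def load_hub_customer(lead_rows: list[dict], contact_rows: list[dict]) -> set:
    """Load the generalized customer hub: union the customer IDs from both
    source feeds, with no 'is this a lead or a contact?' condition.
    That distinction is business logic and belongs in the Business Vault."""
    keys = {r["customer_id"] for r in lead_rows if r.get("customer_id")}
    keys |= {r["customer_id"] for r in contact_rows if r.get("customer_id")}
    return keys

leads = [{"lead_id": "L1", "customer_id": "C100"}]
contacts = [{"contact_id": "P7", "customer_id": "C100"},
            {"contact_id": "P8", "customer_id": "C200"}]
# Both feeds load the same hub; duplicates collapse on the business key.
hub_customer = load_hub_customer(leads, contacts)   # {"C100", "C200"}
```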
Conclusion
In this article, we discussed how to get started with a Data Vault project and how to begin creating the Data Vault model. We also demonstrated how to model an actual dataset from Microsoft Dynamics CRM to provide a real-world example with many of the real decisions to be made: one could argue that quote data is either a business object or fact data, and the same is true for opportunities. Depending on the client’s context and business practices, opportunities could be seen as business objects or as facts.
In the case of the opportunities, it was decided to model this as fact data. This is largely due to the business requirements to analyze opportunities in a star schema.
However, this could also be argued for the quote object. So, what is the best decision? To be honest: we don’t know. It depends on many factors, such as the source metadata, the source data, business preferences and expectations regarding the information mart model, and others. The good news is that both models are valid and can serve any business expectation; one will just be more efficient than the other. And since both are valid models, we can always refactor the model if the modeling decision turns out to need improvement.
We left an unsolved problem in this article: the issue of the generalized customer hub that contains business keys for both leads and contacts. We will demonstrate how to deal with the customer hub and relate its records to either the contact or lead hub, depending on the business key. Since this requires business logic, we solve it using Business Vault entities in the next article.
About the Authors
Michael Olschimke is co-founder and CEO at Scalefree International GmbH, a Big-Data consulting firm in Europe, empowering clients across all industries to take advantage of Data Vault 2.0 and similar Big Data solutions. Michael has trained thousands of data warehousing individuals from the industry, taught classes in academia and publishes on these topics on a regular basis.
Markus Lewandowski works as a senior consultant at Scalefree International GmbH, a Big Data consulting firm in Europe. His focus is on designing and implementing CRM solutions for a diverse range of clients, specializing in enterprise-level integration. Additionally, he shares his expertise by teaching a CRM solutions class at a local university.
Bibhush Nepal works as a technical solutions specialist in Business Intelligence and Data Warehousing at Scalefree International GmbH, a Big-Data consulting firm in Europe. His focus mainly lies in assisting the Internal Data Warehousing team to develop and continuously improve their Internal Data Warehouse and other related projects. He is currently pursuing his Master’s Degree in Data Analytics.
Microsoft Tech Community – Latest Blogs –Read More
Benefits of moving to Azure Monitor SCOM managed instance
In this blog, let’s highlight the cost-benefit of moving from your existing SCOM on-prem to Azure Monitor SCOM MI.
If you are using System Center Operations Manager (SCOM) to monitor on-premises and hybrid cloud environments, you might be wondering whether you should migrate to Azure Monitor SCOM managed instance or keep your SCOM on-premises deployment. In this blog, we will compare the two options in terms of cost benefits (up to 44% when fully migrated to SCOM MI) and help you make an informed decision based on your specific needs and goals.
What is Azure Monitor SCOM managed instance?
Azure Monitor SCOM managed instance is a cloud-based service that provides the same functionality as SCOM on-premises, but without the hassle of managing and maintaining the infrastructure. You can use SCOM MI to monitor your resources on and off Azure, as well as integrate with other Azure services such as Log Analytics, Azure Managed Grafana, and Power BI. SCOM MI is fully compatible with your existing SCOM management packs and agents*, so you can migrate your existing monitoring configuration and data with minimal disruption.
What are the cost benefits of Azure Monitor SCOM managed instance?
Azure Monitor SCOM MI offers several cost benefits over SCOM on-premises, such as:
Reduced infrastructure & maintenance costs: You don’t need to worry about maintaining infrastructure such as server racks, network cables, electricity, cooling, physical security, or datacenter leases. Moreover, hardware infrastructure is a depreciating cost. SCOM MI runs on Azure’s scalable and reliable infrastructure, which means you only pay for what you use, and you don’t have to worry about downtime or performance issues.
You can save additionally on Azure Infrastructure with savings and reserved plans.
Reduced IT labor costs: SCOM MI is fully managed by Microsoft, which means you get updates, patches, scalability, and security out of the box. Since you don’t need to retrain your staff on SCOM management packs, and the effort required to provision, patch, and scale the SCOM MI service is significantly lower, we estimate a ~40% reduction in the time (labor cost) required to maintain and operate SCOM MI.
Optimized licensing costs: You don’t need to purchase, renew, or manage any licenses for your monitoring solution. SCOM MI is offered as a PAYG model, which means you only pay a monthly fee based on the number of monitored objects and the amount of data ingested. You also get access to all the features and capabilities of Azure Monitor, which can enhance your monitoring experience and provide additional insights and value.
For more information on SCOM MI licensing, refer here.
To illustrate the cost benefits of SCOM MI, we have created a comparison table of the estimated annual costs for a typical scenario of monitoring 500 VMs. The table does not include optional SCOM MI integrations, e.g., data ingestion to Log Analytics or usage of Grafana.
Disclaimer: The table below includes representative numbers only. For accurate Azure costs, refer to Pricing Calculator | Microsoft Azure. Also, we assume that the migration from SCOM to SCOM MI is completed quickly (<3 months), not as a long-term migration project.
Cost category: Infrastructure (hardware + software)
SCOM on-premises: $13,812 annually (to monitor 500 VMs, you need 2 SCOM servers with Windows OS, 1 SQL Server with Windows OS, server racks, storage disks, etc.)
Azure Monitor SCOM managed instance: $27,780 annually (no discount); $12,586 annually (max discount)

Cost category: Maintenance (security, lease, electricity, network, etc.)
SCOM on-premises: $4,443 annually
Azure Monitor SCOM managed instance: $0 (included under the infrastructure cost)

Cost category: IT labor (administration)
SCOM on-premises: $116,800 annually
Azure Monitor SCOM managed instance: $70,080 annually

Cost category: Licensing
SCOM on-premises: a System Center license to manage 500 VMs is $75,747. If you are using all SC products, the operating license cost attributable to SCOM is at its lowest: $12,625 (if all SC products are used) vs. $75,747 (if only SCOM is used).
Azure Monitor SCOM managed instance: $36,000 annually (SCOM MI is licensed at $6/VM/month)

Annual cost range
SCOM on-premises: $147,680 to $210,802
Azure Monitor SCOM managed instance: $118,666 to $133,860

Cost savings (once you move to SCOM MI to monitor 500 VMs):
20% if all SC products are used and maximum Azure discounts are applied
36% if only SCOM on-premises is used and no Azure discounts are applied
44% if only SCOM on-premises is used and maximum Azure discounts are applied
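The headline percentages can be reproduced from the representative figures in the table; a quick sketch (the pairings show which on-premises baseline each percentage compares against, per our reading of the numbers):

```python
# Representative annual figures from the table above (USD).
onprem_scom_only = 13_812 + 4_443 + 116_800 + 75_747   # 210,802
onprem_all_sc    = 13_812 + 4_443 + 116_800 + 12_625   # 147,680
scom_mi_no_disc  = 27_780 + 0 + 70_080 + 36_000        # 133,860
scom_mi_max_disc = 12_586 + 0 + 70_080 + 36_000        # 118,666

def savings_pct(onprem: int, scom_mi: int) -> int:
    """Percentage saved by moving from the on-prem baseline to SCOM MI."""
    return round(100 * (1 - scom_mi / onprem))

print(savings_pct(onprem_all_sc, scom_mi_max_disc))     # 20
print(savings_pct(onprem_scom_only, scom_mi_no_disc))   # 36
print(savings_pct(onprem_scom_only, scom_mi_max_disc))  # 44
```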
As you can see, Azure Monitor SCOM managed instance can save you up to 44% of the total cost of SCOM on-premises, provided you migrate to SCOM MI quickly. Of course, your actual costs may vary depending on your specific requirements and preferences, but the table gives you a general idea of the potential savings you can achieve by migrating to Azure Monitor SCOM managed instance. If you are interested in moving other System Center products to Azure and want a cost analysis, we recommend you build a business case with Azure Migrate | Microsoft Learn.
How to get started with Azure Monitor SCOM managed instance?
If you are interested in trying out Azure Monitor SCOM managed instance, you can start here. Talk to your Microsoft sales representative for clarity on applicable discounts and actual cost savings.
If you have any questions or feedback, you can leave your comments below. We would love to hear from you and help you with your monitoring needs.
*SCOM 2022 Agent (as of Feb’24).
References
Pricing Calculator | Microsoft Azure
Microsoft System Center | Microsoft Licensing Resources
Microsoft Tech Community – Latest Blogs –Read More
Benefits of moving to Azure Monitor SCOM managed instance
In this blog, let’s highlight the cost-benefit of moving from your existing SCOM on-prem to Azure Monitor SCOM MI.
If you are using System Center Operations Manager (SCOM) to monitor on-premises and hybrid cloud environment, you might be wondering whether you should migrate to Azure Monitor SCOM managed instance or keep your SCOM on-premises deployment. In this blog, we will compare the two options in terms of cost benefits (up to 44% when fully migrated to SCOM MI), and help you make an informed decision based on your specific needs and goals.
What is Azure Monitor SCOM managed instance?
Azure Monitor SCOM managed instance is a cloud-based service that provides the same functionality as SCOM on-premises, but without the hassle of managing and maintaining the infrastructure. You can use SCOM MI to monitor your resources on and off Azure, as well as integrate with other Azure services such as Log Analytics, Azure Managed Grafana, and Power BI. SCOM MI is fully compatible with your existing SCOM management packs and agents*, so you can migrate your existing monitoring configuration and data with minimal disruption.
What are the cost benefits of Azure Monitor SCOM managed instance?
Azure Monitor SCOM MI offers several cost benefits over SCOM on-premises, such as:
Reduced infrastructure & maintenance costs: You don’t need to bother about maintaining infrastructure such as server racks, network cables, electricity, cooling, physical security, datacenter lease. Moreover, hardware infrastructure is a depreciation cost. SCOM MI runs on Azure’s scalable and reliable infrastructure, which means you only pay for what you use, and you don’t have to worry about downtime or performance issues.
You can save additionally on Azure Infrastructure with savings and reserved plans.
Reduced IT labor costs: SCOM MI is fully managed by Microsoft, which means you get updates, patches, scalability, and security. Since you don’t need to retrain your staff on SCOM management packs and, the efforts required to provision, patch and scale SCOM MI service is significantly less, we estimate ~40% reduction in time (labor cost) required to maintain & operate SCOM MI.
Optimized licensing costs: You don’t need to purchase, renew, or manage any licenses for your monitoring solution. SCOM MI is offered as a PAYG model, which means you only pay a monthly fee based on the number of monitored objects and the amount of data ingested. You also get access to all the features and capabilities of Azure Monitor, which can enhance your monitoring experience and provide additional insights and value.
For more information on SCOM MI licensing, refer here.
To illustrate the cost benefits of SCOM MI, we have created a comparison table of the estimated annual costs for a typical scenario of monitoring 500 VMs. The table does not include optional SCOM MI integration i.e., data ingestion to Log Analytics, usage of Grafana.
Disclaimer: Below table includes representative numbers only. For accurate Azure costs, refer to Pricing Calculator | Microsoft Azure. Also, we assume that the duration of migration between SCOM to SCOM MI is completed quickly (<3 months) and not as a long-term migration project.
| Cost category | SCOM on-premises | Azure Monitor SCOM managed instance |
| --- | --- | --- |
| Infrastructure (hardware + software) | $13,812 (annually). To monitor 500 VMs, you need 2 SCOM servers with Windows OS, 1 SQL Server with Windows OS, server racks, storage disks, etc. | $27,780 (no discount); $12,586 (max discount) |
| Maintenance cost (security, lease, electricity, network, etc.) | $4,443 (annually) | $0 (included under infrastructure cost) |
| IT labor cost (administration) | $116,800 (annually) | $70,080 (annually) |
| Licensing | $12,625 (if all SC products used); $75,747 (if SCOM only used). A System Center license to manage 500 VMs is $75,747; if you use all SC products, SCOM’s share of the license cost is as low as $12,625. | $36,000 (annually), at $6/VM/month |
| Annual cost range | $147,680 to $210,802 | $118,666 to $133,860 |
Cost savings (once you move to SCOM MI to monitor 500 VMs):
20% if all SC products are used and maximum Azure discounts are applied
36% if SCOM on-premises only is used and no Azure discounts are applied
44% if SCOM on-premises only is used and maximum Azure discounts are applied
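The savings percentages follow directly from the table’s line items. As a quick sketch (using only the representative figures from the table above, with the 500-VM, $6/VM/month assumptions stated there, not authoritative Azure pricing), the arithmetic can be verified like this:

```python
# Representative annual figures from the comparison table (500 monitored VMs).
ONPREM = {"infrastructure": 13_812, "maintenance": 4_443, "it_labor": 116_800}
SCOM_MI = {
    "infra_no_discount": 27_780,
    "infra_max_discount": 12_586,
    "maintenance": 0,
    "it_labor": 70_080,              # ~40% less than on-premises labor
    "licensing": 6 * 500 * 12,       # $6/VM/month for 500 VMs = $36,000/yr
}

def onprem_total(scom_only: bool) -> int:
    # System Center license: $75,747 standalone, $12,625 if all SC products are used.
    licensing = 75_747 if scom_only else 12_625
    return ONPREM["infrastructure"] + ONPREM["maintenance"] + ONPREM["it_labor"] + licensing

def scom_mi_total(max_discount: bool) -> int:
    infra = SCOM_MI["infra_max_discount"] if max_discount else SCOM_MI["infra_no_discount"]
    return infra + SCOM_MI["maintenance"] + SCOM_MI["it_labor"] + SCOM_MI["licensing"]

def savings_pct(onprem: int, mi: int) -> int:
    return round(100 * (1 - mi / onprem))

print(onprem_total(scom_only=True))      # 210802 (upper end of on-prem range)
print(scom_mi_total(max_discount=True))  # 118666 (lower end of SCOM MI range)
print(savings_pct(210_802, 118_666))     # 44 (the headline savings figure)
```

Running the same comparison for the other corners of the range reproduces the roughly 20% and 36% savings figures as well.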
As the table shows, Azure Monitor SCOM managed instance can save you up to 44% of the total cost of SCOM on-premises, provided you complete the migration quickly. Your actual costs will of course vary with your specific requirements and preferences, but the table gives you a general idea of the potential savings from migrating to Azure Monitor SCOM managed instance. If you are interested in moving other System Center products to Azure and want a cost analysis, we recommend building a business case with Azure Migrate | Microsoft Learn.
How to get started with Azure Monitor SCOM managed instance?
If you are interested in trying out Azure Monitor SCOM managed instance, you can start here. Talk to your Microsoft sales representative for clarity on applicable discounts and your actual cost savings.
If you have any questions or feedback, you can leave your comments below. We would love to hear from you and help you with your monitoring needs.
*SCOM 2022 agent (as of February 2024).
References
Pricing Calculator | Microsoft Azure
Microsoft System Center | Microsoft Licensing Resources
How to get started with Azure Monitor SCOM managed instance?
If you are interested in trying out Azure Monitor SCOM managed instance, you can start here. You should talk to your Microsoft sales representative for clarity on plausible discounts and actual cost savings.
If you have any questions or feedback, you can leave your comments below. We would love to hear from you and help you with your monitoring needs.
*SCOM 2022 Agent (as of Feb’24).
References
Pricing Calculator | Microsoft Azure
Microsoft System Center | Microsoft Licensing Resources
Microsoft Tech Community – Latest Blogs –Read More
Benefits of moving to Azure Monitor SCOM managed instance
In this blog, let’s highlight the cost benefits of moving from your existing on-premises SCOM deployment to Azure Monitor SCOM MI.
If you are using System Center Operations Manager (SCOM) to monitor your on-premises and hybrid cloud environments, you might be wondering whether you should migrate to Azure Monitor SCOM managed instance or keep your SCOM on-premises deployment. In this blog, we will compare the two options in terms of cost (with savings of up to 44% when fully migrated to SCOM MI) and help you make an informed decision based on your specific needs and goals.
What is Azure Monitor SCOM managed instance?
Azure Monitor SCOM managed instance is a cloud-based service that provides the same functionality as SCOM on-premises, but without the hassle of managing and maintaining the infrastructure. You can use SCOM MI to monitor your resources on and off Azure, as well as integrate with other Azure services such as Log Analytics, Azure Managed Grafana, and Power BI. SCOM MI is fully compatible with your existing SCOM management packs and agents*, so you can migrate your existing monitoring configuration and data with minimal disruption.
What are the cost benefits of Azure Monitor SCOM managed instance?
Azure Monitor SCOM MI offers several cost benefits over SCOM on-premises, such as:
Reduced infrastructure & maintenance costs: You no longer need to maintain infrastructure such as server racks, network cables, electricity, cooling, physical security, or a datacenter lease, and hardware depreciation no longer applies. SCOM MI runs on Azure’s scalable and reliable infrastructure, which means you only pay for what you use, and you don’t have to worry about downtime or performance issues.
You can save further on Azure infrastructure with savings plans and reserved instances.
Reduced IT labor costs: SCOM MI is fully managed by Microsoft, which means updates, patching, scaling, and security are handled for you. Because your staff doesn’t need retraining on SCOM management packs, and the effort required to provision, patch, and scale the SCOM MI service is significantly lower, we estimate a ~40% reduction in the time (labor cost) required to maintain and operate SCOM MI.
Optimized licensing costs: You don’t need to purchase, renew, or manage any licenses for your monitoring solution. SCOM MI is offered on a pay-as-you-go (PAYG) model, which means you only pay a monthly fee based on the number of monitored objects and the amount of data ingested. You also get access to all the features and capabilities of Azure Monitor, which can enhance your monitoring experience and provide additional insights and value.
For more information on SCOM MI licensing, refer here.
To illustrate the cost benefits of SCOM MI, we have created a comparison table of the estimated annual costs for a typical scenario of monitoring 500 VMs. The table does not include optional SCOM MI integrations, e.g., data ingestion into Log Analytics or the use of Grafana.
Disclaimer: The table below includes representative numbers only. For accurate Azure costs, refer to Pricing Calculator | Microsoft Azure. We also assume that the migration from SCOM to SCOM MI is completed quickly (<3 months) rather than run as a long-term migration project.
Infrastructure (hardware + software)
SCOM on-premises: $13,812 annually (to monitor 500 VMs, you need 2 SCOM servers with Windows OS, 1 SQL Server with Windows OS, server racks, storage disks, etc.)
Azure Monitor SCOM managed instance: $27,780 annually (no discount); $12,586 annually (max discount)
Maintenance cost (security, lease, electricity, network, etc.)
SCOM on-premises: $4,443 annually
Azure Monitor SCOM managed instance: $0 (included under the infrastructure cost)
IT labor cost (administration)
SCOM on-premises: $116,800 annually
Azure Monitor SCOM managed instance: $70,080 annually
Licensing
SCOM on-premises: $75,747 annually (the System Center license to manage 500 VMs; if you use all System Center products, the operating license cost attributable to SCOM drops to $12,625)
Azure Monitor SCOM managed instance: $6/VM/month, i.e., $36,000 annually for 500 VMs
Annual cost range
SCOM on-premises: $147,680 to $210,802
Azure Monitor SCOM managed instance: $118,666 to $133,860
Cost savings (once you move to SCOM MI to monitor 500 VMs)
20% if all System Center products are used and maximum Azure discounts are applied
36% if only SCOM on-premises is used and no Azure discounts are applied
44% if only SCOM on-premises is used and maximum Azure discounts are applied
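Using the representative figures from the table (all assumptions from this post, not official pricing), the annual totals and the headline savings percentage can be reproduced with a short calculation:

```python
# Representative annual costs (USD) from the comparison table above.
onprem_infra = 13_812
onprem_maintenance = 4_443
onprem_labor = 116_800
onprem_license_all_sc = 12_625    # SCOM operating cost if all System Center products are used
onprem_license_scom_only = 75_747

mi_infra_no_discount = 27_780
mi_infra_max_discount = 12_586
mi_labor = 70_080
mi_license = 6 * 500 * 12         # $6/VM/month for 500 VMs = $36,000/year

onprem_min = onprem_infra + onprem_maintenance + onprem_labor + onprem_license_all_sc
onprem_max = onprem_infra + onprem_maintenance + onprem_labor + onprem_license_scom_only
mi_min = mi_infra_max_discount + mi_labor + mi_license
mi_max = mi_infra_no_discount + mi_labor + mi_license

print(onprem_min, onprem_max)  # 147680 210802
print(mi_min, mi_max)          # 118666 133860
print(f"max savings: {1 - mi_min / onprem_max:.0%}")  # max savings: 44%
```

The maximum savings pair the cheapest SCOM MI setup (maximum Azure discounts) against the most expensive on-premises setup (SCOM-only licensing).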
As you can see, Azure Monitor SCOM managed instance can save you up to 44% of the total cost of SCOM on-premises, provided you migrate to SCOM MI quickly. Of course, your actual costs may vary depending on your specific requirements and preferences, but the table gives you a general idea of the potential savings you can achieve by migrating to Azure Monitor SCOM managed instance. If you are interested in moving other System Center products to Azure and want a cost analysis, we recommend building a business case with Azure Migrate | Microsoft Learn.
How to get started with Azure Monitor SCOM managed instance?
If you are interested in trying out Azure Monitor SCOM managed instance, you can start here. Talk to your Microsoft sales representative for clarity on available discounts and actual cost savings.
If you have any questions or feedback, you can leave your comments below. We would love to hear from you and help you with your monitoring needs.
*SCOM 2022 Agent (as of Feb’24).
References
Pricing Calculator | Microsoft Azure
Microsoft System Center | Microsoft Licensing Resources
Microsoft Tech Community – Latest Blogs –Read More
Drive customer engagement with the power of AI
According to a recent IDC study commissioned by Microsoft, “For every $1 a company invests in AI, it is realizing an average return of 3.5X.” Because organizations realize a return on their AI investments within 14 months, customers are highly motivated to find partners with the necessary knowledge and skill set to deploy AI solutions today.
The Microsoft AI Partner Training Roadshow is a single-day, in-person event focused on driving customer engagement with the power of AI. The roadshow provides an exceptional opportunity to engage with Microsoft experts, hear about the latest trends in AI from Microsoft executives, and participate in technical or sales training.
Attend one of the six roadshow events
The Microsoft AI Partner Training Roadshow is scheduled in six cities across the globe, so there are only a few opportunities for in-depth training on Microsoft generative and responsible AI technologies, cloud-scale data, and modern application development platforms, including Azure AI services and Microsoft Copilot.
The first event will be on March 1, 2024, in Hyderabad, India, followed by a second event in Bengaluru, India, on March 19. You don’t want to miss this opportunity. Register for an event near you.
Acquire generative and responsible AI knowledge from Microsoft experts
In a recent blog, Judson Althoff outlined four major opportunities where organizations can empower AI transformation:
Enriching employee experience
Reinventing customer engagement
Reshaping business processes
Bending the curve on innovation
Microsoft is focused on developing responsible AI strategies grounded in pragmatic innovation and enabling AI transformation to meet our customers’ needs. The Microsoft AI Partner Training Roadshow provides expert-led sessions and hands-on experiences to enhance your sales, pre-sales, and technical deployment capabilities across these impact areas.
Prepare technical and sales teams for AI success
Open to our Global Systems Integrator (GSI) and System Integrator (SI) partners, the Microsoft AI Partner Training Roadshow offers learning across multiple skill levels and interests. Alongside a keynote address by a Microsoft leader, there are four distinct learning paths for individuals with technical or sales backgrounds:
Sales Excellence with Microsoft AI Services: Master skills to confidently pitch Microsoft AI solutions by diving into solution use cases, exploring responsible AI commitments, and highlighting incentives to increase customer business value.
Technical Excellence with Azure AI: Build your own “Intelligent Agent” copilot to answer customer questions on products and services: Learn to build an “Intelligent Agent” that helps users find products, user profiles, and sales order information. This interactive experience features theoretical and lab sessions that prepare your technical teams to use Azure OpenAI and Azure AI Search.
Technical Excellence with Azure AI: Build a scalable data estate with a custom copilot for conversational data interaction: In this hands-on track, learn how to create a payments and transactions solution. Key subjects explored include business rules for data governance, patch operations for data replication, and customizing copilots for conversational AI.
Technical Excellence with Microsoft 365: Deep dive into the use and deployment of Copilot for Microsoft 365: Gain a fuller understanding of Copilot for Microsoft 365 with technical sessions on architecture, deployment, security, and compliance.
Bridge skill gaps in AI
Because AI is rapidly developing, there is a growing skills gap as employees work to keep up. In fact, 52% of participants in this IDC survey report that the lack of skilled workers is their biggest barrier to implementing and scaling AI. Much of the challenge isn’t simply adopting the technology but providing ample opportunities for employees to explore and learn.
To close this gap, the Microsoft AI Partner Training Roadshow is committed to providing up-to-date content for participants to study during and after the event. In addition to live keynote addresses and Q&A sessions, participants will have the chance to interact with and learn from technical and sales subject matter experts on topics spanning generative and responsible AI technologies, cloud-scale data, modern application development platforms, Azure AI services, and Microsoft Copilot.
Prepare for the future
2023 introduced the world to the power of generative AI. Businesses are ready to deploy AI-based solutions as quickly as possible. The Microsoft AI Partner Training Roadshow places developers, solution architects, implementation consultants, and sales & pre-sales consultants at the forefront of AI transformation.
Because there will be no on-demand delivery after the event, we invite you to join us in Hyderabad, Bengaluru, or whichever of the other four cities across the globe is most conveniently located for you.
Visit the Microsoft AI Partner Training Roadshow website and register today to get started.
Microsoft Tech Community – Latest Blogs –Read More
Build an LLM-based application, benchmark models and evaluate output performance with Prompt Flow
Overview
In this article, I’ll be covering some of the capabilities of Prompt Flow, a development tool designed to streamline the entire development cycle of AI applications powered by Large Language Models (LLMs). Prompt Flow is available through Azure Machine Learning and Azure AI Studio (Preview).
Through Prompt Flow, I will:
Create a NER (Named-Entity Recognition) application;
Test different LLMs (GPT-3.5-Turbo vs. GPT-4-Turbo) through variant capability;
Evaluate output performance using a built-in evaluation method (QnA F1 Score Evaluation)
Note: As Azure AI Studio is in preview currently (February 2024), I’ll leverage Prompt Flow through the Azure Machine Learning Studio. After the preview period, everyone should use Azure AI Studio with Prompt Flow.
Create a NER application
On the Azure portal, go on Azure Machine Learning service, create a workspace, and launch the studio.
On the Azure Machine Learning Studio, you will find different features such as:
Prompt Flow, a feature that allows you to author flows. Flows are executable workflows that often consist of three parts:
Inputs: Represent data passed into the flow. These can be of different data types, such as strings, integers, or booleans.
Nodes: Represent tools that perform data processing, task execution, or algorithmic operations. Available tools include the LLM tool (enables custom prompt creation utilizing LLMs), the Python tool (allows the execution of custom Python scripts), and the Prompt tool (prepares prompts as strings for complex scenarios or integration with other tools).
Outputs: Represent the data produced by the flow.
Model Catalog, the hub to deploy a wide variety of third-party (Mistral, Meta, Hugging Face, Deci, NVIDIA, etc.) open-source as well as Microsoft-developed foundation models pre-trained for various language, speech, and vision use cases. You can consume some of these models directly through their inference API endpoints, called “Models as a Service” (e.g., Meta and Mistral), or deploy a real-time endpoint on dedicated infrastructure (e.g., GPU) that you manage;
Notebook, to allow data scientists to create, edit, and run Jupyter notebooks in a secure, cloud-based environment;
Compute, a managed cloud-based workstation for data scientists. Each compute instance has a single owner, although you can share files between multiple compute instances. Use a compute instance as your fully configured and managed development environment in the cloud for machine learning; it can also serve as a compute target for training and inferencing during development and testing.
Now, click on “Prompt Flow” and create a flow by selecting “Standard flow”. You should now have a flow similar to this one:
This flow represents an application with different blocks. Let me go through each block (called node):
Inputs
Takes a topic as input (the prompt)
Joke (LLM node)
Conditions the LLM “to tell good jokes” through a system message. Takes the initial prompt as input.
You need to set up a Connection for this node to interact with an endpoint (e.g., an LLM inference API, a vector index such as Azure Search or Qdrant, or Azure OpenAI deployed models). To do that, create the connection in a dedicated pane within Prompt Flow by specifying the provider, the endpoint, and credentials.
Echo (Python node)
A Python script that takes the output (completion) of the LLM as input and echoes it.
Output
Outputs the … output of the Python script.
To test your flow, provide an input and click on Run. On the Outputs tab, you can review the outputs:
Now that I have a flow, I want to edit it to become a NER (Named Entity Recognition) flow that leverages an LLM to find entities from a given text content. To do that, I’ll edit the LLM node (previously named “joke”) and the Python node (previously named “echo”).
LLM node
I’ll rename the LLM node to “NER_LLM” with the configuration below.
To perform the NER I’ll use this prompt:
system:
Your task is to find entities of a certain type from the given text content.
If there’re multiple entities, please return them all with comma separated, e.g. “entity1, entity2, entity3”.
You should only return the entity list, nothing else.
If there’s no such entity, please return “None”.
user:
Entity type: {{entity_type}}
Text content: {{text}}
Entities:
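Prompt Flow renders this template by substituting the {{entity_type}} and {{text}} inputs (it uses Jinja-style placeholders). Conceptually, the substitution looks like this; the snippet below is a plain-Python illustration, not Prompt Flow’s actual rendering code:

```python
# Hypothetical stand-in for Prompt Flow's template rendering step.
template = (
    "Entity type: {{entity_type}}\n"
    "Text content: {{text}}\n"
    "Entities:"
)
inputs = {"entity_type": "location",
          "text": "Mount Everest is the highest peak in the world."}

rendered = template
for name, value in inputs.items():
    # Replace each {{placeholder}} with the corresponding flow input
    rendered = rendered.replace("{{" + name + "}}", value)

print(rendered)
```

The rendered string is what actually gets sent to the model, with the system message prepended.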
Python node
I’ll rename the Python node to “cleansing” with the configuration below.
It runs the following Python code:
from typing import List
from promptflow import tool

@tool
def cleansing(entities_str: str) -> List[str]:
    # Split on commas, then strip leading/trailing spaces, tabs, dots, and quotes
    parts = entities_str.split(",")
    cleaned_parts = [part.strip(" \t.\"") for part in parts]
    entities = [part for part in cleaned_parts if len(part) > 0]
    return entities
Basically, this code snippet takes the comma-separated string of entities produced by the LLM, removes extraneous whitespace, tabs, dots, and quotes from each element, and returns a list of the non-empty, trimmed strings.
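As a quick sanity check, the same logic can be exercised outside the flow (a plain-Python sketch without the @tool decorator, which only registers the function with Prompt Flow):

```python
from typing import List

def cleansing(entities_str: str) -> List[str]:
    # Same logic as the flow's Python node, minus the @tool decorator
    parts = entities_str.split(",")
    cleaned_parts = [part.strip(" \t.\"") for part in parts]
    return [part for part in cleaned_parts if len(part) > 0]

print(cleansing('"Elon Musk", Google. , '))  # → ['Elon Musk', 'Google']
```

Note that str.strip takes a set of characters, so quotes, dots, tabs, and spaces are all removed from both ends of each entity, and empty fragments (like a trailing comma) are dropped.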
Test the flow
To test the flow, I asked an LLM to generate an example for me: an “entity_type” and a “text” containing one or more entities to extract through the NER process, in JSON format. I used the GPT-4-Turbo model through the Azure OpenAI Playground interface. Here is the example:
{"entity_type": "location", "text": "Mount Everest is the highest peak in the world."}
Then, I pass those into the inputs node on my flow:
Finally, I can run my flow. Basically, it will execute nodes after nodes from inputs to the outputs nodes:
I can see the result in the Outputs section:
And I can get more information in the Trace section, such as which node made API calls, processing time, the number of tokens processed on the prompt and completion sides, etc.
Variants
If you want to test different prompts, system messages, or even different models, you can create Variants. A variant is a specific version of a tool node with distinct settings, such as a different model, temperature, top_p parameter, or prompt. This way, you can perform basic A/B testing.
Let’s say you want to compare results between two of the most widely used OpenAI models, GPT-3.5-Turbo and GPT-4-Turbo.
To do that, go to the LLM node and click on “Generate variants”. In this example, I’ll keep the same prompt, temperature, and top_p parameter, but I’ll change the LLM to interact with (from GPT-4-Turbo to GPT-3.5-Turbo).
To test multiple variants at the same time, I click on Run and select all my variants (variant_0 refers to GPT-4-Turbo, variant_1 to GPT-3.5-Turbo), so the results are aggregated within the same Outputs tab:
We can see that we obtain the same results regardless of the LLM used. To be fair, this example isn’t very complex and can easily be handled by a smaller model than GPT-4-Turbo, but let’s keep it simple, as the complexity of the task is not the main purpose of this blog post.
Evaluation
Now that I have a NER flow and have tested the application with different LLMs, I’d like to evaluate output performance. This is where the Evaluate capability of Prompt Flow comes in: it lets you select built-in evaluation methods or build your own custom evaluation methods.
Here, I’ll use the built-in “QnA F1 Score Evaluation” method. I won’t go deep into the details, but this evaluation method computes the F1 score based on words in the predicted answers and the ground truth.
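Conceptually, a word-level F1 score like this can be sketched as follows. This is a simplified illustration of the idea, not the exact built-in implementation (which also applies additional text normalization):

```python
from collections import Counter

def word_f1(prediction: str, ground_truth: str) -> float:
    # Tokenize naively on whitespace after lowercasing
    pred_tokens = prediction.lower().split()
    truth_tokens = ground_truth.lower().split()
    # Count words shared between prediction and ground truth
    common = Counter(pred_tokens) & Counter(truth_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)

print(word_f1("Elon Musk", "Elon Musk"))      # → 1.0
print(word_f1("Mr. Elon Musk", "Elon Musk"))  # 2 of 3 predicted words match → 0.8
```

An exact match scores 1.0, while extra or missing words lower precision or recall respectively, and the F1 score balances the two.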
Then, I need data samples to run the flow and evaluate the outputs at a larger scale than a single example. One common generative AI use case is having these models generate data samples, so I’ll use GPT-4-Turbo to generate 50 examples that will serve to run the flows and evaluate the outputs.
Here is my system message:
Your task is to generate in .jsonl format a data set that will be used to evaluate an LLM-based application.
This application is performing NER (Named Entity Recognition) with 2 inputs: “entity_type” as a string (e.g. “job title”) and “text” as a string (e.g. “The software engineer is working on a new update for the application.”). The desired output is the entity or entities if they’re multiple (e.g. “software engineer”).
Here is my prompt:
Generate 50 samples:
Here are the first five lines of the completion:
{"entity_type": "person", "text": "Elon Musk has announced a new Tesla model.", "entity": "Elon Musk"}
{"entity_type": "organization", "text": "Google is planning to launch a new feature in its search engine.", "entity": "Google"}
{"entity_type": "job title", "text": "Dr. Susan will take over as the Chief Medical Officer next month.", "entity": "Chief Medical Officer"}
{"entity_type": "location", "text": "The Eiffel Tower is one of the most visited places in Paris.", "entity": "Eiffel Tower"}
{"entity_type": "date", "text": "The conference is scheduled for June 23rd, 2023.", "entity": "June 23rd, 2023"}
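Before uploading a generated sample, it’s worth checking that every line is well-formed JSON with the expected fields, since LLM-generated data can contain malformed lines. A small sketch (field names taken from the examples above):

```python
import json

# Two lines from the generated .jsonl sample, used here as a stand-in for the full file
sample = '''{"entity_type": "person", "text": "Elon Musk has announced a new Tesla model.", "entity": "Elon Musk"}
{"entity_type": "organization", "text": "Google is planning to launch a new feature in its search engine.", "entity": "Google"}'''

# json.loads raises ValueError on any malformed line
records = [json.loads(line) for line in sample.splitlines()]

# Every record must carry the three fields the evaluation mapping expects
assert all(r.keys() == {"entity_type", "text", "entity"} for r in records)
print(len(records))  # → 2
```

Running this over the whole file before the evaluation step avoids failed runs caused by a single bad line.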
Once I’m happy with my sample, I select Evaluate in Prompt Flow, where I can edit the run display name, add a description and tags, and select, for each LLM node, the variants I want to evaluate. In my case, I select the two variants I created:
Now I need to select a runtime, upload my sample, and do the input mapping:
Then I select the evaluation method I want to use (here, the QnA F1 Score Evaluation method). I need to specify the data sources for the ground_truth (from the sample) and for the answer (generated by the LLM within the flow):
Finally, I can click on “Review + Submit”. Behind the scenes, Prompt Flow executes my flow in two separate runs, one with variant_0 and the other with variant_1. Once these runs complete, it applies the QnA F1 Score Evaluation method to both.
We can see the results of the executions on the Runs tab:
First observation is the duration of each execution:
The execution based on variant_0 (GPT-4-Turbo) took 1 min 14 s to complete;
The execution based on variant_1 (GPT-3.5-Turbo) took 14 s to complete.
One thing to keep in mind in the LLM world is that using a larger model will, most of the time, result in longer inference times.
Now let’s have a look at the evaluations. By selecting both evaluation runs we can output results:
We can observe that the flow with the highest F1 score is the one leveraging the GPT-4-Turbo model (F1 score of 0.95), compared to the GPT-3.5-Turbo model (F1 score of 0.89).
Although the larger model delivers better output performance in this evaluation, the GPT-3.5-Turbo model is roughly 80% faster at inference and more cost-effective as well.
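The ~80% figure follows directly from the run durations above:

```python
gpt4_seconds = 74   # 1 min 14 s with variant_0 (GPT-4-Turbo)
gpt35_seconds = 14  # 14 s with variant_1 (GPT-3.5-Turbo)

# Fraction of the run time saved by switching to the smaller model
speedup = 1 - gpt35_seconds / gpt4_seconds
print(f"{speedup:.0%}")  # → 81%
```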
Inference speed and token pricing are among the trade-offs you need to weigh to make sure you choose the right model for your needs.
Conclusion
In conclusion, we covered Prompt Flow within Azure Machine Learning and Azure AI Studio to build and evaluate AI applications powered by Large Language Models (LLMs). This blog post walked through the process of creating a Named-Entity Recognition (NER) application, testing it with different LLMs (GPT-3.5-Turbo and GPT-4-Turbo), and evaluating output performance using the built-in QnA F1 Score Evaluation method.
We demonstrated the use of variants to perform A/B testing between different models and ran a performance evaluation using generated data samples to calculate the F1 score, highlighting the trade-offs between inference speed, model size, and cost-effectiveness.
About the author
Alexandre Levret is a Technology Specialist within Microsoft working with Digital Native customers (Unicorns & Scaleups) in EMEA on AI/ML and GenAI projects.
Microsoft Tech Community – Latest Blogs –Read More
Managing MDTI Premium licenses in Microsoft Entra Admin Center
This blog details how to assign and manage Microsoft Defender Threat Intelligence (MDTI) licenses and contains links to helpful content and resources. It is intended for customers who recently purchased the MDTI Premium SKU or a SKU that enables MDTI Premium access for its user base, such as Copilot for Security. Global administrators or identity governance administrators responsible for assigning MDTI user seat assignments will find it particularly useful.
Prerequisites to assigning MDTI premium licenses
Your Microsoft account team should have notified you that your MDTI procurement processing is complete and requested that you view the available licenses within your respective tenant. If your agreement has not been fully processed, you will not be able to view the “Defender Threat Intelligence” licenses.
Instructions to assign MDTI Premium Licenses
As mentioned above, global administrators or identity governance administrators are responsible for assigning MDTI Premium licenses to users and should review the following Microsoft Learn resources for best practices on assigning licenses within the Microsoft Entra Admin Center:
Tutorial – Manage access to resources in entitlement management – Microsoft Entra ID Governance | Microsoft Learn
Microsoft Entra built-in roles – Microsoft Entra ID | Microsoft Learn
Figure 1 – This is how your “Defender Threat Intelligence” MDTI Premium SKU licenses appear in Microsoft Entra Admin Center.
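Besides the admin center UI, licenses can be assigned programmatically through Microsoft Graph's assignLicense action (POST /users/{id}/assignLicense). The sketch below only builds the request; the SKU GUID shown is a placeholder (look up the real “Defender Threat Intelligence” skuId via GET /subscribedSkus), and an actual call requires an access token held by a global administrator or identity governance administrator:

```python
import json

# Placeholder GUID for illustration only; retrieve the real
# "Defender Threat Intelligence" skuId from GET /subscribedSkus.
MDTI_SKU_ID = "00000000-0000-0000-0000-000000000000"

def assign_license_request(user_id: str, sku_id: str) -> dict:
    """Build the Microsoft Graph assignLicense request for one user."""
    return {
        "method": "POST",
        "url": f"https://graph.microsoft.com/v1.0/users/{user_id}/assignLicense",
        "body": {
            "addLicenses": [{"skuId": sku_id, "disabledPlans": []}],
            "removeLicenses": [],
        },
    }

request = assign_license_request("analyst@contoso.com", MDTI_SKU_ID)
print(json.dumps(request["body"], indent=2))
```

For bulk assignment, group-based licensing in Microsoft Entra is generally the recommended route.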
Troubleshooting MDTI Premium Seat Assignments
Ensure that you have the permissions to assign “Defender Threat Intelligence” licenses. Only global administrators or identity governance administrators have the appropriate permissions to assign user licenses.
Check with your Microsoft Account team that your MDTI Premium SKU agreement has been processed.
If you have completed the troubleshooting steps above and still cannot locate your “Defender Threat Intelligence” licenses in Microsoft Entra Admin Center, please work with your Microsoft account team to engage a CSA or another technical resource for further support.
We Want to Hear from You!
Be sure to join our fast-growing community of security pros and experts to provide product feedback and suggestions. Let us know how MDTI is helping your team stay on top of threats. With an open dialogue, we can create a safer internet together. Also, learn more about how to use MDTI to unmask adversaries and address threats here.
Streamlining Azure Marketplace Deployments
Navigating the complexity of deploying solutions to the Azure Marketplace is a common challenge faced by many of our partners at Microsoft. Recognizing this, our Global Partner Services team has developed a powerful tool to simplify this process: the Commercial Marketplace Offer Deployment Manager, or MODM.
Introducing MODM
MODM is a dedicated, first-party installer designed to streamline the deployment of intricate solutions in the Azure Marketplace. It is especially crafted to support deployments using HashiCorp’s Terraform and Azure Bicep, enhancing the versatility and efficiency of the deployment process.
How MODM Simplifies Deployment
The deployment process with MODM is straightforward, involving two main steps:
Step 1: Create Your Application Package
The initial phase involves packaging your solution into an application package using the Azure CLI Partnercenter Extension. MODM accommodates two types of solutions for packaging:
1. HashiCorp’s Terraform: This popular open-source infrastructure as code tool is now seamlessly supported for Azure Marketplace deployments. Previously, Terraform-based solutions needed conversion to Azure Resource Manager templates, a process that demanded significant development and testing efforts. MODM eliminates this requirement.
2. Azure Bicep: Azure Bicep offers a more readable and concise syntax compared to the JSON of Azure Resource Manager templates. With MODM, converting your Azure Bicep templates to ARM templates is a thing of the past.
Both Terraform and Bicep solutions require minimal prerequisites to be compatible with MODM. Place your solution in a directory with a main entry point file (main.tf for Terraform, main.bicep for Bicep), install the Azure CLI extension for Partnercenter, and execute a single command to create an application package ready for Azure Marketplace.
Simply execute:
az partnercenter marketplace offer package build --id simpleterraform --src $src_dir --package-type AppInstaller
Step 2: Publish Your Application Package
Publishing your application package follows the same protocol as any other Marketplace solution. Utilize the Azure CLI Extension for Partnercenter or the Partnercenter Portal for this purpose.
Post-Deployment: Installing Your Published Package
Installing a marketplace offer deployed with MODM is as straightforward as installing any other managed app. A unique aspect of MODM is the inclusion of a user-friendly front-end experience that allows you to monitor the installation progress and troubleshoot any issues that arise. Detailed documentation and a helpful video tutorial on this process are available for further guidance.
MODM’s Architecture Overview
MODM’s architecture is anchored by the App Installer, a virtual machine that plays a pivotal role in the deployment process. This component takes the packaged app.zip from the Partnercenter CLI command and oversees the installation, managing aspects like retries and machine restarts. A detailed breakdown of MODM’s architecture is available in our GitHub documentation.
Educational Resources and Tutorials
To assist you further, we have prepared video tutorials covering various aspects of using MODM:
Packaging Terraform Solutions
Installing the Published Offer
Source Code Repositories
MODM Installer
Azure CLI Partnercenter Extension
Leverage Secure Multi-Party Computation (SMPC) for machine learning inference on rs-fMRI datasets
@Alberto Santamaria-Pang, @Ivan Tarapov, @Yonas Woldesenbet, @Sam Preston, @Rahul Sharma, @Nishanth Chandran, @Divya Gupta, @Kashish Mittal, and @Ajay Manchepalli.
Machine learning models are useful in analyzing patient data, helping in detecting diseases early, and enabling clinicians in creating personalized treatments. However, using these models in healthcare is challenging because it requires accessing and processing sensitive patient data while ensuring patient privacy and complying with strict regulations.
Traditional encryption methods can only protect data when it is stored and not when it is being used for computation. One way to perform computation on encrypted data is to decrypt it in a trusted region like a secure enclave, which is done in Microsoft’s product offering Azure Confidential Computing. A cryptographic way of protecting information exists that can operate directly on encrypted data without the need for decryption – this technique is known as Secure Multi-party Computation (SMPC). SMPC helps ensure that sensitive healthcare data remains secure while enabling healthcare professionals to perform computations on the data they need to provide better care for patients.
Traditional encryption vs. SMPC
While both traditional encryption methods and Secure Multi-Party Computation (SMPC) offer similar levels of data security, SMPC has the added capability of allowing computations on encrypted data. For instance, in the case of wanting to conduct model inference on an encrypted DICOM image, it’s possible to directly use the encrypted image with SMPC. The additional computational load or overhead of using SMPC depends on the specific function or computation being performed on the encrypted data.
Comparison criteria: Traditional encryption methods vs. Secure Multi-Party Computation (SMPC)
Data exposure. Traditional encryption: raw data must be decrypted for analysis or use. SMPC: computation is performed on encrypted data.
Inference speed. Traditional encryption: encryption and decryption overhead is minimal. SMPC: joint computation on encrypted data can introduce latency overhead.
Trust assumptions. Traditional encryption: relies on a trusted third party or secure infrastructure. SMPC: distributed computation with privacy assurance.
Figure 1 Traditional encryption methods vs. Secure Multi-Party Computation (SMPC).
SMPC transforms healthcare data analysis and ML
SMPC provides a solution that allows multiple parties to work together on their data without revealing any sensitive information. It helps healthcare providers and researchers securely analyze patient data and use ML models while maintaining patient privacy.
Here are some key benefits of SMPC in the healthcare sector:
Privacy preservation. SMPC protects individual patient data during the computation process. Each party only sees their own data, and the others’ data is hidden. This lets healthcare providers and researchers work together and use more data without risking privacy.
Collaborative research. SMPC facilitates collaborative research among healthcare institutions, enabling them to pool their data resources without compromising privacy. Multiple parties can train ML models together on their combined data while keeping patient records and information safe. This helps improve the ML models in healthcare by using more and different data sources and larger samples.
Secure data sharing. SMPC helps enable healthcare providers to more securely share specific information from their datasets with other authorized parties. For example, when studying rare diseases, healthcare organizations may be able to share some patient data points or features while helping preserve their identity and privacy. This controlled sharing mechanism helps enhance research and contributes to the advancement of medical knowledge.
Privacy-preserving ML to improve the security of fMRI data analysis in healthcare
In this blog we explore the application of SMPC to medical image analysis via machine learning techniques for a specific use case of functional Magnetic Resonance Imaging (fMRI) analysis. Applying ML to fMRI data has the potential to revolutionize healthcare by providing insights into brain function and diagnosing neurological disorders. However, the sensitive nature of fMRI data raises significant privacy concerns. To address these challenges, one may employ privacy‑preserving ML techniques, such as data anonymization, secure data encryption, federated learning, and differential privacy, which would allow leveraging the benefits of ML in fMRI analysis while maintaining patient confidentiality and adhering to regulatory requirements.
Before diving into the details of how OnnxBridge (an end-to-end compiler for converting Onnx Models to Secure Cryptographic backends) enables secure machine learning for fMRI data, it is important to understand how fMRI is relevant for neuroscience research. Functional magnetic resonance imaging (fMRI) is a technique that measures brain activity by detecting changes in blood flow. By using fMRI, researchers can identify which brain regions are involved in different cognitive functions, such as memory, language, or emotion. This is known as functional localization. However, fMRI data is often sensitive and confidential, as it can reveal personal information about the participants’ health, preferences, or personality. Therefore, it is essential to protect the privacy and security of fMRI data when performing machine learning analysis on it.
In the rest of this blog post, we cover these topics:
What rs‑fMRI is and how it measures brain activity by detecting changes in blood flow.
How SMPC protects the privacy and security of fMRI data when performing machine learning analysis using EzPC‑OnnxBridge, a crucial part of the EzPC project from Microsoft Research India (MPC-MSRI, 2021).
How to use EzPC‑OnnxBridge for rs‑fMRI to identify brain regions involved in different cognitive functions.
What is rs‑fMRI and how is it used to localize brain networks?
Unlike traditional fMRI, which captures brain activity during specific tasks or stimuli, rs‑fMRI delves into the spontaneous fluctuations of the brain when it is in a state of rest or free thinking. It explores the intricate networks of communication among different brain regions, shedding light on the underlying functional architecture that forms the foundation of our cognition.
The power of rs-fMRI lies in its ability to measure and analyze blood oxygen level-dependent (BOLD) signals. By detecting changes in blood flow and oxygenation, rs-fMRI provides a window into the brain’s dynamic activity during rest. These fluctuations in the BOLD signal, known as resting-state connectivity, are like whispers of communication between various regions of the brain, even when we are not consciously engaged in any cognitive task.
Through advanced computational algorithms and sophisticated statistical analysis, researchers can map and visualize these functional connections within the brain. However, it is important to note that rs-fMRI is not without its challenges and limitations. The interpretation of resting-state connectivity requires careful consideration, as it represents correlations between brain regions rather than direct causality. Moreover, factors such as participant motion, physiological noise, and data pre-processing methods can influence the results and must be rigorously addressed to help ensure data quality and reliability. Here is where ML algorithms can help neuroradiologists efficiently map and visualize brain networks for a range of clinical applications. In this blog, we provide an example of how to use SMPC to automatically identify and localize brain networks using work published in [3].
Figure 2 Visualization of brain networks from 3D dual regression volumes.
How SMPC works using EzPC-OnnxBridge
We begin with an overview of how secure multi‑party computation (SMPC) works and then describe how EzPC‑OnnxBridge can be used in the application described above. EzPC OnnxBridge allows using SMPC without any knowledge of cryptography. We will now walk through the steps for using EzPC OnnxBridge for this application.
SMPC is a cryptographic primitive introduced in the 1980s [4,5] that helps enable two or more parties who hold private data to collaborate (or compute joint functions) on their private/secret data, without sharing it in the clear with any entity. This is done through an interactive cryptographic protocol: each party performs computations on its data and iteratively exchanges (seemingly random) messages with the other parties. At the end of such an interaction, the parties learn only the output of the joint function. As an example, if two parties A and B have private inputs a and b and wish to compute the function y = f(a,b), which outputs 1 if a>b and 0 otherwise, they can run an SMPC protocol to compute precisely y and nothing else. SMPC protocols have been extensively studied in the cryptography community over the last four decades, with the latest research, such as the EzPC technology [6,7,8,9], making SMPC practical for large-scale ML models. In the application of secure machine learning for fMRI data, we have two parties: one that holds the machine learning model and one that holds an input data point for inference. For the first party, the weights of the ML model are private, while for the second party, the input data point is private. In typical applications, including ours, the model architecture is public and known to both parties.
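A common building block behind such protocols is additive secret sharing, which the toy sketch below illustrates for a linear function (a joint sum). This is only a conceptual illustration, not EzPC's protocol: nonlinear operations such as the a>b comparison above require substantially more machinery (e.g., garbled circuits or the specialized protocols in EzPC).

```python
import secrets

P = 2_147_483_647  # prime field modulus for the toy example

def share(value: int) -> tuple[int, int]:
    """Split a value into two additive shares mod P; each share alone
    is uniformly random and reveals nothing about the value."""
    r = secrets.randbelow(P)
    return r, (value - r) % P

# Party A and party B each secret-share their private input.
a_share_for_a, a_share_for_b = share(42)   # A's private input: 42
b_share_for_a, b_share_for_b = share(58)   # B's private input: 58

# Each party adds the shares it holds; no party ever sees the
# other's input, only a random-looking share of it.
partial_a = (a_share_for_a + b_share_for_a) % P
partial_b = (a_share_for_b + b_share_for_b) % P

# Combining the two partial results reveals only the output a + b.
joint_sum = (partial_a + partial_b) % P
assert joint_sum == 100
```

Note that either share in isolation is indistinguishable from a random field element, which is what gives the privacy guarantee.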
1. Identify sensitive data
We first identify the data involved in a single inference between two parties:
Machine Learning Model (Model Weights + Model Architecture).
Input data for inference.
Image by author using [2].
In the above, the secret (or private) data belonging to the two parties are:
Model Weights (obtained after training publicly available model architecture on private data) to one party.
Input data to the other party.
Image by author using [2].
Typically, model architectures are openly available and do not hold any proprietary data of any of the parties.
2. Strip ML model of weights
Now that we know what the secret data involved in an inference is, the next step is to strip the ML model of its weights so that the model architecture can be shared. This is shown in the figure below.
Image by author using [2].
The above step helps us confirm that the secret data is in no way involved in generating the crypto protocols, and gives us full control over our data, which we input only at the time of secure inference.
In the image above, we can see the mlp.onnx model before and after its secret data (i.e., the weights and biases of all layers) is stripped and represented as an input value, which means the model architecture does not contain any secret data and expects it at runtime.
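Conceptually, the stripping step separates a model into a shareable architecture (with named placeholders where parameters used to be) and a private store of the numeric values. The sketch below is a deliberately simplified stand-in using a dict-based model; OnnxBridge's actual implementation operates on ONNX graph initializers instead:

```python
# Toy stand-in for the stripping step: separate an MLP's secret
# parameters from its shareable architecture.
model = {
    "layers": [
        {"op": "MatMul", "weights": [[0.3, -1.2], [0.7, 0.1]]},
        {"op": "Add",    "bias":    [0.05, -0.4]},
        {"op": "Relu"},
    ]
}

def strip_weights(model: dict) -> tuple[dict, dict]:
    """Return (architecture, secret_store): the architecture keeps only
    op types and declares parameters as named runtime inputs; the
    numeric values go into a private store kept by the model owner."""
    architecture, secret_store = {"layers": []}, {}
    for i, layer in enumerate(model["layers"]):
        public = {"op": layer["op"]}
        for key in ("weights", "bias"):
            if key in layer:
                name = f"layer{i}.{key}"
                public[key] = name               # placeholder input name
                secret_store[name] = layer[key]  # private value
        architecture["layers"].append(public)
    return architecture, secret_store

architecture, secret_store = strip_weights(model)
```

The shared architecture now contains only op types and placeholder names, so it can safely be sent to the other parties.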
3. Generate SMPC protocols from architecture
After we have the model architecture without weights, we need to convert this architecture into cryptographically secure protocols that run on the secret data and produce the same output as an inference run without any cryptography or security guarantees involved. This is done through EzPC-OnnxBridge and is depicted below.
Image by author using [2].
4. Secure inference on private data
Finally, we need to run the above generated crypto protocols for each of two parties involved. These protocols will take the secret data as input and will communicate with each other some encrypted (masked) bits and pieces of data, which have strong mathematical assurances such that at any point the data being communicated does not reveal any information about the secret data.
At the end of the computation, the output of the computation is revealed to the specified parties (one or both) involved in the computation.
Using EzPC OnnxBridge for rs-fMRI
EzPC offers an inference app that serves as a front end for SMPC operations. This application presents users with a graphical user interface (GUI) through which they can upload images and obtain results securely. Next, we’ll walk through the steps required to get the app running.
Internally, the application utilizes OnnxBridge, an end-to-end compiler, to convert Onnx files to SMPC cryptographic protocols. The compiler helps with the removal of confidential data from models before converting them to Secure Multi-Party Computation (SMPC) protocols. Thus, EzPC provides a user-friendly interface that facilitates a more secure compilation and execution of machine learning models.
Let’s take a look at the practical implementation of OnnxBridge to conduct secure inference using the mlp.onnx model specifically designed for rs‑fMRI (resting‑state functional magnetic resonance imaging) images.
The setup steps from the EzPC GitHub repo will help us get the inference app running. The steps will be executed in the following order:
1. Install dependencies for:
Cryptographic backend
Compiler OnnxBridge
2. Set up the server (model owner and model processing).
Extract the MLP model from the JHU GitHub repository.
Strip the model of its weights and save them in a file.
Load the stripped model architecture.
Generate the secure backend code for the model architecture.
Share the stripped model architecture with the dealer/client.
3. Set up the dealer.
Compile the model architecture received from the server.
Compute and share pre-generated randomness with the server/client to reduce communication drastically and speed up inference.
Note: No secret data is involved in generating this randomness.
4. Set up the client (acting as the image owner).
Compile the model architecture received from the server.
5. Set up the inference app.
The app encrypts the input image and sends it to the client VM, which starts inference. See the screenshots below.
Step 1: Upload the image.
Step 2: Receive encryption from dealer.
Step 3: Encrypt the image.
Step 4: Start secure inference.
With the above, we can see how EzPC provides an interface, backed by strong cryptographic backends, that lets us follow SMPC without ever exposing the secret data.
References
MPC-MSRI. (2021). EzPC: Easy Secure Multi-party Computation. GitHub. Retrieved from https://github.com/MPC-MSRI/EzPC.
AmmarPL. (2021). fMRI Classification JHU. GitHub. Retrieved from https://github.com/AmmarPL/fMRI-Classification-JHU.
Empower Medical Innovations: Intel Accelerates PadChest & fMRI Models on Microsoft Azure* Machine Learning. https://www.intel.com/content/www/us/en/developer/articles/technical/intel-accelerates-padchest-fmri-models-on-azure-ml.html
Dsouza, Trevor. Machine Learning Icon, distributed under CC BY 3.0.
Ghate, S., Santamaria-Pang, A., Tarapov, I., Sair, H., Jones, C. (2022). Deep Labeling of fMRI Brain Networks Using Cloud Based Processing. In: Bebis, G., et al. Advances in Visual Computing. ISVC 2022. Lecture Notes in Computer Science, vol 13598. Springer, Cham. https://doi.org/10.1007/978-3-031-20713-6_21.
Yao, A. (1982). Protocols for Secure Computations. In Proceedings of the 23rd Annual Symposium on Foundations of Computer Science (pp. 160-164). IEEE.
Goldreich, O., Micali, S., & Wigderson, A. (1987). How to play any mental game or A completeness theorem for protocols with honest majority. In Proceedings of the 19th Annual ACM Symposium on Theory of Computing (pp. 218-229). ACM.
Kumar, N., Rathee, M., Chandran, N., Gupta, D., Rastogi, A., & Sharma, R. (2020). CrypTFlow: Secure TensorFlow Inference. In Proceedings of the 41st IEEE Symposium on Security and Privacy (pp. 1247-1264). IEEE.
Rathee, D., Rathee, M., Kumar, N., Chandran, N., Gupta, D., Rastogi, A., & Sharma, R. (2020). CrypTFlow2: Practical 2-Party Secure Inference. In Proceedings of the 27th ACM Conference on Computer and Communications Security (pp. 1639-1656). ACM.
Chandran, N., Gupta, D., Rastogi, A., Sharma, R., & Tripathi, S. (2019). EzPC: Programmable and Efficient Secure Two-Party Computation for Machine Learning. In Proceedings of the 4th IEEE European Symposium on Security and Privacy (pp. 123-138). IEEE.
Gupta, K., Kumaraswamy, D., Chandran, N., Gupta, D. (2022). LLAMA: A Low Latency Math Library for Secure Inference. In Proceedings of the Privacy Enhancing Technologies Symposium (PoPETS).
Do more with your data with Microsoft Cloud for Healthcare
With Azure AI Health Insights, health organizations can transform their patient experience.
Drive customer engagement with the power of AI
According to a recent IDC study commissioned by Microsoft, “For every $1 a company invests in AI, it is realizing an average return of $3.5X.” Because organizations realize a return on their AI investments within 14 months, customers are highly motivated to find partners with the necessary knowledge and skill set to deploy AI solutions today.
The Microsoft AI Partner Training Roadshow is a single-day, in-person event focused on driving customer engagement with the power of AI. The roadshow provides an exceptional opportunity to engage with Microsoft experts, hear about the latest trends in AI from Microsoft executives, and participate in technical or sales training.
Attend one of the six roadshow events
The Microsoft AI Partner Training Roadshow is scheduled in only six cities across the globe, so there are just a few opportunities for in-depth learning on Microsoft generative and responsible AI technologies, cloud-scale data, and modern application development platforms, including Azure AI services and Microsoft Copilot.
The first event will be on March 1, 2024, in Hyderabad, India, followed by a second event in Bengaluru, India, on March 19. You don’t want to miss this opportunity. Register for an event near you.
Acquire generative and responsible AI knowledge from Microsoft experts
In a recent blog, Judson Althoff outlined four major opportunities where organizations can empower AI transformation:
Enriching employee experience
Reinventing customer engagement
Reshaping business processes
Bending the curve on innovation
Microsoft is focused on developing responsible AI strategies grounded in pragmatic innovation and enabling AI transformation to meet our customers’ needs. The Microsoft AI Partner Training Roadshow provides expert-led sessions and hands-on experiences to enhance your sales, pre-sales, and technical deployment capabilities across these impact areas.
Prepare technical and sales teams for AI success
Open to our Global Systems Integrator (GSI) and System Integrator (SI) partners, the Microsoft AI Partner Training Roadshow offers learning across multiple skill levels and interests. Alongside a keynote address by a Microsoft leader, there are four distinct learning paths for individuals with technical or sales backgrounds:
Sales Excellence with Microsoft AI Services: Master skills to confidently pitch Microsoft AI solutions by diving into solution use cases, exploring responsible AI commitments, and highlighting incentives to increase customer business value.
Technical Excellence with Azure AI: Build your own “Intelligent Agent” copilot to answer customer questions on products and services: Learn to build an “Intelligent Agent” that helps users find products, user profiles, and sales order information. This interactive experience features theoretical and lab sessions that prepare your technical teams to use Azure OpenAI and Azure AI Search.
Technical Excellence with Azure AI: Build a scalable data estate with a custom copilot for conversational data interaction: In this hands-on track, learn how to create a payments and transactions solution. Key subjects explored include business rules for data governance, patch operations for data replication, and customizing copilots for conversational AI.
Technical Excellence with Microsoft 365: Deep dive into the use and deployment of Copilot for Microsoft 365: Gain a fuller understanding of Copilot for Microsoft 365 with technical sessions on architecture, deployment, security, and compliance.
Bridge skill gaps in AI
Because AI is rapidly developing, there is a growing skills gap as employees work to keep up. In fact, 52% of participants of this IDC survey report that the lack of skilled workers is their biggest barrier to implementing and scaling AI. Much of the challenge isn’t simply adopting technology but also providing ample opportunities for employees to explore and learn.
To reconcile this divide, the Microsoft AI Partner Training Roadshow is committed to providing up-to-date content for participants to study during and after the event. In addition to live keynote addresses and Q&A sessions, participants will have the chance to interact with and learn from technical and sales subject matter experts on topics that span generative and responsible AI technologies, cloud-scale data, and modern application development platforms, including Azure AI services and Microsoft Copilot.
Prepare for the future
2023 introduced the world to the power of generative AI. Businesses are ready to deploy AI-based solutions as quickly as possible. The Microsoft AI Partner Training Roadshow places developers, solution architects, implementation consultants, and sales & pre-sales consultants at the forefront of AI transformation.
Because there will be no on-demand delivery post-event, we invite you to join us in Hyderabad, Bengaluru, or one of the other four cities across the globe, whichever is most convenient for you.
Visit the Microsoft AI Partnership Roadshow website and register today to get started.