Category Archives: Microsoft
Vulnerabilities List from Defender API not filtering by My Organization
Hi all,
When I query the API for the list of vulnerabilities detected by Defender using this URL: GET https://api.securitycenter.microsoft.com/api/Vulnerabilities, I get a list of >250K vulnerabilities, but when I look in the Defender Portal (Vulnerability Management –> Weakness) the list is 11K.
It seems that the API is not filtering vulnerabilities to “my organization”, but returning the complete list of vulnerabilities known to Defender.
Is there any filter that I can apply to the API URL to return only the vulnerabilities found in my organization?
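For reference, this is roughly how I’m calling it (a minimal Python sketch; the token acquisition is a placeholder, and the commented org-scoped endpoint is only an assumption to verify in the docs):

import requests

# Placeholder: acquire a bearer token for the Defender for Endpoint API
# (e.g. via MSAL) with the Vulnerability.Read.All permission.
token = "<access-token>"
headers = {"Authorization": f"Bearer {token}"}

# This call returns the full vulnerability catalog (>250K entries):
url = "https://api.securitycenter.microsoft.com/api/Vulnerabilities"
resp = requests.get(url, headers=headers, timeout=30)
resp.raise_for_status()
print(len(resp.json().get("value", [])), "vulnerabilities in the first page")

# Assumption to verify: the org-scoped data may be exposed via
# /api/vulnerabilities/machinesVulnerabilities (vulnerabilities per device).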
Thanks and best regards,
Alberto Medina
I cannot delete an old org in Teams
Hello,
Unfortunately, I cannot delete an old org in my personal Teams, for which I used to work years ago. The switch to make it invisible is grayed out. When I go to my settings and “My Groups”, there is no sign of that org.
What did I miss? There must be a simple explanation 🙂
Additional Standard Database Template (MS Access)
I was inspired by the Northwind database and decided to work on an alternative package to complement the standard MS Access templates. What I am proposing is an additional accounting package template built in MS Access which goes all the way to balance sheet, profit and loss, debtors and creditors statements, aging analysis, project codes, work in progress, inventory valuation, invoicing, credit notes, and general ledger transaction history and analysis. It also comes with user logon, assignment of access rights, locking of transacting periods, inventory movement reports, sales analysis by product, by customer, by sales rep, and much more. I believe it could be a great addition to the pre-loaded MS Access databases and increase the uptake of Office MIS products.
CEF Collector ingesting logs to ‘Syslog’ table instead of ‘CommonSecurityLog’
I am forwarding Palo Alto and Fortinet firewall logs to the CEF collector, but in Sentinel the logs are showing up in the ‘Syslog’ table instead of ‘CommonSecurityLog’. What could be the issue? Everything is in place, including the DCR.
Exception for Forwarding External Email to Group with External Senders Blocked
Is it possible to create an exception to allow a single external email address to send to a group that blocks external senders?
Missing entries in custom log table
We are writing to a custom log table in a few Log Analytics workspaces – these workspaces are targeted by a few different instances of our application (beta/staging/prods, etc.).
Interestingly, about three of these workspaces are missing certain logs, while the other five or so have them. There are also no exceptions thrown in our ASP.NET Core code where we do a SendMessage to OMS.
Any ideas how something like this could happen and how to troubleshoot/fix it?
Thanks
Create error message when Currency Field exceeds maximum
I have a SharePoint form on which I want to set a maximum allowed value for a currency field and have an error message appear before the form is saved, telling the requester that the field exceeds the maximum allowed value. I have tried a variety of things but am not having any luck. The form can’t be saved, but it doesn’t inform the user as to why.
AI+API better together: Benefits & Best Practices using APIs for AI workloads
This blog post gives you an overview of the benefits and best practices you gain by harnessing APIs and an API Manager solution when integrating AI into your application landscape.
Adding artificial intelligence (AI) to existing applications is becoming an important part of application development. The correct integration of AI is vital to meet business goals and functional and non-functional requirements, and to build applications that are efficient to maintain and enhance. APIs (Application Programming Interfaces) play a key part in this integration, and an API Manager is fundamental to keep control of the usage, performance, and versioning of APIs – especially in enterprise landscapes.
Quick Refresher: What is an API & API Manager?
An API is a connector between software components, promoting the separation of components by adding an abstraction layer, so someone can interact with a system and use its capability without understanding the internal complexity. Every AI service we leverage is accessed via an API.
An API Manager is a service that manages the API lifecycle, acts as a single point of entry for all API traffic, and is a place to observe APIs. For AI workloads it is an API gateway that sits between your intelligent app and the AI endpoint. Adding an API Gateway in front of your AI endpoints is a best practice to add functionality without increasing the complexity of your application code. You also create a continuous development environment that increases the agility and speed of bringing new capabilities into production while maintaining older versions.
This blog post will show the benefits and best practices of AI + APIs in 5 key areas:
Performance & Reliability
Security
Caching
Sharing & Monetization
Continuous Development
The best practices in bold are universal and apply to any technology. The detailed explanation focuses on the features of Azure API Management (APIM) and the Azure services surrounding it.
1. Performance & Reliability
If you aim to add AI capability to an existing application, it feels easiest to just connect an AI Endpoint to the existing app. In fact, a lot of tutorials use this scenario.
While this is a faster setup at the beginning, it eventually leads to challenges and code complexity once application requirements increase or multiple applications use the same AI service. With more calls targeting an AI Endpoint, performance, reliability, and latency become requirements. Azure AI services have limits and quotas, and exceeding those limits leads to error responses or unresponsive applications. To ensure a good user experience in production workloads, placing an API manager between the intelligent app and the AI Endpoint is a best practice.
Azure APIM, acting as an AI Gateway, provides load balancing and monitoring of AI Endpoints to guarantee consistent and reliable performance of your deployed AI models and your intelligent apps. For the best result, multiple instances of an AI model should be deployed in parallel so that requests can be distributed evenly (see Figure 2). The number of instances depends on your business requirements, use cases, and forecasted peak traffic scenarios. You can route the traffic randomly or via round robin to load balance it evenly, or distribute it in a more targeted way, as described next.
Distributing requests across multiple AI instances is more than just load balancing. Using built-in policies or writing custom policies in Azure APIM enables you to route traffic to selected Azure AI Endpoints or forward traffic to a regional endpoint closer to the user’s location. For more complex workloads, the use of backend pools can add value (see Figure 3). A backend pool defines a group of resources which can be targeted depending on their availability, time to respond, or workload. APIM can distribute incoming requests across them based on patterns like the circuit breaker pattern, preventing applications from repeatedly trying to execute an operation that’s likely to fail. Both ways of distribution are good practice to ensure optimal performance and reliability in the case of planned outages (upgrades, maintenance), unplanned outages (power outages, natural disasters), high traffic scenarios, or data residency requirements.
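To make the routing behaviour concrete, here is a minimal sketch in plain Python (outside of APIM, with hypothetical endpoint names) of round-robin distribution combined with a simple circuit breaker – the pattern the gateway applies for you:

import itertools
import time

# Hypothetical AI endpoint deployments behind the gateway.
BACKENDS = [
    {"url": "https://aoai-westeurope.example/openai", "failures": 0, "open_until": 0.0},
    {"url": "https://aoai-northeurope.example/openai", "failures": 0, "open_until": 0.0},
]
FAILURE_THRESHOLD = 3    # consecutive failures that trip the breaker
COOL_DOWN_SECONDS = 30   # how long a tripped backend is skipped

_round_robin = itertools.cycle(range(len(BACKENDS)))

def pick_backend():
    """Round-robin over backends, skipping any whose circuit is open."""
    for _ in range(len(BACKENDS)):
        backend = BACKENDS[next(_round_robin)]
        if time.time() >= backend["open_until"]:
            return backend
    raise RuntimeError("All backends are unavailable")

def report_result(backend, success):
    """Trip the breaker after repeated failures; reset it on success."""
    if success:
        backend["failures"] = 0
    else:
        backend["failures"] += 1
        if backend["failures"] >= FAILURE_THRESHOLD:
            backend["open_until"] = time.time() + COOL_DOWN_SECONDS
            backend["failures"] = 0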
Another method to keep performance high and requests under control is to add a rate limiting pattern to throttle traffic to AI models. Limiting access by time, IP address, registered API consumer, or API key allows you to protect the backend against volume bursts as well as potential denial-of-service attacks. Applying an AI token-based limit as a policy is a good practice to throttle tokens per minute and restrict noisy neighbours (see Figure 4).
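To illustrate the idea behind a tokens-per-minute limit, here is a small conceptual sketch in Python (this is not the APIM policy itself, and the budget value is purely illustrative):

import time
from collections import defaultdict

TOKENS_PER_MINUTE = 10_000     # illustrative per-consumer budget
_usage = defaultdict(list)     # api key -> [(timestamp, tokens), ...]

def allow_request(api_key, estimated_tokens):
    """Return True if the caller still has token budget in the last 60 seconds."""
    now = time.time()
    window = [(t, n) for (t, n) in _usage[api_key] if now - t < 60]
    _usage[api_key] = window
    used = sum(n for _, n in window)
    if used + estimated_tokens > TOKENS_PER_MINUTE:
        return False           # the gateway would answer with HTTP 429
    _usage[api_key].append((now, estimated_tokens))
    return True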
But rate limiting and load balancing alone are not enough to ensure high performance. Consistent monitoring of workloads is a fundamental part of operational excellence. This includes health checks of endpoints, connection attempts, request times, and failure counts. Azure API Management can help keep all information in one place by storing analytics and insights of requests in a Log Analytics workspace (see Figure 4). This allows you to gain insights into the usage and performance of the APIs and API operations, and how they perform over time or in different geographic regions. Adding Azure Monitor to the Log Analytics workspace allows you to visualize, query, and archive data coming from APIM, as well as trigger corresponding actions. These actions can be anomaly alerts sent to API operators via push notification, email, SMS, or voice message on any critical event.
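As an example of pulling those insights programmatically, the sketch below queries APIM gateway logs in a Log Analytics workspace with the azure-monitor-query SDK; the workspace ID and the table and column names are assumptions to adapt to your environment:

from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"   # placeholder

# Table and column names should be verified against your diagnostic settings.
query = """
ApiManagementGatewayLogs
| where TimeGenerated > ago(1d)
| summarize requests = count(), serverErrors = countif(ResponseCode >= 500)
    by bin(TimeGenerated, 1h), ApiId
| order by TimeGenerated asc
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=1))
for table in result.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))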
2. Security
Protecting an application is a key requirement for all businesses to prevent data loss, denial of service attacks or unauthorized data access. Security is a multi-layer approach including infrastructure, application, and data. APIs act as one security layer to provide input security, key management, access management, as well as output validation in a central place.
While there is no single right way of adding security, adding input validation at a central point is beneficial for easy maintenance and fast adjustment when new vulnerabilities emerge. For external-facing applications this should include an Application Firewall in front of APIM. In Azure, input validation means that APIM scans all incoming requests based on rules and regular expressions to protect the backend against malicious activities and vulnerabilities such as SQL injection or cross-site scripting, so that only valid requests are processed by the AI Endpoints. Validation is not limited to input; it can also be used for output security, preventing data from being exposed to external or unauthorized resources or users (see Figure 5).
Access management is another pillar of security. To authenticate access to Azure AI endpoints, you could hand the API keys to the developers and give them direct access to the AI Endpoints. This, however, leaves you with no control over who is accessing your AI models. A better option is to store API keys in a central place like Azure APIM and create an inbound policy, so that access is restricted to authorized users and applications.
Microsoft Entra ID (formerly Azure Active Directory) is a cloud-based identity and access management solution that can authenticate users and applications by SSO (single sign-on) credentials, password, or an Azure managed identity. For more fine-grained access, and as part of a defence-in-depth strategy, backend authorization with OAuth 2.0 and JWT (JSON Web Token) validation is a good practice to add.
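As a small illustration of keyless access, the sketch below calls an APIM-fronted Azure OpenAI endpoint with a token from DefaultAzureCredential (which picks up a managed identity when available); the gateway URL, deployment name, and API version are placeholders, and the token scope shown is an assumption to confirm for your setup:

import requests
from azure.identity import DefaultAzureCredential

GATEWAY_URL = (
    "https://my-apim.azure-api.net/openai/deployments/gpt-4o/"
    "chat/completions?api-version=2024-02-01"
)

credential = DefaultAzureCredential()          # managed identity, CLI login, etc.
token = credential.get_token("https://cognitiveservices.azure.com/.default")

response = requests.post(
    GATEWAY_URL,
    headers={"Authorization": f"Bearer {token.token}"},
    json={"messages": [{"role": "user", "content": "Hello"}]},
    timeout=30,
)
print(response.status_code)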
Within Azure API Management you can also fine tune access rights per user or user groups by adding RBAC (Role based access control). It is good practice to use the built-in roles as a starting point to keep the number of roles as low as possible. If the default roles do not match your company’s needs, custom roles can be created and assigned. Adding users to groups and maintaining the access rights at the group level is another good practice as it minimizes maintenance efforts and increases structure.
3. Caching
Do you have an FAQ page that covers the most common questions? If you do, you likely created it to lower costs for the company and save time for the user. A response cache works the same way: it stores previously requested information in memory for a predefined time and scope. Information that does not change frequently and does not contain sensitive data can be stored and reused. When using a cache, every request from the front end is analysed semantically to check whether an answer is available in the cache. If the semantic search is successful, the response from the cache is used; otherwise, the request is forwarded to the AI model, the response is sent to the requesting application, and it is stored in the cache if the requirements for caching are met.
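A minimal sketch of that semantic lookup in Python (the similarity threshold is illustrative; in practice the gateway or an external cache service performs this for you):

import math

SIMILARITY_THRESHOLD = 0.95    # illustrative cut-off for treating two questions as equivalent
_cache = []                    # list of (embedding, cached_response) pairs

def _cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def lookup(query_embedding):
    """Return a cached response if a semantically similar question was already answered."""
    best = max(_cache, key=lambda entry: _cosine(entry[0], query_embedding), default=None)
    if best and _cosine(best[0], query_embedding) >= SIMILARITY_THRESHOLD:
        return best[1]
    return None

def store(query_embedding, response):
    """Add a new question/answer pair to the cache."""
    _cache.append((query_embedding, response))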
There are different caching options (see Figure 7): (1.) Inside the Azure APIM cache for simple use case scenarios or (2.) in an external cache like Redis Cache for more control over the cache configurations.
To get insights into frequently asked questions and cache usage, analytics data can be collected with Application Insights and visualized in real time using Grafana Dashboards. This allows you to identify trends in your intelligent apps and share insights for application improvement with decision makers and model fine tuning with engineering teams.
4. Sharing, Chargeback & Monetization
Divide and conquer is a common IT paradigm which can help you with your AI use cases. Sharing content and learnings across divisions, rather than working in isolation and repeating similar work, increases the speed of innovation and decreases the costs of developing new IP (intellectual property). While this is not possible in every company, most organizations would welcome a more collaborative approach, especially when developing and testing new AI use cases. Developing tailored AI components in a central team and reusing them throughout the company adds speed and agility. But how do you track the usage across all divisions and share costs?
Once you have overcome the difficult cultural aspect of sharing information across divisions, charging back costs is mainly an engineering problem. With APIM, you can bill and charge back per API usage. Depending on how you want to charge back or monetize your AI capability, you have different billing methods to choose from: Subscription and Metered. With Subscription billing, the user pays a fixed fee upfront and uses the service according to the terms and conditions, like a video streaming service. This billing model gives you, as the API owner, predictable income and capacity planning. Conversely, with Metered billing, the user pays according to the frequency of their activity, similar to an energy bill. This option gives the user more freedom to pay only for what they use, but it is better suited to organisations with highly scalable infrastructure set-ups, as Metered billing can make scaling out AI instances more complex.
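To make the metered model concrete, here is a toy chargeback calculation in Python; the shape of the usage records and the per-token rates are purely illustrative:

from collections import defaultdict

# Assumed shape of usage records exported from the gateway's analytics stream.
usage_events = [
    {"subscription": "marketing", "prompt_tokens": 1200, "completion_tokens": 300},
    {"subscription": "sales",     "prompt_tokens": 800,  "completion_tokens": 500},
    {"subscription": "marketing", "prompt_tokens": 400,  "completion_tokens": 100},
]

# Illustrative internal rates per 1,000 tokens.
PROMPT_RATE = 0.01
COMPLETION_RATE = 0.03

bill = defaultdict(float)
for event in usage_events:
    bill[event["subscription"]] += (
        event["prompt_tokens"] / 1000 * PROMPT_RATE
        + event["completion_tokens"] / 1000 * COMPLETION_RATE
    )

for team, amount in sorted(bill.items()):
    print(f"{team}: ${amount:.4f}")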
Monitoring the analytics of each call can help with scaling and optimization. Without accessing the content itself, monitoring gives you a powerful tool to track real-time analytics. Through outbound policies, the analytics data can be streamed with Event Hub to Power BI to create real-time dashboards, or to Application Insights to view the token usage for each client (see Figure 8). This information can help to automate internal chargeback or generate revenue by monetizing IP. An optional integration of third-party payment providers facilitates the payments. This solves the cost question. But once you share your IP widely, how can you ensure high performance for all users?
Limiting the requests per user, token, or time (as explained in the Performance section) controls how many requests a user can send based on policies. This gives each project the right performance for the APIs they use. Sending requests to a specific AI instance based on the project’s development stage helps you balance performance and cost. For example, dev/test workloads can be directed to the less expensive pay-as-you-go instances when latency is not critical, while production workloads can be directed to AI Endpoints that use provisioned throughput units (PTUs) (see Figure 9). This allocated infrastructure is ideal for production applications that need consistent response times and throughput. By using the capacity planner to size your PTU, you will have a reserved AI instance that suits your workloads. Future increases in traffic can be routed either to another PTU instance or to a pay-as-you-go instance in the same region or another one.
5. Continuous Development
Keeping up with quickly evolving AI models is challenging, as new models come to market within months. With every new model available, companies need to choose whether to stay on a former version or use the newest one for their use case. To keep the development lifecycle efficient, it is a good practice to have separate teams focusing on parts of the application: divide and conquer. This can mean parallel development of the consuming application and the corresponding AI capability within a project team, or a central AI team sharing its AI capability with the wider company. For either model, using APIs to link the parts is paramount. But the more APIs are created, the more complex the API landscape becomes.
A single API Manager is a best practice to manage and monitor all created APIs and provide one source of information for sharing APIs with your developers, allowing them to test the API operations and request access. The information shared should include an overview of the available API versions, revisions, and their status, so developers can track changes and switch to a newer API version when needed or convenient for their development. A roadmap is a nice-to-have feature if your development team is comfortable sharing their plans.
While such an overview of APIs can be created anywhere and is still often kept in wikis, it is best to keep the documentation directly linked to your APIs so it stays up to date. Azure APIM automatically creates a so-called Developer Portal, a customizable webpage containing all the details about the APIs in one place, reflecting changes made in APIM immediately, as the two services are linked (see Figure 10). This additional, free-of-charge portal provides significant benefits to API developers and API consumers. API consumers can view the APIs and the documentation, and test all API operations visible to them. API developers can share additional business information, set up fine-grained access management for the portal, and track API usage to get an overview of which API versions are actively used and when it is safe to retire older versions or provide long-term support.
Application development is usually brownfield, with existing applications or APIs deployed in different environments or on multiple clouds. APIM supports importing existing OpenAPI specifications and other APIs to make it easier to bring all APIs into one API Management instance. The APIM instances can then be deployed on Azure or in other cloud environments as a managed service. This allows you and your team to decide when to move workloads, if wanted or needed.
Summary
AI-led applications usher in a new era of working, and we’re still in its early stages. This blog post gave insights why AI and APIs are a powerful combination, and how an API Manager can enhance your application to make it more agile, efficient, and reliable. The best practices I covered on performance, security, caching, sharing, and continuous development are based on Microsoft’s recommendations and customer projects I’ve worked on across various industries in the UK and Europe. I hope this guide will help you to design, develop, and maintain your next AI-led application.
Help Needed: Microsoft 365 Features on School Chromebook?
Hi everyone,
I hope you’re all doing well. I’m considering installing Microsoft 365 on my school Chromebook and I’m curious about what features I can expect. Has anyone here installed Microsoft 365 on their Chromebook?
I’m particularly interested in knowing which features are fully functional and if there are any limitations compared to using Microsoft 365 on a traditional laptop. Are there any tips or tricks for optimizing its use on a school Chromebook?
Your insights and experiences would be greatly appreciated. Thanks in advance for your help!
Best regards,
Jonathan Jone
Microsoft Windows Hardware Developer Program
Dear all,
I am writing because I just created a new startup together with my co-founder, where we are developing a new kind of software to secure transactions in digital assets. Long story short, we have a driver that needs to be signed, but it is our first time doing so and we are looking for accurate info from someone who has experience with it.
Would you be so kind & help us answer the following questions:
– Do you know how long it takes to get your EV certificate?
– Do you know how long it takes the Microsoft Hardware Developer Program to sign your driver?
– Does the latter cost you anything?
Thanks a million in advance,
MJ
Seeking Integration Advice: Using Exchange to Notify Followers About Social Media Updates
Hi everyone,
I’m working on a project that involves integrating email notifications with social media updates. Specifically, I’m looking to send updates from Instagram directly to a user’s email managed by Exchange. The goal is to ensure followers are kept up-to-date with the latest posts and stories.
Has anyone here successfully set up a similar integration? What are the best practices to ensure these notifications are timely and don’t end up in the spam folder?
For those interested in the social media aspect, the project is related to an app that helps increase Instagram followers by providing various tools and insights. You can check it out here.
Looking forward to your insights and suggestions!
Thanks,
Hyper-V Manager in Windows 11 opening VM properties much too slow
I’m using Hyper-V on Windows 11 Pro and it takes about 20 seconds to open the properties of a VM.
That’s much too long…
For example, when creating a new VM from an ISO, you have to press a key to boot from the DVD, but that’s impossible because Hyper-V Manager doesn’t open the interface early enough.
I already uninstalled Hyper-V completely and reinstalled it – but no change…
Enhanced Filtering for Connectors – Improving Deliverability and Minimizing False Positives
Enhanced Filtering for Connectors (EFC) helps ensure that emails retain their original IP address and sender information when they are routed through other services before reaching Exchange Online, allowing for more accurate identification of spoofing attempts.
We’re rolling out an update that will reclassify messages with authentication issues and reduce false positives (e.g., the misidentification of legitimate emails as spoofed).
What’s changing?
When email messages travel through different servers, they can get modified along the way. Sometimes, these modifications unintentionally break the authentication process. Specifically, if a previous server in the chain doesn’t support a protocol called Authenticated Received Chain (ARC), it can lead to authentication failures. Authentication failures can occur where DomainKeys Identified Mail (DKIM) is the only source of alignment for Domain-based Message Authentication Reporting & Conformance (DMARC). With the changes we are rolling out, messages that would have previously failed email spoof checks will now have composite authentication compauth=none instead of compauth=fail. This will allow Exchange Online Protection (EOP) to recognize the failed DKIM due to modifications. This change will introduce new compauth codes of 4xx and 9xx.
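To see which verdict a given message received, you can read it straight from the Authentication-Results header stamped by EOP; a small Python sketch (the sample header values are illustrative only):

import re

# Illustrative Authentication-Results header as stamped by Exchange Online Protection.
auth_results = (
    "spf=pass (sender IP is 203.0.113.10) smtp.mailfrom=contoso.com; "
    "dkim=fail (body hash did not verify) header.d=contoso.com; "
    "dmarc=fail action=none header.from=contoso.com; "
    "compauth=none reason=405"
)

match = re.search(r"compauth=(\w+)\s+reason=(\d+)", auth_results)
if match:
    verdict, reason = match.groups()
    print(f"compauth verdict: {verdict}, reason code: {reason}")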
What is expected after the change?
Decrease in False Positives: Legitimate emails that were previously mislabeled as spoofed will now be correctly identified.
Enhanced Accuracy: The accuracy of the filtering stack and machine learning models will be improved, leading to better detection and prevention of spoofing and phishing attempts.
Reliable Email Authentication: The use of SPF (Sender Policy Framework), DKIM, and DMARC will be more effective in establishing the reputation of sending domains, further aiding in the detection of impersonation and spoofing.
What should email and security admins do?
The change will roll out starting in early June 2024 and will be complete by mid-July 2024. It will be enabled by default for all tenants using Enhanced Filtering for Connectors, requiring no additional action from admins.
If your organization is using an Exchange Transport Rule (ETR) to bypass spam filtering when third-party filtering services are used, consider removing the ETR, since messages that previously failed DKIM checks should be delivered to inboxes correctly after this change is rolled out. We will identify DKIM signatures that would have passed if a trusted third-party service had not modified the information.
This will allow you to deploy a defense-in-depth strategy for email messages, using your initial solution and Microsoft Defender for Office 365. Note, messages failing DKIM even without third-party intervention will continue to fail.
Finally, we strongly recommend that organizations adopt ARC whenever possible to preserve the original authentication statements in email messages.
Additional information
If your organization is already using EFC, you will find this change announced in the Message Center soon.
Enhanced filtering for connectors in Exchange Online
Anti-spam message headers – Microsoft Defender for Office 365 (Compauth and Authentication Results in Anti-Spam message headers)
Configure trusted ARC sealers – Microsoft Defender for Office 365
Manage mail flow using a third-party cloud service with Exchange Online
Getting started with defense in-depth configuration for email security – Microsoft Defender for Office 365
Microsoft Defender for Office 365 Team
How Surface embedded firmware has evolved over 10+ years
Behind the scenes at Surface, a dedicated team of engineers ensures the hardware and software components of our devices function seamlessly. A crucial part of this integration is the embedded firmware — the software that operates on the microcontrollers and other low-level components of Surface devices. Have you ever wondered what happens after you press the power button and see the spinning circle that shows your system is booting up? That’s when the embedded firmware kicks in, managing power, thermal conditions, security, connectivity and other critical features—ensuring your device “just works.”
In this post, we’ll explore the history of embedded firmware in Surface devices, how we’ve tackled the challenges of supporting a growing product portfolio and how we evolved our firmware architecture to enhance efficiency, quality and scalability.
The early days: Custom firmware for each device
Initially, Surface offered just two products: the original Surface and Surface Pro. Each had custom firmware tailored to its specific needs. While effective for a small lineup, this approach didn’t scale. As we expanded our range to more form factors along with accessories like headphones, firmware development grew increasingly complex and costly. Customizing firmware for each device, with their unique features, introduced new challenges. There was more duplication and inconsistency, making it harder to maintain quality. Common issues such as power management glitches had to be addressed across multiple firmware bases, and new features like Instant On needed to be implemented individually, significantly increasing development time and risk.
A Common firmware architecture
As the Surface family expanded, the embedded firmware team looked for a solution that allowed code and resource sharing across devices while maintaining the flexibility for customization. The answer was a shared, common firmware architecture. This innovation provided core functionality for most Surface devices, with device-specific firmware extensions. We could make a single fix or add a feature and apply it across all Surface models. The result: quick and efficient security updates that reduced coding and testing cycles for each new product. Introduced nearly nine years ago, this was the first standardized embedded firmware architecture used across the Surface portfolio.
A more flexible and robust firmware architecture
Despite the success of the original architecture, evolving product requirements and an expanding feature set posed new challenges. Key issues included hardware scalability, software coupling and the need for greater per-product flexibility. The common firmware was excellent for consistency but limited the customization for unique device requirements. And as firmware codebases grew amid shrinking release cycles, we looked to automation and continuous integration/continuous delivery (CI/CD) as the most efficient way to deliver quality and reliability.
In response, our team developed a more flexible and robust firmware architecture, now used in nearly every product we ship. This architecture supports a range of silicon platforms and maximizes developer efficiency through code reusability, robust automation and CI/CD capabilities. It ensures a consistent customer experience across diverse devices like the Surface Pro, Surface Dock and Surface Laptop.
The future of Surface embedded firmware
Despite our success, the journey is far from over. We’re always looking ahead and assessing the needs of the device ecosystem to deliver the best possible firmware platform for our customers, partners and developers. Whether we’re enhancing device security, improving performance through advanced sensor integration or introducing convenient features like the Copilot key, it’s an exciting time to be in embedded firmware development. Plus, new initiatives like RUST-based security measures are a game changer. We look forward to sharing how these innovations can build security into Windows systems by design.
How to download and install Windows 24H2 for Insider Preview, Beta?
I am currently on the Beta channel. I am not able to find the Windows Insider Preview 24H2 version on any of the websites. Is the ISO available right now to download/install manually (for Beta), or do I have to wait? If so, is there an estimated/approximate wait time?
Is it possible to stop users downloading Chrome apps via Intune?
Hi all,
Is it possible to stop users from installing app versions of sites (e.g., YouTube) in Chrome?
If so, can this be done via Intune?
How can I Fix Intuit Data Protect to Backup Company Files?
I’m encountering issues with Intuit Data Protect when backing up company files. It’s been failing consistently, and I’m worried about data loss. What could be causing this problem, and how can I troubleshoot it effectively?
Logic app Standard Storage issue investigation using Slots
Logic Apps require an Azure Storage Account file share to host their files. Even though the Logic App site will work without a storage account, it will not be scalable across multiple instances.
Unfortunately, if the storage is inaccessible due to DNS or other network issues, both the main site and the Kudu site will not work, resulting in the following error:
System.Private.CoreLib: The network path was not found. : ‘C:\home\data\Functions\secrets\Sentinels’
Usually, to resolve network issues, we need the Kudu console, but in this case it is not available because the site is broken.
We can make it work without a storage account by using the Slots option, which utilizes local storage.
Steps
Rename the following two environment variables by adding X at the beginning of the key name:
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING
WEBSITE_CONTENTSHARE
Go to Slots and add a new one.
Click on the new Slot and go to Advanced Tools (Kudu).
In the CMD console, download the PS1 file responsible for diagnostics using the command:
curl -o “LADiag.ps1” https://raw.githubusercontent.com/mbarqawi/logicappfiles/main/LADiag.ps1
After downloading the PS1 file, switch to PowerShell to execute the script. You can find the script here: LADiag.ps1
Run the PS1 script by typing:
./LADiag.ps1
Check the file list; you will find an HTML file named ConnectionAndDnsResults.html.
Test cases
In the PowerShell console, the script runs the following checks:
Run TCPPing on all four storage endpoints. This ensures that the HTTPS endpoints are reachable.
Use NameResolver on all endpoints to ensure that the DNS is resolvable.
Connect to the file endpoint and list all the shares using REST HTTPS on port 443.
Perform TCPPing to the file endpoint on SMB port 445.
For additional network-related commands, you can follow this article: Networking related commands for Azure App Services.
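If you prefer to script the same checks yourself, here is a minimal Python sketch of the DNS and TCP tests the diagnostics perform (the storage account name is a placeholder):

import socket

STORAGE_ACCOUNT = "<storageaccountname>"   # placeholder

ENDPOINTS = {
    f"{STORAGE_ACCOUNT}.blob.core.windows.net": 443,
    f"{STORAGE_ACCOUNT}.queue.core.windows.net": 443,
    f"{STORAGE_ACCOUNT}.table.core.windows.net": 443,
    f"{STORAGE_ACCOUNT}.file.core.windows.net": 445,   # SMB port for the file share
}

for host, port in ENDPOINTS.items():
    try:
        ip = socket.gethostbyname(host)                       # DNS resolution
        with socket.create_connection((host, port), timeout=5):
            print(f"{host} -> {ip} : port {port} reachable")
    except OSError as err:
        print(f"{host} : FAILED ({err})")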
Cleaning
After you have generated the report and figured out the issue, you can delete the slot and rename the environment variables back.
Enhancing Training Search Experience using Azure AI Video Indexer
Training and development are vital for companies, which often have vast archives of training videos, webinars, and recorded sessions. However, efficiently finding the right training content in video archives can be challenging. This example demonstrates how Azure AI Video Indexer, combined with Azure OpenAI, can make finding relevant video content easier for learners and trainers. Organizations struggle with the following issues in managing their training video archives:
Difficulty in locating specific content: Employees often spend excessive time searching for specific training modules or information within hundreds of training videos.
Inconsistent search results: Keyword-based search often returns results that lack context or miss the most relevant training material.
Poor user experience: The lack of contextually relevant search results can degrade the overall user experience, reducing the effectiveness of training programs.
The Solution
Creating a video archive search solution using Azure Video Indexer can address these challenges effectively. Here’s how:
Indexing training videos: Use Azure AI Video Indexer to extract metadata such as transcripts, keywords, and optical character recognition (OCR) text, and to detect people and objects.
Advanced search capabilities: Search across the extracted metadata, including exact time stamps in the video.
Enhanced retrieval: Use the RAG (retrieval-augmented generation) pattern to retrieve relevant segments from the training videos and generate detailed, informative responses using Large Language Models (LLM). This ensures that users not only find the right video but also get specific answers to their queries.
Benefits
Improved Search: Employees can easily find specific training content, enhancing their learning experience.
Time Efficiency: Reduces the time spent searching for information, allowing employees to focus more on learning and development.
Contextual Relevance: Delivers accurate and contextually relevant search results, improving the overall effectiveness of training programs.
Enhanced User Experience: Provides seamless and intuitive search experience, increasing user satisfaction and engagement.
General Implementation Steps
Below is a general description of the implementation steps.
Visit the Azure AI Video Indexer sample repository on GitHub for a detailed guide on how to implement the solution.
Step 1: Data Indexing with Video Indexer
Extract Metadata: Use Azure AI Video Indexer to analyze and extract metadata from your training videos. This includes transcripts, keywords, OCR and other relevant data.
Index Metadata: Index the extracted metadata using Azure AI Search or another vector DB to create a searchable database.
Step 2: Configure Azure OpenAI Service
Set Up ChatGPT: Configure the Azure OpenAI Service to access the ChatGPT model, enabling natural language understanding and generation capabilities.
Integrate with Search: Connect the Azure OpenAI Service to your indexed data, allowing it to process and respond to user queries.
Step 3: Develop Search Interface
User Interface: Create a user-friendly interface where employees can input their queries. This interface should support natural language queries and provide clear, concise search results.
Query Processing: Implement query processing using the RAG pattern. Retrieve relevant video segments from the indexed data and use ChatGPT to generate detailed responses; a condensed sketch follows below.
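Here is a condensed sketch of that RAG flow in Python; the index name, document fields, deployment name, and API version are assumptions for illustration and need to be adapted to your own resources:

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

# Assumed resource names and index fields; adapt to your environment.
search = SearchClient(
    endpoint="https://<search-service>.search.windows.net",
    index_name="training-video-segments",
    credential=AzureKeyCredential("<search-key>"),
)
aoai = AzureOpenAI(
    azure_endpoint="https://<aoai-resource>.openai.azure.com",
    api_key="<aoai-key>",
    api_version="2024-02-01",
)

question = "What are the engine specifications of the Lux XS model?"

# 1. Retrieve the most relevant transcript segments, including their time stamps.
hits = search.search(search_text=question, top=3)
context = "\n".join(
    f"[{hit['video_name']} @ {hit['start_time']}] {hit['transcript']}" for hit in hits
)

# 2. Ask the model to answer using only the retrieved segments.
completion = aoai.chat.completions.create(
    model="gpt-4o",   # deployment name is an assumption
    messages=[
        {"role": "system", "content": "Answer using only the provided transcript segments and cite their time stamps."},
        {"role": "user", "content": f"{question}\n\nSegments:\n{context}"},
    ],
)
print(completion.choices[0].message.content)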
Example Scenario
For example, an auto dealer sales representative wants to learn more about a new car model, Lux XS, to prepare for an upcoming sales event. The representative quickly accesses the Video Q&A internal portal and asks, “What are the engine specifications of the Lux XS model?” The response provides a list of training videos and the time stamps of relevant content. They can click on items in the list and view the exact spot in the video.
Want to explore Video Indexer and stay up to date on all releases? Here are some helpful resources:
Use Azure Video Indexer website to access the product website and get a free trial experience.
Visit Azure Video Indexer Developer Portal to learn about our APIs.
Search the Azure Video Indexer GitHub repository
Review the product documentation.
Get to know the recent features using Azure Video Indexer release notes.
Use Stack overflow community for technical questions.
To report an issue with Azure Video Indexer (paid account customers), go to the Azure portal Help + support page and create a new support request. Your request will be tracked within the SLA.
Read our recent blogs in Azure Tech Community.
On-prem Exchange S/MIME configuration
Hi Guys
Is there any documentation available for on-prem Exchange Server 2019 CU14 S/MIME installation and configuration? I read the official Microsoft documentation, but I didn’t understand it.