Month: June 2024
Booking – No availability on this date. Choose another.
Good morning,
I use Office 365 (online) and have a booking page in Bookings. I have availability for bookings on Tuesdays and Thursdays between 14:00 and 16:00, with a minimum booking lead time of 24 hours and a maximum of 365 days. However, when people try to book appointments, they find that there are no available time slots. One point worth noting is that it worked for approximately one month; after that period we started having this problem. Could you help us?
As shown in the attached image for the team member, the working hours match the hours available in the calendar:
Vulnerabilities List from Defender API not filtering by My Organization
Hi all,
When I query the API for the list of vulnerabilities detected by Defender using this URL: GET https://api.securitycenter.microsoft.com/api/Vulnerabilities, I get a list of >250K vulnerabilities, but when I check in the Defender portal (Vulnerability Management –> Weakness) the list is 11K.
It seems that the API is not filtering vulnerabilities to “my organization” but returning the complete list of vulnerabilities known to Defender.
Is there any filter I can apply to the API URL to return only the vulnerabilities in my organization?
Thanks and best regards,
Alberto Medina
I cannot delete an old org in teams
Hello,
Unfortunately, I cannot delete an old organization from my personal Teams, one I worked for years ago. The switch to make it invisible is grayed out. When I go to my settings and “My Groups”, there is no sign of that org.
What did I miss? There must be a simple explanation 🙂
Additional Standard Database Template (MS Access)
I was inspired by the Northwind database and decided to work on an alternative package to complement the MS Access standard templates. What I am proposing is an additional accounting package template built in MS Access that goes all the way to balance sheet, profit and loss, debtors and creditors statements, aging analysis, project codes, work in progress, inventory valuation, invoicing, credit notes, and general ledger transaction history and analysis. It also comes with user logon, assignment of access rights, locking of transacting periods, inventory movement reports, sales analysis by product, by customer, by sales rep, and much more. I believe it could be a great addition to the pre-loaded MS Access databases and increase the uptake of Office MIS products.
CEF Collector ingesting logs to ‘Syslog’ table instead of ‘CommonSecurityLog’
I am forwarding Palo Alto and Fortinet firewall logs to the CEF collector, but in Sentinel the logs show up in the ‘Syslog’ table instead of ‘CommonSecurityLog’. What could be the issue? Everything is in place, including the DCR.
Exception for Forwarding External Email to Group with External Senders Blocked
Is it possible to create an exception to allow a single external email address to a group which blocks external senders?
Missing entries in custom log table
We are writing to a custom log table in a few Log Analytics workspaces – these workspaces are targeted by a few different instances of our application (beta/staging/prods, etc.).
Interestingly, three of these workspaces are missing certain logs while the other five or so do have them. There are no exceptions thrown in our ASP.NET Core code where we do a SendMessage to OMS either.
Any ideas whether something like this is possible and how to troubleshoot/fix it?
Thanks
Create error message when Currency Field exceeds maximum
I have a SharePoint form on which I want to set a maximum allowed value for a currency field and have an error message appear before the form is saved, telling the requester that the field exceeds the maximum allowed value. I have tried a variety of things but am not having any luck. The form can’t be saved, but it doesn’t inform the user why.
AI+API better together: Benefits & Best Practices using APIs for AI workloads
This blog post gives you an overview of the benefits and best practices of harnessing APIs and an API Manager solution when integrating AI into your application landscape.
Adding artificial intelligence (AI) to existing applications is becoming an important part of application development. The correct integration of AI is vital to meet business goals and functional and non-functional requirements, and to build applications that are efficient to maintain and enhance. APIs (Application Programming Interfaces) play a key part in this integration, and an API Manager is fundamental to keeping control of the usage, performance, and versioning of APIs – especially in enterprise landscapes.
Quick Refresher: What is an API & API Manager?
An API is a connector between software components, promoting the separation of components by adding an abstraction layer, so someone can interact with a system and use its capability without understanding the internal complexity. Every AI service we leverage is accessed via an API.
An API Manager is a service that manages the API lifecycle, acts as a single point of entry for all API traffic, and is a place to observe APIs. For AI workloads it is an API gateway that sits between your intelligent app and the AI endpoint. Adding an API Gateway in front of your AI endpoints is a best practice for adding functionality without increasing the complexity of your application code. You also create a continuous development environment to increase the agility and speed of bringing new capabilities into production while maintaining older versions.
This blog post will show the benefits and best practices of AI + APIs in 5 key areas:
Performance & Reliability
Security
Caching
Sharing & Monetization
Continuous Development
The best practices in bold are universal and apply to any technology. The detailed explanation focuses on the features of Azure API Management (APIM) and the Azure services surrounding it.
1. Performance & Reliability
If you aim to add AI capability to an existing application, it can feel easiest to just connect an AI Endpoint to the existing app. In fact, a lot of tutorials use this scenario.
While this is a faster setup at the beginning, it eventually leads to challenges and code complexity once application requirements increase or multiple applications use the same AI service. With more calls targeting an AI Endpoint, performance, reliability, and latency become requirements. Azure AI services have limits and quotas, and exceeding those limits will lead to error responses or unresponsive applications. To ensure a good user experience in production workloads, an API manager between the intelligent app and the AI Endpoint is a best practice.
Azure APIM, acting as an AI Gateway, provides load balancing and monitoring of AI Endpoints to guarantee consistent and reliable performance of your deployed AI models and your intelligent apps. For the best result, multiple instances of an AI model should be deployed in parallel so that requests can be distributed evenly (see Figure 2). The number of instances depends on your business requirements, use cases and forecasted peak traffic scenarios. You can route the traffic randomly or via round robin to load balance it evenly; for more targeted routing, you can distribute traffic based on policies, as described below.
Distributing requests across multiple AI instances is more than just load balancing. Using built-in policies or writing custom policies in Azure APIM enables you to route traffic to selected Azure AI Endpoints or forward traffic to a regional endpoint closer to the user’s location. For more complex workloads, the use of backend pools can add value (see Figure 3). A backend pool defines a group of resources which can be targeted depending on their availability, time to respond or workload. APIM can distribute incoming requests across them based on patterns like the circuit breaker pattern, preventing applications from repeatedly trying to execute an operation that’s likely to fail. Both ways of distributing traffic are good practices to ensure optimal performance and reliability in case of planned outages (upgrades, maintenance) or unplanned outages (power outages, natural disasters), high-traffic scenarios or data residency requirements.
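As a rough illustration of this pattern, the sketch below (Python, using the requests library) calls an Azure OpenAI chat deployment through the APIM gateway URL rather than the model endpoint directly, so the gateway can apply load-balancing and backend-pool policies transparently. The gateway hostname, API path, deployment name and the use of the default Ocp-Apim-Subscription-Key header are assumptions for illustration and depend on how your APIM instance is configured.

import requests

# Hypothetical gateway-facing values; replace with your own APIM configuration.
GATEWAY_BASE = "https://contoso-apim.azure-api.net/openai"  # APIM gateway URL (assumption)
DEPLOYMENT = "gpt-4o-mini"                                  # model deployment name (assumption)
API_VERSION = "2024-02-01"
SUBSCRIPTION_KEY = "<apim-subscription-key>"

def chat(prompt: str) -> str:
    # The app only ever talks to the gateway; APIM decides which backend AI instance serves the call.
    url = f"{GATEWAY_BASE}/deployments/{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,  # default APIM subscription key header
        "Content-Type": "application/json",
    }
    body = {"messages": [{"role": "user", "content": prompt}]}
    response = requests.post(url, headers=headers, json=body, timeout=30)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

print(chat("Summarise the benefits of putting a gateway in front of an AI endpoint."))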
Another method to keep performance high and requests under control is to add a rate-limiting pattern to throttle traffic to AI models. Limiting access by time, IP address, registered API consumer or API key allows you to protect the backend against volume bursts as well as potential denial-of-service attacks. Applying an AI token-based limit as a policy is a good practice to define a tokens-per-minute throttle and restrict noisy neighbours (see Figure 4).
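When a token- or rate-limit policy rejects a call, the gateway answers with HTTP 429, often including a Retry-After header. Below is a minimal, hedged sketch of how a client might back off and retry (plain Python; it makes no assumption about which specific APIM policy produced the 429).

import time
import requests

def post_with_backoff(url, headers, body, max_retries=5):
    # Retry on 429 (throttled by the gateway), honouring Retry-After when present.
    delay = 1.0
    for attempt in range(max_retries):
        response = requests.post(url, headers=headers, json=body, timeout=30)
        if response.status_code != 429:
            response.raise_for_status()
            return response
        retry_after = response.headers.get("Retry-After")
        wait = float(retry_after) if retry_after else delay
        time.sleep(wait)
        delay = min(delay * 2, 30)  # exponential backoff, capped at 30 seconds
    raise RuntimeError("Request still throttled after retries")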
But rate limiting and load balancing are not enough to ensure high performance. Consistent monitoring of workloads is a fundamental part of operational excellence. This includes health checks of endpoints, connection attempts, request times and failure counts. Azure API Management can help keep all information in one place by storing analytics and insights about requests in a Log Analytics workspace (see Figure 4). This allows you to gain insights into the usage and performance of the APIs and API operations, and how they perform over time or in different geographic regions. Adding Azure Monitor to the Log Analytics workspace allows you to visualize, query, and archive data coming from APIM, as well as trigger corresponding actions. These actions can be anomaly alerts sent to API operators via push notification, email, SMS, or voice message on any critical event.
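To pull those APIM insights out of the Log Analytics workspace programmatically, a query along the lines of the sketch below can be used (Python, with the azure-monitor-query and azure-identity packages). The workspace ID, the ApiManagementGatewayLogs table name and the KQL itself are assumptions to adapt to your own diagnostic settings.

from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

credential = DefaultAzureCredential()
client = LogsQueryClient(credential)

WORKSPACE_ID = "<log-analytics-workspace-id>"  # assumption: the workspace receiving APIM diagnostics

# KQL sketch: failed gateway requests per API over the last day (table/column names may differ per setup).
query = """
ApiManagementGatewayLogs
| where ResponseCode >= 500
| summarize Failures = count() by ApiId, bin(TimeGenerated, 1h)
| order by TimeGenerated desc
"""

result = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(days=1))
for table in result.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))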
2. Security
Protecting an application is a key requirement for all businesses to prevent data loss, denial of service attacks or unauthorized data access. Security is a multi-layer approach including infrastructure, application, and data. APIs act as one security layer to provide input security, key management, access management, as well as output validation in a central place.
While there is no single right way of adding security, adding input validation at a central point is beneficial for easy maintenance and fast adjustment when new vulnerabilities appear. For external-facing applications this should include an application firewall in front of APIM. In Azure, input validation means that APIM scans all incoming requests against rules and regular expressions to protect the backend against malicious activity and vulnerabilities such as SQL injection or cross-site scripting, so that only valid requests are processed by the AI Endpoints. Validation is not limited to input; it can also be used for output security, preventing data from being exposed to external or unauthorized resources or users (see Figure 5).
Access management is another pillar of security. To authenticate access to Azure AI endpoints, you could hand the API keys to developers and give them direct access to the AI Endpoints. This, however, leaves you with no control over who is accessing your AI models. A better option is to store API keys in a central place like Azure APIM and create an input policy. Access is then restricted to authorized users and applications.
Microsoft Entra ID (formerly Azure Active Directory) is a cloud-based identity and access management solution that can authenticate users and applications by single sign-on (SSO) credentials, password, or an Azure managed identity. For more fine-grained access, and as part of a defence-in-depth strategy, backend authorization with OAuth 2.0 and JWT (JSON Web Token) validation is a good practice to add.
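A hedged sketch of what calling the gateway with an Entra ID token might look like from an application is shown below (Python, with the azure-identity and requests packages). The token scope, the gateway URL and the expectation that APIM validates the JWT (for example with a validate-jwt policy) are assumptions about your configuration.

import requests
from azure.identity import DefaultAzureCredential

# Assumption: the API in APIM is protected by a JWT validation policy that accepts
# tokens issued for this application scope registered in Microsoft Entra ID.
SCOPE = "api://contoso-ai-gateway/.default"                     # illustrative scope
GATEWAY_URL = "https://contoso-apim.azure-api.net/openai/chat"  # illustrative gateway operation URL

credential = DefaultAzureCredential()
access_token = credential.get_token(SCOPE).token

headers = {
    "Authorization": f"Bearer {access_token}",
    "Ocp-Apim-Subscription-Key": "<apim-subscription-key>",  # optional additional layer of access control
}
response = requests.post(GATEWAY_URL, headers=headers, json={"messages": []}, timeout=30)
print(response.status_code)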
Within Azure API Management you can also fine-tune access rights per user or user group by adding RBAC (role-based access control). It is good practice to use the built-in roles as a starting point to keep the number of roles as low as possible. If the default roles do not match your company’s needs, custom roles can be created and assigned. Adding users to groups and maintaining the access rights at the group level is another good practice, as it minimizes maintenance effort and adds structure.
3. Caching
Do you have an FAQ page that covers the most common questions? If you do, you likely created it to lower costs for the company and save time for the user. A response cache works the same way: it stores previously requested information in memory for a predefined time and scope. Information that does not change frequently and does not contain sensitive data can be stored and reused. When using a cache, every request from the front end is analysed semantically to check if an answer is available in the cache. If the semantic search is successful, the response from the cache is used; otherwise the request is forwarded to the AI model, the response is sent to the requesting application, and it is stored in the cache if the requirements for caching are met.
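The flow just described can be sketched in a few lines. The example below (Python) is only a conceptual illustration of a semantic cache lookup, with a placeholder embedding function and an in-memory store; in the architecture discussed here the equivalent logic would live in the gateway's caching policy or an external cache such as Redis, not in application code.

import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(16)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

cache = []  # list of (embedding, cached_response) pairs; a real cache adds TTL, scope and size limits

def call_ai_model(question: str) -> str:
    return f"(model response to: {question})"  # stand-in for the gateway-to-model call

def answer(question: str, threshold: float = 0.95) -> str:
    q_vec = embed(question)
    # 1. Semantic lookup: reuse a cached response if a sufficiently similar question was answered before.
    for vec, cached_response in cache:
        if cosine(q_vec, vec) >= threshold:
            return cached_response
    # 2. Cache miss: forward to the AI model, then store the response for later reuse.
    response = call_ai_model(question)
    cache.append((q_vec, response))
    return response

print(answer("What are your opening hours?"))
print(answer("What are your opening hours?"))  # second call is served from the cache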
There are different caching options (see Figure 7): (1.) Inside the Azure APIM cache for simple use case scenarios or (2.) in an external cache like Redis Cache for more control over the cache configurations.
To get insights into frequently asked questions and cache usage, analytics data can be collected with Application Insights and visualized in real time using Grafana Dashboards. This allows you to identify trends in your intelligent apps and share insights for application improvement with decision makers and model fine tuning with engineering teams.
4. Sharing, Chargeback & Monetization
Divide and conquer is a common IT paradigm which can help you with your AI use cases. Sharing content and learnings across divisions, rather than working in isolation and repeating similar work, increases the speed of innovation and decreases the cost of developing new IP (intellectual property). While this is not possible in every company, most organizations would welcome a more collaborative approach, especially when developing and testing new AI use cases. Developing tailored AI components in a central team and reusing them throughout the company adds speed and agility. But how do you track the usage across all divisions and share costs?
Once you have overcome the difficult cultural aspect of sharing information across divisions, charging back costs is mainly an engineering problem. With APIM, you can bill and charge back per API usage. Depending on how you want to charge back or monetize your AI capability, you have different billing methods to choose from: Subscription and Metered. With Subscription billing, the user pays a fixed fee upfront and uses the service according to the terms and conditions, like a video streaming service. This billing model gives you, as the API owner, predictable income and capacity planning. Conversely, with Metered billing, the user pays according to the frequency of their activity, similar to an energy bill. This option gives the user more freedom to pay only for what they use, but it is more suited to organisations with highly scalable infrastructure set-ups, as Metered billing can make scaling out AI instances more complex.
Monitoring the analytics of each call can help with scaling and optimization. Without accessing the content itself, monitoring gives you a powerful tool to track real-time analytics. Through outbound policies, the analytics data can be streamed with Event Hub to Power BI to create real-time dashboards, or to Application Insights to view the token usage for each client (see Figure 8). This information can help to automate internal chargeback or generate revenue by monetizing IP. An optional integration of third-party payment providers facilitates the payments. This solves the cost question. But once you share your IP widely, how can you ensure high performance for all users?
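As a simple illustration of chargeback from that telemetry, the sketch below aggregates per-client token usage into a chargeback amount (Python). The record shape and the internal rate are invented for the example; real figures would come from your APIM/Application Insights export and your own internal pricing.

from collections import defaultdict

# Hypothetical usage records exported from the gateway's analytics stream.
usage_records = [
    {"client": "marketing-app", "prompt_tokens": 1200, "completion_tokens": 450},
    {"client": "support-bot", "prompt_tokens": 9800, "completion_tokens": 3100},
    {"client": "marketing-app", "prompt_tokens": 640, "completion_tokens": 210},
]

RATE_PER_1K_TOKENS = 0.01  # illustrative internal rate, not a real price

totals = defaultdict(int)
for record in usage_records:
    totals[record["client"]] += record["prompt_tokens"] + record["completion_tokens"]

for client, tokens in sorted(totals.items()):
    charge = tokens / 1000 * RATE_PER_1K_TOKENS
    print(f"{client}: {tokens} tokens -> {charge:.2f} internal chargeback units")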
Limiting the requests per user, token or time (as explained in the Performance section) controls how many requests a user can send based on policies. This gives each project the right performance for the APIs they use. Sending the requests to a specific AI instance based on the project’s development stage helps you balance performance and costs. For example, dev/test workloads can be directed to the less expensive pay-as-you-go instances when latency is not critical, while production workloads can be directed to AI Endpoints that use provisioned throughput units (PTUs) (see Figure 9). This allocated infrastructure is ideal for production applications that need consistent response times and throughput. By using the capacity planner to size your PTU deployment, you will have a reserved AI instance that suits your workloads. Future increases in traffic can be routed either to another PTU instance or to a pay-as-you-go instance in the same region or another one.
5. Continuous Development
Keeping up with quickly evolving AI models is challenging, as new models come to market within months. With every newly available model, companies need to choose whether to stay on the former version or use the newest one for their use case. To keep the development lifecycle efficient, it is a good practice to have separate teams focusing on parts of the application: divide and conquer. This can mean parallel development of the consuming application and the corresponding AI capability within a project team, or a central AI team sharing its AI capability with the wider company. For either model, using APIs to link the parts is paramount. But the more APIs created, the more complex the API landscape becomes.
A single API Manager is a best practice to manage and monitor all created APIs and to provide one source of information for sharing APIs with your developers, allowing them to test the API operations and request access. The information shared should include an overview of the available API versions, revisions, and their status, so developers can track changes and switch to a newer API version when needed or convenient for their development. A roadmap is a nice-to-have feature if your development team is comfortable sharing their plans.
While such an overview of APIs can be created anywhere and is still often seen in wikis, it is best to keep the documentation directly linked to your APIs so it stays up to date. Azure APIM automatically creates a so-called Developer Portal, a customizable webpage containing all the details about the APIs in one place and reflecting changes made in APIM immediately, as the two services are linked (see Figure 10). This additional, free-of-charge portal provides significant benefits to API developers and API consumers. API consumers can view the APIs and the documentation, and run tests of all API operations visible to them. API developers can share additional business information, set up fine-grained access management for the portal, and track API usage to get an overview of which API versions are actively used and when it is safe to retire older versions or provide long-term support.
Application development is usually brownfield, with existing applications or APIs deployed in different environments or on multiple clouds. APIM supports importing existing OpenAPI specifications and other APIs, making it easier to bring all APIs into one API Management instance. APIM instances can then be deployed on Azure or in other cloud environments as a managed service. This allows you and your team to decide when to move workloads, if wanted or needed.
Summary
AI-led applications usher in a new era of working, and we’re still in its early stages. This blog post gave insights into why AI and APIs are a powerful combination, and how an API Manager can enhance your application to make it more agile, efficient, and reliable. The best practices I covered on performance, security, caching, sharing, and continuous development are based on Microsoft’s recommendations and customer projects I’ve worked on across various industries in the UK and Europe. I hope this guide will help you design, develop, and maintain your next AI-led application.
Microsoft Tech Community – Latest Blogs
How can I develop MATLAB code to find the phase profile of a reconfigurable intelligent surface (RIS), including a beamforming algorithm to locate the received wave?
The code with a beamforming algorithm
How can I get the sample values for the input and output values
Create the RIS by defining its size, position, and reflection coefficients
equation to calculate the reflected wave
#ris, beamform MATLAB Answers — New Questions
Arduino Uno and MATLAB Simulink using an IR sensor and LCD I2C display only
Hi,
I’m building my project with an Arduino Uno and MATLAB Simulink using only an IR sensor and an LCD I2C display. My concept is that I want the IR sensor to detect an object and the I2C display to start counting in seconds. If the sensor detects another object, the counting should stop immediately.
If anyone could help, thanks in advance.
ir_sensor, simulink MATLAB Answers — New Questions
How do I extract column name of table in MATLAB?
Can you suggest a way to extract the name of a specific column of a table in MATLAB?
table, uitable MATLAB Answers — New Questions
Help Needed: Microsoft 365 Features on School Chromebook?
Hi everyone,
I hope you’re all doing well. I’m considering installing Microsoft 365 on my school Chromebook and I’m curious about what features I can expect. Has anyone here installed Microsoft 365 on their Chromebook?
I’m particularly interested in knowing which features are fully functional and if there are any limitations compared to using Microsoft 365 on a traditional laptop. Are there any tips or tricks for optimizing its use on a school chromebook?
Your insights and experiences would be greatly appreciated. Thanks in advance for your help!
Best regards,
Jonathan Jone
Microsoft Windows Hardware Developer Program
Dear all,
I am writing because I just created a new startup together with my co-founder, where we are developing a new kind of software for securing transactions in digital assets. Long story short, we have a driver that needs to be signed, but it is our first time doing so & we are looking for accurate info from someone who has experience with it.
Would you be so kind & help us answer the following questions:
– Do you know how long it takes to get your EV certificate?
– Do you know how long it takes the Microsoft Hardware Developer Program to sign your driver?
– Does the latter cost you anything?
Thanks a million in advance,
MJ
Seeking Integration Advice: Using Exchange to Notify Followers About Social Media Updates
Hi everyone,
I’m working on a project that involves integrating email notifications with social media updates. Specifically, I’m looking to send updates from Instagram directly to a user’s email managed by Exchange. The goal is to ensure followers are kept up-to-date with the latest posts and stories.
Has anyone here successfully set up a similar integration? What are the best practices to ensure these notifications are timely and don’t end up in the spam folder?
For those interested in the social media aspect, the project is related to an app that helps increase Instagram followers by providing various tools and insights. You can check it out here.
Looking forward to your insights and suggestions!
Thanks,
Hyper-V Manager in Windows 11 opening VM properties much too slow
I’m using Hyper-V on Windows 11 Pro and it takes about 20 seconds opening the properties of a VM.
That’s much too much…
E.g. when creating a new VM from an ISO, you have to press a key to boot from DVD, but that’s impossible because Hyper-V Manager doesn’t open the interface early enough.
I already uninstalled Hyper-V completely and reinstalled it – but no change…
Enhanced Filtering for Connectors – Improving Deliverability and Minimizing False Positives
Enhanced Filtering for Connectors (EFC) helps ensure that emails retain their original IP address and sender information when they are routed through various services before reaching Exchange Online, allowing for more accurate identification of spoofing attempts.
We’re rolling out an update that will reclassify messages with authentication issues and reduce false positives (e.g., the misidentification of legitimate emails as spoofed).
What’s changing?
When email messages travel through different servers, they can get modified along the way. Sometimes these modifications unintentionally break the authentication process. Specifically, if a previous server in the chain doesn’t support a protocol called Authenticated Received Chain (ARC), it can lead to authentication failures. Authentication failures can occur where DomainKeys Identified Mail (DKIM) is the only source of alignment for Domain-based Message Authentication, Reporting & Conformance (DMARC). With the changes we are rolling out, messages that would previously have failed email spoof checks will now have composite authentication compauth=none instead of compauth=fail. This will allow Exchange Online Protection (EOP) to recognize that DKIM failed because of in-transit modifications. This change will introduce new compauth codes of 4xx and 9xx.
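If you want to check how a given message was classified, the composite authentication verdict is stamped into the Authentication-Results header. The following is a small, hedged sketch (Python standard library only) that extracts the compauth value and reason code from a message saved as a .eml file; the file path is a placeholder.

import email
import re
from email import policy

# Placeholder path: any message saved as .eml from a mailbox in the tenant.
with open("sample-message.eml", "rb") as f:
    msg = email.message_from_binary_file(f, policy=policy.default)

for header in msg.get_all("Authentication-Results", []):
    match = re.search(r"compauth=(\w+)(?:\s+reason=(\d+))?", header)
    if match:
        verdict, reason = match.group(1), match.group(2)
        print(f"compauth={verdict}" + (f" reason={reason}" if reason else ""))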
What is expected after the change?
Decrease in False Positives: Legitimate emails that were previously mislabeled as spoofed will now be correctly identified.
Enhanced Accuracy: The accuracy of the filtering stack and machine learning models will be improved, leading to better detection and prevention of spoofing and phishing attempts.
Reliable Email Authentication: The use of SPF (Sender Policy Framework), DKIM, and DMARC will be more effective in establishing the reputation of sending domains, further aiding in the detection of impersonation and spoofing.
What should email and security admins do?
The change will roll out starting in early June 2024 and will be complete by mid-July 2024. It will be enabled by default for all tenants using Enhanced Filtering for Connectors, requiring no additional action from admins.
If your organization is using an Exchange Transport Rule (ETR) to bypass spam filtering when third-party filtering services are used, consider removing the ETR, knowing that messages that had previously failed DKIM checks should be delivered to inboxes correctly after this change is rolled out. We will identify DKIM signatures that would have passed if a trusted third-party service had not modified the information.
This will allow you to deploy a defense-in-depth strategy for email messages, using your initial solution and Microsoft Defender for Office 365. Note, messages failing DKIM even without third-party intervention will continue to fail.
Finally, we strongly recommend that organizations adopt ARC whenever possible to preserve the original authentication statements in email messages.
Additional information
If your organization is already using EFC, you will find this change announced in the Message Center soon.
Enhanced filtering for connectors in Exchange Online
Anti-spam message headers – Microsoft Defender for Office 365 (Compauth and Authentication Results in Anti-Spam message headers)
Configure trusted ARC sealers – Microsoft Defender for Office 365
Manage mail flow using a third-party cloud service with Exchange Online
Getting started with defense in-depth configuration for email security – Microsoft Defender for Office 365
Microsoft Defender for Office 365 Team
Microsoft Tech Community – Latest Blogs
How Surface embedded firmware has evolved over 10+ years
Behind the scenes at Surface, a dedicated team of engineers ensures the hardware and software components of our devices function seamlessly. A crucial part of this integration is the embedded firmware — the software that operates on the microcontrollers and other low-level components of Surface devices. Have you ever wondered what happens after you press the power button and see the spinning circle that shows your system is booting up? That’s when the embedded firmware kicks in, managing power, thermal conditions, security, connectivity and other critical features—ensuring your device “just works.”
In this post, we’ll explore the history of embedded firmware in Surface devices, how we’ve tackled the challenges of supporting a growing product portfolio and how we evolved our firmware architecture to enhance efficiency, quality and scalability.
The early days: Custom firmware for each device
Initially, Surface offered just two products: the original Surface and Surface Pro. Each had custom firmware tailored to its specific needs. While effective for a small lineup, this approach didn’t scale. As we expanded our range to more form factors along with accessories like headphones, firmware development grew increasingly complex and costly. Customizing firmware for each device, with their unique features, introduced new challenges. There was more duplication and inconsistency, making it harder to maintain quality. Common issues such as power management glitches had to be addressed across multiple firmware bases, and new features like Instant On needed to be implemented individually, significantly increasing development time and risk.
A common firmware architecture
As the Surface family expanded, the embedded firmware team looked for a solution that allowed code and resource sharing across devices while maintaining the flexibility for customization. The answer was a shared, common firmware architecture. This innovation provided core functionality for most Surface devices, with device-specific firmware extensions. We could make a single fix or add a feature and apply it across all Surface models. The result: quick and efficient security updates that reduced coding and testing cycles for each new product. Introduced nearly nine years ago, this was the first standardized embedded firmware architecture used across the Surface portfolio.
A more flexible and robust firmware architecture
Despite the success of the original architecture, evolving product requirements and an expanding feature set posed new challenges. Key issues included hardware scalability, software coupling and the need for greater per-product flexibility. The common firmware was excellent for consistency but limited the customization for unique device requirements. And as firmware codebases grew amid shrinking release cycles, we looked to automation and continuous integration/continuous delivery (CI/CD) as the most efficient way to deliver quality and reliability.
In response, our team developed a more flexible and robust firmware architecture, now used in nearly every product we ship. This architecture supports a range of silicon platforms and maximizes developer efficiency through code reusability, robust automation and CI/CD capabilities. It ensures a consistent customer experience across diverse devices like the Surface Pro, Surface Dock and Surface Laptop.
The future of Surface embedded firmware
Despite our success, the journey is far from over. We’re always looking ahead and assessing the needs of the device ecosystem to deliver the best possible firmware platform for our customers, partners and developers. Whether we’re enhancing device security, improving performance through advanced sensor integration or introducing convenient features like the Copilot key, it’s an exciting time to be in embedded firmware development. Plus, new initiatives like Rust-based security measures are a game changer. We look forward to sharing how these innovations can build security into Windows systems by design.
Microsoft Tech Community – Latest Blogs
Using Simulink for Solver use
Hello MATLAB,
I have taken the advice MATLAB gave me last time, and I figured out the navigation, etc. My current issue is:
I have uploaded one file to MATLAB Drive.
In Simulink, I tried to import the data. It did not work, but the file is there in the drive.
Then I clicked on New Script, pasted the script, and saved it. But nothing happens when I click the RUN button.
My need is: I have a nonlinear mathematical model in Excel. The script defines Objective functions, Variable cells and Constraints. It was AI generated. I want to optimise it. Is there an alternative to a script and, if not, can I have tutorials on how to write MATLAB code for Objective functions, Variable cells and Constraints?
Regards.
solver MATLAB Answers — New Questions
How to remove noise from low-frequency signals
Hello,
I have to create a model of the following signal in such a way that the red signal will not fluctuate with the green signal and will always take the mean of the green signal. I have used a moving average filter, but there is still some fluctuation. I cannot increase N of the moving average filter, since it will make it less responsive. Could you suggest a way to make the red signal more or less a straight line? Is there any filter or algorithm available for this?
Note – this signal manipulation should be done in real time; the model is created in Simulink.
filter, low frequency, signal processing MATLAB Answers — New Questions