Category Archives: Microsoft
Get Office 365 reports using Microsoft Graph
I'm looking for proper guidance on generating Office 365 reports with Microsoft Graph using app-based authentication, and for pointers to ready-made scripts that do the same.
Add AWS Cognito authentication to a Blazor WebAssembly standalone app
I cannot find a step-by-step guide for using AWS Cognito in a Blazor WebAssembly standalone app on .NET 8.
Microsoft did produce a guide, but not for Cognito. I did find a very good guide for adding Cognito, but it targets a Blazor Web App, which requires a server.
Microsoft IPs on blacklists
We have noticed that a range of Microsoft IPs are on blacklists.
Examples (there is a whole range of them):
40.107.108.97 – mail-me3aus01on2097.outbound.protection.outlook.com
Listed on SpamCop.
40.107.108.98 – mail-me3aus01on2098.outbound.protection.outlook.com
Listed on:
0SPAM
Sender Score Reputation Network
SORBS SPAM
Emails from Exchange Online systems, or those using Outlook Protection, are being delayed to us by several hours.
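For anyone who wants to verify a listing themselves: DNS blocklists are queried by reversing the IP's octets and prefixing them to the blocklist zone (bl.spamcop.net is SpamCop's zone). A minimal sketch:

```shell
# Build the DNSBL query name for one of the affected IPs:
# the octets are reversed and prefixed to the blocklist zone.
ip="40.107.108.97"
rev=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1}')
query="${rev}.bl.spamcop.net"
echo "$query"

# Uncomment to perform the actual lookup; an answer in the
# 127.0.0.x range means the IP is currently listed.
# host "$query"
```

If the lookup returns NXDOMAIN, the IP is not listed on that particular blocklist.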
Preview: Introducing Reporting Capabilities for Azure Site Recovery
As a Backup and Disaster Recovery Admin, one of your key roles is to obtain insights from data that spans a long period of time. Similar to Azure Backup, Azure Site Recovery provides a reporting solution that uses Azure Monitor logs and Azure workbooks. These resources help you get rich insights on your estate protected with Site Recovery.
Reporting for Azure Site Recovery helps meet requirements such as:
Troubleshooting
Auditing of failover and replication
Identifying key trends at different levels of granularity
Reporting Scenarios
Site Recovery reports provide historical information on Site Recovery jobs and Site Recovery replicated items.
Site Recovery reports are supported for Azure VM replication to Azure, Hyper-V replication to Azure, and VMware replication to Azure (Classic and Modernized). You can find these as filterable options under Replication Scenario: Azure / Hybrid.
Pre-requisites for Reports
1. Create a Log Analytics workspace or use an existing one
Set up one or more Log Analytics workspaces to store your reporting data. The workspace can be created in a location and subscription independent of where your vaults exist. To set up a Log Analytics workspace, see Create a Log Analytics workspace in the Azure portal.
The data in a Log Analytics workspace is kept for 30 days by default. If you want to see data for a longer time span, increase the retention period of the workspace. To change the retention period, see Configure data retention and archive policies in Azure Monitor Logs.
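This step can be sketched with the Azure CLI; the resource group, workspace name, and region below are placeholders of my own, and the commands are standard `az monitor log-analytics workspace` operations:

```shell
# Placeholder names: substitute your own resource group and workspace.
az group create --name rg-asr-reports --location eastus

# Create the workspace and raise retention from the default 30 days
# to 90 days so reports can look further back.
az monitor log-analytics workspace create \
  --resource-group rg-asr-reports \
  --workspace-name law-asr-reports \
  --location eastus \
  --retention-time 90
```

An existing workspace's retention can be changed later with `az monitor log-analytics workspace update --retention-time`.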
2. Configure diagnostics settings for your vaults
Azure Resource Manager resources, such as Recovery Services vaults, record information about site recovery jobs and replicated items as diagnostics data. To configure diagnostics settings for your vaults, follow these steps:
Go to the Azure portal -> the Recovery Services vault in question -> Monitoring blade -> Diagnostic settings.
Specify the target for the vault's diagnostic data. To learn more about using diagnostic events, see Use diagnostics settings for Recovery Services vaults.
Select the Azure Site Recovery Jobs and Azure Site Recovery Replicated Item Details tables to populate the Site Recovery reports.
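The steps above can also be scripted with the Azure CLI. A sketch, assuming the diagnostic log category names AzureSiteRecoveryJobs and AzureSiteRecoveryReplicatedItemDetails (the vault, resource group, and workspace identifiers below are placeholders):

```shell
# Resolve the vault's full resource ID (placeholder names).
VAULT_ID=$(az resource show \
  --resource-group rg-asr \
  --name my-vault \
  --resource-type "Microsoft.RecoveryServices/vaults" \
  --query id --output tsv)

# Placeholder workspace resource ID.
WS_ID="/subscriptions/<sub-id>/resourceGroups/rg-asr-reports/providers/Microsoft.OperationalInsights/workspaces/law-asr-reports"

# Send the two Site Recovery log categories to the workspace.
az monitor diagnostic-settings create \
  --name asr-reporting \
  --resource "$VAULT_ID" \
  --workspace "$WS_ID" \
  --logs '[
    {"category": "AzureSiteRecoveryJobs", "enabled": true},
    {"category": "AzureSiteRecoveryReplicatedItemDetails", "enabled": true}
  ]'
```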
View reports in Business Continuity Center
After you have configured your vault to send data to a Log Analytics workspace, you can view your reports by going to the Business Continuity Center -> Monitoring + Reporting blade -> Reports. Under the Azure Site Recovery tab, you will see the following reports.
Once you select a report of interest, you are asked to choose the workspace subscription(s), Log Analytics workspace(s), and the replication scenario before the report is populated.
ASR Job History
Use this report to get information on Site Recovery jobs by operation type and completion status. It also provides details such as the start time and duration of each job, along with the associated replicated item and its corresponding vault, subscription, and so on. Multiple filters, such as time range, job operation, resource group, job status, and a search field, help you get a more focused report and visualization.
ASR Replication History
Use this report to get information on replicated items and their status over a specific period of time. In addition, it provides the failover date and a detailed list of replication health errors for troubleshooting. Multiple filters, such as time range, vault subscription, resource group, and a search field, help you get a more focused report and visualization.
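The data behind these reports can also be queried directly from the workspace. A hedged sketch via the Azure CLI: the ASRJobs table corresponds to the Azure Site Recovery Jobs diagnostics table selected earlier, but the OperationName column used below is an assumption for illustration, and the workspace GUID is a placeholder:

```shell
# Placeholder workspace (customer) GUID.
WORKSPACE_ID="00000000-0000-0000-0000-000000000000"

# Count Site Recovery jobs per operation over the last 7 completed days.
az monitor log-analytics query \
  --workspace "$WORKSPACE_ID" \
  --analytics-query 'ASRJobs
    | where TimeGenerated > ago(7d)
    | summarize JobCount = count() by OperationName' \
  --output table
```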
Additional Capabilities
Export to Excel: Select the down-arrow button in the upper right of any widget, such as a table or chart, to export the contents of that widget as an Excel sheet as-is, with existing filters applied. To export more rows of a table to Excel, increase the number of rows displayed on the page by using the Rows Per Page drop-down at the top of each grid.
Pin to Dashboard: Select the pin button at the top of each widget to pin the widget to your Azure portal dashboard. This feature helps you create customized dashboards tailored to display the most important information that you need.
Cross Tenant Reports: If you use Azure Lighthouse with delegated access to subscriptions across multiple tenant environments, you can use the default subscription filter. Select the filter button in the upper-right corner of the Azure portal to choose all the subscriptions for which you want to see data. Doing so lets you select Log Analytics workspaces across your tenants to view multi-tenanted reports.
Frequently Asked Questions
Q. How far back are reports available?
A. By default, the retention of the Log Analytics workspace, and hence of the reports, is 30 days. The report retention matches the Log Analytics workspace retention that you set. To change the retention period, see Configure data retention and archive policies in Azure Monitor Logs.
Q. Can reports show data from the past?
A. The reports have access to data starting from the time the Log Analytics workspace and diagnostics settings are configured. Reports will not show data unless the prerequisites are met.
Q. How long after configuring diagnostic settings will I see data in the reports?
A. After you configure diagnostics, it might take up to 24 hours for the initial data push to complete. After data starts flowing into the Log Analytics workspace, you might not see data in the reports immediately because data for the current partial day isn’t shown in the reports. We recommend that you start viewing the reports two days after you configure your vaults to send data to Log Analytics.
Q. Can I use Azure Policy to configure at scale?
A. As of today, Azure Site Recovery does not provide a built-in Azure Policy definition that automates the configuration of diagnostics settings for all Recovery Services vaults in a given scope.
Q. I am looking at the report for the last 7 days, but I don’t see data for today. Why?
A. When the selected Time Range value is Last 7 days, the report shows records for the last seven completed days; the current day isn’t included. The report is updated at the end of each day, UTC.
Resources
Configure Azure Site Recovery reports – Azure Site Recovery | Microsoft Learn
New to Azure Site Recovery? Azure Site Recovery documentation | Microsoft Learn
Need help? Reach out on the Microsoft Q&A Azure forum for support, and learn more from the Azure Site Recovery documentation | Microsoft Learn.
Follow us on Twitter @AzureBackup
Azure Blogs – Articles from 13-May-2024 to 19-May-2024
AI + Machine Learning
Covering: Anomaly Detector, Azure Bot Services, Azure Cognitive Search, Azure ML, Azure Open Datasets, Azure Cognitive Services, Azure Video Indexer, Computer Vision, Content Moderator, Custom Vision, Data Science VM, Face API, Azure Form Recognizer, Azure Immersive Reader, Kinect DK, Language Understanding (LUIS), Microsoft Genomics, Personalizer, Project Bonsai, QnA Maker, Speaker recognition, Speech to Text, Speech translation, Cognitive Service for Language, Text to Speech, Translator, Azure Metrics Advisor, Health Bot, Azure Percept, Azure Applied AI Services, Azure OpenAI Service
Global AI Bootcamp 2024 with MVP Communities
The LLM Latency Guidebook: Optimizing Response Times for GenAI Applications
Introducing GPT-4o: OpenAI’s new flagship multimodal model now in preview on Azure
A Guide to Optimizing Performance and Saving Cost of your Machine Learning (ML) Service – Part 2
Get Started with Azure AI Services | Open AI and Deployment Models
Choosing the Right Tool: A Comparative analysis of the Assistants API & Chat Completions API
Evaluate Small Language Models for RAG using Azure Prompt Flow (LLama3 vs Phi3)
[AI Search] Minimum RBAC role for AI search when selecting it as data source in AI studio playground
Finetune Small Language Model (SLM) Phi-3 using Azure Machine Learning
How AI and digital transformation are driving inclusion, productivity, and accessibility
Security consideration of Azure OpenAI with Retrieval Augmented Generative pattern (part 1 of 3)
Unlock power of data in Azure- With SQL Server on Linux Azure VMs and Azure AI search
Build a chatbot service to ensure safe conversations: Using Azure OpenAI & Azure Content Safety
Microsoft Build 2024: Essential Guide for AI Developers at Startups and Cloud-First Companies
Sharpist 2024 – AI Hackathon in Uzbekistan
Real-time predictions with the azure_ai extension (Preview)
Real-time text translation using the azure_ai extension in Azure Database for PostgreSQL
Building AI solutions with partners: Empowering transformation with copilots
Create your own copilot using Azure Prompt flow and Streamlit
Analytics
Covering: Azure Analysis Services, Azure Data Explorer, Azure Data Factory, Azure Data Lake Storage, Azure Data Share, Azure Databricks, Azure Stream Analytics, Azure Synapse Analytics, Data Catalog, Data Lake Analytics, HDInsight, Power BI Embedded, R Server for HDInsight, Microsoft Purview, Microsoft Graph Data Connect, Azure Chaos Studio
Getting started with Private Clusters on HDInsight on AKS for securing your analytics workloads
Scale Real-Time Streams to Delta Lakehouse with Apache Flink on Azure HDInsight on AKS
Upcoming changes to Databricks materialized views and streaming tables in Databricks SQL
Step-by-Step Guide: Building and Integrating Custom Package in ADF Workflow Orchestration Manager
Compute
Covering: Azure CycleCloud, Azure Quantum, Azure Spot Virtual Machines, Azure VMware Solution, Batch, Linux Virtual Machines, Virtual Machine Scale Sets, Virtual Machines, Azure Dedicated Host, Azure VM Image Builder, Azure Functions, Service Fabric
Generally Available: Ubuntu 24.04 LTS for Azure Virtual Machines
Public Preview: Azure Site Recovery support for Azure Trusted Launch VMs (Windows OS)
Azure Lab Services – Lab Plan Outage
The availability of Azure compute reservations will continue until further notice
VMware HCX Troubleshooting with Azure VMware Solution
Unlock power of data in Azure- With SQL Server on Linux Azure VMs and Azure AI search
Public Preview: Migrate virtual machine backups using standard backup policy to enhanced backup policy
The Impact of RedHat Linux 7 Extended Life Cycle Support on Azure Guest Patching Customers
Azure VMware Solution Security Design Considerations
General availability: Azure Bastion Developer SKU
Setting Up Slurm Cloud Bursting Using CycleCloud on Azure
Containers
Covering: Azure Kubernetes Service (AKS), Azure Red Hat OpenShift, Azure Container Apps, Web App for Containers, Azure Container Instances, Azure Container Registry
Azure Machine Learning Service for Kubernetes Architects: Deploy Your First Model on AKS with AZ CLI v2
Secure your Container Apps with Key Vault Certificates
Getting started with Private Clusters on HDInsight on AKS for securing your analytics workloads
Advanced Networking in Azure Kubernetes: A Comprehensive Overview Part 1
Scale Real-Time Streams to Delta Lakehouse with Apache Flink on Azure HDInsight on AKS
SQL Server Always On Availability group on AKS with DH2i’s DxOperator and Rancher by SUSE
New: Secure Sandboxes at Scale with Azure Container Apps Dynamic Sessions
Deep Dive: Secure Orchestration of Confidential Containers on Azure Kubernetes Service
Automate AKS Deployment and Chaos Engineering with Terraform and GitHub Actions
Databases
Covering: Azure Cache for Redis, Azure Cosmos DB, Azure Database for MariaDB, Azure Database for MySQL, Azure Database for PostgreSQL, Azure SQL, Azure SQL Database, Azure SQL Edge, Azure SQL Managed Instance, SQL Server on Azure VM, Table Storage, Azure Managed Instance for Apache Cassandra, Azure Confidential Ledger
Lesson Learned #485: Index Recommendation or the Importance of Index Selection in SQL Server
Lesson Learned #486: Snapshot Isolation Transaction Failed Due to a Concurrent DDL Statement
Microsoft wins Cloud Marketplace Partner of the Year Award from MongoDB
Buffer pool performance parameters for Azure Database for MySQL
NL to SQL Architecture Alternatives
What’s new with Postgres at Microsoft, 2024 edition
French language collation not working on Synapse Serverless SQL
Lesson Learned #487: Identifying Parallel and High-Volume Queries in Azure SQL Database
Scale Real-Time Streams to Delta Lakehouse with Apache Flink on Azure HDInsight on AKS
SQL Server Always On Availability group on AKS with DH2i’s DxOperator and Rancher by SUSE
General Availability: Data API builder
Lesson Learned #488: A severe error occurred on the current command. Operation cancelled by user.
Data API builder is now Generally Available | Data Exposed
Spatial Workflows in Azure Database for PostgreSQL – Flexible Server
Cumulative Update #13 for SQL Server 2022 RTM
Azure Custom Policy- PostgreSQL Product – Compliance Report not Available- New Feature Request
Lesson Learned #489: Investigating CPU Spikes with Query Store Overall Resource Consumption Report
Upcoming changes to Databricks materialized views and streaming tables in Databricks SQL
MVP’s Favorite Content: Microsoft AI, SQL, Power Platform
Unlock power of data in Azure- With SQL Server on Linux Azure VMs and Azure AI search
Lesson Learned #490: Monitoring Access from Specific Applications to Azure SQL Database
The Art of SQL Server Tuning
April 2024 Recap: Azure PostgreSQL Flexible Server
Real-time predictions with the azure_ai extension (Preview)
Real-time text translation using the azure_ai extension in Azure Database for PostgreSQL
Developer Tools
Covering: App Configuration, Azure DevTest Labs, Azure Lab Services, SDKs, Visual Studio, Visual Studio Code, Azure Load Testing
What’s new in Orleans 8 for Scalable Distributed Applications
Research: Quantifying GitHub Copilot’s impact in the enterprise with Accenture
Unlock Your Python Potential with Azure
What to expect from Microsoft Learn at Microsoft Build
.NET and .NET Framework May 2024 Servicing Updates
The process was terminated due to an internal error in the .NET Runtime at IP 00007FFBFBEE2CAD.
Building Better Apps: Better Together
Join the GitHub Challenge – Microsoft Build Edition!
DevOps
Covering: Azure Artifacts, Azure Boards, Azure DevOps, Azure Pipelines, Azure Repos, Azure Test Plans, DevOps tool integrations, Azure Load Testing
Research: Quantifying GitHub Copilot’s impact in the enterprise with Accenture
Azure CI/CD: Govern seamlessly from start to finish
Securing Git: Addressing 5 new vulnerabilities
Scaling accessibility within GitHub and beyond
Get Certified with GitHub
Automate AKS Deployment and Chaos Engineering with Terraform and GitHub Actions
Join the GitHub Challenge – Microsoft Build Edition!
The most common way to publish custom jar files as a Maven artifact in Azure DevOps
Hybrid
Covering: Microsoft Azure Stack, Azure Arc
No New Articles
Identity
Covering: Azure Active Directory, Multi-factor Authentication, Azure Active Directory Domain Services, Azure Active Directory External Identities
How to Apply Easy Auth on Web App under a High-security policy environment
Announcing a new login experience with Azure PowerShell and Azure CLI
Easily detect CVE-2024-21427 with Microsoft Defender for Identity
New developments in Microsoft Entra ID Protection
Completing DFSR SYSVOL migration of domains that use Entra ID passwordless SSO
Microsoft Entra delivers increased transparency
Meet us at Identiverse: May 28-31 in Las Vegas
Integration
Covering: API Management, Event Grid, Logic Apps , Service Bus
Securing your API Management service from day one with Defender for APIs
Logic Apps Aviators Community Day 2024
Logic Apps Aviators Newsletter – May 2024
Data mapper improvements
Upcoming Data Mapper improvements
Setting up Azure API on Postman and Azure CLI – Step-by-step guide
Internet Of Things
Covering: Azure IoT Central, Azure IoT Edge, Azure IoT Hub, Azure RTOS, Azure Sphere, Azure Stream Analytics, Azure Time Series Insights, Microsoft Defender for IoT, Azure Percept, Windows for IoT
No New Articles
Management and Governance
Covering: Automation, Azure Advisor, Azure Backup, Azure Blueprints, Azure Lighthouse, Azure Monitor, Azure Policy, Azure Resource Manager, Azure Service Health, Azure Site Recovery, Cloud Shell, Cost Management, Azure Portal, Network Watcher, Azure Automanage, Azure Resource Mover, Azure Chaos Studio, Azure Managed Grafana
Public Preview: Azure Site Recovery support for Azure Trusted Launch VMs (Windows OS)
Unlock savings potential with Azure Advisor’s Cost Optimization workbook
New on Azure Marketplace: April 26-30, 2024
Slash Your Azure Bill: Top Tips for Startups
Troubleshooting Common Custom Policy Issues in Policy Development
Introducing Microsoft Learn for Organizations Playbook, customizable Plans
Media
Covering: Azure Media Player, Content Protection, Encoding, Live and On-Demand Streaming, Media Services
No New Articles
Migration
Covering: Azure Database Migration Service, Azure Migrate, Data Box, Azure Site Recovery
No New Articles
Mixed Reality
Covering: Digital Twins, Kinect DK, Spatial Anchors, Remote Rendering, Object Anchors
No New Articles
Mobile
Covering: Azure Maps, MAUI, Notification Hubs, Visual Studio App Center, Xamarin, Azure Communication Services
Did you know Azure Maps is HIPAA compliant?
Azure Communication Services at Microsoft Build 2024
Discover Azure Programmable Connectivity: A developer’s gateway to innovative mobile applications
Networking
Covering: Application Gateway, Bastion, DDoS Protection, DNS, Azure ExpressRoute, Azure Firewall, Load Balancer, Firewall Manager, Front Door, Internet Analyzer, Azure Private Link, Content Delivery Network, Network Watcher, Traffic Manager, Virtual Network, Virtual WAN, VPN Gateway, Web Application Firewall, Azure Orbital, Route Server, Network Function Manager, Virtual Network Manager, Azure Private 5G Core
Public preview: Azure Application Gateway v2 Basic SKU
Advanced routing capabilities using Application Gateway Rewrite Rules
Public preview: Sensitive data protection for Azure Front Door Web Application Firewall
Azure Front Door server variable enhancement generally available
Organizing rule collections and rule collection groups in Azure Firewall Policy
Skilling snack: Advanced network security
Security
Covering: Defender for Cloud, DDoS Protection, Dedicated HSM, Azure Information Protection, Microsoft Sentinel, Key Vault, Microsoft Defender for Cloud, Microsoft Defender for IoT, Microsoft Azure Attestation, Azure Confidential Ledger
Secure your Container Apps with Key Vault Certificates
Securing your API Management service from day one with Defender for APIs
Host Microsoft Defender data locally in Switzerland
Preparing for CMMC 2.0: Build New or Fix Old?
Easily detect CVE-2024-21427 with Microsoft Defender for Identity
Loop DDoS Attacks: Understanding the Threat and Azure’s Defense
A BlackByte Ransomware intrusion case study
Storage
Covering: Archive Storage, Avere vFXT for Azure, Azure Data Lake Storage, Azure Data Share, Files, FXT Edge Filer, HPC Cache, NetApp Files, Blob Storage, Data Box, Disk Storage, Queue Storage, Storage Accounts, Storage Explorer, StorSimple
General Availability: Azure Files geo-redundancy for standard large file shares
Storage migration: Combine Azure Storage Mover and Azure Data Box
Web
Covering: App Configuration, App Service, Azure Cognitive Search, Azure Maps, Azure SignalR Service, Static Web Apps, Azure Communication Services, Azure Web PubSub, Azure Fluid Relay, Web App for Containers
How to Apply Easy Auth on Web App under a High-security policy environment
PHP 8.3 now available on App Service
The All-Inclusive Update for Everything TLS on App Service
A Step-by-Step Guide to Datadog Integration with Linux App Service via Sidecars
Azure Communication Services at Microsoft Build 2024
Did you know Azure Maps is HIPAA compliant?
App Service Environment version 1 and version 2 will be retired on 31 August 2024
How to integrate continuous integration and deployment with WordPress on App Service
How to set up staging slots in WordPress on App Service
Announcing Memory intensive SKUs for App Service Environment v3
Azure Virtual Desktop
Covering: Windows Virtual Desktop, VMware Horizon Cloud on Microsoft Azure, Citrix Virtual Apps and Desktops for Azure
No New Articles
Secure your Container Apps with Key Vault Certificates
Securing your API Management service from day one with Defender for APIs
Host Microsoft Defender data locally in Switzerland
Preparing for CMMC 2.0: Build New or Fix Old?
Easily detect CVE-2024-21427 with Microsoft Defender for Identity
Loop DDoS Attacks: Understanding the Threat and Azure’s Defense
A BlackByte Ransomware intrusion case study
Storage
Covering: Archive Storage, Avere vFXT for Azure, Azure Data Lake Storage, Azure Data Share, Files, FXT Edge Filer, HPC Cache, NetApp Files, Blob Storage, Data Box, Disk Storage, Queue Storage, Storage Accounts, Storage Explorer, StorSimple
General Availability: Azure Files geo-redundancy for standard large file shares
Storage migration: Combine Azure Storage Mover and Azure Data Box
Web
Covering: App Configuration, App Service, Azure Cognitive Search, Azure Maps, Azure SignalR Service, Static Web Apps, Azure Communication Services, Azure Web PubSub, Azure Fluid Relay, Web App for Containers
How to Apply Easy Auth on Web App under a High-security policy environment
PHP 8.3 now available on App Service
The All-Inclusive Update for Everything TLS on App Service
A Step-by-Step Guide to Datadog Integration with Linux App Service via Sidecars
Azure Communication Services at Microsoft Build 2024
Did you know Azure Maps is HIPAA compliant?
App Service Environment version 1 and version 2 will be retired on 31 August 2024
How to integrate continuous integration and deployment with WordPress on App Service
How to set up staging slots in WordPress on App Service
Announcing Memory intensive SKUs for App Service Environment v3
Azure Virtual Desktop
Covering: Windows Virtual Desktop, VMware Horizon Cloud on Microsoft Azure, Citrix Virtual Apps and Desktops for Azure
No New Articles
Issues with mail going to Junk folder on Outlook clients
I’m trying to get rid of our on-prem Exchange 2013 server; to do this I’ve installed the Exchange 2019 management tools on a Windows server. I’ve rewritten our new-user script so that it now calls the “Enable-RemoteMailbox” cmdlet locally instead of opening a PSSession to the Exchange server and running it there. The ONLY difference in the script is where it executes the remote mailbox command from.
This is the specific line.
Enable-RemoteMailbox -Identity $Alias -RemoteRoutingAddress "$Alias@company.mail.onmicrosoft.com"
The issue I have is that for any user created using the Exchange PS tools, mail sent to a Hotmail or outlook.com mailbox ends up in the Junk folder. Gmail and other O365 recipients are all fine; it only affects outlook.com consumer mailboxes.
I can replicate the same issue just by creating a new user with a mailbox directly in Office365, so I don’t think it has anything to do with our hybrid AD config. (I can also replicate this same behaviour in other m365 tenants so this seems like a systemic MS issue)
The mail passes SPF/DKIM/DMARC 100%. As mentioned, the only difference is the way I am invoking the Enable-RemoteMailbox command.
I’ve had MS Premier support cases open about this before, but they were useless; they literally told me to post on public forums for help or talk to the Outlook help bot. So here we are.
[PowerPoint Add-in] The iframe disappears when printing the slide.
Hi everyone. My add-in embeds an iframe to display a website in the slide. I can view and interact with the iframe normally in both edit mode and presentation mode. However, when I print the slide, the iframes do not display. Does anyone know why this is happening? Thank you very much.
MS Teams Voice PSTN Call recording & transcription
Hey all
I have 2 questions regarding MS teams voice PSTN direct routing.
1. Can calls made via PSTN direct routing be natively recorded or does this require a 3rd party tool?
2. Is it possible to generate transcripts of PSTN calls and have them available to internal users within the tenant?
Thanks
Daniel
Azure Functions at Build 2024 – addressing customer feedback with deep engineering
Azure Functions is Azure’s primary serverless service used in production by hundreds of thousands of customers who run trillions of executions on it monthly. It was first released in early 2016 and since then we have learnt a lot from our customers on what works and where they would like to see more.
Taking all this feedback into consideration, the Azure Functions team has worked hard to improve the experience across the stack, from the initial getting-started experience all the way to running at very high scale, while at the same time adding features to help customers build AI apps. Please see this link for a list of all the capabilities we have released at this year’s Build conference. Taking everything into account, this is one of the most significant sets of releases in Functions history.
In this blog post, I will share customer feedback and the behind-the-scenes technical work that the Functions and partner teams did to meet the expectations of our customers. In future posts we will go deeper into each of these topics; this is a brief overview.
Flex Consumption: Burst scale your apps with networking support
We are releasing a new SKU of Functions, Flex Consumption. This SKU addresses a lot of the feedback that we have received over the years on the Functions Consumption plans – including faster scale, more instance sizes, VNET support, higher instance limits and much more. We have looked at each part of the stack and made improvements at all levels. There are many new capabilities including:
Scales much faster than before with user controlled per-instance concurrency
Scale to many more instances than before (up to 1,000)
Serverless “scale to zero” SKU that also supports VNET integrated event sources
Supports always allocated workers
Supports multiple memory sizes
Purpose-built backend “Legion”
To enable Flex Consumption, we have created a brand-new purpose-built backend internally called Legion.
To host customer code, Legion relies on nested virtualization on Azure VMSS. This gives us the Hyper-V isolation that is a pre-requisite for hostile multi-tenant workloads. Legion was built right from the outset to support scaling to thousands of instances with VNET injection. Efficient use of subnet IP addresses by use of kernel level routing was also a unique achievement in Legion.
For all languages, Functions has a strict cold-start goal. To meet this cold-start metric for all languages and versions, and to support image updates for all these variants, we had to create a construct called Pool Groups that allows Functions to specify all the parameters of a pool, as well as its networking and upgrade policies.
All this work led us to a solid, scalable and fast infrastructure on which to build Flex Consumption.
“Trigger Monitor” – scale to 0 and scale out with network restrictions
Flex Consumption also introduces networking features that limit access to the Function app and allow triggering on event sources which are network restricted. Because these event sources are network restricted, the scale controller (the multi-tenant scaling component that monitors the rate of events to decide whether to scale out or scale in) cannot access them. In the Elastic Premium plan, where we scale down to one instance, we solved this by having that instance access the network-restricted event source and communicate scale decisions to the scale controller. In the Flex Consumption plan, however, we wanted to scale down to 0 instances.
To solve this, we implemented a small scaling component we call “Trigger Monitor” that is injected into the customer’s VNET. This component can access the network-restricted event source, and the scale controller communicates with it to get scaling decisions.
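To make the idea concrete, here is a toy sketch of the kind of decision such a component feeds back (the function and parameter names are invented for illustration; this is not the actual Trigger Monitor code): read the backlog from the event source and report a desired instance count, which can be zero when idle.

```python
import math

def desired_instances(backlog: int, target_per_instance: int, max_instances: int) -> int:
    """Illustrative scale decision: 0 when idle, otherwise enough instances
    to drain the backlog at the target concurrency per instance."""
    if backlog <= 0:
        return 0  # nothing queued: scale to zero
    return min(max_instances, math.ceil(backlog / target_per_instance))

# 250 queued events, each instance targets 64 concurrent executions
print(desired_instances(250, 64, 1000))  # 4
print(desired_instances(0, 64, 1000))    # 0
```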
Scaling Http based apps based on concurrency
When scaling Http based workloads on Function apps, our previous implementation used an internal heuristic to decide when to scale out. This heuristic was based on Front End servers pinging the workers currently running customer workloads and deciding to scale based on the latency of the responses. This implementation used SQL Azure to track workers and their assignments.
In Flex Consumption we have rewritten this logic so that scaling is now based on user-configured concurrency. User-configured concurrency gives customers the flexibility to decide, based on the language and workload, what concurrency to set per instance. For example, Python customers don’t have to think about multithreading and can set concurrency = 1 (which is also the default for Python apps). This approach makes the scaling behavior predictable, and it gives customers the ability to control the cost-versus-performance tradeoff: if they are willing to tolerate potentially higher latency, they can unlock cost savings by running each worker at higher levels of concurrency.
In our implementation, we use “request slots” that are managed by the Data Role. We split instances into “request slots” and assign them to different Front End servers. For example: If the per-instance concurrency is set to 16, then once the Data Role chooses an instance to allocate a Function app to, there are 16 request slots that it can hand out to Front Ends. It might give all 16 to a single Front End, or share them across multiple. This removes the need for any coordination between Front Ends – they can use the request slots they receive as much as they like, with the restriction of only one concurrent request per request slot. Also, this implementation uses Cosmos DB to track assignments and workers.
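The slot arithmetic can be sketched as follows (a simplified illustration with invented names, not the actual Data Role implementation): an instance is split into one slot per unit of configured concurrency, and the slots are dealt out to Front Ends, each admitting a single concurrent request.

```python
from collections import defaultdict

def allocate_slots(instance_id, concurrency, front_ends):
    """Split one instance into `concurrency` request slots and deal them
    round-robin across Front Ends; each slot admits one concurrent request."""
    assignment = defaultdict(list)
    for i in range(concurrency):
        slot = f"{instance_id}/slot-{i}"
        assignment[front_ends[i % len(front_ends)]].append(slot)
    return dict(assignment)

alloc = allocate_slots("worker-42", 16, ["fe-a", "fe-b"])
print(len(alloc["fe-a"]), len(alloc["fe-b"]))  # 8 8
```

Because a slot is never shared, Front Ends need no coordination: each one simply respects the one-request-per-slot limit on the slots it holds.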
Along with Legion as the compute provider, a significantly larger compute allocation per app and rapid scale-in and capacity reclamation allow us to give customers a much better experience than before.
Scaling Non-Http based apps based on concurrency
Similar to Http apps, we have also enabled Non-Http based apps to scale based on concurrency. We refer to this as Target Based Scaling. From an implementation perspective, each extension now implements its scaling logic within the extension, and the scale controller hosts these extensions. This unifies the scaling logic in one place and bases all scaling on concurrency.
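As a toy sketch of that unification (the class and method names here are invented; the real extensions implement a .NET scaling interface hosted by the scale controller): each extension reports its current load and target concurrency, and a shared base computes the desired instance count the same way for every trigger type.

```python
import math
from abc import ABC, abstractmethod

class TargetScaler(ABC):
    """Common scaling contract: each trigger extension supplies its own load
    and target; the host computes the desired instance count uniformly."""

    @abstractmethod
    def current_load(self) -> int: ...

    @abstractmethod
    def target_per_instance(self) -> int: ...

    def desired_instances(self) -> int:
        return math.ceil(self.current_load() / self.target_per_instance())

class QueueScaler(TargetScaler):
    def __init__(self, queue_length):
        self.queue_length = queue_length

    def current_load(self) -> int:
        return self.queue_length

    def target_per_instance(self) -> int:
        return 16  # e.g. a queue trigger's batch size

print(QueueScaler(100).desired_instances())  # 7
```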
Moving configuration to the Control Plane
One more change that we are making, directionally based on feedback from our customers, is to move various configuration properties from AppSettings to the Control Plane. For the Public Preview we are doing this for the areas of Deployment, Scaling and Language. This is an example configuration which shows the new Control Plane properties. By GA we will move other properties as well.
Azure Load Testing integration
Customers have always asked us how to configure their Function apps for optimum throughput. Until now, we have just given them guidance to run performance tests on their own. Now they have another option: we are introducing native integration with Azure Load Testing. A new performance optimizer enables you to decide the right configuration for your app by helping you create and run tests with different memory and Http concurrency configurations.
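Conceptually, the optimizer automates a sweep like the following (the numbers and the helper below are purely hypothetical, for illustration only): run the load test at several memory/concurrency configurations and pick the cheapest one that still meets the latency target.

```python
# Purely hypothetical measurements from three load-test runs
results = [
    {"memory_mb": 2048, "concurrency": 1,  "p95_ms": 120, "cost_per_m": 4.0},
    {"memory_mb": 2048, "concurrency": 16, "p95_ms": 300, "cost_per_m": 1.2},
    {"memory_mb": 4096, "concurrency": 16, "p95_ms": 180, "cost_per_m": 2.1},
]

def best_config(runs, p95_budget_ms):
    """Cheapest configuration whose p95 latency meets the budget, else None."""
    ok = [r for r in runs if r["p95_ms"] <= p95_budget_ms]
    return min(ok, key=lambda r: r["cost_per_m"]) if ok else None

print(best_config(results, 200)["memory_mb"])  # 4096
```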
Functions on Azure Container Apps: Cloud-native microservices deployments
At Build we are also announcing GA of Functions running on Azure Container Apps. This new SKU allows customers to run their apps using the Azure Functions programming model and event-driven triggers alongside other microservices or web applications co-located in the same environment. It allows customers to leverage common networking resources and observability for all their applications. Furthermore, it serves Functions customers wanting to leverage frameworks (like Dapr) and compute options (like GPUs) which are only available in Container Apps environments.
We had to keep this SKU consistent with other Function SKUs/plans, even though it ran and scaled on a different platform (Container Apps).
In particular,
We created a new database for this SKU that can handle different schema needs (because of the differences in the underlying infra compared to regular Functions) and improved the query performance. We also redesigned some parts of the control plane for Functions on ACA.
We used ARM extensions routing to securely route the traffic to host and enable Function Host APIs via ARM for Apps running inside an internal VNET
We built a sync trigger service inside Azure Container Apps environment that detects Function App, reads trigger information from customer’s functions code and automatically creates corresponding KEDA scaler rules for the Function App. This enables automatic scaling of Function Apps on Azure Container Apps (ACA), without customers having to know about the KEDA scaling platform involved.
We developed a custom KEDA external scaler to support scale-to-zero scenario for Timer trigger functions.
VSCode.Web support: Develop your functions in the browser
The Azure Functions team values developer productivity and our VSCode integration and Core Tools are top-notch and one of the main advantages in experience over other similar products in this category. However, we are always striving to enhance this experience.
It is often challenging for developers to configure their local dev machine with the right pre-requisites before they can begin. This setup also needs to be updated with the new versions of local tools and language versions. On the other hand, GitHub codespaces and similar developer environments have demonstrated that we can have effective development environments hosted in the cloud.
We are launching a new getting started experience using VSCode for the Web for Azure Functions. This experience allows developers to write, debug, test and deploy their function code directly from their browser using VS Code for the Web, connected to container-based compute. This is the exact same experience that a developer would have locally. The container comes ready with all the required dependencies and supports the rich features offered by VS Code, including extensions. This experience can also be used for function apps that already have code deployed to them.
To build this functionality we built an extension that launches VS Code for the Web, a lightweight VS Code that runs in a user’s browser. This VS Code client communicates with the Azure Functions backend infrastructure to establish a connection to a VS Code server using a Dev Tunnel. With the VS Code client and server connected via a Dev Tunnel, the user can edit their function as desired.
Open AI extension to build AI apps effortlessly
Azure Functions aims to simplify the development of different types of apps, such as web apps, data pipelines and other related workloads. AI apps are a clear new domain. Azure Functions has a rich extensibility model that helps developers abstract away many of the mundane integration tasks, while making each capability available in all the languages that Functions supports.
We are releasing an extension on top of OpenAI which enables the following scenarios in just a few lines of code:
Retrieval Augmented Generation (Bring your own data)
Text completion and Chat Completion
Assistants capability
The key here is that developers can build AI apps in any language of their choice supported by Functions, hosted in a service that can be used within minutes.
Have a look at the following C# snippet, where in a few lines of code this HTTP-triggered function takes a query prompt as input, pulls semantically similar document chunks into a prompt, and then sends the combined prompt to OpenAI. The results are then made available to the function, which simply returns the chat response to the caller.
public class SemanticSearchRequest
{
    [JsonPropertyName("Prompt")]
    public string? Prompt { get; set; }
}

[Function("PromptFile")]
public static IActionResult PromptFile(
    [HttpTrigger(AuthorizationLevel.Function, "post")] SemanticSearchRequest unused,
    [SemanticSearchInput("AISearchEndpoint", "openai-index", Query = "{Prompt}",
        ChatModel = "%CHAT_MODEL_DEPLOYMENT_NAME%",
        EmbeddingsModel = "%EMBEDDING_MODEL_DEPLOYMENT_NAME%")] SemanticSearchContext result)
{
    return new ContentResult { Content = result.Response, ContentType = "text/plain" };
}
The challenge of building an extension is making sure that it hides enough of the glue code while still giving the developer enough flexibility for their business use case.
These were some additional challenges we faced:
To save state across invocations in chat completion scenarios, we experimented with various implementations, including Durable Functions, and finally moved to Table storage to preserve state during conversations.
We had to decide which embeddings stores to support; we currently support Azure AI Search, Cosmos DB and Azure Data Explorer.
Like any fast-moving technology, we had to figure out the right strategy for using the underlying OpenAI models and SDKs.
Streaming support in Node and Python
Another long-requested capability added at Build is streaming support in Node (GA) and Python (preview).
With this feature, customers can stream HTTP requests to and responses from their Function Apps, using function-exposed request and response APIs. Previously, the amount of data that could be transmitted in an HTTP request was limited by the instance memory size of the SKU. With HTTP streaming, large amounts of data can be processed with chunking. Especially relevant today, this feature enables new scenarios when creating AI apps, including processing large data, streaming OpenAI responses and delivering dynamic content.
The journey to enable streaming support is interesting. It started with us first aiming for parity between the in-proc and isolated models for .NET. To achieve this we implemented a new Http pipeline wherein the Http request is proxied from the Functions Host onto the isolated worker. We were able to piggyback on the same technology to build streaming support in other out-of-proc languages.
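Conceptually, streaming means the function processes a body piece by piece instead of buffering it whole; this generic chunking sketch (not the actual Functions streaming API) shows why memory use no longer scales with payload size:

```python
def stream_upper(chunks):
    """Transform an incoming stream chunk by chunk; peak memory is one chunk,
    not the whole payload."""
    for chunk in chunks:
        yield chunk.upper()

# Simulate a large request body arriving in pieces
body = iter(["hello ", "streaming ", "world"])
print("".join(stream_upper(body)))  # HELLO STREAMING WORLD
```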
OpenTelemetry support
At Build we are releasing support for OpenTelemetry in Functions. This allows customers to export telemetry data from both the Functions Host and the language workers using OpenTelemetry semantics. These are some of the interesting design directions we took for this work:
The trace context is re-created in each language worker, so the customer’s code does not need to be aware of the Functions host and gets a smooth experience.
Telemetry is identical for Application Insights and other vendors; customers get the same telemetry data no matter which they use. Live Logs continues to work with Application Insights, and the overall experience doesn’t change.
To make things easier for our customers, each language worker ships a package/module that removes boilerplate code.
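Re-creating the context in a worker essentially means parsing the W3C Trace Context “traceparent” header forwarded by the host. A minimal sketch (the real workers rely on the OpenTelemetry SDKs rather than hand-parsing):

```python
def parse_traceparent(header):
    """Parse a W3C Trace Context header: version-traceid-spanid-flags."""
    version, trace_id, span_id, flags = header.split("-")
    if len(trace_id) != 32 or len(span_id) != 16:
        raise ValueError("malformed traceparent")
    return {"version": version, "trace_id": trace_id,
            "parent_span_id": span_id, "sampled": flags == "01"}

ctx = parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
print(ctx["trace_id"])  # 4bf92f3577b34da6a3ce929d0e0e4736
```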
Thank you and going forward
Thank you to all the customers and developers who have used Azure Functions through the years. We would love for you to try out these new features and capabilities and provide feedback and suggestions.
Going forward we will be working on:
Getting Flex Consumption to GA to enable more scale for our most demanding customers, while continuing to make improvements in the meantime.
Continuing to enhance the OpenAI extension with more scenarios and models to make Azure Functions the easiest and fastest way to create an AI service.
Continuing to enhance our getting-started experience and taking VSCode.Web integration to more languages and to GA.
Adding streaming support for more languages, including Java.
Group creation needed to start a campaign
Hello all – We recently were granted a trial of Amplify. I went to create a campaign and received an error message. The error message I received said that I do not have access to create groups. I did a little research and found this:
Ensure Microsoft 365 group creation has been enabled for you. You can connect with your admin to check if you have the necessary permissions. Learn more about Microsoft 365 group creation permissions.
Question for this group: How did your organizations get around allowing group creation for those with campaign access? Any guidance is greatly appreciated.
Azure File Share – NTFS Permission Extremely Slow
Recently moved file server data to Azure File Share. No issue with mapping or opening files. The issue is managing permissions: updating or adding NTFS permissions per folder is EXTREMELY slow. Any advice or workaround you could share, please?
Thank you.
How can I join Microsoft
I would be honored to join the Microsoft team and contribute to the Copilot team’s mission of providing innovative solutions to enterprises. I recognize the immense potential of this technology to benefit businesses, and I am eager to be a part of the team that is driving its development. I kindly request your guidance on the process of joining the team. Thank you for considering my interest.
GDAP and not allowing global admin to auto renew
Hi all,
The relationships we created two years ago are quickly approaching their expiration date, and I’m interested in how other people are handling the creation of new relationships.
With the introduction of relationships that auto renew, have you found this to be a viable path? We are a Managed Service Provider and our customers expect us to turn ALL the knobs for them in the Microsoft portals.
I want to have the flexibility of techs only enabling the roles they need, but there are a LOT of roles. Creating a relationship with 34 roles is a bit extreme. Plus, it looks like we need 43 built-in roles to have the same level of access as Global Admin, and some of those roles are not available via GDAP today.
The role that stands out the most is “Organizational Branding Administrator.” Am I missing something, or is the only way to change sign-in branding to use the Global Administrator role (which prevents auto-renewal) or use a local tenant admin account?
What would partners think if Microsoft allowed the Global Admin role to auto-renew until Microsoft adds to GDAP all the built-in roles needed to replace Global Admin? Maybe put some sort of extra warning on the role-acceptance side advising the client this is not recommended, and let the client make that informed choice themselves?
What do you think customers’ opinion of this move would be?
From my conversations with different people, I am under the impression that customers didn’t want Microsoft to allow partners the option of letting the Global Admin role auto-renew. Since I have never met a customer who shared this view, I can’t comment on the accuracy of that statement, but that’s what I’ve heard.
Azure Functions at Build 2024 – Solving customer problems with deep engineering
Azure Functions is Azure’s primary serverless service used in production by hundreds of thousands of customers who run trillions of executions on it monthly. It was first released in early 2016 and since then we have learnt a lot from our customers on what works and where they would like to see more.
Taking all this feedback into consideration, the Azure Functions team has worked hard to improve the experience across the stack, from the initial getting-started experience all the way to running at very high scale. Please see this link for a list of all the capabilities we have released at this year’s Build conference. Taking everything into account, this is one of the most significant sets of releases in Functions history.
In this blog post, I will share a brief glimpse behind the scenes of some of the technical work that the Functions and partner teams did to meet the expectations of our customers. We will write more technical blogs to explain these areas in depth; this is a brief overview.
Flex Consumption: Burst scale your apps with networking support
We are releasing a new SKU of Functions, Flex Consumption. This SKU addresses all the feedback that we have received over the years on the Functions Consumption plans. We have looked at each part of the stack and made improvements at all levels. There are many new capabilities including:
Scales much faster than before with user controlled per-instance concurrency
Scale to many more instances than before (up to 100)
Serverless “scale to zero” SKU that also supports VNET integrated event sources
Supports always allocated workers
Supports multiple memory sizes
Purpose-built backend “Legion”
To enable Flex Consumption, we have created a brand-new purpose-built backend internally called Legion.
To host customer code, Legion relies on nested virtualization on Azure VMSS. This gives us the Hyper-V isolation that is a pre-requisite for hostile multi-tenant workloads. Legion was built right from the outset to support scaling to thousands of instances with VNET injection. Efficient use of subnet IP addresses by use of kernel level routing was also a unique achievement in Legion.
Functions has a strict cold-start goal for every language. To meet that goal across all languages and versions, and to support Functions image updates for all these variants, we created a construct called Pool Groups that lets us specify all the parameters of a pool, along with its networking and upgrade policies.
All this work gave us a solid, scalable, and fast infrastructure on which to build Flex Consumption.
“Trigger Monitor” – scale to 0 and scale out with network restrictions
Flex Consumption also introduces networking features that limit access to the Function app and allow triggering on event sources that are network restricted. Because these event sources are network restricted, the multi-tenant scaling component (the scale controller) that monitors the rate of events to decide whether to scale out or in cannot access them. In the Elastic Premium plan, which scales down to one instance, we solved this by having that instance, which does have access to the network-restricted event source, communicate scale decisions to the scale controller. In the Flex Consumption plan, however, we wanted to scale down to 0 instances.
To solve this, we implemented a small scaling component we call the "Trigger Monitor", which is injected into the customer's VNET. Because it runs inside the VNET, it can access the network-restricted event source, and the scale controller communicates with it to get scaling decisions.
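The interaction can be sketched as follows. This is an illustrative toy model, not the actual implementation; the class names and the backlog-based policy are assumptions made for the example:

```python
import math

class InMemoryQueue:
    """Stand-in for a network-restricted event source (e.g. a queue in a VNET)."""
    def __init__(self, message_count: int):
        self.message_count = message_count

    def approximate_message_count(self) -> int:
        return self.message_count

class TriggerMonitor:
    """Runs inside the customer's VNET, so it can reach the restricted source."""
    def __init__(self, event_source):
        self.event_source = event_source

    def recommend_instances(self, target_per_instance: int) -> int:
        backlog = self.event_source.approximate_message_count()
        # Scale to zero when idle; otherwise size the app to the backlog.
        return math.ceil(backlog / target_per_instance) if backlog > 0 else 0

class ScaleController:
    """Multi-tenant component outside the VNET; it cannot see the event source
    directly, so it delegates the scaling decision to the Trigger Monitor."""
    def __init__(self, monitor: TriggerMonitor):
        self.monitor = monitor

    def scale_decision(self, target_per_instance: int = 16) -> int:
        return self.monitor.recommend_instances(target_per_instance)
```

With a backlog of 40 messages and a target of 16 per instance, the controller would ask for 3 instances; with an empty queue it scales to zero.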
Scaling Http based apps based on concurrency
When scaling HTTP-based workloads on Function apps, our previous implementation used an internal heuristic to decide when to scale out. The heuristic ran on Front End servers: they pinged the workers currently running customer workloads and decided to scale based on the latency of the responses. This implementation used SQL Azure to track workers and their assignments.
In Flex Consumption we have rewritten this logic so that scaling is based on user-configured concurrency. This gives customers the flexibility to decide, based on their language and workload, what concurrency to set per instance. Python customers, for example, don't have to think about multithreading and can set concurrency = 1 (which is also the default for Python apps). This approach makes the scaling behavior predictable, and it gives customers control over the cost-versus-performance tradeoff: if they are willing to tolerate potentially higher latency, they may unlock cost savings by running each worker at higher levels of concurrency.
In our implementation, we use "request slots" managed by the Data Role. We split instances into request slots and assign them to different Front End servers. For example, if per-instance concurrency is set to 16, then once the Data Role chooses an instance on which to place a Function app, it has 16 request slots to hand out to Front Ends. It might give all 16 to a single Front End or share them across several. This removes the need for any coordination between Front Ends: each can use the request slots it receives as much as it likes, with the restriction of only one concurrent request per slot. This implementation uses Cosmos DB, rather than SQL Azure, to track assignments and workers.
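The slot bookkeeping can be sketched with a toy in-memory model. The names and the round-robin hand-out policy here are invented for illustration; the real Data Role persists this state in Cosmos DB and is free to give all slots to one Front End:

```python
from collections import defaultdict

class DataRole:
    """Toy model: split each instance into request slots and hand them out.
    One slot admits one concurrent request, so Front Ends need no coordination."""
    def __init__(self, per_instance_concurrency: int = 16):
        self.per_instance_concurrency = per_instance_concurrency
        self.slots_by_frontend = defaultdict(list)
        self._next_instance = 0

    def allocate_instance(self, frontends):
        """Create slots for a newly placed instance and distribute them."""
        instance_id = f"instance-{self._next_instance}"
        self._next_instance += 1
        for slot in range(self.per_instance_concurrency):
            # Round-robin for the example; any distribution is valid.
            fe = frontends[slot % len(frontends)]
            self.slots_by_frontend[fe].append((instance_id, slot))
        return instance_id
```

Allocating one instance with concurrency 16 across four Front Ends gives each Front End four slots, i.e. four concurrent requests it may route to that instance without asking anyone.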
Together with Legion as the compute provider, significantly larger compute allocation per app and rapid scale-in and capacity reclamation allow us to give customers a much better experience than before.
Scaling Non-Http based apps based on concurrency
As with HTTP apps, we have also enabled non-HTTP apps to scale based on concurrency; we refer to this as Target Based Scaling. From an implementation perspective, each extension now implements its scaling logic within the extension itself, and the scale controller hosts these extensions. This unifies the scaling logic in one place and bases all scaling on concurrency.
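The core of target-based scaling can be stated in one formula: the desired instance count is the event source length divided by the target executions per instance, rounded up. A minimal sketch (a simplification; the real scale controller also applies instance caps, smoothing, and per-extension logic):

```python
import math

def desired_instances(event_source_length: int, target_per_instance: int) -> int:
    """Simplified target-based scaling: size the app so each instance
    handles roughly `target_per_instance` pending events."""
    if event_source_length <= 0:
        return 0  # nothing queued: scale to zero
    return math.ceil(event_source_length / target_per_instance)
```

For example, 1,000 queued messages with a target of 16 executions per instance yields 63 desired instances, which in practice would then be capped by the plan's maximum instance count.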
Moving configuration to the Control Plane
Another change we are making directionally, based on feedback from our customers, is to move various configuration properties out of AppSettings and into the Control Plane. For Public Preview we are doing this for Deployment, Scaling, and Language; by GA we will move other properties as well.
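As a hedged illustration, Control Plane configuration of this shape covers deployment, scaling, and language. The property names follow the preview-era functionAppConfig schema and the values (storage account, container name) are invented for the example; treat them as illustrative, not authoritative:

```json
"functionAppConfig": {
  "deployment": {
    "storage": {
      "type": "blobContainer",
      "value": "https://mystorageaccount.blob.core.windows.net/app-package",
      "authentication": { "type": "SystemAssignedIdentity" }
    }
  },
  "runtime": { "name": "python", "version": "3.11" },
  "scaleAndConcurrency": {
    "maximumInstanceCount": 100,
    "instanceMemoryMB": 2048,
    "triggers": { "http": { "perInstanceConcurrency": 16 } }
  }
}
```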
Functions on Azure Container Apps: Cloud-native microservices deployments
At Build we are also announcing the GA of Functions running on Azure Container Apps. This new SKU lets customers run apps using the Azure Functions programming model and event-driven triggers alongside other microservices or web applications co-located in the same environment, sharing common networking resources and observability across all their applications. It also serves Functions customers who want to leverage frameworks (like Dapr) and compute options (like GPUs) that are only available in Container Apps environments.
We had to keep this SKU consistent with the other Function SKUs and plans, even though it runs and scales on a different platform (Container Apps). In particular:
We created a new database for this SKU that can handle different schema needs (because of the differences in the underlying infra compared to regular Functions) and improved the query performance. We also redesigned some parts of the control plane for Functions on ACA.
We used ARM extension routing to securely route traffic to the host and to enable Function Host APIs via ARM for apps running inside an internal VNET.
We built a sync trigger service inside the Azure Container Apps environment that detects Function Apps, reads trigger information from the customer's function code, and automatically creates the corresponding KEDA scaler rules for the Function App. This enables automatic scaling of Function Apps on Azure Container Apps (ACA) without customers having to know about the underlying KEDA scaling platform.
We developed a custom KEDA external scaler to support the scale-to-zero scenario for Timer-trigger functions.
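To make the third bullet concrete, the scale rule that the sync trigger service generates for a queue-triggered function might look roughly like the following. The resource names and values are invented for the example; the shape follows Container Apps' KEDA-style custom scale rules, which may differ in detail from what the service emits:

```yaml
scale:
  minReplicas: 0
  maxReplicas: 10
  rules:
    - name: orders-queue-trigger      # derived from the function's trigger
      custom:
        type: azure-queue             # KEDA scaler matching the trigger type
        metadata:
          queueName: orders
          queueLength: "16"           # target messages per replica
        auth:
          - secretRef: storage-connection
            triggerParameter: connection
```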
VSCode.Web support: Develop your functions in the browser
The Azure Functions team values developer productivity; our VS Code integration and Core Tools are top-notch and one of our main experience advantages over similar products in this category. Still, we are always striving to enhance this experience.
It is often challenging for developers to configure their local dev machine with the right prerequisites before they can begin, and that setup must then be kept up to date with new versions of local tools and language runtimes. Meanwhile, GitHub Codespaces and similar developer environments have demonstrated that effective development environments can be hosted in the cloud.
We are launching a new getting-started experience for Azure Functions using VS Code for the Web. It allows developers to write, debug, test, and deploy their function code directly from the browser using VS Code for the Web connected to container-based compute. This is the exact same experience a developer would have locally: the container comes ready with all the required dependencies and supports the rich features offered by VS Code, including extensions. The experience can also be used with function apps that already have code deployed to them.
To build this functionality, we created an extension that launches VS Code for the Web, a lightweight VS Code that runs in the user's browser. This VS Code client communicates with Azure Functions backend infrastructure to establish a connection to a VS Code server over a Dev Tunnel. With the client and server connected via the Dev Tunnel, the user can edit their functions as desired.
OpenAI extension to build AI apps effortlessly
Azure Functions aims to simplify the development of many kinds of apps, such as web apps, data pipelines, and related workloads, and AI apps are a clear new domain. Azure Functions has a rich extensibility model that helps developers abstract away many of the mundane integration tasks while making each capability available in all the languages that Functions supports.
We are releasing an extension on top of OpenAI which enables the following scenarios in just a few lines of code:
Retrieval Augmented Generation (Bring your own data)
Text completion and Chat Completion
Assistants capability
The key here is that developers can build AI apps in any Functions-supported language of their choice, hosted in a service they can start using within minutes.
Have a look at the following C# snippet. In a few lines of code, this HTTP-triggered function takes a query prompt as input, pulls semantically similar document chunks into a prompt, and sends the combined prompt to OpenAI. The results are made available to the function, which simply returns the chat response to the caller.
public class SemanticSearchRequest
{
    [JsonPropertyName("Prompt")]
    public string? Prompt { get; set; }
}

[Function("PromptFile")]
public static IActionResult PromptFile(
    [HttpTrigger(AuthorizationLevel.Function, "post")] SemanticSearchRequest unused,
    [SemanticSearchInput("AISearchEndpoint", "openai-index", Query = "{Prompt}", ChatModel = "%CHAT_MODEL_DEPLOYMENT_NAME%", EmbeddingsModel = "%EMBEDDING_MODEL_DEPLOYMENT_NAME%")] SemanticSearchContext result)
{
    return new ContentResult { Content = result.Response, ContentType = "text/plain" };
}
The challenge in building such an extension is hiding enough of the "glue code" while still giving developers enough flexibility for their business use case. These were some additional challenges we faced:
To save state across invocations in the chat-completion scenarios, we experimented with various implementations, including Durable Functions, and finally moved to Table storage for preserving conversation state.
We had to decide which embeddings stores to support; we currently support Azure AI Search, Cosmos DB, and Azure Data Explorer.
Like any fast-moving technology, we had to figure out the right strategy for using the underlying OpenAI models and SDKs.
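The Table storage decision above can be pictured with a toy, in-memory stand-in: one entity per chat turn, partitioned by conversation. This is purely an illustrative sketch, not the extension's actual code; a real implementation would use the azure-data-tables SDK against a Table storage account:

```python
class ChatStateStore:
    """Toy stand-in for a Table-storage-backed conversation store.
    Keys mirror Table storage: (PartitionKey = chat id, RowKey = turn number)."""
    def __init__(self):
        self._rows = {}

    def append_message(self, chat_id: str, turn: int, role: str, content: str):
        # Zero-padded RowKey keeps lexicographic order equal to turn order,
        # the usual trick for ordered scans in Table storage.
        self._rows[(chat_id, f"{turn:06d}")] = {"role": role, "content": content}

    def history(self, chat_id: str):
        """Return the conversation in turn order, as the extension would
        when rebuilding context for the next chat-completion call."""
        keys = sorted(k for k in self._rows if k[0] == chat_id)
        return [self._rows[k] for k in keys]
```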
Streaming support in Node and Python
Another long-requested capability added at Build is HTTP streaming support in Node.js (GA) and Python (preview).
With this feature, customers can stream HTTP requests to, and responses from, their Function Apps using request and response APIs exposed by the runtime. Previously, the amount of data that could be transmitted in an HTTP request was limited by the SKU's instance memory size; with HTTP streaming, large amounts of data can be processed in chunks. Especially relevant today, this enables new scenarios for AI apps, including processing large data sets, streaming OpenAI responses, and delivering dynamic content.
The journey to enable streaming support is interesting. It started with us aiming for parity between the in-proc and isolated models for .NET. To achieve this, we implemented a new HTTP pipeline wherein the HTTP request is proxied from the Functions Host to the isolated worker. We were then able to piggyback on the same technology to build streaming support for the other out-of-proc languages.
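The memory benefit described above comes from touching one chunk at a time rather than buffering the whole payload. A minimal, language-agnostic sketch of that idea in Python (this is not the Functions streaming API itself, which is exposed through each worker's own HTTP types):

```python
def process_stream(chunks, transform):
    """Yield transformed chunks as they arrive.

    Memory use stays proportional to one chunk, not to the whole payload,
    which is why streaming lifts the instance-memory limit on request size."""
    for chunk in chunks:
        yield transform(chunk)

def byte_source(total_bytes: int, chunk_size: int = 4):
    """Simulate an incoming HTTP body delivered in chunks."""
    for offset in range(0, total_bytes, chunk_size):
        yield b"x" * min(chunk_size, total_bytes - offset)
```

Consuming the generator end to end, e.g. `b"".join(process_stream(byte_source(10), bytes.upper))`, processes all ten bytes while only ever holding a four-byte chunk in memory.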
OpenTelemetry support
At Build we are releasing support for OpenTelemetry in Functions. This allows customers to export telemetry data from both the Functions Host and the language workers using OpenTelemetry semantics. Some of the interesting design directions we took for this work:
The customer's code is unaware of the Functions host; the telemetry context is re-created in each language worker so the experience feels seamless.
Telemetry is vendor-neutral: customers get the same telemetry data whether they export to Application Insights or to another OpenTelemetry-compatible backend, and features such as Live Logs continue to work with Application Insights without changing the overall experience.
To make adoption easier, each language worker ships a package/module that removes boilerplate code.
Thank you and going forward
Thank you to all the customers and developers who have used Azure Functions through the years. We would love for you to try out these new features and capabilities and provide feedback and suggestions.
Going forward we will be working on:
Getting Flex Consumption to GA while continuing to make improvements in the meantime.
Continuing to enhance the OpenAI extension with more scenarios and models, making Azure Functions the easiest and fastest way to create an AI service.
Continuing to improve our getting-started experience and taking VSCode.Web integration to more languages and to GA.
Adding streaming support to more languages, including Java.
Microsoft Tech Community – Latest Blogs –Read More
Edge browser window (all tabs included) recovery?
A while ago, roughly two months back, I accidentally closed one of several Microsoft Edge browser windows. I know Edge has a recovery option, but in my case the window was closed accidentally; on top of that, disk space on the computer is low for now.
Still, it would be outrageous (though not surprising) if a browser window, with all of its tabs, were not kept somewhere on the computer. Even with limited space, that file, if it exists, should still be there, so can I retrieve it?
How can I make a Linux terminal with ASP.NET?
Hello everyone,
I'm working on my graduation project. I want to build at least a simple web application containing only a Linux terminal emulator: it takes commands from the user, executes them on the server, and displays the output back on the web page. I'm only beginning to learn ASP.NET, so can someone please explain how I can achieve that, or at least give me a roadmap and the tools I need to get the result I want? I'm really feeling LOST right now.
Support tip: Organizational messages is moving to Microsoft 365 admin center
The Intune experience for managing organizational messages will be removed no earlier than August 2024. You can now view and manage your messages created in Intune in the new experience within the Microsoft 365 admin center. The new experience includes new, top requested features such as the ability to author custom messages and message delivery on Microsoft 365 apps. To learn more about the new experience, review: Introducing organizational messages (preview) in the Microsoft 365 admin center.
Key points: What this means for messages created in Intune
If you’re using organizational messages in Intune, there are several key things to be aware of with the experience moving to Microsoft 365:
There’s no impact to your users unless you choose to cancel or delete messages.
Existing messages that you have created in Intune will be available in the new experience in Microsoft 365 admin center for you to continue viewing and managing.
Get Started messages cannot be created in Microsoft 365 admin center. Existing Get Started messages will continue to work until they are cancelled or deleted (which can be done in either Intune or the Microsoft 365 admin center).
All Intune role-based access control (RBAC) roles with organizational messages permissions will no longer work and you will have to create new roles (or custom roles) in Microsoft Entra.
Scope tags are only available in Intune and will not be applicable after this change.
Update RBAC assignments for organizational messages
Intune RBAC roles won’t work once the experience has been removed from Intune. You’ll want to update your Intune roles to the Microsoft Entra roles before August 2024 to ensure your admins are able to continue managing organizational messages.
To create organizational messages in Microsoft 365 admin center, create a new custom role or use one of the built-in roles in Microsoft Entra:
Organizational Messages Approver
Organizational Messages Writer
Microsoft Entra Global Administrator (not recommended as security best practice)
For instructions on creating custom roles or assigning roles in Entra, review the following documentation:
Create and assign a custom role in Microsoft Entra ID
Assign Microsoft Entra roles to users
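Beyond the portal, role assignments can also be scripted through Microsoft Graph. As an illustration (not part of the original post), here is a minimal Python sketch that builds the `unifiedRoleAssignment` payload for `POST /roleManagement/directory/roleAssignments`; the token and IDs are placeholders you would resolve for your own tenant (for example, by looking up a built-in role’s `roleDefinitionId` via `GET /roleManagement/directory/roleDefinitions?$filter=displayName eq 'Organizational Messages Writer'`).

```python
# Hedged sketch: assign an Entra role (e.g. Organizational Messages Writer)
# to a user via Microsoft Graph. Token acquisition and the ID values are
# placeholders -- adapt them to your tenant before use.
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def build_role_assignment(role_definition_id: str, principal_id: str) -> dict:
    """Build the unifiedRoleAssignment body for
    POST /roleManagement/directory/roleAssignments."""
    return {
        "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
        "roleDefinitionId": role_definition_id,  # the role's definition ID
        "principalId": principal_id,             # the user's object ID
        "directoryScopeId": "/",                 # tenant-wide scope
    }

def post_role_assignment(access_token: str, payload: dict) -> bytes:
    """Send the assignment request; requires RoleManagement.ReadWrite.Directory."""
    req = urllib.request.Request(
        f"{GRAPH}/roleManagement/directory/roleAssignments",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The payload-building step is separated from the HTTP call so it can be reviewed (or unit-tested) without touching the tenant.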
Common questions
Will I be able to access organizational messages from Intune?
You can continue to access and manage organizational messages in Intune until the experience is removed no earlier than August 2024. After that, the organizational messages user interface within the Intune admin center (Tenant administration > Organizational messages) will be removed.
What will happen to messages I have created in Intune organizational messages?
Your entire messages history and all active messages from Intune’s organizational messages will be available to view, cancel, or delete in the Microsoft 365 admin center. All active messages from Intune will continue to be delivered until the expiration date you have specified.
What will change for my organization and users?
The following functionality will be impacted:
Scope tags will not be supported in or migrated to Microsoft 365 admin center. Any scope tags created in Intune will no longer be honored.
Any automated scripts for managing organizational messages in Intune will not work.
Any Intune RBAC roles will not be honored. You must enable the Entra RBAC roles highlighted above or create a custom role to create and manage organizational messages.
The authoring of new Get Started messages will not be available in Microsoft 365 admin center. These messages don’t expire and won’t be canceled until you take action to cancel or delete. Admins may cancel or delete any existing Get Started messages at any time.
How will an admin distinguish messages authored from Intune in the Microsoft 365 admin center?
Under the Message Detail pane, the Source field will indicate messages authored from Intune or other entry points.
What happens to audit logs of organizational messages in Intune?
Intune audit logs can be viewed in the Intune admin center for the last 30 days, or for up to 1 year when using the Graph API.
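As an illustration (not from the original post), a small Python sketch of building a filtered Graph query against the Intune audit events endpoint (`deviceManagement/auditEvents`, under the beta endpoint at the time of writing); the date window is an assumption you would adjust:

```python
# Hedged sketch: build a filtered Microsoft Graph URL for Intune audit
# events older than the 30-day portal window (up to ~1 year via the API).
from datetime import datetime, timedelta, timezone
from urllib.parse import quote

GRAPH_BETA = "https://graph.microsoft.com/beta"

def audit_events_url(days_back: int = 365) -> str:
    """Build a URL listing deviceManagement/auditEvents since days_back days ago."""
    since = (datetime.now(timezone.utc) - timedelta(days=days_back))
    stamp = since.strftime("%Y-%m-%dT%H:%M:%SZ")
    # $filter on activityDateTime narrows results to the requested window
    filt = f"activityDateTime ge {stamp}"
    return f"{GRAPH_BETA}/deviceManagement/auditEvents?$filter={quote(filt)}"
```

The returned URL would then be requested with an app-only or delegated token holding the appropriate `DeviceManagementApps.Read.All` or audit-read permission.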
How will this impact the existing setting which allows you to block Microsoft messaging?
There’ll be no changes to the existing management of Microsoft messaging policy within Intune before August 2024. If you currently block messages that come from Microsoft, you can continue to do so while also allowing organizational messages to come through. Later, after August 2024, this functionality will migrate to organizational messages in the Microsoft 365 admin center.
Sign in to the Microsoft Intune admin center.
Go to Tenant administration > Organizational messages.
In the Overview tab, go to step 2 under “Before you create a message”.
Decide whether to block messages directly from Microsoft, while allowing admin messages to display by:
Switching the toggle to Allow to allow both Microsoft messages and organizational messages.
Switching the toggle to Block to block Microsoft messages and allow organizational messages.
Stay tuned to this post for updates on the exact timing of this change! If you have any questions, leave a comment below or reach out to us on X @IntuneSuppTeam.
Can’t Delete Marketplace Listings
Partner Center does not allow you to delete a marketplace listing (only drafts), but does allow you to hide. Why? This has a huge impact on downstream integrations with Tackle.io, Workspan, etc.
VLOOKUP Name error
I am using =Vlookup for a database to display my acct names and passwords.
It seems to be working, although I get a #NAME? error. I want the password to be displayed.
The DB is located on a 2nd tab.
This is my formula: =VLOOKUP($D$5,Database,3,Database!D2*********FALSE) (asterisks = password)
Help highlights $D$5
I am very new to Excel.
Thanks in advance!