Month: October 2024
Azure Monitor agent sends logs to two LA workspaces in different accounts
Our company has many different Azure accounts and subscriptions.
Can we install AMA on one server to support sending logs to LA workspaces under different accounts?
For example, logs are sent to East Asia and China (East Asia and China are physically isolated)
Microsoft Powerpoint – Insert > Icons / 3D Models not working
I tried to insert icons in MS PowerPoint, but after clicking Icons, a local folder pops up instead of the usual icon suggestions. It turns out MS Word has the same problem.
Here’s what’s happening:
What I’ve Tried That Didn’t Work:
Updated Microsoft 365.
Checked for add-ons (I don’t have any).
Uninstalled and reinstalled Microsoft 365, then restarted.
Contacted Microsoft support, where remote access was used to repeat the above steps again.
How do I port my alerts from the old Purview portal to the new?
Hi all,
We have alerting in the old Purview portal for things like admin submissions, forwarding rule creations etc. The admin submissions also show in Defender XDR, but the rule creations alert only shows in Purview. These were set up before my arrival at the company I work for.
However, these are not (as far as I can see) available in the new Purview portal. I have checked under DLP and Compliance alerts in the new portal, and the only alerts I get are for a change to the Compliance Score.
How do I replicate and/or aggregate these alerts in the new portal?
SharePoint File creation or Modification trigger issue in power automate
We have a flow that kicks off once a file is added to a folder. For each folder, we have a separate flow that triggers once a file is moved into its respective folder. The issue is: when we copy the file, it works fine, but when another flow moves the file into the folder, the automated flow does not kick off automatically.
We are using the SharePoint “When a file is created or modified (properties only)” trigger.
Can’t access cloud files
I am very tech illiterate. When I click on the folder for my files, this message has come up every time for the last several weeks, and I don’t know how to solve it. Please help. I need a PowerPoint that is in my cloud files, but I have no way of accessing it.
dbgeng.h: GetTotalNumberThreads Returns Incorrect Thread Count (According to DAC)
When writing custom extensions for WinDbg to analyse user-mode crash dumps (using the IDebugSystemObjects4 interface provided by dbgeng.h), IDebugSystemObjects4->GetTotalNumberThreads returns a smaller number than Strike/SOS.
There is no documentation about where IDebugSystemObjects4 gets the thread count from — it just states:
The GetTotalNumberThreads method returns the total number of threads for all the processes in the current target, in addition to the largest number of threads in any process for the current target.
(Emphasis mine.)
Below is an example output from Windbg:
0:000> <Custom Windbg Extension Method Here>
Getting IDebugSymbols…
Getting IDebugSystemObjects…
Getting GetTotalNumberThreads…
Total Threads: 581
Largest Process: 581
Frames: 32
0:000> !threads
ThreadCount: 587
UnstartedThread: 0
BackgroundThread: 26
PendingThread: 0
DeadThread: 47
Hosted Runtime: no
Note that the IDebugSystemObjects4->GetTotalNumberThreads method is returning 581 threads, but Strike/SOS is returning 587.
For what it’s worth, Strike/SOS gets this data from the DAC — which is, presumably, a different source than IDebugSystemObjects4 is getting the thread count from.
Is this a bug in dbgeng.h? If not, is it because IDebugSystemObjects4 ignores finaliser threads, whereas those are not ignored when committed to the DAC?
Also, sorry if this is the wrong place for this, I was thinking Windows SDK-related questions/bugs would fall under “Windows OS Platform”.
Backup folders from a 2nd computer that don’t sync with 1st
Hi, I have a work laptop that only I have access to (I’m a sole trader) and that is connected to OneDrive for online backup. I now have a 2nd computer used for games and music creation. I want to back up a few of those folders to the same OneDrive account, but I don’t want them to sync with my work files. Likewise, I don’t want any files from my work laptop syncing with my 2nd PC. Is this possible? Both PCs use the same account login. Thanks for your help, Nigel.
Plotting Multiple Lines Changing 1 Variable
Hello,
I am trying to plot multiple lines on the same graph when only one factor changes; however, the points for each line are all different.
For example:
Is it possible to plot the function y = ax^2 + 1 for, say, 10 different values of ‘a’ on the same chart, without having to type separate calculations for each value? An example of what I don’t want to have to do is shown below:
Example of graph I would like to produce without having to copy and paste the calculations multiple times.
For some context, I am trying to see how changing different parameters affects the process, one at a time, and then see which parameter is most critical. Some of the parameters are simple to change as they do not affect many of the calcs. However, some of them affect about 20 different values before producing the final graph, so I am hoping there is some sort of function such as “plot ‘y’ for each value of ‘a'” to reduce the clutter on my worksheet.
I hope someone understands what I mean.
Thank you in advance.
Stock Allocation Against Available stock & Credit Limit
Hi,
Please go through my file. With reference to another post (https://techcommunity.microsoft.com/t5/excel/to-allocate-stock-from-closing-stock-by-formula/m-p/510624) I made my sheet, but I need to add a few more pieces of logic, so I added an extra column, Available Credit Limit. The formula in the Allocation Qty column should first check whether the customer has a credit limit available. If so, it should find the earliest contract number by contract date. A single contract number may contain multiple items; the formula should first check the lowest-priced item, its required qty, and the available qty. If everything is covered, it should allocate only the maximum qty that stays within the credit limit. If the credit limit would be exceeded, it should allocate the maximum qty (fractions ignored) that still fits under the credit limit. Allocation should then continue in this way: with multiple items under one contract number, the formula checks the lowest-priced item first, then the next higher-priced item, and so on. In my file I have already applied a custom sort in this way:
If required, please suggest any other better way to achieve my desired result. Please note that the Available Credit Limit should be considered only when it is greater than zero.
Apart from that, another column will be added (which I have yet to add) for manual allocation, since in special cases manual allocation will also be considered. In that case, please suggest how this can be done: if I add a qty manually, a formula in another column should restrict the allocated qty when it crosses the available credit limit.
It would be an immense help to get a solution for these.
Thanks in advance
Regards
Removing guest account from Teams personal
Hi,
A long time ago, I was added as a guest to my now-previous employer’s Teams, via my personal Teams account (using an @live.com account). However, the employer is no longer trading and I cannot get them to remove me as a guest.
Until recently this was not a problem, as it was just another option hidden away in a menu that I could ignore. However, in the last week Teams has defaulted to using this guest account, so I have been trying to delete it from my account. All of the solutions I can see rely on accessing https://myapps.microsoft.com, but as my personal account is not valid for that site, I am hitting a brick wall.
Please could you advise how I can either fully remove this guest account from my account, or at least stop it being selected as the default app for Teams? It’s frustrating that in order to use my own Teams now, I have to have two separate Teams windows open!
Trouble accessing my Hotmail account
Hello, does anyone know how to access a Hotmail account?
I entered the password wrong and it locked me out. I tried requesting a code but didn’t receive one, and I tried recovering the account, but that doesn’t work either. I need to get my email back.
File Explorer bug – Windows 11
When I click on the three-dot option, the menu always opens upward. I can’t see all the options when File Explorer is in full-screen mode. How can I fix this problem?
Sincerely
Windows 11 User
Routing options for VMs from Private Subnets
Virtual Machines deployed in Azure used to have Default Outbound Internet Access. To date, this has allowed virtual machines to connect to resources on the internet (including public endpoints of Azure PaaS services) even if the Cloud administrators have not explicitly configured any outbound connectivity method for their virtual machines. Implicitly, Azure’s network stack performed source network address translation (SNAT) with a public IP address provided by the platform.
As part of their commitment to increase security on customer workloads, Microsoft will deprecate Default Outbound Internet Access on 30 September 2025 (see the official announcement here). As of this day, customers will need to configure an outbound connectivity method explicitly if their virtual machine requires internet connectivity. Customers will have the following options:
Attach a dedicated Public IP Address to a virtual machine.
Deploy a NAT gateway and attach it to the VNet subnet the VM is connected to.
Deploy a Load Balancer and configure Load Balancer Outbound Rules for virtual machines.
Deploy a Network Virtual Appliance (NVA) to perform SNAT, such as Azure Firewall, and route internet-bound traffic to the NVA before egressing to the internet.
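To illustrate the NAT gateway option from the list above, a minimal Terraform sketch could look like the following. This is an illustration only, not part of the sample repository; all resource names and references are assumptions.

```hcl
# Sketch: a NAT gateway with a static public IP, attached to the VM subnet.
# All names/references below are hypothetical.
resource "azurerm_public_ip" "nat" {
  name                = "nat-gateway-ip"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  allocation_method   = "Static"
  sku                 = "Standard"
}

resource "azurerm_nat_gateway" "nat" {
  name                = "nat-gateway"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  sku_name            = "Standard"
}

# Associate the public IP with the NAT gateway ...
resource "azurerm_nat_gateway_public_ip_association" "nat_ip" {
  nat_gateway_id       = azurerm_nat_gateway.nat.id
  public_ip_address_id = azurerm_public_ip.nat.id
}

# ... and attach the NAT gateway to the VM subnet.
resource "azurerm_subnet_nat_gateway_association" "nat_subnet" {
  subnet_id      = azurerm_subnet.vm.id
  nat_gateway_id = azurerm_nat_gateway.nat.id
}
```

Once attached, all VMs in the subnet egress via the NAT gateway’s public IP without any per-VM configuration.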
Today, customers can start preparing their workloads for the updated platform behavior. By setting property defaultOutboundAccess to false during subnet creation, VMs deployed to this subnet will not benefit from the conventional default outbound access method, but adhere to the new conventions. Subnets with this configuration are also referred to as ‘private subnets’.
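For reference, a private subnet can be declared as in the following hedged sketch using the azurerm provider (the sample repository itself manages the subnet via azapi; names and address ranges here are assumptions):

```hcl
# Sketch: a subnet with default outbound access disabled (a "private subnet").
# Requires a recent azurerm provider version; attribute name per provider docs.
resource "azurerm_subnet" "vm" {
  name                            = "snet-vm"
  resource_group_name             = azurerm_resource_group.rg.name
  virtual_network_name            = azurerm_virtual_network.vnet.name
  address_prefixes                = ["10.3.1.0/24"]
  default_outbound_access_enabled = false # maps to defaultOutboundAccess=false
}
```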
In this article, we demonstrate (a) the limited connectivity of virtual machines deployed to private subnets. We also explore different options to (b) route traffic from these virtual machines to the public internet and (c) optimize the communication path for management and data plane operations targeting public endpoints of Azure services.
We will be focusing on connectivity with Azure services’ public endpoints. If you use Private Endpoints to expose services to your virtual network instead, routing in a private subnet remains unchanged.
The following architecture diagram presents the sample setup that we’ll use to explore the network traffic with different components.
The setup comprises following components:
A virtual network with a private subnet (i.e., a subnet that does not offer default outbound connectivity to the internet).
A virtual machine (running Ubuntu Linux) connected to this subnet.
A Key Vault including a stored secret as sample Azure PaaS service to explore Azure-bound connectivity.
A Log Analytics Workspace, storing audit information (i.e., metadata of all control and data plane operations) from that Key Vault.
A Bastion Host to securely connect to the virtual machine via SSH.
In the following sections, we will integrate following components to control the network traffic and explore the effects on communication flow:
An Azure Firewall as central Network Virtual Appliance to route outbound internet traffic.
An Azure Load Balancer with Outbound Rules to route Azure-bound traffic through the Azure Backbone (we’ll use the Azure Resource Manager in this example).
A Service Endpoint to route data plane operations directly to the service.
We’ll use following examples to illustrate the communication paths:
A simple HTTP call to ifconfig.io which (if successful) will return the public IP address used to make calls to public internet resources.
An invocation of the Azure CLI to get Key Vault metadata (az keyvault show), which (if successful) will return information about the Key Vault resource. This call to the Azure Resource Manager represents a management plane operation.
An invocation of the Azure CLI to get a secret stored in the Key Vault (az keyvault secret show), which (if successful) will return the secret. This represents a data plane operation.
A query against the Key Vault’s audit log (stored in the Log Analytics Workspace) to reveal the caller’s IP address for management and data plane operations.
The repository Azure-Samples/azure-networking_private-subnet-routing on GitHub contains all required Infrastructure as Code assets, allowing you to easily reproduce the setup and exploration in your own Azure subscription.
The implementation uses the following tools:
bash as Command Line Interpreter (consider using Windows Subsystem for Linux if you are on Windows)
git to clone the repository (find installation instructions here)
Azure Command-Line Interface to interact with deployed Azure components (find installation instructions here)
HashiCorp Terraform (find installation instructions here).
jq to parse and process JSON input (find installation instructions here)
Clone the Git repository Azure-Samples/azure-networking_private-subnet-routing and change into its repository root.
$ git clone https://github.com/Azure-Samples/azure-networking_private-subnet-routing.git
$ cd azure-networking_private-subnet-routing
Login to your Azure subscription via Azure CLI and ensure you have access to your subscription.
$ az account show
We kick off our journey by deploying the infrastructure depicted in the architecture diagram above; we’ll do that using the IaC (Infrastructure as Code) assets from the repository.
Open the file terraform.tfvars in your favorite code editor and adjust the values of the variables location (the region to which all resources will be deployed) and prefix (the shared name prefix for all resources). Also, don’t forget to provide login credentials for your VM by setting values for admin_username and admin_password.
Set environment variable ARM_SUBSCRIPTION_ID to point terraform to the subscription you are currently logged on to.
Using your CLI and Terraform, deploy the demo setup:
$ terraform init
Initializing the backend…
[…]
Terraform has been successfully initialized!
$ terraform apply
[…]
Do you want to perform these actions?
Terraform will perform the actions described above.
Only ‘yes’ will be accepted to approve.
Enter a value: yes
[…]
Apply complete!
[…]
☝️ In case you are not familiar with Terraform, this tutorial might be insightful for you.
Explore the deployed resources in the Azure Portal. Note that although the network infrastructure components shown in the architecture drawing above are already deployed, they are not yet configured for use from the Virtual Machine:
The Azure Firewall is deployed, but the route table attached to the VM subnet does not (yet) have any route directing traffic to the firewall (we will add this in Scenario 2).
The Azure Load Balancer is already deployed, but the virtual machine is not yet member of its backend pool (we will change this in Scenario 3).
Log in to the Virtual Machine using the Bastion Host.
azureuser@localhost’s password:
Welcome to Ubuntu 20.04.6 LTS (GNU/Linux 5.15.0-1064-azure x86_64)
azureuser@no-doa-demo-vm:~$
At this point, our virtual machine is deployed to a private subnet. As we do not have any outbound connectivity method set up, all calls to public internet resources as well as to the public endpoints of Azure resources will time out.
Test 1: Call to public internet
curl: (28) Connection timed out after 10004 milliseconds
Test 2: Call to Azure Resource Manager
curl: (28) Connection timed out after 10001 milliseconds
Test 3: Call to Azure Key Vault (data plane)
curl: (28) Connection timed out after 10002 milliseconds
Typically, customers deploy a central Firewall in their network to ensure all outbound traffic is consistently SNATed through the same public IPs and all outbound traffic is centrally controlled and governed. In this scenario, we therefore modify our existing route table and add a default route (i.e., for CIDR range 0.0.0.0/0), directing all outbound traffic to the private IP of our Azure Firewall.
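Such a default route could look like the following Terraform sketch. It reuses the resource name mentioned in the steps below, but the surrounding references (resource group, route table, firewall) are assumptions for illustration:

```hcl
# Sketch: send all outbound traffic (0.0.0.0/0) to the firewall's private IP.
resource "azurerm_route" "default-to-firewall" {
  name                   = "default-to-firewall"
  resource_group_name    = azurerm_resource_group.rg.name
  route_table_name       = azurerm_route_table.rt.name
  address_prefix         = "0.0.0.0/0"
  next_hop_type          = "VirtualAppliance"
  next_hop_in_ip_address = azurerm_firewall.fw.ip_configuration[0].private_ip_address
}
```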
Add Firewall and routes.
Browse to network.tf, uncomment the definition of azurerm_route.default-to-firewall.
Update your deployment.
Terraform will perform the following actions:
# azurerm_route.default-to-firewall will be created
[…]
Test 1: Call to public internet, revealing that outbound calls are routed through the firewall’s public IP.
4.184.163.38
Now that you have access to the internet, install Azure CLI.
Login to Azure with the Virtual machine’s managed identity.
Test 2: Call to Azure Resource Manager (you might need to change the Key Vault name if you changed the prefix in your terraform.tfvars)
Location Name ResourceGroup
—————— ————– ————–
germanywestcentral no-doa-demo-kv no-doa-demo-rg
Test 3: Call to Azure Key Vault (data plane)
ContentType Name Value
————- ——- ————-
message Hello, World!
Query Key Vault Audit Log.
☝️ The ingestion of audit logs into the Log Analytics Workspace might take some time. Please make sure to wait for up to ten minutes before starting to troubleshoot.
Get Application ID of VM’s system-assigned managed identity:
AppId for Principal ID f889ca69-d4b0-45a7-8300-0a88f957613e is: 8aa9503c-ee91-43ee-96c7-49dc005ebecc
Go to Log Analytics Workspace, run the following query.
AzureDiagnostics
| where identity_claim_appid_g == "[Replace with App ID!]"
| project TimeGenerated, Resource, OperationName, CallerIPAddress
| order by TimeGenerated desc
Alternatively, run the prepared script kv_query-audit.sh:
CallerIPAddress OperationName Resource TableName TimeGenerated
—————– ————— ————– ————- —————————-
4.184.163.38 VaultGet NO-DOA-DEMO-KV PrimaryResult 2024-06-14T08:25:29.4821689Z
4.184.163.38 SecretGet NO-DOA-DEMO-KV PrimaryResult 2024-06-14T08:26:07.0067419Z
🗣 Note that both calls to the Key Vault succeed as they are routed through the central Firewall; both requests (to Azure Management plane and Key Vault data plane) hit their endpoints with the Firewall’s public IP.
At this point all, internet and Azure-bound traffic to public endpoints is routed through the Azure Firewall. Although this allows you to centrally control all traffic, you might have good reasons to prefer to offload some communication from this component by routing traffic targeting a specific IP address range through a different component for SNAT — for example to optimize latency or reduce load on the firewall component for communication with well-known hosts.
☝️ As mentioned before, dedicated Public IP addresses, NAT Gateways and Azure Load Balancers are alternative options to configure SNAT for outbound access. You can find a detailed discussion about all options here.
In this scenario, we assume that we want network traffic to the Azure Management plane to bypass the central Firewall (we pick this service for demonstration purposes here). Instead, we want to use the SNAT capabilities of an Azure Load Balancer with outbound rules to route traffic to the public endpoints of the Azure Resource Manager. We can achieve this by adding a more-specific route to the route table, directing traffic targeting the corresponding service tag (which is like a symbolic name comprising a set of IP ranges) to a different destination.
The integration of outbound load balancing rules into the communication path works differently than integrating a Network Virtual Appliance: While we defined the latter by setting the NVA’s private IP address as next hop in our user defined route in scenario 1, we only integrate the Load Balancer implicitly into our network flow — by specifying Internet as next hop in our route table. (Essentially, next hop ‘Internet’ instructs Azure to use either (a) the Public IP attached to the VM’s NIC, (b) the Load Balancer associated to the VM’s NIC with the help of an outbound rule, or (c) a NAT Gateway attached to the subnet the VM’s NIC is connected to.) Therefore, we need to take two steps to send traffic through our Load Balancer:
Deploy a more-specific user-defined route for the respective service tag.
Add our VM’s NIC to a load balancer’s backend pool with an outbound load balancing rule.
In our scenario, we’ll do this for the service tag AzureResourceManager, which (amongst others) comprises the IP addresses for management.azure.com, the endpoint for the Azure control plane. This will affect the az keyvault show operation that retrieves the Key Vault’s metadata.
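A more-specific route for a service tag can be declared as in this hedged Terraform sketch (the route name matches the one referenced in the steps below; the other references are assumptions):

```hcl
# Sketch: route traffic for the AzureResourceManager service tag with next hop
# "Internet", so the platform picks the LB outbound rule (or other explicit
# method) for SNAT. Service tags are valid as a route's address prefix.
resource "azurerm_route" "azurerm_2_internet" {
  name                = "azurerm-to-internet"
  resource_group_name = azurerm_resource_group.rg.name
  route_table_name    = azurerm_route_table.rt.name
  address_prefix      = "AzureResourceManager"
  next_hop_type       = "Internet"
}
```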
Add more specific route for AzureResourceManager
Browse to network.tf, uncomment the definition of azurerm_route.azurerm_2_internet.
☝️ Note that this route specifies Internet (!) as next hop type for any communication targeting IPs of service tag AzureResourceManager.
Update your deployment.
Terraform will perform the following actions:
# azurerm_route.azurerm_2_internet will be created
[…]
(optional) Repeat test 1 (call to public internet) and test 3 (call to Key Vault’s data plane) to confirm behavior remains unchanged.
4.184.163.38
$ az keyvault secret show --vault-name "no-doa-demo-kv" --name message -o table
ContentType Name Value
————- ——- ————-
message Hello, World!
Test 2: Call to Azure Resource Manager
<urllib3.connection.HTTPSConnection object at 0x7f76628435d0>: Failed to establish a new connection: [Errno 101] Network is unreachable
🗣 While the call to the Key Vault data plane succeeds, the call to the Resource Manager fails: route azurerm_2_internet directs traffic to next hop type Internet. However, as the VM’s subnet is private, defining the outbound route is not sufficient; we still need to attach the VM’s NIC to the Load Balancer’s outbound rule.
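The missing attachment can be sketched in Terraform as follows. The association resource name matches the one referenced below; the outbound rule, pool, and IP configuration names are assumptions:

```hcl
# Sketch: put the VM NIC into the LB backend pool, and define an outbound
# rule so the Load Balancer SNATs egress traffic for pool members.
resource "azurerm_network_interface_backend_address_pool_association" "vm-nic_2_lb" {
  network_interface_id    = azurerm_network_interface.vm.id
  ip_configuration_name   = "internal" # assumption: NIC's IP configuration name
  backend_address_pool_id = azurerm_lb_backend_address_pool.pool.id
}

resource "azurerm_lb_outbound_rule" "outbound" {
  name                    = "outbound-snat"
  loadbalancer_id         = azurerm_lb.lb.id
  protocol                = "All"
  backend_address_pool_id = azurerm_lb_backend_address_pool.pool.id

  frontend_ip_configuration {
    name = "frontend" # assumption: LB frontend IP configuration name
  }
}
```

Note that the outbound rule alone does nothing for a NIC that is not in the referenced backend pool, which is why both resources are needed.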
Add virtual machine’s NIC to a backend pool linked with an outbound load balancing rule.
Browse to vm.tf, uncomment the definition of azurerm_network_interface_backend_address_pool_association.vm-nic_2_lb.
Update your deployment.
Terraform will perform the following actions:
# azurerm_network_interface_backend_address_pool_association.vm-nic_2_lb will be created
[…]
(optional) Repeat test 1 (call to public internet) and test 3 (call to Key Vault’s data plane) to confirm behavior remains unchanged.
Repeat Test 2: Call to Azure Resource Manager
Location Name ResourceGroup
—————— ————– —————
germanywestcentral no-doa-demo-kv no-doa-demo-rg
Re-run the prepared script kv_query-audit.sh:
CallerIPAddress OperationName Resource TableName TimeGenerated
—————– ————— ————– ————- —————————-
4.184.163.38 SecretGet NO-DOA-DEMO-KV PrimaryResult 2024-06-17T12:49:30.7165964Z
4.184.161.169 VaultGet NO-DOA-DEMO-KV PrimaryResult 2024-06-17T12:44:35.6599439Z
[…]
🗣 After adding the NIC to the backend of the outbound load balancer, routes with next hop type Internet will use the load balancer for outbound traffic. As we specified Internet as next hop type for AzureResourceManager, the VaultGet operation is now hitting the management plane from the load balancer’s public IP. (Communication with the Key Vault data plane remains unchanged; the SecretGet operation still hits the Key Vault from the Firewall’s Public IP.)
☝️ We explored this path for the platform-defined service tag AzureResourceManager. However, it’s equally possible to define this communication path for your self-defined IP addresses or ranges.
For communication with many platform services, Azure offers customers Virtual Network Service Endpoints to enable an optimized connectivity method that keeps traffic on its backbone network. Customers can use this, for example, to offload traffic to platform services from their network resources and increase security by enabling access restrictions on their resources.
☝️ Note that service endpoints are not specific to individual resource instances; they enable optimized connectivity for all deployments of this resource type (across different subscriptions, tenants and customers). You may want to deploy complementing firewall rules on your resource as an additional layer of security.
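Such a complementing firewall rule could, for example, be a network ACL on the Key Vault itself; a minimal Terraform sketch, assuming the subnet resource name used in this walkthrough (the Key Vault’s other required arguments are omitted for brevity):

```hcl
resource "azurerm_key_vault" "kv" {
  # ... name, location, resource_group_name, tenant_id, sku_name ...

  network_acls {
    default_action             = "Deny"
    bypass                     = "AzureServices"
    # Only accept data-plane traffic arriving via the VM subnet's
    # service endpoint; everything else is denied.
    virtual_network_subnet_ids = [azapi_resource.subnet-vm.id]
  }
}
```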
In this scenario, we’ll deploy a service endpoint for Azure Key Vault. We’ll see that the platform will no longer SNAT traffic to our Key Vault’s data plane but will instead use the VM’s private IP for communication.
Deploy Service Endpoint for Key Vault
Browse to network.tf, uncomment the definition of serviceEndpoints in azapi_resource.subnet-vm.
Update your deployment.
Terraform will perform the following actions:
# azapi_resource.subnet-vm will be updated in-place
[…]
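The uncommented serviceEndpoints definition corresponds roughly to this sketch (API version, subnet name, address prefix, and the VNet resource name are assumptions):

```hcl
# Enabling the Key Vault service endpoint keeps traffic to the
# Key Vault data plane on the Azure backbone; the VM's private IP
# is preserved as the source address.
resource "azapi_resource" "subnet-vm" {
  type      = "Microsoft.Network/virtualNetworks/subnets@2023-04-01"
  name      = "snet-vm"
  parent_id = azurerm_virtual_network.vnet.id
  body = {
    properties = {
      addressPrefix    = "10.0.1.0/24"
      serviceEndpoints = [
        {
          service = "Microsoft.KeyVault"
        }
      ]
    }
  }
}
```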
(optional) Repeat test 1 (call to public internet) and test 2 (call to Azure management plane) to confirm behavior remains unchanged.
Test 3: Call to Azure Key Vault (data plane)
ContentType Name Value
————- ——- ————-
message Hello, World!
Re-run the prepared script kv_query-audit.sh:
CallerIPAddress OperationName Resource TableName TimeGenerated
—————– ————— ————– ————- —————————-
10.3.1.4 SecretGet NO-DOA-DEMO-KV PrimaryResult 2024-06-17T14:21:28.3388285Z
[…]
🗣 After deploying a service endpoint, we see that traffic is hitting the Azure Key Vault data plane from the virtual machine’s private IP address, i.e., not passing through Firewall or outbound load balancer.
Finally, let’s explore how the different connectivity methods show up in the virtual machine’s NIC’s effective routes. Use one of the following options to show them:
In the Azure portal, browse to the VM’s NIC and open ‘Effective routes’ in the ‘Help’ section.
Alternatively, run the provided script (please note that the script will only show the first IP address prefix in the output for brevity).
Source FirstIpAddressPrefix NextHopType NextHopIpAddress
——– ———————- —————————– ——————
Default 10.0.0.0/16 VnetLocal
User 191.234.158.0/23 Internet
Default 0.0.0.0/0 Internet
Default 191.238.72.152/29 VirtualNetworkServiceEndpoint
User 0.0.0.0/0 VirtualAppliance 10.254.1.4
🗣 See that…
…system-defined route 191.238.72.152/29 to VirtualNetworkServiceEndpoint is sending traffic to Azure Key Vault data plane via service endpoint.
…user-defined route 191.234.158.0/23 to Internet is implicitly sending traffic to AzureResourceManager via Outbound Load Balancer (by defining Internet as next hop type for a VM attached to an outbound load balancer rule).
…user-defined route 0.0.0.0/0 to VirtualAppliance (10.254.1.4) is sending all remaining internet-bound traffic to the Firewall.
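If you prefer the CLI over the portal or the provided script, the same effective routes can be listed directly (requires an authenticated Azure CLI session; the NIC name below is a placeholder):

```shell
# Show the effective route table of the VM's NIC.
az network nic show-effective-route-table \
  --resource-group no-doa-demo-rg \
  --name vm-nic \
  --output table
```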
Microsoft Tech Community – Latest Blogs
How to create and save a note in a sub-folder of the “Notes” folder in an outlook.com account
There is a “Notes” folder in an outlook.com account. If we save a note from the browser while signed in to the outlook.com account, the note is saved in the “Notes” folder of that account. This “Notes” folder is visible on the web when you log in to the outlook.com account.
However, if we create a sub-folder under the “Notes” folder of an outlook.com account in a web browser, I am unable to create a note in that sub-folder from the browser. The sub-folder is also invisible to any apps connecting to outlook.com and using the mail, contacts, and notes services.
The only Microsoft support document available on this subject does not mention sub-folders of the “Notes” folder: https://support.microsoft.com/en-us/office/create-edit-and-view-sticky-notes-in-outlook-com-or-outlook-on-the-web-d9ee4b90-96cf-4d56-b622-ceed8e4f6b10
My question is: how do I create and save a note in a sub-folder of the “Notes” folder from an outlook.com account?
Email not being delivered to M365 and being forwarded back on-prem
Hi All
Hopefully I can explain the issue given it is a bit puzzling and a complex setup.
We have two environments/tenants: contosodev.com for dev work and contoso.com for production. We have an on-prem Exchange 2019 infrastructure for contosodev.com and an on-prem Exchange 2016 infrastructure for contoso.com.
Between the on-prem environments and M365 we have an Exchange 2019 edge server (not AD-synced) for each environment (dev and production) that takes email from on-prem and sends it to M365. The on-prem Exchange server has a send connector that routes email destined for contosodev.mail.onmicrosoft.com (dev) or contoso.mail.onmicrosoft.com (production) via these edge servers. The edge servers have a receive connector to take this email and a send connector to then send it on to M365. The connectors use certificate validation in each case.
The M365 tenants have an inbound connector to receive this email also with certificate validation. All connectors are setup the same apart from the obvious difference in domains. The tenants are authoritative for their respective domains. For dev contosodev.com & contosodev.mail.onmicrosoft.com (and also the default contosodev.onmicrosoft.com). For production contoso.com & contoso.mail.onmicrosoft.com (and also the default contoso.onmicrosoft.com).
The tenants have outbound connectors to route all email via the on-premises Exchange servers. So any email in M365 for, say, contosodev.com (dev) or contoso.com (production) gets routed to the outbound connector and hence to on-prem Exchange, where it can either be delivered locally or, if it is an external address, routed out via our gateway infrastructure.
Each tenant has a test mailbox (shared). The mailbox has been migrated from the on-prem infrastructure to M365. Each has email addresses of contosodev.mail.onmicrosoft.com & contosodev.com for the dev environment and contoso.mail.onmicrosoft.com & contoso.com for production.
Now the puzzling bit.
In the dev environment, if I send an email from an on-prem mailbox to email address removed for privacy reasons, Exchange on-prem sees this as a remote mailbox and sends the email via the edge servers. It arrives in M365, sees it has a mail.onmicrosoft.com address and is delivered successfully to the test mailbox.
In the production environment, if I send an email from an on-prem mailbox to email address removed for privacy reasons, Exchange on-prem sees this as a remote mailbox and sends the email via the edge servers. It arrives in M365, which sees it has a mail.onmicrosoft.com address, but instead of delivering it to the mailbox, M365 routes it back to on-prem using the contoso.com address, which causes a mail loop that eventually fails.
The message trace seems to indicate the email is being forwarded; however, there are no forwarding rules or inbox rules. I’ve even tried another completely blank mailbox that I migrated to M365, with the same result.
Now I’ve been over the config of both environments and looked at various articles regarding attribution, but cannot see any difference between what I’ve set up in the dev environment vs the production one.
I just can’t work out why, when the mailbox obviously exists in M365 with all the correct email addresses, the email just doesn’t get delivered. M365 seems to ignore that and decides to send it out via the outbound connector. The other weird part is that if I disable that outbound connector in M365, the email is delivered to the mailbox correctly!
Anyway, lengthy I know, but hopefully I have explained the infrastructure; if anyone has any ideas where I might check next, it would be greatly appreciated.
Cheers
Peter
SMB over QUIC Client Access Control is inconsistent
We have set up SMB over QUIC on some Windows Server 2025 file servers, and generally it works well. Unfortunately, it is not secure by design, since there is no MFA or conditional access in the picture. Securing the connections therefore falls to its Client Access Control feature, where you can allow or deny connections using client certificates.
We implemented this in multiple environments (different domains), and although it works initially, it then starts failing with no changes having been made. The behavior is always the same across the various domains once it starts failing. First, the connection shows as successful:
The SMB connection was successfully established.
Endpoint Name: FILES
Transport: Quic
Server socket address: x.x.x.x:443
Client socket address: x.x.x.x:8205
Connection ID: 0xB1D0039C01XXXXXX
Mutual authentication: Yes
Access control: Yes
Then, less than a second later, it fails:
Quic connection shutdown.
Error: Mutual authentication failed.
Reason: Server close the connection.
Endpoint Name: FILES
Transport Name: DeviceSmbQUICIpv4_0006_x.x.x.x
Guidance:
This event indicates that the winquic connection is shutting down by the server. This event commonly occurs because the server certificate mapping is not created. It may also be caused by the server failed to configure the winquic connections.
Intune policy conflict
Hello All,
We currently manage settings locally on our workgroup devices via gpedit. We are now planning to enroll these devices in Intune and configure the same settings using device configuration policies.
How will conflicts between local and Intune policies be handled? Is it possible to enforce Intune policies across all devices in this scenario?
Your guidance would be appreciated. Thanks!
ProjectWebApp REST API: OAuth Authentication – ClientCredentials Grant type only
Hi All,
We are trying to read the project information (flow: SAP <—> SharePoint Online) using the Project Web App (PWA) REST APIs: https://domainName/sites/pwa/_api/ProjectData/Projects
For this we have registered the app in the Azure portal and granted the required permissions.
We are able to read the project information using the OAuth grant type Authorization Code, where the user interacts with the authorization server to get the access token. But with the Client Credentials grant type we are NOT able to read it.
Since this is a server-to-server interaction, we do not want user interaction via the authorization code flow.
How can we access the PWA APIs with the OAuth Client Credentials grant type?
When we try to access the API with the Client Credentials grant type after getting the access token, we get the error below.
Kindly provide documents that can guide us how to achieve the above requirement.
API : https://domainName/sites/pwa/_api/ProjectData/Projects
We have the below details after registering the app:
Token URL to fetch the access token: https://login.microsoftonline.com/TenantID/tokens/OAuth/2
grant_type = client_credentials
client_id = ClientID@TenantID
client_secret = xxx
resource = resource/<domain-name>@TenantID
scope = https://localhost
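For reference, the token request described above corresponds to this curl sketch against the legacy ACS-style endpoint quoted in the post (all angle-bracket values are placeholders for the details listed above):

```shell
# Client-credentials token request as described in the post.
curl -X POST "https://login.microsoftonline.com/<TenantID>/tokens/OAuth/2" \
  -d "grant_type=client_credentials" \
  -d "client_id=<ClientID>@<TenantID>" \
  -d "client_secret=<ClientSecret>" \
  -d "resource=<ResourceID>/<domain-name>@<TenantID>"
```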
Token fetch: Successful
API call: Error
Thanks
Sai
Bringing back the beloved 5952049.37 0.0966 – Bringing the beloved for marriage – * Qatar * Bahrain * UAE * Oman *
MECM Supersedence issues with resigned MSIX packages
Without access to a timestamp server solution, we have previously needed to re-sign a selection of MSIX packages deployed to Windows 10 devices with a new code signing certificate. That time has come around again, and thinking ahead as we prepare for Windows 11, we know the list of packages will increase as we convert more App-V packages, so a review of the MSIX toolkit has been started.
A couple of observations, which I’ll also place on the GitHub issues list; perhaps someone can provide feedback and insights. @jvintzel
Firstly, the packages get updated with the new certificate and publisher information, as checked using the Get-AppxPackage cmdlet; however, the PublisherId is not substituted in the MSIX package name as per the MPT default naming standard when saving MSIX packages, so these have needed to be renamed with another PowerShell script afterwards.
Secondly, we have seen some unexpected results when deploying the re-signed packages via MECM, targeting the previously signed application with supersedence rules to swap the packages around without an application version update.
There are scenarios where, with the uninstall tick box selected, the previous package is removed but MECM doesn’t proceed to attempt to install the new replacement package.
Or the uninstall selection is not made, and the update results in the new package passing detection but leaving only the previous package actually installed.
Whilst I know some of this could be added to an MECM community thread, any shared experiences with using the toolkit would be welcome.