Month: October 2024
Pivot/Transpose Issue
Hello all,
On the left is my current table. I want it to look like the table on the right. (I manually created a sample for reference, but my table has over 7000 rows so I can’t manually create the whole thing).
As you can see I have multiple entries per day. When I transpose it in PQ, it shortens it to one entry per day, basically erasing all the other entries.
Help would be greatly appreciated.
Thank you!
New Outlook is missing too many features
Trying to update an email account to new IMAP and SMTP details, you find you can't, because that feature does not exist; the only option is to delete the account and start over.
However, the account has emails that exist only on this machine in Outlook, and we need to keep them. That wouldn't have been a problem in the old Outlook, where we could just export them… but not with the new Outlook; now we can't export files because, once again, that feature does not exist.
Why would they release such an incomplete product and tell people it’s new and improved?
Ended up downloading a different mail app for the new email address just so we could keep the old emails, such a farce.
Vancouver Power BI and Modern Excel User Group Meet-up
Title: Writing Powerful Custom Functions with LET and LAMBDA
Session Outline:
Microsoft expert Diarmuid Early will be showing us how to write powerful custom functions using LET and LAMBDA. He'll start things off with a brief introduction to the basics of writing custom functions with LAMBDA, including how to write, store, and share them, as well as a comparison with VBA user-defined functions. We'll then move on to writing more complex formulas with LET – and how to do so while maintaining auditability. Then, he'll show us how to combine LET and LAMBDA to make complex multi-step custom functions.
How to remove personal retention tags from all users mailboxes programmatically
My customer is using MRM and has been for a long time in Office 365. Recently they noticed that employees were putting personal tags on some items, which is against their policy. The way to prevent users from using personal tags is documented here.
Users can use all personal retention tags regardless of retention policy in Exchange Online – Microsoft Support
Once that is complete, they'd like to remove any personal tags on email items or folders in all user mailboxes. I am told you need an EWS script to do this. Does anyone have an example script?
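One possible starting point, offered as a rough, untested sketch rather than a supported solution: with the EWS Managed API you can look for the MAPI retention properties on items (PR_POLICY_TAG 0x3019 and PR_RETENTION_PERIOD 0x301A, per the MRM documentation) and strip them. The mailbox, folder scope, page size, and authentication below are placeholder assumptions, and folder-level tags would need the same treatment applied to Folder objects.

// Rough sketch (EWS Managed API): strip explicit/personal retention tags from Inbox items.
// Authentication and impersonation for the target mailbox are assumed to be configured separately.
using System;
using Microsoft.Exchange.WebServices.Data;

class RemovePersonalRetentionTags
{
    // MAPI properties that carry an explicit (personal) retention tag on an item.
    static readonly ExtendedPropertyDefinition PolicyTag =
        new ExtendedPropertyDefinition(0x3019, MapiPropertyType.Binary);
    static readonly ExtendedPropertyDefinition RetentionPeriod =
        new ExtendedPropertyDefinition(0x301A, MapiPropertyType.Integer);

    static void Main()
    {
        var service = new ExchangeService(ExchangeVersion.Exchange2013_SP1)
        {
            Url = new Uri("https://outlook.office365.com/EWS/Exchange.asmx")
            // service.Credentials / ImpersonatedUserId would be set here (placeholders).
        };

        // Page through the folder, asking only for the retention properties.
        var view = new ItemView(100)
        {
            PropertySet = new PropertySet(BasePropertySet.IdOnly, PolicyTag, RetentionPeriod)
        };

        FindItemsResults<Item> page;
        do
        {
            page = service.FindItems(WellKnownFolderName.Inbox, view);
            foreach (Item item in page.Items)
            {
                // Items without an explicit tag will not return these properties.
                bool changed = item.RemoveExtendedProperty(PolicyTag);
                changed |= item.RemoveExtendedProperty(RetentionPeriod);
                if (changed)
                {
                    item.Update(ConflictResolutionMode.AlwaysOverwrite);
                    Console.WriteLine($"Cleared personal tag on item {item.Id.UniqueId}");
                }
            }
            view.Offset += page.Items.Count;
        } while (page.MoreAvailable);
    }
}

The same loop would need to be repeated per mailbox (for example, driven by a list of SMTP addresses with impersonation) and per folder if folder-level personal tags are also in scope.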
Vancouver Power BI and Modern Excel User Group Meet-up
Title: Power BI Performance Troubleshooting
Session Outline:
First, we’ll learn about some of the newest features and updates in Power BI with Joseph Yeates. Next, Daniel Szepesi and Jocelyn Porter will present on troubleshooting performance in Power BI, from Desktop to Log Analytics.
Troubleshooting performance on a Software as a Service (SaaS) solution can be very difficult. You no longer own the machines, so how do you get to the performance data? Power BI at its core is Analysis Services Tabular, so some of the troubleshooting tools might be familiar to SQL performance people, but it also has built-in capabilities to help isolate performance issues.
This session will include a discussion of Power BI architecture and then will be a demo-heavy walkthrough of using the Power BI Performance Analyzer, SQL Profiler and Azure Log Analytics to help isolate and resolve performance issues with Power BI.
How can I highlight the lowest value across multiple Excel sheets?
Dear Community,
I am desperately trying my luck here in this community. Can anyone help me highlight the lowest value across multiple Excel sheets?
Thank you in advance for your help!
cannot cast nvarchar to numeric error
I'm getting the following error no matter what I do. I've cast all the columns to nvarchar in the SQL query, and the input parameter is text in SSRS and nvarchar(500) in the stored procedure. If I run the query/stored procedure directly it works fine, but when I run it from SSRS I get this error every time.
What I have tried:
1. Cast all the SELECT columns to nvarchar.
2. The SSRS report parameter is "text" and the stored procedure parameters are numeric but cast as nvarchar.
3. Some of the data has NULL rows – would that cause this error?
SQL (obfuscated of course):
ALTER PROCEDURE SSRS_GetSomething
    @ListOfIds as nvarchar(500)
AS
BEGIN
    SET NOCOUNT ON;
    -- Check the string for multiple premiseid's and parse the string to convert from premise to numeric
    DECLARE @result as nvarchar(500)

    select
        TRY_CAST(id as nvarchar) as 'ID1',
        TRY_CAST(id2 as nvarchar) as 'ID2',
        streetaddress as 'StreetAddress',
        TRY_CAST([PossibleYear] as nvarchar) as 'Year',
        TRY_CAST(test2 as nvarchar) as 'test2',
        TRY_CAST(year3 as nvarchar) as 'Year3',
        TRY_CAST(numeric8 as nvarchar) as 'NumericColumn',
        TRY_CAST(numeric9 as nvarchar) as 'numeric9',
        TRY_CAST(numeric10 as nvarchar) as 'Numeric10',
        TRY_CAST(numeric11 as nvarchar) as 'numeric11',
        TRY_CAST(numeric12 as nvarchar) as 'Numeric12'
    from table1 b
    order by b.numeric1 desc, b.numeric2, b.numeric3;
END
GO
Error:
An error has occurred during report processing. (rsProcessingAborted)
Cannot read the next data row for the dataset datasource1. (rsErrorReadingNextDataRow)
Error converting data type nvarchar to numeric.
Azure Virtual Desktop – Problems Attempting Locked down Exam Environment
I am attempting to build an Azure Virtual Desktop environment that will be used for exams. The physical workstations are thin clients that auto-direct to RDP and access our AVD environment. Using a Windows 11 + Office 365 image, I created a basic image and added shortcuts to the desktop (Word, Excel, PowerPoint, and a web shortcut).
I want a non-persistent desktop that does not pull any information from a user's account: no OneDrive, no Teams, no roaming files. It will pass through the user's credentials to obtain an Office license and nothing more. All internet will be blocked, and only a couple of sites will be accessible.
All lockdown settings are handled through GPO (GPO settings file). Not shown in the file is the Teams setting, where I used a fake tenant ID so it was impossible to load/link MS Teams. To block OneDrive as indicated in the file, I did use the correct tenant ID.
Does it work? Yes. But it has issues loading up.
When you attempt to sign in, it takes exactly 10 minutes on 'Preparing Windows'.
Once this completes, there is a minimized DOS window in the bottom left corner. Restoring this window shows that it is attempting to run UsrLogon.cmd but hits the error "The command prompt has been disabled by your administrator. Press any key to continue…"; pressing a key then takes you to the desktop.
Lastly, a web link is provided for the student to upload their completed exam to our CRM. When you open this link, it comes up with "Welcome-new-device is blocked", and you have to use Task Manager to kill Edge and restart it before it will properly load the CRM site.
While everything does work as intended once you get to this point, it's an unnecessary delay, especially when attempting to take an exam.
I need help identifying these three problems:
– Why is it taking 10 minutes to prepare Windows, and how do I make it stop doing that?
– Why is UsrLogon.cmd failing to run (and since I don't appear to need it, how do I stop it completely)?
– What GPO did I miss with Edge so that the 'welcome new device' prompt does not appear and it just loads the website I want?
Create a dax to filter only last four hours and lookup a column
Hi Team,
I have below sample data:
We need to create a DAX measure using the logic below:
1. Filter the HeatSense_Device table to the last four hours (use the 'UpdatedOn' column's max and min datetime for this).
2. Then check whether 'UserMode' contains 'heat' in all of those rows.
3. If both conditions are true, return "Heating on Last 4 Hours"; otherwise return blank.
We tried to build the logic below, but it's not giving the correct output:
Last 4 Hours in Heat =
var latestdatetime = max(HeatSense_Device[UpdatedOn])
var earliestdatetime = latestdatetime - 4/24
var last4hours =
    CALCULATETABLE(HeatSense_Device, HeatSense_Device[UpdatedOn] <= latestdatetime && HeatSense_Device[UpdatedOn] >= earliestdatetime)
var maximummode = MAXX(last4hours, HeatSense_Device[UserMode])
var minimummode = MINX(last4hours, HeatSense_Device[UserMode])
return
if(and(maximummode = "heat", minimummode = "heat"), "Heating on Last 4 Hours", BLANK())
Output:
When I bring the UpdatedOn column into the visual to test the above DAX, it returns the whole 24 hours instead of the last four hours. I want to show only the last four hours where there is heat in the table.
Could you please help us create a DAX measure, or modify the one above?
PFA file here Heatsense – Copy.pbix
Please advise!
Thanks in advance!
Editing in classic mode just reloads the page
Hi all – Strange problem. I have inherited a system where a user fills out a form and, on submission, a subsite is generated (Classic Mode). About a month ago, "Edit Page" ceased working. All it does is reload the page. "Stop Editing" is greyed out and no web parts are selectable. I viewed it in Developer Mode (Edge) and there are no errors or JS bugs. Has Microsoft deprecated features in Classic Mode that would affect editing web parts in Classic?
Is it safe to convert MBOX to PDF ?
Yes, it is generally safe to convert MBOX files to PDF as long as you use trusted software or tools. MBOX files contain email messages, and converting them to PDF simply transforms the message content into a readable, printable format. However, here are some safety considerations:
1. Use Trusted Software: Ensure that the tool or software you are using for the conversion is from a reliable source to avoid malware or other security risks.
2. Check Privacy Policies: If you are using an online tool, make sure it has a good reputation for privacy and does not store your emails or data.
3. Backup Your Data: Before converting, it’s a good practice to back up your MBOX files to prevent any data loss.
4. Be Cautious of Attachments: If your emails contain attachments, ensure that sensitive attachments are handled properly in the conversion process.
If you are mindful of these points, the conversion process should be safe.
SharePoint file locking alternative
Good evening!
I manage a fleet of laptops using Microsoft Intune.
Our Microsoft 365 Business Premium subscriptions include SharePoint Online and I'd like to use it as a file server.
Everything works fine except for one specific point: locking files during editing.
With a Windows Server file share, when someone wants to access a file being edited, a pop-up appears, inviting the user to open it in read-only mode, notify the editor, etc., which avoids the risk of conflict.
With SharePoint, it's more complicated.
Here’s the scenario:
1. SharePoint files are synchronized on the PCs via OneDrive, with the Intune configuration "Configure team site libraries to sync automatically", "Use OneDrive Files On-Demand", and "Convert synced team site files to online-only files" enabled.
2. User 1 double-clicks one of the files in the OneDrive folder, and a local copy is downloaded.
3. User 1 edits the file in the locally installed Word while the Internet connection is lost.
4. At the same time, User 2 edits the file and saves it. His version ends up on SharePoint.
5. A few minutes later, once User 1's connection is re-established, his version is saved to SharePoint.
User 2’s changes will go completely unnoticed, creating confusion and wasted time.
The check-out/check-in system offered by SharePoint has the merit of existing, but it is very restrictive for users.
Is there a way of ensuring that SharePoint files appear in the OneDrive folder in Windows File Explorer, but that modifications can only be made online, to avoid this type of problem?
I thought that the “Convert synced team site files to online-only files” option would meet this need, but it seems that this is not the case.
Thanks for your help!
Jo
DLP Exception for “Permission Controlled” Not Working (Microsoft Purview | RMS Template | Encrypt)
Hello,
We are in the process of moving some of our mail-flow / transport rules over to Microsoft Purview.
We don’t want the DLP policy to apply when people click their “Encrypt” or “Do not Forward” buttons (RMS templates; OME encryption.)
Putting “Permission Controlled” in the exceptions group should theoretically let the emails go through. The exception we have for when people put “Encrypt” in the subject line works (we have a mail-flow rule that encrypts those emails.)
But actually clicking “Options” > “Set permissions on this item” > “Encrypt” doesn’t remove the policy tip on an email draft, and people are unable to send the emails.
Can someone verify this rule is constructed properly? If it is, we may have to reach out to Microsoft Support. Thank you so much for your time and help!
SharePoint List need multi-line column with timestamp and allow updates
I’m setting up a SharePoint list to support attestation by owners for different business virtual assets. This will support a review to ensure we aren’t retaining stale virtual assets.
When the owners review SharePoint list rows where they are listed as an owner, I’d like them to provide a note detailing their rationale for why the asset should be retained but have that note timestamped so the next time they perform a review they can see the previous rationale and the date they added that note.
What is the best way to do this? Calculated, multi-line column? If so, what formula would work? Or would this need to be run through Power Automate and have a flow that updates any input to that column to concatenate it with a date?
Dynamic Multi-Cloud Networking: Configuring a BGP-Enabled VPN Between Azure and AWS
Introduction
In my previous blog post, I demonstrated how to set up a basic VPN connection between Azure and AWS. This updated guide builds on that foundation by incorporating BGP (Border Gateway Protocol) to enable dynamic routing and redundancy across two VPN tunnels. By following this configuration, you can establish a more resilient multi-cloud VPN connection that supports automatic route exchanges between Azure VPN Gateway and AWS Virtual Private Gateway over IPsec tunnels. This approach ensures reliable connectivity and helps simplify network management between Azure and AWS environments.
Step 1: Set Up Your Azure Environment
1.1. Create a Resource Group
Go to Azure Portal > Resource groups > Create.
Select your subscription and region, and give the resource group a name like RG-AzureAWSVPN-BGP.
1.2. Create a Virtual Network (VNet) and Subnet
In the Azure Portal, go to Virtual Networks > Create.
Name the VNet AzureVNetBGP and specify an address space of 172.16.0.0/16.
Under Subnets, create a subnet named Subnet-AzureVPN with the address range 172.16.1.0/24.
Add a GatewaySubnet with a /27 address block (e.g., 172.16.254.0/27) for the VPN gateway.
1.3. Set Up the Azure VPN Gateway
Go to +Create a resource, search for Virtual Network Gateway, and select Create.
Fill in the details:
Name: AzureVPNGatewayBGP
Gateway Type: VPN
SKU: VpnGw1 (or higher for redundancy/performance).
Public IP Address: Create a new one and name it AzureVPNGatewayPublicIP.
Enable BGP: Yes.
ASN: Use an Autonomous System Number (ASN) for Azure, e.g., 65010.
Azure APIPA BGP IP Address: Use 169.254.21.2 for the first tunnel with AWS and 169.254.22.2 for the second tunnel with AWS.
Note: For this example, we'll create an Active-Standby setup, so Active-Active Mode will not be enabled. If you want to change from active-standby to active-active later, follow this guide: Configure active-active VPN gateways: Azure portal – Azure VPN Gateway | Microsoft Learn.
Step 2: Set Up Your AWS Environment with BGP
2.1. Create a VPC and Subnet in AWS
In the AWS Console, go to VPC > Create VPC.
Use an address space (e.g., 10.0.0.0/16) for the AWS VPC.
Under Subnets, create a subnet named Subnet-AWSVPN with the address space 10.0.1.0/24.
2.2. Create an AWS Virtual Private Gateway (VGW)
In the AWS VPC Console, go to Virtual Private Gateway and create a new VGW named AWS-VPN-VGW-BGP.
Attach the VGW to the VPC.
During the VGW creation, set the ASN for AWS. AWS will assign one by default (e.g., 64512), but you can customize this if needed.
2.3. Set Up a Customer Gateway (CGW)
In the AWS Console, go to Customer Gateway, and create a CGW using the public IP address of the Azure VPN Gateway (obtained during the Azure VPN Gateway setup). Name it Azure-CGW-BGP.
Set the BGP ASN for the Customer Gateway to 65010, the same ASN as set in Azure.
2.4. Create the Site-to-Site VPN Connection with BGP setting
In AWS Console, go to Site-to-Site VPN Connections > Create VPN Connection.
Select the Virtual Private Gateway created earlier.
Select the Customer Gateway created earlier.
Routing Options: Select Dynamic (requires BGP) to enable dynamic routing with BGP.
Tunnels: AWS will automatically create two tunnels for redundancy.
2.4.1. Tunnel Configuration – Optional Settings
Under the Optional Tunnel Settings, configure the Inside IPv4 CIDR for each tunnel:
For Tunnel 1: Set the Inside IPv4 CIDR to 169.254.21.0/30.
For Tunnel 2: Set the Inside IPv4 CIDR to 169.254.22.0/30.
This ensures proper BGP peering between Azure and AWS for both tunnels.
2.4.3. Download the VPN Configuration File
After the VPN is set up, download the configuration file.
Select Generic for the platform and Vendor agnostic for the software.
Select IKEv2 for the IKE version.
Step 3: Finish the Azure Side Configuration with the two tunnels and BGP setup
3.1. Create Two Local Network Gateways
To support two tunnels, you will need to create two Local Network Gateways on Azure, one for each tunnel.
In the Azure Portal, go to Local Network Gateway > Create.
Local Network Gateway 1 (for the first tunnel):
Name: AWSLocalNetworkGatewayBGP-Tunnel1
Public IP Address: Enter the public IP for the first AWS VPN tunnel (from the configuration file).
BGP Settings: Go to the Advanced Tab, select Yes for Configure BGP Settings, then:
ASN: Set to 64512 (AWS ASN).
BGP Peer IP Address: Enter 169.254.21.1 (AWS BGP peer IP for the first tunnel).
Note: You do not need to specify an address space when creating the Local Network Gateway. Only the public IP and BGP settings are required.
Local Network Gateway 2 (for the second tunnel):
Name: AWSLocalNetworkGatewayBGP-Tunnel2
Public IP Address: Enter the public IP for the second AWS VPN tunnel.
BGP Settings: Go to the Advanced Tab, select Yes for Configure BGP Settings, then:
ASN: Set to 64512 (AWS ASN).
BGP Peer IP Address: Enter 169.254.22.1 (AWS BGP peer IP for the second tunnel).
Note: Enter the ASN first, followed by the BGP Peer IP Address in this order.
3.2. Create the VPN Connection for Both Tunnels
Go to Azure Portal > Virtual Network Gateway > Connections > + Add.
For the first tunnel:
Name: AzureAWSVPNConnectionBGP-Tunnel1
Connection Type: Site-to-site (IPsec).
Virtual Network Gateway: Select AzureVPNGatewayBGP.
Local Network Gateway: Select AWSLocalNetworkGatewayBGP-Tunnel1.
Shared Key (PSK): Use the shared key from the AWS VPN configuration file for tunnel 1.
IKE Protocol: Ensure that IKEv2 is selected.
Enable BGP: Mark the checkbox to enable.
After selecting Enable BGP, check the box for Enable Custom BGP Addresses and set:
Primary Custom BGP Address: Enter 169.254.21.2 for Tunnel 1.
IPSec/IKE Policy: Set this to Default.
Use Policy-Based Traffic Selector: Set to Disabled.
DPD (Dead Peer Detection) Timeout: Set the Timeout in Seconds to 45 seconds.
Connection Mode: Leave this as Default (no need to change to initiator-only or responder-only).
In about three minutes, you can verify that the VPN connection is established.
Repeat the same process for the second tunnel:
Name: AzureAWSVPNConnectionBGP-Tunnel2
Local Network Gateway: Select AWSLocalNetworkGatewayBGP-Tunnel2.
Shared Key (PSK): Use the shared key from the AWS VPN configuration file for tunnel 2.
Enable BGP: Mark the checkbox to enable.
Check the box for Enable Custom BGP Addresses and set:
Primary Custom BGP Address: Enter 169.254.22.2 for Tunnel 2.
In about three minutes, you can verify that the VPN connection is established.
3.3. Ensure the VPN is established
From Site-to-Site VPN connections on AWS, go to Tunnel details and check that Tunnel 1 is UP.
From the Azure side, check that the status of the VPN connections is Connected.
Under BGP peers, you can see the BGP peers and the BGP-learned routes.
Step 4: Add Routes and Configure Security
4.1. AWS Route Table Configuration
In AWS Console, go to Route Tables and select the route table for your AWS VPC.
Navigate to Route Propagation and select Edit Route Propagation.
Enable route propagation to ensure that BGP dynamically propagates the routes between AWS and Azure, removing the need for manual static route entries. Almost immediately after enabling route propagation, you will be able to see the new routes.
4.2. Add an Internet Gateway (IGW)
Note: An Internet Gateway (IGW) is required for the EC2 instance to be accessible via its public IP address. Without the IGW, the EC2 instance won't be reachable over the public internet, preventing you from logging into the EC2 instance using its public IP address. This is the sole purpose of deploying the IGW.
4.3. Set Security Group and NSG Rules
AWS Security Group: Ensure that the Security Group for the AWS EC2 instance allows ICMP (ping), SSH, and any other necessary protocols.
Azure NSG (Network Security Group): Ensure that the NSG attached to the Azure VM’s NIC allows inbound traffic from AWS for the required protocols, such as ICMP and SSH.
Step 5: Test Connectivity Between Azure and AWS VMs
To test connectivity between Azure and AWS, first deploy a virtual machine in the appropriate subnet on each cloud provider—an EC2 instance on AWS and a VM on Azure. Once both machines are running, connect to each VM using their respective public IP addresses. After logging in, use the private IP addresses of both instances to run a ping test and verify private network connectivity between them.
If you decided not to create the IGW that makes the EC2 instance accessible over the internet, you can simply log in to the Azure VM using its public IP address and test connectivity one way by running the ping command against the private IP of the EC2 instance.
5.1. Ensure ICMP Traffic Is Allowed
Both the AWS Security Group and the Azure NSG (Network Security Group) should allow ICMP (ping) traffic for proper testing of connectivity between the virtual machines.
5.2. Test Connectivity with ping
From the Azure VM, ping the AWS VM using its private IP.
From the AWS VM, ping the Azure VM using its private IP.
Ensure that the pings are successful in both directions to verify that the VPN tunnels are functioning correctly.
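If you would rather script the check than run ping by hand, the following is a minimal sketch using System.Net.NetworkInformation from the Azure VM; the target address is an assumed placeholder for the EC2 instance's private IP in Subnet-AWSVPN, and ICMP must still be allowed as described above.

csharp
// Minimal sketch: cross-cloud reachability check over the VPN tunnels from the Azure VM.
using System;
using System.Net.NetworkInformation;

class CrossCloudPingTest
{
    static void Main()
    {
        const string awsPrivateIp = "10.0.1.10"; // placeholder: EC2 private IP in Subnet-AWSVPN

        using var ping = new Ping();
        for (int i = 0; i < 4; i++)
        {
            PingReply reply = ping.Send(awsPrivateIp, timeout: 2000);
            Console.WriteLine(reply.Status == IPStatus.Success
                ? $"Reply from {reply.Address}: time={reply.RoundtripTime} ms"
                : $"Request failed: {reply.Status}");
        }
    }
}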
Troubleshooting Common Issues
BGP Not Establishing
Double-check that the BGP peer IP addresses and ASNs are correctly configured for both tunnels.
Ensure that BGP is enabled on both the Azure Virtual Network Gateway and the AWS VPN connection.
Ensure that route propagation is enabled on AWS, allowing dynamic routes to be exchanged through BGP.
No Inbound Traffic on Azure VPN Gateway
Verify that AWS route propagation is enabled and that the Azure routes are correctly learned from AWS.
Check the NSG rules on Azure to ensure inbound traffic is allowed from AWS.
Dead Peer Detection (DPD) Issues
Mismatched DPD settings may cause tunnels to drop. Ensure that both Azure and AWS have consistent DPD configurations. The recommended DPD Timeout for both Azure and AWS is 45 seconds.
Tunnel Status Showing as Down
If one or both tunnels show as down, ensure that the IKEv2/IPsec policies match on both sides. Double-check the encryption algorithms, hashing functions, and Diffie-Hellman group settings between Azure and AWS for Phase 1 and Phase 2.
Restart the VPN connection on both Azure and AWS to re-initiate the tunnels.
Conclusion
By following this guide, you’ve successfully set up a VPN connection between Azure and AWS using BGP with two tunnels for redundancy. This configuration ensures robust and reliable connectivity between the two clouds, with dynamic route propagation handled by BGP. The use of managed services minimizes operational overhead and simplifies management.
For more advanced configurations, such as custom IPsec/IKE policies, enabling failover, or using BGP with Active-Active Mode, refer to the official documentation for Azure VPN Gateway.
Part 2 – Multichannel notification system using Azure Communication Services and Azure Functions
In the first part of this topic, we set up all the Azure resources (Azure Communication Services for Email, SMS, and WhatsApp for Business) and developed the Azure Functions code for the email trigger. In this second part, we will complete coding the remaining Azure Functions triggers and then deploy the multichannel notification system to Azure Functions, testing the Email, SMS, and WhatsApp triggers with OpenAI GPTs. Let's get started!
Prerequisite
To follow this tutorial, ensure you have completed the first part of this topic.
Coding the SMSTrigger
Enhancing the SMSTrigger Azure Function from the default template involves a series of steps. These steps will transform the basic Function into one that can send SMS messages using Azure Communication Services. Below is a guide to get you from the default HTTP triggered function to the finished SMSTrigger.
Step 1: Set Up the Function Template
Follow the instructions for setting up the function template from the Email section and name the trigger as ‘SMSTrigger’ or any other string you prefer.
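If you are unsure what the scaffolded function looks like before any edits, the default isolated-worker HTTP trigger is roughly the following; this is an approximation (the exact boilerplate varies slightly between template versions), and the remaining steps modify this class.

csharp
// Approximate shape of the default isolated-worker HTTP trigger generated by the template;
// the later steps replace this boilerplate with the SMS-sending logic.
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

namespace ACSGPTFunctions
{
    public class SMSTrigger
    {
        private readonly ILogger<SMSTrigger> _logger;

        public SMSTrigger(ILogger<SMSTrigger> logger)
        {
            _logger = logger;
        }

        [Function("SMSTrigger")]
        public IActionResult Run([HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequest req)
        {
            _logger.LogInformation("C# HTTP trigger function processed a request.");
            return new OkObjectResult("Welcome to Azure Functions!");
        }
    }
}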
Step 2: Add Azure Communication Services SMS Reference
Add a using directive for Azure.Communication.Sms, then create a property in the SMSTrigger class to hold an instance of SmsClient and a property to hold the sender phone number.
csharp
private readonly SmsClient _smsClient;
private string? sender = Environment.GetEnvironmentVariable("SENDER_PHONE_NUMBER");
Step 3: Read Configuration and Initialize SmsClient
In the constructor of the SMSTrigger class, read the Azure Communication Services connection string from the environment variables using the Environment.GetEnvironmentVariable() method and initialize the SmsClient instance.
Be sure to check if the connection string is null, and if so, throw an exception to indicate that the environment variable is missing:
csharp
string? connectionString = Environment.GetEnvironmentVariable("COMMUNICATION_SERVICES_CONNECTION_STRING");
if (connectionString is null)
{
    throw new InvalidOperationException("COMMUNICATION_SERVICES_CONNECTION_STRING environment variable is not set.");
}
_smsClient = new SmsClient(connectionString);
Step 4: Define the Request Model
Create a request model class within the SMSTrigger class called SmsRequest. This model should contain properties for the message text and the phone number to which the message will be sent.
csharp
public class SmsRequest
{
    public string Message { get; set; } = string.Empty;
    public string PhoneNumber { get; set; } = string.Empty;
}
Step 5: Parse the Request Body
Change the Run function to be async as we will perform asynchronous operations. Use a StreamReader to read the request body as a string and deserialize it into an SmsRequest object using JsonSerializer.
csharp
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
If the request body fails to deserialize into SmsRequest, return a BadRequestResult:
csharp
SmsRequest? data = JsonSerializer.Deserialize<SmsRequest>(requestBody, new JsonSerializerOptions() {
PropertyNamingPolicy = JsonNamingPolicy.CamelCase
});
if (data is null)
{
return new BadRequestResult();
}
Step 6: Define the Sender and Send an SMS
Retrieve the sender’s phone number from the environment variables with Environment.GetEnvironmentVariable(). Then, attempt to send the SMS with a try-catch block, handling any RequestFailedException that may occur and logging the relevant information:
csharp
try
{
    _logger.LogInformation("Sending SMS...");
    SmsSendResult smsSendResult = await _smsClient.SendAsync(
        sender,
        data.PhoneNumber,
        data.Message
    );
    _logger.LogInformation($"SMS Sent. Successful = {smsSendResult.Successful}");
    _logger.LogInformation($"SMS operation id = {smsSendResult.MessageId}");
}
catch (RequestFailedException ex)
{
    _logger.LogInformation($"SMS send operation failed with error code: {ex.ErrorCode}, message: {ex.Message}");
    // Return an appropriate error response if needed
}
Step 7: Return a Success Response
If sending the SMS is successful, return an OkObjectResult to the caller indicating that the SMS has been sent.
csharp
return new OkObjectResult("SMS sent successfully!");
Final Code
The final SMSTrigger Azure Function, with the steps implemented, should look as follows:
csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;
using Azure;
using Azure.Communication.Sms;
using System.Text.Json;
using System.IO;
using System.Threading.Tasks;

namespace ACSGPTFunctions
{
    public class SMSTrigger
    {
        private readonly ILogger<SMSTrigger> _logger;
        private readonly SmsClient _smsClient;
        private string? sender = Environment.GetEnvironmentVariable("SENDER_PHONE_NUMBER");

        public SMSTrigger(ILogger<SMSTrigger> logger)
        {
            _logger = logger;
            string? connectionString = Environment.GetEnvironmentVariable("COMMUNICATION_SERVICES_CONNECTION_STRING");
            if (connectionString is null)
            {
                throw new InvalidOperationException("COMMUNICATION_SERVICES_CONNECTION_STRING environment variable is not set.");
            }
            _smsClient = new SmsClient(connectionString);
        }

        public class SmsRequest
        {
            public string Message { get; set; } = string.Empty;
            public string PhoneNumber { get; set; } = string.Empty;
        }

        [Function("SMSTrigger")]
        public async Task<IActionResult> Run([HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequest req)
        {
            _logger.LogInformation("Processing request.");

            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            SmsRequest? data = JsonSerializer.Deserialize<SmsRequest>(requestBody, new JsonSerializerOptions() {
                PropertyNamingPolicy = JsonNamingPolicy.CamelCase
            });
            if (data is null)
            {
                return new BadRequestResult();
            }

            try
            {
                _logger.LogInformation("Sending SMS...");
                SmsSendResult smsSendResult = await _smsClient.SendAsync(
                    sender,
                    data.PhoneNumber,
                    data.Message
                );
                _logger.LogInformation($"SMS Sent. Successful = {smsSendResult.Successful}");
                _logger.LogInformation($"SMS operation id = {smsSendResult.MessageId}");
            }
            catch (RequestFailedException ex)
            {
                _logger.LogInformation($"SMS send operation failed with error code: {ex.ErrorCode}, message: {ex.Message}");
                // Return an appropriate error response if needed
            }

            return new OkObjectResult("SMS sent successfully!");
        }
    }
}
This completed SMSTrigger Azure Function can now facilitate SMS as part of your multichannel notification system.
Coding the WhatsAppTrigger
Creating a functional WhatsAppTrigger Azure Function involves iterating on the default HTTP-triggered function template provided by Azure Functions for C#. We will modify this template to integrate Azure Communication Services for sending WhatsApp messages via template messages. Follow the steps below to transform this template into a complete WhatsAppTrigger function:
Step 1: Set Up the Function Template
Follow the instructions in the first step for setting up the SMS trigger and name the function WhatsAppTrigger. Set the authorization level to anonymous or function, depending on your security preference.
Step 2: Reference the Azure Communication Services Messages Package
Ensure the Azure.Communication.Messages NuGet package is included in your project to enable messaging features needed for WhatsApp. Install the package with the following command in Visual Studio Code’s terminal:
bash
dotnet add package Azure.Communication.Messages
Add a using directive for Azure.Communication.Messages, then create a property in the WhatsAppTrigger class to hold an instance of NotificationMessagesClient and a property to hold the WhatsApp sender identifier.
csharp
private readonly NotificationMessagesClient _messagesClient;
private string? sender = Environment.GetEnvironmentVariable("WHATSAPP_NUMBER");
Step 3: Read Configuration and Initialize NotificationMessagesClient
Update the WhatsAppTrigger class constructor to read the Azure Communication Services connection string from environment variables using Environment.GetEnvironmentVariable() and initialize NotificationMessagesClient with this connection string:
csharp
string? connectionString = Environment.GetEnvironmentVariable("COMMUNICATION_SERVICES_CONNECTION_STRING");
if (connectionString is null)
{
    throw new InvalidOperationException("COMMUNICATION_SERVICES_CONNECTION_STRING environment variable is not set.");
}
_messagesClient = new NotificationMessagesClient(connectionString);
Step 4: Define the Request Model
Create a request model class named WhatsAppRequest within the WhatsAppTrigger class, containing properties for the destination phone number, template name, language, and template parameters:
csharp
public class WhatsAppRequest
{
    public string PhoneNumber { get; set; } = string.Empty;
    public string TemplateName { get; set; } = "appointment_reminder";
    public string TemplateLanguage { get; set; } = "en";
    public List<string> TemplateParameters { get; set; } = new List<string>();
}
Step 5: Parse the Request Body
Convert the Run function to be async to enable asynchronous work. Use StreamReader to read the request body and deserialize it to a WhatsAppRequest instance using System.Text.Json.JsonSerializer with JsonNamingPolicy.CamelCase.
csharp
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
Handle potential deserialization failure by returning BadRequestResult:
csharp
WhatsAppRequest? data = JsonSerializer.Deserialize<WhatsAppRequest>(requestBody, new JsonSerializerOptions() {
PropertyNamingPolicy = JsonNamingPolicy.CamelCase
});
if (data is null)
{
return new BadRequestResult();
}
Step 6: Prepare Template Message and Send WhatsApp Message
Modify the try-catch block to construct a SendMessageOptions object using MessageTemplateWhatsAppBindings and MessageTemplate, and then make a call to _messagesClient.SendMessageAsync(sendTemplateMessageOptions):
csharp
try
{
    _logger.LogInformation("Sending WhatsApp message...");
    List<string> recipientList = new List<string> { data.PhoneNumber };
    List<MessageTemplateText> values = data.TemplateParameters
        .Select((parameter, index) => new MessageTemplateText($"value{index + 1}", parameter))
        .ToList();
    MessageTemplateWhatsAppBindings bindings = new MessageTemplateWhatsAppBindings(
        body: values.Select(value => value.Name).ToList()
    );
    MessageTemplate template = new MessageTemplate(data.TemplateName, data.TemplateLanguage, values, bindings);
    SendMessageOptions sendTemplateMessageOptions = new SendMessageOptions(sender, recipientList, template);

    Response<SendMessageResult> templateResponse = await _messagesClient.SendMessageAsync(sendTemplateMessageOptions);
    _logger.LogInformation("WhatsApp message sent successfully!");
}
catch (RequestFailedException ex)
{
    _logger.LogError($"WhatsApp send operation failed with error code: {ex.ErrorCode}, message: {ex.Message}");
    return new ObjectResult(new { error = ex.Message }) { StatusCode = 500 };
}
Step 7: Return Success Response
After sending the WhatsApp message successfully, return an OkObjectResult stating “WhatsApp sent successfully!”.
csharp
return new OkObjectResult("WhatsApp sent successfully!");
Final Code
Following the described steps, the final WhatsAppTrigger Azure Function should look like this:
csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;
using Azure;
using Azure.Communication.Messages;
using System.Text.Json;
using System.IO;
using System.Threading.Tasks;
using System.Linq;
using System.Collections.Generic;

namespace ACSGPTFunctions
{
    public class WhatsAppTrigger
    {
        private readonly ILogger<WhatsAppTrigger> _logger;
        private readonly NotificationMessagesClient _messagesClient;
        private string? sender = Environment.GetEnvironmentVariable("WHATSAPP_NUMBER");

        public WhatsAppTrigger(ILogger<WhatsAppTrigger> logger)
        {
            _logger = logger;
            string? connectionString = Environment.GetEnvironmentVariable("COMMUNICATION_SERVICES_CONNECTION_STRING");
            if (connectionString is null)
            {
                throw new InvalidOperationException("COMMUNICATION_SERVICES_CONNECTION_STRING environment variable is not set.");
            }
            _messagesClient = new NotificationMessagesClient(connectionString);
        }

        public class WhatsAppRequest
        {
            public string PhoneNumber { get; set; } = string.Empty;
            public string TemplateName { get; set; } = "appointment_reminder";
            public string TemplateLanguage { get; set; } = "en";
            public List<string> TemplateParameters { get; set; } = new List<string>();
        }

        [Function("WhatsAppTrigger")]
        public async Task<IActionResult> Run([HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequest req)
        {
            _logger.LogInformation("Processing request.");

            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            WhatsAppRequest? data = JsonSerializer.Deserialize<WhatsAppRequest>(requestBody, new JsonSerializerOptions() {
                PropertyNamingPolicy = JsonNamingPolicy.CamelCase
            });
            if (data is null)
            {
                return new BadRequestResult();
            }

            var recipientList = new List<string> { data.PhoneNumber };
            var values = data.TemplateParameters
                .Select((parameter, index) => new MessageTemplateText($"value{index + 1}", parameter))
                .ToList();
            var bindings = new MessageTemplateWhatsAppBindings(
                body: values.Select(value => value.Name).ToList()
            );
            var template = new MessageTemplate(data.TemplateName, data.TemplateLanguage, values, bindings);
            var sendTemplateMessageOptions = new SendMessageOptions(sender, recipientList, template);

            try
            {
                Response<SendMessageResult> templateResponse = await _messagesClient.SendMessageAsync(sendTemplateMessageOptions);
                _logger.LogInformation("WhatsApp message sent successfully!");
            }
            catch (RequestFailedException ex)
            {
                _logger.LogError($"WhatsApp send operation failed with error code: {ex.ErrorCode}, message: {ex.Message}");
                return new ObjectResult(new { error = ex.Message }) { StatusCode = 500 };
            }

            return new OkObjectResult("WhatsApp sent successfully!");
        }
    }
}
The WhatsAppTrigger Azure Function is now ready to send WhatsApp template messages. Be sure to test it extensively and remember to handle any issues related to input validation and communicate with the Azure Communication Services API correctly.
Deployment and Testing
After developing the multichannel notification system using Azure Functions, the next step is to deploy and test the functions. This section will guide you through deploying your Azure Function to the cloud and testing the Email, SMS, and WhatsApp triggers.
Deploying the Azure Function
Deployment of your Azure Function can be done right from Visual Studio Code with the Azure Functions extension.
Publish the Function App: In Visual Studio Code, sign in to Azure if you haven’t already. In the Azure Functions extension tab, find the ‘Deploy to Function App…’ button and select it.
Choose Your Function App: You can either create a new Function App or deploy it to an existing one. If it’s the first time you are deploying, choose ‘Create New Function App in Azure…’.
Set the Configuration: Provide a unique name for your Function App, select a runtime stack (.NET Core in this case), choose the appropriate region, and confirm your selections.
Wait for Deployment: The deployment process will take a few minutes. Monitor the output window for completion status and any potential errors.
Set Up Application Settings
After deployment, you need to configure the application settings (environment variables) in Azure.
Open the Function App: Navigate to the Azure Portal, and find your Function App under ‘All Resources’ or by searching the name you provided.
Access Application Settings: In the Function App’s menu, go to ‘Configuration’ under the ‘Settings’ section.
Add the Settings: Click on ‘New application setting’ and add the key-value pairs for the environment variables specified in your local.settings.json: COMMUNICATION_SERVICES_CONNECTION_STRING, SENDER_EMAIL_ADDRESS, SENDER_PHONE_NUMBER, WHATSAPP_NUMBER, and so on:
json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "COMMUNICATION_SERVICES_CONNECTION_STRING": "<<connection string>>",
    "SENDER_PHONE_NUMBER": "<<phone number>>",
    "SENDER_EMAIL_ADDRESS": "<<email address>>",
    "WHATSAPP_NUMBER": "<<WhatsApp id>>"
  }
}
Save and Restart: After adding the required settings, make sure to save the configurations and restart the Function App to ensure the new settings take effect.
Alternatively, when the Function has finished deploying, you can click on ‘Upload settings’ to upload your settings from local.settings.json. Don’t forget to restart the Function App after uploading the settings.
Testing the Function
With the deployment complete and the environment configured, it’s time to verify that your function works as intended through each communication channel.
Testing Email Notifications
To test the EmailTrigger function:
Send an HTTP POST Request: Use a tool like Postman to send a POST request to the Function App’s URL suffixed with /api/EmailTrigger. The body should contain JSON with keys for subject, htmlContent, and recipient.
Verify Email Receipt: Check the recipient’s email inbox for the message. Ensure that the subject and content match what you sent through the POST request.
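If you prefer a scripted check over Postman, the following is a minimal sketch of the same request issued from a small C# console program. The Function App URL and function key are placeholders for your own deployment, the recipient address is an example, and the JSON keys mirror the recipient, subject, and htmlContent fields described above (camelCase, to match the deserializer).

csharp
// Minimal sketch: posting a test payload to the deployed EmailTrigger.
// "<function-app>" and "<key>" are placeholders; a key is needed when the trigger
// uses AuthorizationLevel.Function.
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class EmailTriggerSmokeTest
{
    static async Task Main()
    {
        using var client = new HttpClient();

        var payload = new
        {
            recipient = "someone@example.com",
            subject = "Test notification",
            htmlContent = "<p>Hello from the multichannel notifier!</p>"
        };

        var content = new StringContent(
            JsonSerializer.Serialize(payload), Encoding.UTF8, "application/json");

        HttpResponseMessage response = await client.PostAsync(
            "https://<function-app>.azurewebsites.net/api/EmailTrigger?code=<key>", content);

        Console.WriteLine($"Status: {response.StatusCode}");
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}

The same pattern works for the SMS and WhatsApp triggers; only the route and the payload shape change.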
Testing SMS Notifications
To test the SMSTrigger function:
Send an HTTP POST Request: Using Postman, send a POST request to the Function App’s URL with /api/SMSTrigger at the end. The body of your request should contain JSON with message and phoneNumber keys.
Check for SMS: Ensure that the specified phone number receives the SMS and the message content matches the request.
Testing WhatsApp Notifications
To test the WhatsAppTrigger function:
Send an HTTP POST Request: Use Postman again to POST to the Function URL, this time ending with /api/WhatsAppTrigger. Include a JSON body with keys for phoneNumber, templateName, templateLanguage, and templateParameters.
Confirm WhatsApp Message: Verify that the WhatsApp message reaches the intended recipient with correct template filling.
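The WhatsApp payload is the least obvious of the three because the template parameters are positional. The sketch below shows the request body shape the WhatsAppTrigger expects (camelCase keys, matching the deserializer); the phone number and parameter values are made-up examples, and the template name and parameter count must match a template approved for your WhatsApp Business account. Post it using the same HttpClient pattern shown in the email test.

csharp
// Sketch of a WhatsAppTrigger request body; values are illustrative only.
using System;
using System.Text.Json;

class WhatsAppPayloadExample
{
    static void Main()
    {
        var payload = new
        {
            phoneNumber = "+15551234567",                        // recipient in E.164 format (example)
            templateName = "appointment_reminder",
            templateLanguage = "en",
            templateParameters = new[] { "Contoso Clinic", "3:00 PM" } // fills value1, value2
        };

        // Serialize exactly as it would be sent in the POST body to /api/WhatsAppTrigger.
        Console.WriteLine(JsonSerializer.Serialize(payload,
            new JsonSerializerOptions { WriteIndented = true }));
    }
}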
Integrate with OpenAI GPTs
In the OpenAI GPTs editor, click ‘new GPT’ and ‘configure’. Name it “Email Sender” and set the description and instructions as shown below.
Help author short and delightful emails. Ask for details on the nature of the email content and include creative ideas for topics. Compose the email with placeholders for the sender’s name and receiver’s name. You do not need a full name. Share a draft of the email and ask for the sender’s name, and the receiver’s name and email address. Provide a draft of the final email and confirm the user is happy with it. When the user provides a recipient’s email address ask if it is correct before sending. Do not send the email until you provide a final draft and you have a confirmed recipient email address.
Add Actions and JSON Schema
Click ‘Create new action’ in your GPT configuration. Enter the following JSON:
json
{
  "openapi": "3.1.0",
  "info": {
    "title": "Send Message API",
    "description": "API for sending a message to a specified email address.",
    "version": "v1.0.0"
  },
  "servers": [
    {
      "url": "https://<<function-app-url>>.azurewebsites.net"
    }
  ],
  "paths": {
    "/api/emailtrigger": {
      "post": {
        "description": "Send a message to a given email address",
        "operationId": "SendMessage",
        "requestBody": {
          "required": true,
          "content": {
            "application/json": {
              "schema": {
                "type": "object",
                "properties": {
                  "recipient": {
                    "type": "string",
                    "format": "email",
                    "description": "Email address of the recipient"
                  },
                  "subject": {
                    "type": "string",
                    "description": "The message subject"
                  },
                  "htmlContent": {
                    "type": "string",
                    "description": "The body content of the email encoded as escaped HTML"
                  }
                },
                "required": [
                  "recipient",
                  "subject",
                  "htmlContent"
                ]
              }
            }
          }
        },
        "deprecated": false
      }
    }
  },
  "components": {
    "schemas": {}
  }
}
Leave Authentication to none, and Privacy Policy blank.
Test Your GPT
Finally, try out your GPT in the preview pane to see it in action!
By following these steps, you can easily integrate Azure Communication Services with OpenAI GPTs to send emails effortlessly.
Conclusion and Further Reading
We have successfully walked through the journey of building a serverless multichannel notification system using Azure Functions and Azure Communication Services. This system can send timely and personalized notifications across multiple channels, such as Email, SMS, and WhatsApp. In addition, we have explored how to enhance our system with sophisticated content generation capabilities using OpenAI GPTs.
The modular nature of the Azure Functions framework allows your application to scale and adapt easily to changing requirements and traffic demands. Meanwhile, Azure Communication Services enrich the user experience by meeting customers on their preferred platforms, contributing to a seamless and cohesive communication strategy.
As developers, there’s always room to expand our knowledge and add robust features to our applications. Here are some suggestions for further exploration and resources that can assist you in taking your applications to the next level:
Azure Communication Services AI samples: One stop shop for GitHub samples for AI-powered communication solutions.
Azure Functions Best Practices: Learn about best practices for designing and implementing Azure Functions by visiting Azure Functions best practices.
Azure Communication Services Documentation: Explore the full capabilities of Azure Communication Services including chat, phone numbers, video calling, and more on the Azure Communication Services documentation.
Security and Compliance in Azure: Understand the best practices for security and compliance in Azure applications, particularly relevant for handling sensitive user communication data. Check the Microsoft Azure Trust Center.
OpenAI GPT Documentation: For more insight into using and customizing OpenAI GPTs, refer to the OpenAI API documentation.
Azure AI Services: Azure offers a range of AI services beyond just communication. Explore Azure AI services for more advanced scenarios such as speech recognition, machine translation, and anomaly detection at Azure AI services documentation.
Handling Large-scale Data: To handle a large amount of data and improve the performance of communication systems, consider learning about Azure’s data-handling services like Azure Cosmos DB, Azure SQL Database, and Azure Cache for Redis. Start with the Azure Data storage documentation.
Monitoring and Diagnostics: Improve the reliability of your applications by implementing robust monitoring and diagnostics tools. Azure offers several tools such as Azure Monitor and Application Insights. Dive into Application Insights for Azure Functions.
Serverless Workflow Automation with Azure Logic Apps: Enhance your serverless applications using Azure Logic Apps to automate and simplify workflows. Learn more about Azure Logic Apps at What is Azure Logic Apps?.
Happy coding!
Microsoft Tech Community – Latest Blogs –Read More
Part 1 – Multichannel Notification System with Azure Communication Services and Azure Functions
In the interconnected digital era, it’s crucial for businesses and services to communicate effectively with their audience. A robust notification system that spans various communication channels can greatly enhance user engagement and satisfaction.
This blog post is part 1 of a two-part tutorial that provides a step-by-step guide to building such a multichannel notification system with Azure Functions and Azure Communication Services.
Leveraging serverless architecture and the reach of Azure Communication Services, your application can dynamically generate and send messages via SMS, Email, and WhatsApp. By incorporating OpenAI GPTs, the system can create content that is not only relevant and timely but personalized, making communication more impactful.
Example email
Architecture diagram
Here are some practical scenarios where a multichannel notification system is valuable:
Financial Alerts: Banks and financial services can send fraud alerts, transaction confirmations, and account balance updates.
Healthcare Reminders: Clinics and pharmacies can notify patients about appointment schedules, vaccinations, or prescription refills.
Security Verification: Services requiring secure authentication can utilize two-factor authentication prompts sent via SMS or WhatsApp.
Marketing and Promotions: Retailers can craft and distribute targeted marketing messages and promotions, driving customer engagement.
The foundation of this solution is Azure Functions, an event-driven platform for running scalable applications, and Azure Communication Services, which provides reliable Email, SMS, and WhatsApp messaging. To generate content, we use OpenAI GPTs, which enable the creation of sophisticated, context-aware text for use in notifications.
Now, let’s get started on your path to building a serverless messaging system on Azure.
Prerequisites
Before we dive into building our multichannel notification system with Azure Functions and Azure Communication Services, ensure that you have the following tools and accounts set up:
Azure Account: You need a Microsoft Azure account to create and manage resources on Azure. If you haven’t got one yet, you can create a free account here.
Visual Studio Code: We use Visual Studio Code (VS Code) as our Integrated Development Environment (IDE) for writing and debugging our code. Download and install it from here.
Azure Functions Extension for Visual Studio Code: This extension provides you with a seamless experience for developing Azure Functions. It can be installed from the VS Code marketplace.
C# Dev Kit: Since we write our Azure Functions in C#, this extension is necessary for getting C# support in VS Code. You can install it from the VS Code marketplace.
Azure CLI: The Azure Command-Line Interface (CLI) will be used to create and manage Azure resources from the command line. For installation instructions, visit the Azure CLI installation documentation page.
Postman: Although not strictly necessary, Postman is a handy tool for testing our HTTP-triggered Azure Functions without having to write a front-end application. You can download Postman from getpostman.com.
With the prerequisites in place, you’re ready to set up your development environment, which we will cover in the following section.
Creating Resources
To get started with building a multichannel notification system, we’ll need to create several resources within Azure. This section will walk you through setting up your Azure environment using the Azure CLI. Ensure that you have the Azure CLI installed on your machine and that you’re logged into your Azure account.
Azure Communication Services
Azure Communication Services (ACS) provides the backbone for our notification system, allowing us to send SMS, Email, and WhatsApp messages. Follow these steps to create resources for all three communication channels, or choose only the ones you need. First, log in to Azure:
az login
Create a Resource Group (if necessary): This groups all your resources in one collection.
az group create --name <YourResourceGroupName> --location <PreferredLocation>
Replace <YourResourceGroupName> with a name for your new resource group and <PreferredLocation> with the Azure region you prefer, such as eastus.
Create ACS Resource: This will be the main ACS resource where we manage communications capabilities.
az communication create --name <YourACSResourceName> --location Global --data-location UnitedStates --resource-group <YourResourceGroupName>
Replace <YourACSResourceName> with a unique name for your ACS resource and <YourResourceGroupName> with the name of your resource group.
After creating the resource, retrieve the connection string as you will need it to connect your Azure Function to ACS. Copy the one marked as primary.
az communication list-key --name <YourACSResourceName> --resource-group <YourResourceGroupName>
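If you prefer to capture just the primary connection string, a sketch like the following should work; it assumes the command output includes a primaryConnectionString field, so verify the field name against the output on your machine.
bash
az communication list-key \
  --name <YourACSResourceName> \
  --resource-group <YourResourceGroupName> \
  --query primaryConnectionString \
  --output tsv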
Azure Communication Services for Email
To set up Azure Communication Services Email, you’ll need to follow a few steps in the Azure Portal:
Create the Email Communications Service resource using the portal: Provision a new Email Communication Services resource in Azure portal using the instructions here. Make sure to select the same resource group as your ACS resource.
Configure the Email Communications Service: You will need to configure domains and sender authentication for email. Provision an Azure Managed Domain or set up your Custom Verified Domain depending on your use case.
Azure Communication Services for SMS
To send SMS messages, you will need to acquire a phone number through ACS. You will also have to submit a toll-free verification application to enable the number for sending and receiving SMS; this may take a couple of weeks. You can choose to skip SMS and continue the tutorial with Email and WhatsApp.
Get a Phone Number: Navigate to the Phone Numbers blade in your ACS resource on the Azure portal and follow the steps to get a phone number that’s capable of sending and receiving SMS.
Toll Free verification: Apply for verification of your number using Apply for toll-free verification.
Note the Phone Number: After acquiring a phone number, record it to use when sending SMS messages from your Azure Function.
WhatsApp for Business
Sending WhatsApp messages requires setting up a WhatsApp Business account.
Set up a WhatsApp Business Account: Follow the instructions for connecting a WhatsApp business account with Azure Communication Services.
Note the WhatsApp Configuration: Once set up, make a note of the necessary configuration details such as the phone number and WhatsApp Business API credentials, as they will be needed in your Azure Function.
By following these steps, you create the resources needed to build a multichannel notification system that can reach users through SMS, Email, and WhatsApp. Next, we set up your Azure Function and integrate these services into it.
Setting Up The Environment
With the prerequisites out of the way, let’s prepare our environment to develop our multichannel notification system using Azure Functions and Azure Communication Services.
Creating the Function App Project
Open Visual Studio Code and follow these steps to create a new Azure Functions project:
Click on the Azure icon in the Activity Bar on the side of Visual Studio Code to open the Azure Functions extension.
In the Azure Functions extension, click on the Create New Project icon, choose a directory for your project, and select Create New Project Here.
Choose the language for your project. We select C# for this tutorial.
Select the template for your first function. For this project, an HTTP-triggered function is a good starting point since we want to receive HTTP requests to send out notifications.
Provide a function name, such as EmailTrigger, and set the authorization level to anonymous or function, depending on your security preference.
After you have completed these steps, your Azure Functions project is set up with all the necessary files in the chosen directory.
Installing the Necessary Packages
Now it’s time to add the packages necessary for integrating Azure Communication Services:
Open the integrated terminal in Visual Studio Code by clicking on ‘Terminal’ in the top menu and then selecting ‘New Terminal’.
Add the Azure Communication Services packages to your project (Azure.Communication.Email is needed for the EmailTrigger function built later in this post):
bash
dotnet add package Azure.Communication.Email
dotnet add package Azure.Communication.Sms
dotnet add package Azure.Communication.Messages --prerelease
Setting Up Environment Variables
You should store configuration details like connection strings and phone numbers as environment variables instead of hardcoding them into your functions. To do so in Azure Functions, add them to the local.settings.json file, which is used for local development.
Edit the local.settings.json file to include your Azure Communication Services (ACS) connection string and phone numbers:
json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "COMMUNICATION_SERVICES_CONNECTION_STRING": "<acs_connection_string>",
    "SENDER_PHONE_NUMBER": "<acs_sms_phone_number>",
    "WHATSAPP_NUMBER": "<acs_whatsapp_number>",
    "SENDER_EMAIL_ADDRESS": "<acs_email_address>"
  }
}
Be sure to replace <acs_connection_string>, <acs_sms_phone_number>, <acs_whatsapp_number>, and <acs_email_address> with your actual Azure Communication Services connection string, SMS phone number, WhatsApp number, and sending email address.
Remember not to commit the local.settings.json file to source control if it contains sensitive information. Configure similar settings in the Application Settings for your Azure Function when you deploy to Azure.
Coding the EmailTrigger
Creating a functional EmailTrigger Azure Function involves starting from the default template provided by Azure Functions for C# and enhancing it with the necessary logic and services to handle email sending. In this section, we guide you through the steps to transform the default template into the finished EmailTrigger function.
Step 1: Set Up the Function Template
Start by using the default HTTP triggered function template provided by Visual Studio Code for creating an Azure Functions project. It will have the necessary usings, function name attribute, and a simple HTTP trigger that returns a welcome message. Select your project in the Workspace pane and click on the ‘Create Function’ button in the Azure Functions extension. Choose ‘HTTP trigger’ as the template and provide a name for the function, such as EmailTrigger. Set the authorization level to anonymous or function, depending on your security preference.
Step 2: Add Azure Communication Services Email Reference
Add a using directive for Azure.Communication.Email, then create a field in the EmailTrigger class to hold an instance of EmailClient and a field to hold the email sender address.
csharp
private readonly EmailClient _emailClient;
private string? sender = Environment.GetEnvironmentVariable("SENDER_EMAIL_ADDRESS");
Step 3: Read Configuration and Initialize EmailClient
Within the EmailTrigger class constructor, read the Azure Communication Services connection string from the environment variables using the Environment.GetEnvironmentVariable() method and initialize an instance of EmailClient with the connection string.
Make sure to handle the possibility that the environment variable is not set, and throw an appropriate exception if it is null.
csharp
string? connectionString = Environment.GetEnvironmentVariable("COMMUNICATION_SERVICES_CONNECTION_STRING");
if (connectionString is null)
{
    throw new InvalidOperationException("COMMUNICATION_SERVICES_CONNECTION_STRING environment variable is not set.");
}
_emailClient = new EmailClient(connectionString);
Step 4: Define the Request Model
Create a request model class EmailRequest inside the EmailTrigger class to represent the expected payload. This model includes the subject, HTML content, and recipient email address.
csharp
{
public string Subject { get; set; } = string.Empty;
public string HtmlContent { get; set; } = string.Empty;
public string Recipient { get; set; } = string.Empty;
}
Step 5: Parse the Request Body
Modify the Run function to be async, since we’ll be performing asynchronous operations, and have it accept the incoming HTTP request.
csharp
[Function("EmailTrigger")]
public async Task<IActionResult> Run([HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req)
Use StreamReader to read the request body and deserialize it into an EmailRequest object using System.Text.Json.JsonSerializer.
Handle the case where deserialization fails by returning a BadRequestResult.
csharp
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
EmailRequest? data = JsonSerializer.Deserialize<EmailRequest>(requestBody, new JsonSerializerOptions() {
    PropertyNamingPolicy = JsonNamingPolicy.CamelCase
});
if (data is null)
{
    return new BadRequestResult();
}
Step 6: Define the Sender and Send the Email
Instantiate a sender email address string to pass to the SendAsync method of the EmailClient instance. In your implementation, replace the static address ‘DoNotReply@effaa622-a003-4676-b27e-6b9e7a783581.azurecomm.net’ with your own configured sender address.
Use a try-catch block to send the email with the SendAsync method, and catch any RequestFailedException so that errors are logged.
csharp
EmailSendOperation emailSendOperation = await _emailClient.SendAsync(
    Azure.WaitUntil.Completed,
    sender,
    data.Recipient,
    data.Subject,
    data.HtmlContent
);
_logger.LogInformation($"Email Sent. Status = {emailSendOperation.Value.Status}");
_logger.LogInformation($"Email operation id = {emailSendOperation.Id}");
Step 7: Return a Success Response
Once the email send operation is completed, return an OkObjectResult indicating the success of the operation.
csharp
return new OkObjectResult("Email sent successfully!");
Final Code
After completing all the above steps, your EmailTrigger Azure Function should look as follows:
csharp
using System;
using System.IO;
using System.Text.Json;
using System.Threading.Tasks;
using Azure;
using Azure.Communication.Email;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

namespace ACSGPTFunctions
{
    public class EmailTrigger
    {
        private readonly ILogger<EmailTrigger> _logger;
        private readonly EmailClient _emailClient;

        public EmailTrigger(ILogger<EmailTrigger> logger)
        {
            _logger = logger;

            string? connectionString = Environment.GetEnvironmentVariable("COMMUNICATION_SERVICES_CONNECTION_STRING");
            if (connectionString is null)
            {
                throw new InvalidOperationException("COMMUNICATION_SERVICES_CONNECTION_STRING environment variable is not set.");
            }
            _emailClient = new EmailClient(connectionString);
        }

        public class EmailRequest
        {
            public string Subject { get; set; } = string.Empty;
            public string HtmlContent { get; set; } = string.Empty;
            public string Recipient { get; set; } = string.Empty;
        }

        [Function("EmailTrigger")]
        public async Task<IActionResult> Run([HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req)
        {
            _logger.LogInformation("Processing request.");

            // Read and deserialize the incoming JSON payload.
            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            EmailRequest? data = JsonSerializer.Deserialize<EmailRequest>(requestBody, new JsonSerializerOptions() {
                PropertyNamingPolicy = JsonNamingPolicy.CamelCase
            });

            if (data is null)
            {
                return new BadRequestResult();
            }

            // Replace with your configured sender address.
            var sender = "DoNotReply@effaa622-a003-4676-b27e-6b9e7a783581.azurecomm.net";

            try
            {
                _logger.LogInformation("Sending email...");
                EmailSendOperation emailSendOperation = await _emailClient.SendAsync(
                    Azure.WaitUntil.Completed,
                    sender,
                    data.Recipient,
                    data.Subject,
                    data.HtmlContent
                );
                _logger.LogInformation($"Email Sent. Status = {emailSendOperation.Value.Status}");
                _logger.LogInformation($"Email operation id = {emailSendOperation.Id}");
            }
            catch (RequestFailedException ex)
            {
                _logger.LogInformation($"Email send operation failed with error code: {ex.ErrorCode}, message: {ex.Message}");
                return new ObjectResult(new { error = ex.Message }) { StatusCode = 500 };
            }

            return new OkObjectResult("Email sent successfully!");
        }
    }
}
This completed EmailTrigger Azure Function is now ready to be part of a multichannel notification system, handling the email communication channel.
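If you want a quick local sanity check before deploying (for example, by running the Functions host locally and POSTing with Postman), the body below illustrates the shape the function expects; the recipient address is a placeholder, and the keys match the camel-cased EmailRequest model defined above.
json
{
  "recipient": "someone@example.com",
  "subject": "Test from the EmailTrigger function",
  "htmlContent": "<p>Hello from Azure Communication Services!</p>"
}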
Next Steps
Continue to the next part of this topic to further explore building, deploying and testing your intelligent app for a multichannel notification system.
Microsoft Tech Community – Latest Blogs –Read More
Boosting Power BI Performance with Azure Databricks through Automatic Aggregations
This post is authored in conjunction with Yatish Anand, Senior Solutions Architect at Databricks, and Andrey Mirskiy, Senior Specialist Solutions Architect at Databricks.
Figure 1. Power BI automatic aggregations overview, source.
Introduction
In today’s fast-paced data landscape, timely insights can make all the difference. Power BI’s Automatic Aggregations feature is a breakthrough, designed to push performance boundaries by delivering low-latency query results on large datasets. It harnesses AI-powered caching to streamline DirectQuery models, combining the best-in-class performance of Power BI on Azure Databricks with low-latency BI for modern reporting needs. Used with DirectQuery mode, Automatic Aggregations do not face data volume limitations and allow you to scale regardless of data size without compromising on BI performance.
This innovation makes it easy for Power BI users of all skill levels to tap into advanced performance without worrying about backend strain or complex data modeling. Imagine your reports updating in real-time, even with billions of records in play. With this approach, you can deliver actionable insights faster than ever, freeing up time to focus on your business, not your data infrastructure. In this blog, we will showcase the integration of Power BI Automatic Aggregation with Azure Databricks and how this integration will help improve the performance of your Power BI reports.
What Are Automatic Aggregations?
Automatic aggregations streamline the process of improving BI query performance by maintaining an in-memory cache of aggregated data. This means that a substantial portion of report queries can be served directly from this in-memory cache instead of relying on the backend data sources. Power BI automatically builds these aggregations using AI based on your query patterns and then intelligently decides which queries can be served from the in-memory cache and which are routed to the data source through DirectQuery, resulting in faster visualizations and reduced load on the backend systems.
Key Benefits of Automatic Aggregations
Faster Report Visualizations: Automatic aggregations optimize most report queries by caching aggregated query results in advance, including those generated when users interact with reports. Only outlier queries that cannot be resolved via the cache are directed to the data source.
Balanced Architecture: Compared to using pure DirectQuery mode, automatic aggregations enable a more balanced approach. Most frequently used queries are served from Power BI query in-memory cache, which reduces the processing load on data sources during peak reporting times, improves scalability, and decreases costs.
Simplified Setup: Model owners can easily activate automatic aggregations and schedule regular refreshes. Once the initial training and refresh are complete, the system autonomously develops an aggregation framework tailored to the specific queries and data patterns.
Configuring Automatic Aggregations
Setting up automatic aggregations is straightforward. Users can enable the feature in the model settings and schedule one or more refresh operations. It is essential to review comprehensive guidelines on how automatic aggregations function to ensure they are suitable for your specific environment.
Figure 2. Enabling automatic aggregations, source.
Once configured, Power BI will utilize a query log to track user interactions and optimize the aggregations cache over time. The training operation, which evaluates query patterns, occurs during the first scheduled refresh, allowing Power BI to adapt to changing usage patterns.
Requirements for Automatic Aggregations
Automatic aggregations are compatible with several Power BI plans, including:
Power BI Premium per capacity
Fabric F SKU capacity
Power BI Premium per user
Power BI Embedded models
Automatic aggregations are specifically designed for DirectQuery models, including composite models that utilize both import tables and DirectQuery connections.
Automatic Aggregation Walkthrough with Azure Databricks Integration
In this example we will showcase how to enable Automatic Aggregations on Power BI semantic models and train Automatic Aggregations in order to enhance the performance of reports using Azure Databricks as a data source.
Pre-requisites
Before you begin, ensure you have the following:
An Azure Databricks account, access to an Azure Databricks workspace, and a Databricks SQL Warehouse.
Power BI Desktop installed on your machine. The latest version is highly recommended.
Power BI workspace
DAX Studio or any other DAX parser tool
Step by Step Instructions
1. Create an initial Power BI semantic model based on the samples catalog, tpch schema. Add tables and relationships as shown in the screenshot below. The dimension tables customer and nation should be set to Dual storage mode. The fact tables orders and lineitem should be set to DirectQuery storage mode. Below is the data model for the sample report.
For best practices around Power BI storage modes, please refer to this Git repo.
2. Create a simple tabular report showing the count of orders and min shipment date, sum of discounts and sum of quantities. Also add the slicer with nation names, as shown below.
3. Now publish this report to a Power BI workspace.
4. As shown below, when we run the report, Power BI takes ~20 seconds to run the query. Below is a snapshot from the network trace:
The screenshot below also shows that the query hit the Databricks SQL Warehouse and read 38M records.
5. Enable Automatic Aggregations in the semantic model settings. You can set the Query coverage according to your needs; this setting controls the share of user queries analyzed and considered for performance improvement. A higher Query coverage percentage means more queries are analyzed and hence higher potential benefits, but aggregation training will take longer.
6. For Power BI to be able to create aggregations, we need to populate the Power BI query log, which stores the internal queries Power BI generates when users interact with a report. You can either open the deployed Power BI report and interact with it by selecting different nation names in the slicer, or open DAX Studio and run the sample DAX filter shown below.
Note that for better model training you need to set different values for the slicer, or for the filter in the DAX query, and run it multiple times.
TREATAS({"BRAZIL"}, 'nation'[n_name])
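A complete query built around this filter might look roughly like the following. This is only a sketch: the measure expressions and the l_shipdate, l_discount, and l_quantity column names assume the standard tpch schema and the report described in step 2, so adjust them to match your own model.
dax
EVALUATE
SUMMARIZECOLUMNS(
    'nation'[n_name],
    -- Filter the nation slicer value; change "BRAZIL" between runs to train the cache.
    TREATAS({"BRAZIL"}, 'nation'[n_name]),
    "Order Count", COUNTROWS('orders'),
    "Min Ship Date", MIN('lineitem'[l_shipdate]),
    "Total Discount", SUM('lineitem'[l_discount]),
    "Total Quantity", SUM('lineitem'[l_quantity])
)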
One guideline for populating the query log is that, before making a report available to users, the report publisher should open the report and try different slicer filters. In our scenario, as mentioned above, we populated the query log by selecting different nation names in the report slicer. This step helps end users get faster report rendering.
7. You can now start the model training manually or schedule it.
8. Once the model is trained, Power BI will have aggregated values in in-memory cache. The next time you interact with the report using similar patterns (dimensions, measures, filters) Power BI will leverage cached aggregations to serve the queries and will not send queries to Databricks SQL Warehouse. Hence, you may expect sub-second report refresh performance.
As shown in the screenshot below, after enabling Automatic Aggregations the report visual now renders in ~1.6 seconds, compared to ~20 seconds earlier. This is because the data is now read from the in-memory aggregations cache.
As also shown below, no SQL query is fired against the Databricks SQL Warehouse.
Monitoring and Managing Automatic Aggregations
Power BI continuously refines the in-memory aggregations cache through scheduled refreshes. Semantic model owners can choose to trigger training operations on demand if necessary. It’s also crucial to monitor the refresh history to ensure operations complete successfully and to identify any potential issues.
Power BI provides detailed refresh history logs that display the performance of each operation, enabling users to keep track of memory usage and other critical metrics.
Conclusion
In today’s data-driven world, the integration of Azure Databricks and Power BI Automatic Aggregations is a game-changer, delivering unparalleled performance for even the most demanding data environments. While Azure Databricks excels at processing multi-terabyte datasets, Automatic Aggregations use AI on your query patterns to intelligently cache aggregates, dramatically accelerating performance and reducing costs. This combination addresses the limitations of Import and Direct Lake modes, which struggle with very large data volumes, while enhancing the efficiency of DirectQuery models. As shown in this blog, with Automatic Aggregations on DirectQuery models you can now get sub-second report performance without constantly querying the underlying data source. With this innovative approach, you can focus on delivering lightning-fast BI reports at any scale rather than manually tuning your semantic model.
Microsoft Tech Community – Latest Blogs –Read More
File Size vs Size on Disk Info
Why would a JPG file from 24 years ago show that the Size is 365 KB but Size on Disk shows 128 MB? How can I see what makes up all that empty space? Is there any negative impact from a volume full of files like this?
IP-based redirection
Hello!
I am running a Linux VM on Azure (IaaS) which is providing an SFTP service to the Internet. Sadly, many customers are connecting to this service via public IP address (as opposed to FQDN).
I am migrating this service back to on-premises, through a firewall on a different public IP address.
Linux VM has public IP 1.1.1.1 right on its NIC.
Firewall’s IP is 2.2.2.2.
I want to redirect traffic to the on-premises firewall.
Is there an Azure service/resource that can take inbound connections to 1.1.1.1, then NAT the destination IP to 2.2.2.2 and then also NAT the source IP to 1.1.1.1 or another public IP (like 3.3.3.3) on that service/resource?
Thanks!