Tag Archives: microsoft
If identical values in one column return identical or different values in another column True/False
Hi all,
I have duplicate cell values in column 1 (PLU). Can someone please provide me with a formula to determine when duplicate values in column 1 (PLU) return non-identical cell values in another column, returning True/False?
I manually added the two green tables to show what I’m after.
Thanks!!! I’ve uploaded the workbook (sample book.xlsx) here if you need it.
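One way to approach this in Excel (assuming PLUs are in column A starting at row 2 and the compared values are in column B) is a formula such as `=COUNTIFS($A:$A,A2,$B:$B,"<>"&B2)>0` filled down the table: it returns TRUE when the same PLU appears elsewhere with a different value. The same logic can be sketched in Python; the column layout and sample data here are assumptions for illustration:

```python
# Sketch of the duplicate-consistency check: for each row, flag True when
# the row's PLU appears anywhere in the data with a *different* value in
# the second column.
from collections import defaultdict

def flag_inconsistent(rows):
    """Return one True/False per row."""
    values_per_plu = defaultdict(set)
    for plu, value in rows:
        values_per_plu[plu].add(value)
    # A PLU is inconsistent if it maps to more than one distinct value.
    return [len(values_per_plu[plu]) > 1 for plu, _ in rows]

rows = [("1001", "Apple"), ("1001", "Apple"),
        ("1002", "Pear"), ("1002", "Plum")]
print(flag_inconsistent(rows))  # [False, False, True, True]
```

Rows for PLU 1001 agree, so they come back False; PLU 1002 has two different values, so both of its rows come back True.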
Using Publisher for a yacht club photo directory
I am the secretary for a small yacht club on Lake Ontario. I want to produce a member photo directory showing each individual member and his/her family members along with their names. Members come and go so each year I (or my successor) can delete photos and insert new ones. I’m hoping that there is some way to link rows the same way that you can link columns. There will ultimately be many hundreds of photos. Can this be accomplished?
FAQ: Private offers with plans from different solutions + creating bundled offers
Q:
1) I understand it is possible to create a private OFFER in such a way that it includes PLANS from several different solutions (e.g., a publisher might want to include a plan from their price-management solution, a plan from their inventory-management solution, etc. in the same private OFFER). Are there boundaries/constraints/limitations on that ability?
2) Alternative approach is to create a “bundle” offer (SaaS) with no price, and then negotiate which capabilities will be included in the bundle – noting that the combination and pricing may be different for every deal. So customer A is presented a private plan that says “you’re getting the inventory-management and price-management stuff for $xxxxx” and customer B is presented a private plan on that same “bundle” offer that says “you’re buying the back-office management and the warehouse-management stuff for $yyyy” Does that approach violate Marketplace policy?
A: When a publisher starts to create a private offer they are presented with choices:
1. Direct to customer
2. To CSP
3. MPO to specific channel partner and end customer
Then they have another tree of choices – you can read about the use case for each in our documentation: ISV to customer private offers – Marketplace publisher | Microsoft Learn
We recommend option #1 above. ISVs should list individual transactable products in Marketplace and then use private offers to bundle multiple products/plans as desired by the customer. In this case, “bundling” means the ISV can add up to 10 offers/plans to the private offer.
FAQ: API for listing and initiating private offers + reporting on private offers
Q: Is there a Marketplace API that can help programmatically list and initiate private offers, as well as generate reports on consumption/charge of private offers by each?
A: For reports on consumption and charges, you can use the APIs provided by Billing and Cost Management: Charges – List – REST API (Azure Consumption) | Microsoft Learn and Marketplaces – List – REST API (Azure Consumption) | Microsoft Learn
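As a rough sketch of calling those two Consumption endpoints, the snippet below builds the request URLs and sends an authenticated GET. The `api-version` values and the scope format are assumptions taken from the linked REST references; check the current docs and your own scope (subscription or billing account) before using them:

```python
# Hedged sketch: list charges and marketplace usage via the Azure
# Consumption REST API. Only URL construction runs here; the actual
# call needs a real Entra ID bearer token.
import json
import urllib.request

ARM = "https://management.azure.com"

def charges_url(scope, api_version="2021-10-01"):
    # scope is e.g. "/subscriptions/<sub-id>" or a billing-account scope
    return f"{ARM}{scope}/providers/Microsoft.Consumption/charges?api-version={api_version}"

def marketplaces_url(scope, api_version="2019-10-01"):
    return f"{ARM}{scope}/providers/Microsoft.Consumption/marketplaces?api-version={api_version}"

def get(url, bearer_token):
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {bearer_token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    scope = "/subscriptions/00000000-0000-0000-0000-000000000000"
    print(charges_url(scope))
    # data = get(charges_url(scope), "<token from Entra ID>")
```

Note there is currently no single Marketplace API in this answer for listing or initiating private offers themselves; the endpoints above cover the consumption/charges reporting side of the question.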
Storage migration: Combine Azure Storage Mover and Azure Data Box
Migrating storage from on-premises can be challenging. That’s why we are on a mission to make your migrations as simple as possible. We’ve developed robust solutions that enable you to transfer your files and folders to Azure, tailored to meet your specific migration needs.
At times, the optimal approach is to migrate your files and folders via your network from on-premises to Azure. In such instances, we provide Azure Storage Mover, a fully-managed migration service. Learn more
Alternatively, migrating data offline might be more suitable: Azure Data Box allows you to transport terabytes of data to Azure swiftly, affordably, and dependably. You will receive a specialized Data Box storage device to load with your data and send directly to an Azure data center. Learn more
Did you know these two services can be combined to form an effective file and folder migration solution that you can use to predict and minimize downtime for your workloads?
Offline migration, online catch-up
Utilizing Azure Data Box likely conserved a significant amount of bandwidth. However, any active workload on your source storage likely made changes while your Data Box was in transit to Azure.
Consequently, you’ll also need to bring those changes to your cloud storage, before a workload can be cut-over to it.
Catch-up copies typically need minimal bandwidth since most of the data already resides in Azure, and only the delta needs to be transferred. Azure Storage Mover is an excellent tool for this purpose.
We ensure that Storage Mover jobs can detect the differences between your on-site storage and cloud storage. Storage Mover will then effectively transfer any updates and new files not previously captured by your Data Box transfer.
Maximizing your upload bandwidth is crucial. For instance, if only a file’s metadata (such as permissions) has changed, Storage Mover will upload only the new metadata instead of the entire file content.
Storage Mover’s copy modes, merge and mirror, allow you to tailor your cloud storage updates to your specific needs.
Storage Mover can also be used independently of Data Box. In that case you’d migrate entirely over your network. Using Data Box may bring both time and bandwidth savings but isn’t needed in every migration scenario.
Minimizing and predicting workload downtime
When transitioning on-premises workloads to Azure Storage, you typically aim to:
Reduce the duration your on-prem application is offline during the switch.
Establish a predictable downtime period for users and business operations reliant on the workload.
Azure Storage Mover is designed to assist in achieving both goals.
The idea behind this approach is that you migrate your data from source to target several times.
Whether you opt for Data Box or Storage Mover, the initial transfer will be the most time-consuming, as it requires moving all your data to the cloud.
Exactly how long this first copy will take depends on many factors and is hard to predict. Therefore, it is not advisable to take any workloads that depend on this data offline before initiating this bulk copy step. Instead, keep your workloads active on the source data.
Keeping your workloads active on the source constantly introduces changes and new files to the source. It may even prevent some of your files from being migrated, because they are in use. But that’s OK.
After your bulk migration finishes, you immediately start this catch-up migration job. Now, you only need to transfer the changes that have occurred since the initial bulk migration started. Likely, this catch-up migration job will complete more quickly since there are fewer bytes that need to be transferred across your network.
This speed-up migration job is optional. Initiate this job immediately after the completion of the preceding “catch-up” job.
As the last job concluded more quickly than the initial “bulk-migration” job, there was less time for changes to accumulate. Consequently, this speed-up job is expected to complete even more swiftly.
Multiple speed-up jobs can be executed consecutively. Eventually, you will reach a point where the processing time of a job is no longer decreasing, and reaches its minimum for the given namespace. At this stage, almost no data needs to be transferred over the network, and the majority of the time is spent on determining whether a file requires migration. Additional local compute cores and RAM can be beneficial.
Once your speed-up copy job(s) no longer finish any faster than the preceding ones, it’s probable that you’ve reached the minimum that the combination of your namespace (number of files) and the local compute resources allow for.
This implies that executing an additional job will probably complete in a similar timeframe. You have identified a predictable, minimal downtime for your workload(s) that depend on this namespace.
It’s time to take the workloads offline for this predicted period.
Execute your final migration job.
After its completion, connect your workload to the fully migrated data in the cloud.
And just like that, you are up and running again.
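The convergent, n-pass process above can be sketched as a simple loop. Here `run_job` is a hypothetical stand-in for starting a Storage Mover job and measuring its duration (for example with `time.monotonic()` around the real job); the 5% tolerance is an arbitrary illustrative choice, not part of the service:

```python
# Illustrative sketch of the convergent, n-pass migration approach.
def converge(run_job, tolerance=0.05, max_passes=10):
    """run_job() performs one catch-up/speed-up pass and returns its
    duration in seconds. Loop until a pass is no longer meaningfully
    faster than the previous one; that duration approximates the
    predictable downtime window for the final cut-over job."""
    previous = None
    for _ in range(max_passes):
        duration = run_job()  # copies only the delta since the last pass
        if previous is not None and duration >= previous * (1 - tolerance):
            return duration   # no longer getting faster: converged
        previous = duration
    return previous

# Simulated pass times in seconds: bulk copy, then shrinking catch-ups.
durations = iter([3600, 400, 60, 55, 54])
print(converge(lambda: next(durations)))  # 54
```

In the simulated run, each pass is faster than the last until the fifth pass (54 s) lands within 5% of the fourth (55 s), so 54 s becomes the predicted downtime window.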
It’s important to note the limitations of this method.
An extensive collection of small files with a high change rate might necessitate longer downtime. Moreover, this technique won’t capture files that are in constant use until the final cut-over migration job. If there’s a considerable number of such files or their total size is large, achieving a predictable minimum with this method is hardly feasible.
Consequently, this method is not suitable for migrating active database files, for example. The convergent, n-pass migration strategy is designed for general-purpose namespaces. For databases or files that are always open, it’s best to use specialized migration tools tailored for those specific workloads.
Ready to get started?
Data Box:
Documentation home
Which Data Box device is right for me?
Training: Import data offline with Data Box
Storage Mover:
Documentation home
Storage Mover overview
Plan for a storage mover deployment
Microsoft Tech Community – Latest Blogs – Read More
Check out our marketplace cheat sheet for this year’s Microsoft Build
Microsoft Build is next week! My colleague @KMCloudgirl put together this great cheat sheet of a few sessions to check out to learn the most on what’s up with marketplace. Check it out and let me know if you have any questions, and if we can plan on seeing you there!
Blog: Accelerate AI innovation with the Microsoft commercial marketplace | Microsoft Azure Blog by Anthony Joseph
Sessions:
AI-powered commerce with the Microsoft commercial marketplace
Presenters: Will Kearl + Ryan Storgaard
Maximize cloud investments with the Microsoft commercial marketplace
Presenters: Kristyn Maddox + Felipe Ospina
Launch AI applications and get to market faster with marketplace
Presenters: Yvonne Muench + Olga Karpman + Partner Guests
Shout out to our friend and colleague, @ElizabethBeals who has done a fantastic job putting together an amazing set of content for marketplace!
Windows 11 File Explorer Freezing
Trying to convert our AVD cluster from Windows 10 to Windows 11. We use fslogix with VHDXs on an Azure Storage blob. We have some mapped drives going to an on-prem NAS.
Windows File Explorer will become completely unresponsive. We have tried disconnecting the mapped drives, disabling OneDrive, and disabling Windows Search, and nothing seems to resolve the issue. Then randomly it will work, and then eventually go back to being unresponsive.
Excel sheet locked by password but never entered password before
When I open an unprotected Excel file and try to edit the sheet, it pops up a window asking for a password. I’m 100% sure that I never set up sheet protection or any macro functions; it’s just a very standard, normal Excel file with standard use. And this has happened to two of my colleagues as well. Can anyone help? Thank you.
GPT-4o now available through Azure OpenAI Service
We’re happy to share that Microsoft has recently made GPT-4o available through its Azure OpenAI Service after OpenAI’s announcement of the release of GPT-4o, a new flagship, multimodal model.
This multimodal model integrates text, vision, and audio capabilities, setting a new standard for generative and conversational AI experiences. This is the first time Microsoft is announcing model access on the same day as OpenAI.
Want to learn more about GPT-4o in Azure OpenAI? Check out our recent blog post:
Introducing GPT 4o: OpenAI’s new flagship model and Accessibility on Azure OpenAI
Learn, experiment, and deploy:
Check out MS Learn
Try out GPT-4o in Azure OpenAI Service Chat Playground (in preview)
Also, register to attend Microsoft Build 2024, where we will continue to share updates regarding GPT-4o in Azure OpenAI Service, as well as other advancements in Microsoft AI services and capabilities.
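If you want to try GPT-4o from code rather than the Chat Playground, a minimal sketch of calling a GPT-4o deployment through the Azure OpenAI REST API looks like this. The resource endpoint, deployment name, and `api-version` below are placeholders/assumptions; use the values from your own Azure OpenAI resource:

```python
# Hedged sketch: build a chat-completions request against an Azure OpenAI
# GPT-4o deployment using only the standard library. Construction only;
# sending it requires a real resource and key.
import json
import urllib.request

def chat_request(endpoint, deployment, api_key, messages, api_version="2024-02-01"):
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={api_version}")
    body = json.dumps({"messages": messages}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"api-key": api_key, "Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = chat_request(
        "https://my-resource.openai.azure.com", "gpt-4o", "<api-key>",
        [{"role": "user", "content": "Hello, GPT-4o!"}],
    )
    print(req.full_url)
    # To actually send it:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp)["choices"][0]["message"]["content"])
```

The official `openai` Python SDK (via its `AzureOpenAI` client) wraps the same endpoint; the raw request is shown here only to make the URL shape and headers explicit.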
Data mapper improvements
This past November, we announced the general availability of our Data Mapper, a tool for developers to perform data transformation tasks inside of Azure Logic Apps. Through customer engagements, we have gathered valuable feedback and are ready to share some enhancement plans to address this feedback. These updates streamline data mapping tasks, rendering your workflow more intuitive and efficient. Let’s explore the new features and discuss how these changes can improve your data mapping experience.
Please let us know what you think in the feedback section. Your feedback helps shape future updates.
Improvements
Create data map
Upload a new schema or select an existing one. The source schema, previously floating, now docks on the opposite side of the destination schema, enhancing visibility.
Map a property to another property
Use drag-and-drop to assign source properties to destinations.
Understand a property type
Hover over a property to discover its data type.
Add a function then map to properties
Function chaining
Address more complex requirements by chaining functions together. Collapse the functions together to save valuable real estate.
Rename functions and add notes
To reduce complexity, we now allow function renaming for clarity and the option to add notes. This prevents confusion and makes editing and reviewing more straightforward.
Reorder source properties
Add static values and reorder properties to refine output at destination
Expand/collapse hierarchy
Support for complex schemas includes starting with nested properties in a collapsed state and expanding as required to access deeper properties.
Adjust width of side panel
Modify a side panel’s width to address scaling for deep schema trees.
Search within a schema
Search functionality to discover specific elements
Favorite function
Pin frequently used functions for quick access.
View underlying code
Open the YAML file in read-only mode to read the code that powers the mapping process.
Test map
Select an existing source payload matching your schema type and check whether the mapper yields desired output.
Understand if there has been an error
Easily detect and address errors during mapping
Conditional mapping and looping improvements will follow soon in the next part.
Feedback
Please use this questionnaire to provide detailed feedback or file a feature request using the Data mapper tag on our GitHub Issues.
Intune Enrollment Issues – Found a workaround but it doesn’t make sense
Hello,
I am curious if anybody else has this issue and knows a fix…
Basically, we had a bunch of these devices that were originally in Intune and working fine. These were enrolled into Intune via Group Policy. (Note: All devices get automatically converted to Autopilot devices also.)
These users eventually got terminated and the devices were removed from Active Directory. Later on, the business decided to re-use these devices. Some were reimaged via WDS, some were just re-added to the domain… long story short none of them will enroll into Intune.
When I looked at the enrollment errors, I got the following error message: This device attempted to enroll via a method not allowed from the device’s Autopilot profile.
I thought it was interesting because we are not even trying to enroll it via Autopilot or even using it in this case as the device was never reset.
I decided to delete a few of them from Autopilot just to see what would happen. Now I get a new error saying: This device can’t be enrolled as a personal device while the platform is Blocked under Device Type Restrictions.
Workaround:
I eventually figured out that if you add someone as an “Enrollment manager”, they can bypass this… so I had a tech sign into some of the devices and they enroll… They just need to switch the primary user back to the new user as it registers as themselves.
What I am confused about is why is it working this way? It wasn’t like this before. Should I allow Windows (MDM) personal devices to be enrolled? If so, how do I actually block true personal devices?
These devices are in AD & Entra and those are the only “Windows” devices we want to be allowed to enroll into Intune, unless they are actually enrolled via Autopilot (resetting) of course.
Also, using Autopilot does work and does enroll the devices without issues.
What I haven’t tested: Keeping the device in Autopilot and having an “Enrollment Manager” sign in
Tabs Crash When Using Drop Down Menu in Dev 126.0.2578.1
After updating to Dev Channel build 126.0.2578.1 on macOS 14.4.1, there appears to be a bug where using a drop-down menu on a webpage that’s drawn by the OS (so not something custom to the website) causes the tab to crash moments later. I reported this bug in-app, but is anyone else seeing a similar experience?
New Blog | Microsoft Entra Private Access for on-prem users
By Ashish Jain
The emergence of cloud technology and the hybrid work model, along with the rapidly increasing intensity and sophistication of cyber threats, are significantly reshaping the work landscape. As organizational boundaries become increasingly blurred, private applications and resources that were once secure for authenticated users are now vulnerable to intrusion from compromised systems and users. When users connect to a corporate network through a traditional virtual private network (VPN), they’re granted extensive access to the entire network, which potentially poses significant security risks. These challenges have introduced new demands that traditional network security approaches struggle to meet. Even Gartner predicts that by 2025, at least 70% of new remote access deployments will be served predominantly by ZTNA as opposed to VPN services, up from less than 10% at the end of 2021.
Microsoft Entra Private Access, part of Microsoft’s Security Service Edge (SSE) solution, securely connects users to any private resource and application, reducing the operational complexity and risk of legacy VPNs. It enhances the security posture of your organization by eliminating excessive access and preventing lateral movement. As traditional VPN enterprise protections continue to wane, Private Access improves a user’s ability to connect securely to private applications easily from any device and any network—whether they are working at home, remotely, or in their corporate office.
Enable secure access to private apps that use Domain Controller for authentication
With Private Access (Preview), you can now implement granular app segmentation and enforce multifactor authentication (MFA) on any on-premises resource authenticating to domain controller (DC) for on-premises users, across all devices and protocols without granting full network access. You can also protect your DCs from identity threats and prevent unauthorized access by simply enabling privileged access to the DCs by enforcing MFA and Privileged Identity Management (PIM).
To enhance your security posture and minimize the attack surface, it’s crucial to implement robust Conditional Access controls, such as MFA, across all private resources and applications including legacy or proprietary applications that may not support modern auth. By doing so, you can safeguard your DCs—the heart of your network infrastructure.
A closer look at the mechanics of Private Access for on-prem user scenario
Here’s how Private Access secures access to on-premises resources and applications while giving employees a seamless way to access those resources locally, without compromising the security of the company’s critical services. Imagine a scenario where an employee is working on-premises at their company’s headquarters. They need to access the company’s DCs to retrieve some important information for their project or make some changes. However, when they try to access the DC directly, they find that access is blocked. This is because the company has enabled privileged access, which restricts direct access to the DC for security reasons.
Instead of accessing the DC directly, the employee’s traffic is intercepted by the Global Secure Access Client and routed to the Microsoft Entra ID and Private Access Cloud for authentication. This ensures that only authorized users can access the DC and its resources.
When the employee attempts to access the private resources they need, they’re prompted to authenticate using MFA. This additional layer of security ensures that only legitimate users can gain entry to the DC. Private Access also extends MFA to all on-premises resources, even those that lack built-in MFA support. This means that even legacy applications can benefit from the added security of MFA. With Private Access, the company has also enabled granular app segmentation, which allows them to segment access to specific applications or resources within their on-premises environment. This means that the employee can only interact with the services they’re authorized to access, ensuring the security of critical services.
Despite these added security measures, the employee’s user experience remains seamless. Only authentication traffic leaves the corporate network, while application traffic remains local within the corporate network. This minimizes latency and ensures that the employee can access the information they need quickly and efficiently.
Read the full post here: Microsoft Entra Private Access for on-prem users
Help us shape Windows Server (survey)
Help us shape Windows Server
Complete a 10-minute survey to help shape the future of Windows Server. Your feedback is crucial in helping us understand your needs and preferences with our product.
We will not ask for your personal information and your responses will contribute directly to the development of Windows Server. The survey will be closed on May 23, 2024.
Survey Link
Privacy Statement
Ctrl+click to follow hyperlink
Hi!
Recently, my Excel web app enabled the “use Ctrl+click to follow hyperlink” option; previously it was disabled by default. How can I disable it for the online/web version? I do not have access to the “Options” menu in the web app, possibly because of my company’s settings and privacy policies.
SUMIFS formula
Hi, I’m trying to write a SUMIFS formula that will sum a total duration based on a condition rating but stop once it hits a duration value of zero.
For example please see my current formula:
=IF($AQ2=5,SUMIFS('2.0_Durations'!$U$2:$U$100000,'2.0_Durations'!$C$2:$C$100000,$F2,'2.0_Durations'!$L$2:$L$100000,"=5"),"-")
AQ = “Condition Rating” so I only want it to sum duration based on condition rating equal to 5
U = “Duration” in years
C = Unique numerical digit range assigned to that asset
F = Unique numerical digit assigned to that asset
L = Condition rating range
My issue: suppose there are 23 rows of values for a specific asset; the first 19 are in condition rating 5, the next two in condition rating 6, and the last two back in condition rating 5. My current formula sums all durations for that asset in condition rating 5 and returns a total duration of 29.2 years. I only want the formula to sum column ‘U’ up to row 20, i.e. the most recent durations of the asset while it was consecutively in condition rating 5, stopping as soon as the rating first changes from 5 to 6 (please see the screenshot of the data below).
Thanks!
Justin
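The "sum only until the rating first changes" requirement can be sketched outside Excel; here is a minimal Python illustration of the intended logic (the function name and sample values are invented for illustration, following the 23-row example in the question):

```python
# Minimal sketch of "sum durations only while the asset stays in rating 5,
# stopping at the first change" -- function name and values are illustrative.

def sum_until_rating_change(durations, ratings, target=5):
    """Sum durations row by row, stopping at the first rating != target."""
    total = 0.0
    for duration, rating in zip(durations, ratings):
        if rating != target:
            break  # the asset left rating 5; ignore everything after this row
        total += duration
    return total

# 23 rows: 19 in rating 5, then two in rating 6, then two back in rating 5.
ratings = [5] * 19 + [6, 6] + [5, 5]
durations = [1.0] * 23

print(sum_until_rating_change(durations, ratings))  # sums only the first 19 rows -> 19.0
```

In Excel terms this is roughly equivalent to limiting the SUMIFS range to the rows before the first non-5 rating for the asset, for example by using MATCH to locate the first row where the rating differs and summing only up to that row.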
New Blog | Loop DDoS Attacks: Understanding the Threat and Azure’s Defense
By Amir Dahan
In the realm of cybersecurity, Distributed Denial-of-Service (DDoS) attacks are a significant concern. The recent holiday season unveiled a complex and evolving threat landscape, marked by sophisticated tactics and diversification, from botnet delivery via misconfigured Docker API endpoints to the NKAbuse malware’s exploitation of blockchain technology for DDoS attacks.
Understanding and staying abreast of recent DDoS trends and attack vectors is crucial for maintaining robust network security and ensuring the availability of services. One such example is the recent HTTP/2 Rapid Reset Attack, where Microsoft promptly provided fixes and recommendations to safeguard web applications. This vulnerability exploits the HTTP/2 protocol, allowing attackers to disrupt server connections by rapidly opening and closing connection streams. This can lead to denial of service (DoS) conditions, severely impacting the availability of critical services and potentially leading to significant downtime and financial losses. Another example we wrote about involved reflected TCP attack vectors that recently emerged in ways not previously believed possible.
By closely monitoring these emerging threats, security professionals can develop and implement timely and effective countermeasures to protect their networks. This proactive approach is essential for anticipating potential vulnerabilities and mitigating risks before they can be exploited by malicious actors. Furthermore, understanding the evolving landscape of DDoS attacks enables the development of more resilient security architectures and the enhancement of existing defense mechanisms, ensuring that networks remain secure against both current and future threats.
In this blog, we focus on the newly revealed Application Loop DDoS attack vector. Microsoft hasn’t yet witnessed this vulnerability translated into actual DDoS attacks. However, we believe it’s important to highlight the threat landscape we see in Azure for UDP reflected attacks, as they are a prevalent attack vector with a base pattern similar to Loop attacks. We then discuss the protection strategies Microsoft employs to protect the Azure platform, our online services, and customers from newly emerging threats.
The Emergence of Loop DDoS Attacks
The Loop attack vulnerability was disclosed last month by CISPA. The attack exploits application-layer protocols relying on User Datagram Protocol (UDP). CISPA researchers found ~300,000 application servers that may be vulnerable to this attack vector. The published advisory describes Loop attacks as a sophisticated DDoS vector, exploiting the interaction between application servers to create a never-ending (hence the term Loop) cycle of communication that can severely degrade or completely halt their functionality. This attack method uses spoofed attack sources to create a situation where two or more application servers get stuck in a continuous loop of messages, usually error responses, because each server is programmed to react to incoming error messages with an error message.
Amongst the vulnerable applications, TFTP, DNS, and NTP, as well as legacy protocols such as Echo, Chargen, and QOTD, are at risk. The researchers provided a practical example in which two DNS resolvers automatically reply to error messages with their own errors. An attacker can start a loop by sending one spoofed DNS error to one resolver. This makes it send an error to the spoofed resolver, which does the same, creating an endless cycle of errors between them. This wastes the DNS servers’ resources and fills up the network links between them, with the potential to cause serious degradation of service and network quality. Depending on the exact attack topology, Loop attacks may generate excessive amounts of traffic like other volumetric DDoS floods (e.g. DNS reflected amplified attacks).
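The DNS example above can be illustrated with a toy simulation (the `Server` class and message format are invented for illustration; no real network I/O is involved): each resolver answers any error with its own error, so a single spoofed packet sustains the exchange indefinitely.

```python
# Toy simulation of the loop mechanic. The Server class is hypothetical and
# nothing touches a real network; it only demonstrates why "answer an error
# with an error" plus one spoofed packet produces an unbounded exchange.

class Server:
    def __init__(self, name):
        self.name = name
        self.messages_sent = 0

    def receive(self, message):
        # React to an error with an error -- the root cause of the loop.
        if message["type"] == "error":
            self.messages_sent += 1
            return {"type": "error", "from": self.name}
        return None

a, b = Server("resolver-a"), Server("resolver-b")

# The attacker sends ONE spoofed error that appears to come from resolver-b.
in_flight = {"type": "error", "from": "resolver-b"}
current = a

# Cap the simulation at 1,000 deliveries; in a real Loop attack nothing
# stops the exchange once it starts.
for _ in range(1000):
    reply = current.receive(in_flight)
    if reply is None:
        break
    in_flight = reply
    current = b if current is a else a

print(a.messages_sent + b.messages_sent)  # 1000 messages from one attacker packet
```

The cap is the only thing that terminates the simulation, which mirrors the advisory's point that the attacker cannot stop the loop either.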
How Loop DDoS differs from other volumetric DDoS attacks
The Loop attack is a DDoS attack vector that targets applications but may also manifest as a large-scale flood at the network layer. This is because attackers can set up multiple attack loops among servers within a network, or across networks over peering links, overwhelming both the servers and the networks with traffic floods.
Like UDP reflected attacks, Loop attacks exploit a basic UDP weakness: the ability to fake a source IP address to initiate the attack loop. Reflected UDP-based floods are among the most common attack vectors today. They resemble Loop attacks in that the malicious actor sends spoofed-source packets to an application server, which replies to the spoofed IP, i.e. the victim. By generating many such requests to an application server, the attacker floods the victim with responses it never asked for. The impact of a reflected attack can be significantly worse when the attacked application generates more traffic in its response than it receives in the request; when this happens, it becomes a reflected amplified attack. Amplification is what makes these attacks so dangerous. Loop attacks differ from reflected amplified attacks in that the response is not necessarily amplified: for each spoofed packet sent to the application server, there may be only a single response. However, Loop attacks are far more dangerous because the victim server that receives the response replies with its own response, which in turn is answered with another response, in a loop that never ceases. For the malicious actor, a single well-crafted packet is enough to start a Loop attack. If the attack is spread across multiple application servers, it becomes a volumetric DDoS flood that endangers not only the applications but also the underlying networks. Another interesting difference between reflected amplified UDP attacks and the Loop attack is that with a Loop attack the malicious actor doesn’t control the attack lifecycle: once the first packet is sent, the loop starts, and there is no way for the attacker to stop it.
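The distinction between per-packet amplification and loop persistence can be made concrete with simple arithmetic (all byte sizes and rates below are invented for illustration, not measured protocol values):

```python
# Illustrative arithmetic for the amplification distinction. The byte sizes
# and loop rate are made-up example numbers, not measured protocol values.

def amplification_factor(request_bytes, response_bytes):
    """Bandwidth amplification factor: response size / request size."""
    return response_bytes / request_bytes

# Reflected amplified attack: a small request triggers a large response,
# so the damage per attacker packet comes from the amplification factor.
reflected = amplification_factor(request_bytes=60, response_bytes=3000)
print(f"reflected amplification: {reflected:.0f}x per packet")

# Loop attack: each message may be answered 1:1 (no amplification per hop),
# but the exchange repeats indefinitely, so traffic grows with time instead.
per_hop = amplification_factor(request_bytes=100, response_bytes=100)
message_bytes = 100
messages_per_second = 1000   # assumed sustained loop rate between two servers
seconds = 60
loop_bytes = message_bytes * messages_per_second * seconds
print(f"per-hop factor: {per_hop:.0f}x, yet {loop_bytes} bytes per minute "
      "from a single attacker packet")
```

The point of the sketch: a reflected attack's cost to the attacker scales with the packets they keep sending, while a Loop attack's traffic keeps accumulating after one packet, with no way for the attacker (or anyone) to switch it off.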
Reflected Amplified Attack Landscape in Azure
Since reflected amplified UDP attacks resemble Loop attacks in their basic reflection pattern and volumetric nature, we present the recent reflected-attack landscape in Azure. As the figure shows, UDP reflected amplification attacks accounted for 7% of all attacks in the first quarter of 2024.
Figure 1 – Distribution of main attack vectors in Azure, January–March 2024
Read the full post here: Loop DDoS Attacks: Understanding the Threat and Azure’s Defense
May V1 Title Plan out now!
The Monthly Title Plan for May V1 is attached to this post. The Title Plan can also be found in the following locations:
MPN Partner Portal Learning Resources page
Resource page for Training Services Partners (Title Plan publishing takes 2-3 business days)
MCT Lounge (brand-new lounge for MCTs)
Thank you
Trying to call Azure rest api by using managed identity in Azure synapse notebook but failed
I’m trying to call an Azure REST API using a managed identity in an Azure Synapse notebook, but I get the following error.
As you can see, I have already enabled the managed identity to run on my notebook, and the Contributor role is also assigned to the MSI for the corresponding Azure DevOps project. I’m not quite sure where the issue is. Maybe I used the wrong scopes? Should I turn something on before executing the notebook?
Thanks for your help in advance!
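For reference, a common cause of this kind of failure is requesting a token with the ARM scope; Azure DevOps expects a token issued for its own well-known application ID, and the managed identity must also be added as a user in the Azure DevOps organization (Contributor on the Azure subscription alone does not grant Azure DevOps access). A hedged sketch follows; the organization placeholder is from the standard URL template, and the commented lines require the azure-identity package plus a working managed identity, so they are not executed here.

```python
# Hedged sketch: building the token scope and header for Azure DevOps REST
# calls from a managed identity. 499b84ac-1321-427f-aa17-267ca6975798 is the
# well-known Azure DevOps application ID; the {organization} segment is a
# placeholder from the documented URL template.

AZURE_DEVOPS_APP_ID = "499b84ac-1321-427f-aa17-267ca6975798"

def devops_scope():
    # ARM-scoped tokens (https://management.azure.com/.default) are rejected
    # by dev.azure.com, which commonly surfaces as 401/403 errors.
    return f"{AZURE_DEVOPS_APP_ID}/.default"

def auth_header(token):
    return {"Authorization": f"Bearer {token}"}

# Inside the Synapse notebook (not executed in this sketch):
#   from azure.identity import ManagedIdentityCredential
#   import requests
#   token = ManagedIdentityCredential().get_token(devops_scope()).token
#   resp = requests.get(
#       "https://dev.azure.com/{organization}/_apis/projects?api-version=7.0",
#       headers=auth_header(token),
#   )

print(devops_scope())
```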
Outlook won’t open or is stuck at loading profile
When I click on the Outlook icon to open my mail, it won’t open; it is stuck on loading my profile.
I have uninstalled and reinstalled Microsoft 365, closed out the processes in Task Manager, and reset to default settings, and nothing works.