Category Archives: Microsoft
Microsoft 365 groups 2nd stage recycle bin
One of my clients has run out of SharePoint storage (hence the reason for asking here) in their new tenant. This is because we migrated an existing Dropbox system in its entirety into a Microsoft 365 group/SharePoint document library. We then copied folders far and wide into a series of smaller Microsoft 365 groups. It’s a small company of 20 users, so they get 1.3TB of storage.
I’ve deleted over 250GB from the Dropbox copy document library, so OneDrive/TreeSize now reports the size as 214GB, but SharePoint is still reporting the original size of 563GB. I’ve gone into the M365 group and emptied the recycle bin for that group.
I’m aware of the 2nd stage recycle bin, but I don’t seem to be able to find such a thing for the Microsoft 365 group. I can find it for the root SharePoint site collection.
Do Microsoft 365 groups/sites have a 2nd stage recycle bin, and if so, how do you find it? I’ve looked high and low.
Unable to send emails from shared mailbox in the Outlook app on Mac
In the last few days, our shared mailbox has not been sending emails from the Outlook app on my iMac or on the MacBook of the other staff member who shares this email address. Previously there were no issues sending from this mailbox. I have removed all accounts from Outlook and re-added them, but I am still having the same issue. The mailbox receives emails fine, but when you try to reply or send a new email and click Send, nothing happens. Emails send fine through Outlook on the web. Any help would be appreciated!
Filtering most liked posts
Hello,
I am new to Viva Engage, and I want to know if it is possible to filter the posts in a community by the number of likes. I have been looking at the options inside the community and I don’t see anything to help me with that. I also opened the associated SharePoint site and did not find anything there either.
Thanks
What’s New in Copilot for Sales – May 2024
Microsoft Copilot for Sales is reimagining sales. Integrated seamlessly into your daily tools across Microsoft 365 and Teams, Copilot for Sales harnesses the power of generative AI and customer data to keep sellers in their flow of work so they can spend more time with customers.
This month we’re bringing exciting capabilities to Outlook by unifying the sidecar for Microsoft 365 Copilot and Copilot for Sales, adding global entity search, and adding inline editing for suggested CRM updates! We’re also excited to announce the preview of Copilot for Sales extensibility using Microsoft Copilot Studio! You can enrich out-of-the-box skills and bring your own skills to Copilot for Sales!
Capabilities highlighted in this post are included in the May 2024 release of Copilot for Sales. It may take time for specific capabilities to reach every tenant in each market.
Outlook
Unified Sidecar in Outlook*
We’re pleased to announce that Copilot for Sales and Microsoft 365 Copilot Chat now live in harmony in a shared sidecar! To help avoid confusion when launching and interacting with Copilot, there is now one Copilot experience across all Microsoft surfaces.
You’ll see sales-specific value as experiences will be called out using the keyword “sales” with the Copilot for Sales icon displayed as needed.
* Available in Outlook for Web and New Outlook desktop.
Global entity search in Outlook*
Until now, sellers have been limited to viewing records automatically identified and suggested by Sales Copilot through certain relationships. With this month’s update, we’re excited to unlock the ability to search for entities directly from the Outlook sidecar!
Now sellers can search CRM data based on their own needs and requirements, using their knowledge and familiarity with their CRM data.
* Available in Outlook for Web.
Suggested CRM updates now feature inline editing
In our March blog, we highlighted the release of suggested CRM updates within the Copilot for Sales Outlook experience. This month, we’re excited to announce that you can now take action directly inline – without having to leave the Outlook sidecar!
The screenshot below shows these two new capabilities in action.
Linked record details will be shown immediately so that opportunity timelines and budget can be updated directly in the sidecar. Linked contact record inline updates coming soon! Opportunity stages will still take end-users to the CRM, in accordance with business processes.
Select the “Update Opportunity” button in the sidecar to easily edit Opportunity information in the CRM inline—without ever leaving the sidecar!
Learn more about suggested CRM updates
Copilot Studio
This month we’re pleased to announce the public preview of Copilot for Sales extensibility experiences using Copilot Studio! You can try out extending Copilot for Sales with Microsoft Copilot Studio using our product documentation at https://aka.ms/CopilotforSalesExtensibility.
Bring data and insights into chat and non-chat experiences (preview)
In preview now, you can enrich out-of-box skills in Copilot for Sales and bring your own skill to the chat experience. Sellers can now get insights from any sales application contextually in the flow of work in Microsoft 365 Copilot – from your CRM and other sales applications.
Enrich out-of-box skills like Email summary and Key sales info (shown in figure 1) and Opportunity (CRM record) summary and CRM record details (shown in figure 2). You can also bring your own skills to Sales chat (shown in figure 3).
Author Power Platform Connector plugins to extend Copilot for Sales (preview)
Now in preview, you can extend chat and non-chat experiences in Copilot for Sales using Power Platform connector plugins authored in Microsoft Copilot Studio!
Copilot for Sales integrates with CRM out-of-box (OOB). However, sales teams also need data and insights from non-CRM sales applications in the Copilot experiences. Using Copilot Studio, customers can now build Power Platform connector plugins to bring data and insights from any sales application into the Copilot experience!
Use Copilot Studio to author Power Platform connector plugins to extend Copilot for Sales:
Enrich OOB skills, including Email summary, Key sales info, Opportunity / Account summary, and CRM record details
Bring new skills to Copilot for Sales chat
The screenshots below show the application flow from getting started (figure 4) to configuring (figure 5) to publishing (figure 6).
Get started
Ready to join us and other top-performing sales organizations worldwide? Reach out to your Microsoft sales team or visit our product web page.
Ready to install Copilot for Sales? Have a look at our deployment guide for Dynamics 365 Sales users or our deployment guide for Salesforce users.
Learn more
Ready for all the details? Check out the Copilot for Sales product documentation.
Ready for the latest tips…and more? Copilot for Sales Tip Time can serve as a foundation for your training of Copilot for Sales users, customers, or partners! This content includes use cases and demonstrates how each feature will benefit sellers, administrators, and sales managers.
Looking for the latest adoption resources? Visit the Copilot for Sales Adoption Center and find the latest information about how to go from inspiration to adoption.
Stay connected
Want to stay connected? Learn about the latest improvements before everyone else at https://aka.ms/salescopilotupdates. Join our community in the community discussion forum and we always welcome your feedback and ideas in our product feedback portal.
Part 1: Migrate Azure Analysis Services to Power BI Premium using Azure Databricks – Why
This post is authored in conjunction with Leo Furlong, Senior Solutions Architect at Databricks.
In the world of data analytics and business intelligence, the tools and platforms you use can significantly impact the efficiency and capabilities of your data operations. Two major shifts in the landscape are the migration from Azure Analysis Services to Power BI Premium and the move to Azure Databricks SQL as the underlying data source for Power BI. Let’s dive into why these changes are worth considering.
Migrating from Azure Analysis Services to Power BI Premium
Azure Analysis Services has been a staple in the enterprise semantic layer toolset, providing robust semantic layer capabilities. It’s the same engine that powers Power BI under the hood, which means there’s a shared lineage and compatibility between the two services. However, Power BI has evolved rapidly and now offers a superset of functionalities that were once exclusive to Azure Analysis Services.
Here are some compelling reasons to migrate:
Enhanced Functionality: Power BI Premium has grown to encompass all the capabilities of Azure Analysis Services and then some. A feature comparison matrix is provided here.
Tool Consolidation: By migrating to Power BI Premium, organizations can consolidate their semantic layer and BI tools into a single platform. This not only simplifies the architecture but also reduces the overhead associated with maintaining multiple systems.
Microsoft’s Direction: Microsoft itself recommends that customers transition their Azure Analysis Services models to Power BI Premium. This is a strong indicator of the strategic direction Microsoft is taking, with Power BI Premium positioned as the go-to enterprise semantic layer solution.
For more details on this migration, you can refer to Microsoft’s official documentation here.
Moving to Azure Databricks SQL for Power BI
Azure Databricks SQL offers a modern and optimized approach to handling present and future data analytics challenges at any scale. When paired with Power BI, it unlocks new levels of performance and cost efficiency. It also provides the most elastic, highest performing, and cost-effective Azure native solution available today for Data Warehousing.
Here’s why Azure Databricks SQL is an excellent match for Power BI:
Native Integration: Power BI Premium supports native connections to Databricks SQL through the Databricks connector available in Power BI Desktop and the Power BI service, allowing for seamless integration and data flow between the two services. This means you can leverage the full power of Databricks’ data processing within your Power BI reports.
Advanced Power BI Features: With the Azure Databricks SQL connector, you can take advantage of Power BI’s most sophisticated features, such as Import, DirectQuery, Dual Storage modes, Composite Models, Hybrid tables, and User-defined and Automatic Aggregations. These features provide performance and flexibility in how you handle and analyze your data.
Security and Management: The connector supports connections to Unity Catalog, the Databricks Security and Governance solution, using a stored credential, Service Principal, or Single Sign-On (SSO), ensuring that your data access is secure and easily manageable.
Cost Savings with Azure Databricks SQL Serverless: Azure Databricks SQL Serverless turns on within seconds, auto-terminates in as little as a minute, and charges for use by the second. This means you’re not paying for idle compute resources when they’re not in use, which can lead to significant cost savings, especially for organizations dealing with large volumes of data and a lot of users.
Composite Models Equal the Best Overall Experience: Composite models in Power BI allow you to combine the speed of imported models with the freshness of DirectQuery, leading to potential cost savings and the best overall user experience possible. You can store large datasets in Azure Databricks SQL and only query what’s needed, reducing the overall resource consumption on your Power BI Premium capacity.
Unlimited Concurrency: Azure Databricks SQL Serverless scales out to multiple clusters in seconds, providing Power BI composite models with unlimited report user concurrency in real-time. Azure Databricks SQL Serverless also scales down aggressively to save costs on idle compute.
Migrating to Power BI Premium from Azure Analysis Services is a strategic move that aligns with Microsoft’s vision for enterprise BI. It consolidates your BI tools into a more powerful and feature-rich platform. Simultaneously, adopting Azure Databricks SQL as your Power BI data source leads to better performance, enhanced features, and cost savings. These shifts represent a modern approach to data analytics, providing organizations with the tools they need to stay agile and data-driven in today’s competitive landscape.
Part 2: Migrate Azure Analysis Services to Power BI Premium using Azure Databricks – How-To
This post is authored in conjunction with Leo Furlong, Senior Solutions Architect at Databricks.
Many customers choose to migrate their Azure Analysis Services semantic models to Power BI Premium due to the benefits and enhanced functionality described in the Power BI documentation. As customers migrate their semantic models to Power BI, native connections to Azure Databricks SQL become available due to the built-in Databricks SQL connector in Power BI. Databricks SQL Serverless combined with Power BI semantic models can provide customers with a number of benefits, including a separation of compute and storage; instant, elastic compute; data refreshes charged by the second; and enterprise Data Warehouse functionality, query performance, and concurrency at any scale. The remainder of this article focuses on the ins and outs of how to accomplish this migration.
Requirements
You must use Power BI Premium Capacities. This means that P, A4+, or F SKUs are required.
The XMLA endpoint for your Power BI workspace must be configured for Read/Write.
You will need Tabular Editor 2 or Tabular Editor 3 to migrate your semantic model to Databricks SQL. Tabular Editor is a third-party tool featured by Microsoft.
Migrating an existing semantic model to use Databricks SQL requires that your existing data source is a structured data source (a Power Query-based connection); a sketch of what such a partition expression typically looks like is shown below.
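For reference, here is a minimal sketch of a structured (Power Query) partition expression. It assumes a hypothetical SQL Server source with a table named dbo.FactSales; the server, database, schema, and table names are placeholders for illustration only, not values from the original post.
let
    // Hypothetical SQL Server source used only for illustration
    Source = Sql.Database("sqlserver01.contoso.com", "AdventureWorksDW"),
    // Standard Power Query navigation step to the dbo.FactSales table
    dbo_FactSales = Source{[Schema="dbo",Item="FactSales"]}[Data]
in
    dbo_FactSales
Partitions that use a legacy provider (connection-string) data source rather than Power Query do not meet this requirement as written.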
Limitations
AAS models migrated to Power BI Premium can’t be edited using web authoring per the documented limits; however, as of this writing, web authoring does appear to work on migrated models without issues or errors
AAS models migrated to Power BI can’t be downloaded to Power BI Desktop
Parallel connection configuration is only available in model Compatibility Level 1600+
Assumptions
Your data model in Databricks SQL is in the same structure as your current Azure Analysis Services model in terms of table/column names and column data types.
You are using the same Power BI storage modes as the previous model.
Migrate AAS Models to Power BI Premium
You can migrate your Azure Analysis Services models to Power BI Premium in two primary ways: using the migration utility built into the Power BI Service or using Tabular Editor projects. The purpose of this blog post is not to focus on these migrations but on the post-migration conversion of the model to use Databricks SQL. For that reason, we only briefly describe the AAS migration to Power BI Premium below.
Migrate AAS Models using the Power BI Migration Utility
Microsoft provides a utility in the Power BI Service for migrating Azure Analysis Services models to Power BI Premium. For instructions on how to use the migration utility, see the following documentation page.
Migrate AAS Models using Tabular Editor
Tabular models can also be migrated and deployed to Power BI Premium using Tabular Editor.
Open the model.bim file using Tabular Editor and deploy it to Power BI Premium using the XMLA Endpoint.
If you don’t have a Visual Studio project, open the Azure Analysis Services model directly using Tabular Editor and save it to a file.
Open and deploy the semantic model file to Power BI Premium using the XMLA Endpoint.
Instructions on how to perform these steps are in the Ongoing Maintenance and Deployment using Tabular Editor section below.
Converting your post-migration Power BI Premium Semantic Model to use Databricks SQL Serverless
After you’ve completed your migration to Power BI Premium, you’ll want to convert your semantic model to use Databricks SQL. This requires altering the Power Query code that forms the bedrock of your semantic model tables. The sections below describe how to perform this conversion using Tabular Editor; this is the primary focus of this post, and we’ll review the conversion steps in detail. These steps could easily be automated using the Tabular Object Model, but that will be the topic of a future post.
Update M Code using Tabular Editor
Tabular Editor 3 Flow
1) Post Azure Analysis Services migration to Power BI Premium, obtain the “Workspace connection” from the Power BI Workspace settings Premium tab. Instructions here.
2) After opening the Tabular Editor 3 application, go to File -> Open -> Model from DB…
3) In the Load Semantic Model from Database window, enter the workspace connection string obtained from step 1 into the Server field. Keep Authentication selected to Integrated and click OK.
4) Authenticate to the Power BI Service using your Microsoft Entra ID credentials.
5) Select the row for the model you want to open and click OK.
6) Once the model is open, click on the Model icon in the TOM Explorer window. In the properties section, expand the Database section. Change the Compatibility Level to 1600+ and save the model.
Save the model by clicking the “save changes back to the currently connected database” button.
7) While still in the model properties, optionally change the parallelism settings for the model, which are explained in more detail in the Power BI documentation and PBI blog post. This step is recommended because Databricks SQL can handle query parallelism sent from Power BI beyond the default configurations. Possible values are in the grid below, but your mileage may vary and you may need to test.
Make sure to save the model after the configuration change.
Model Properties for Parallelism | Possible Values
Data Source Default Max Connections | value between 1 and 30
Additional parallelism properties (see the Power BI documentation referenced above) | value between 0 and 30; value between -1 and 30
8) Create Shared Expressions (also known as Power BI Parameters in the UI) for the Databricks SQL connection information. For each expression, set the Kind property to M and set the expression values using the appropriate M formulas (examples below). Get the Server Hostname and HTTP Path for your SQL Warehouse using these steps.
Expression Name | Kind | Expression
Server_hostname | M | "adb-5343834423590926.6.azuredatabricks.net" meta [IsParameterQuery=true, Type="Text", IsParameterQueryRequired=true]
HTTP_path | M | "/sql/1.0/warehouses/66d1c1444dg06346" meta [IsParameterQuery=true, Type="Text", IsParameterQueryRequired=true]
Catalog | M | "adventureworks" meta [IsParameterQuery=true, Type="Text", IsParameterQueryRequired=true]
Schema | M | "adventureworksdw" meta [IsParameterQuery=true, Type="Text", IsParameterQueryRequired=true]
9) For each semantic model table and each partition in the table, change the M expression and Mode to their appropriate values. An example M expression that references the shared expressions created above and leverages the native Databricks SQL connector in Power BI is shown below. For Mode, choose the correct Power BI storage mode for your semantic model table/partition based upon your use case.
Save the model after each table/partition modification.
M Expression Example for the Databricks SQL Connector using Expressions
let
    Source = Databricks.Catalogs(Server_hostname, HTTP_path, null),
    Database = Source{[Name=Catalog,Kind="Database"]}[Data],
    Schema = Database{[Name=Schema,Kind="Schema"]}[Data],
    Data = Schema{[Name="<your table or view name>",Kind="Table"]}[Data]
in
    Data
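As a concrete usage example, here is a sketch of the same expression for a hypothetical table named factinternetsales in the schema pointed to by the Schema expression above; the table name is an illustration, not a value from the original post. The intermediate step names are chosen to be distinct from the shared expression names (Catalog, Schema) for clarity.
let
    // Server_hostname and HTTP_path are the shared expressions created in step 8
    Source = Databricks.Catalogs(Server_hostname, HTTP_path, null),
    // Catalog and Schema on the right-hand side refer to the shared expressions, not literal strings
    SelectedCatalog = Source{[Name=Catalog,Kind="Database"]}[Data],
    SelectedSchema = SelectedCatalog{[Name=Schema,Kind="Schema"]}[Data],
    // Hypothetical table name; replace with your own table or view
    Data = SelectedSchema{[Name="factinternetsales",Kind="Table"]}[Data]
in
    Data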
10) Navigate to the Power BI workspace and open Settings for the semantic model being migrated from the … menu. Click Edit and configure the data source credentials using the following documentation for reference.
11) Navigate back to Tabular Editor and “Update table schema…” by right-clicking on each table in the semantic model. If there are any metadata changes between the old and new data source, Tabular Editor will fix the definitions. Save the model.
Example of TE3 detecting a new column added to the table in Databricks SQL.
12) Delete the old data sources under the Data Sources folder in Tabular Editor. Save the model. Note, Databricks SQL will not show up as a data source in this folder.
Using Tabular Editor 2 and Limitations
All UI options and screens in Tabular Editor 2 will be similar to the Tabular Editor 3 steps above. Note that Tabular Editor 2 doesn’t support automatic table metadata updates for Power Query data sources (new column data types, added or dropped columns, etc.). For information on how to update metadata manually, see the TE2 docs on Power Query data sources.
Ongoing Maintenance and Deployment using Tabular Editor
Post-migration, both one-time and ongoing maintenance of your semantic model should be performed using Tabular Editor. Developers and organizations can choose to manage the lifecycle of their semantic model projects in their chosen Git repository tool. Development teams can work with their DevOps teams to implement CI/CD workflows within their DevOps tooling as required.
Manual Deployment
Tabular Editor provides a Deployment wizard via the UI for manual deployments.
1) Obtain the “Workspace connection” from the Power BI Workspace settings Premium tab. Instructions here.
2) Open your semantic model project from File -> Open -> From File…
3) After making changes to your semantic model that you’d like to deploy, enter into the Deployment wizard from Model -> Deploy…
4) Enter the workspace connection (XMLA endpoint) string from your Power BI Workspace. Use Azure AD login Authentication and click Next >. Authenticate using your Entra credentials and MFA.
5) Select the existing database you want to deploy into and click Next >.
6) Select your deployment options and click Next >.
7) In the final step, you can choose to deploy directly from the UI by clicking Deploy. You can also export your deployment to a TMSL Script and execute it from SQL Server Management Studio or another compatible IDE.
8) If you deploy from the UI, a message will be displayed in the bottom left-hand corner of Tabular Editor if the deployment was successful.
All UI options and screens in Tabular Editor 2 will be similar to the Tabular Editor 3 steps above.
Automated Deployment and CI/CD Integration
Tabular Editor provides automated deployment options, including CI/CD integration from DevOps tools like Azure DevOps or GitHub Actions via command-line deployments. Please refer to the following blog posts and GitHub repo for more information and examples of these deployment options.
If an “x” value is present and the column header is today’s date, how do we display true/false?
I have a workbook that I am trying to update to display the status (red or green) of the lab in a single cell, AB1, based on =TODAY(). The table is in a calendar format, but due to some other pivot tables and data validation features there are some separating columns (an example workbook is attached). The basic function of this is to track the daily events in each category/swim lane. An x marks that the issue happened on that particular day. So I am trying to come up with a function that will search the table for the specific date and, if there are any x’s in the column below it, return a true/false result. I was looking through VLOOKUP/XLOOKUP, MATCH, and IF functions, but they all don’t quite seem to fit.
Anyone have any advice?
Also to note, a secondary issue is that I wasn’t able to use the dates themselves as the column headers in the tables, so I opted for the approach above; if there is a solution to that as well, I’d be very grateful. Any formula I try doesn’t stick, as it needs to reference the year input because this is a rolling template from year to year.
Thank you so much,
Nyssa
Have list item inherit start time based on the time I select in a weekly calendar view.
Hi,
(First time poster, so let me know if this is in the incorrect board please. )
In SharePoint lists, we have a list that we use primarily as a calendar. We always use the weekly view. When looking at the weekly calendar view, if you click a blank spot on the calendar it will open the editor to create a new list item.
1. Is there a way to have the start time of that new item auto-fill based on where the user has clicked on the weekly calendar?
Ex: In the screenshot below I clicked on the 2pm-2:30pm block on the calendar, which opened the New item popup. Could I assign my Start Time field to inherit 2pm on May 26th by default?
2. Is it possible to have the end time auto-fill to be 1 hour after the start time? This would save some keystrokes and would allow it to display on the calendar properly. Currently the user has to enter an end time manually as well.
Thank you,
Invitation – Planting Event
COLLECTIVE PLANTING EVENT :seedling:
In celebration of Environment Week, partners joined Ecolmeia in this collective planting of native Mata Atlântica seedlings, selected in advance through surveys carried out by Ecolmeia’s biologist and agronomist.
The event benefits environmental quality in the Represa Billings area, in São Bernardo do Campo/SP.
Would you like to join? You are always welcome!
Online registration: https://form.jotform.com/OSCIP/conviteparaplantio
Outlook.com and Clipboard
Hello
Every time I copy/paste into Outlook.com (when sending an email), I get this popup.
However, it seems that the setting is “saved”.
This is really annoying. Is there an adjustment to make, or is it a bug?
And this problem only appears in Outlook.com.
Thanks
Calendar Template
I have downloaded a calendar template for Excel that I like. As I am creating new months for the calendar, the dropdown tab stops at October 2024. How do I create the other months of this year, and of 2025, which are not listed in the dropdown tab on the Excel sheet?
Cosmos Db JAVA SDK Retry Policy
Hi Azure Cosmos Db Team,
We haven’t explicitly set a retry policy for throttling, so the default throttling retry policy is used.
Below as seen from diagnostics.
throttlingRetryOptions=RetryOptions{maxRetryAttemptsOnThrottledRequests=9, maxRetryWaitTime=PT30S}
However, when we encountered actual throttling ("statusCode":429, "subStatusCode":3200), we see the values in the diagnostics increasing in multiples of 4 ("retryAfterInMs":4.0 / x-ms-retry-after-ms=4, "retryAfterInMs":8.0 / x-ms-retry-after-ms=8), eventually resulting in "Request rate is large. More Request Units may be needed, so no changes were made. Please retry this request later."
Can you please let me know the difference in behavior here (maxRetryWaitTime as shown in throttlingRetryOptions versus retryAfterInMs in the diagnostics, as seen above in the event of throttling)? I was expecting that, in the event of throttling, the request would be retried only after 30 seconds, based on the throttlingRetryOptions setting. This is having a compounding effect in the case of concurrent requests, which affects overall throughput. We need to customize the number of retries and the interval between them in the event of throttling, based on our requirements. Which parameter should we use for that?
With Regards,
Nitin Rahim
Analysis ToolPak
I can’t find the Analysis ToolPak. When I go to File, I don’t have Options as listed in the instructions. I have a button for add-ins, but the ToolPak is not listed.
Word help: building block work-around.
I am looking to make a template for my work environment, but what I have done so far hasn’t worked. I have tried saving the Word document in the .dotx format so that I can save building blocks to it, but it doesn’t seem to stick (the building blocks don’t transfer) when others open the file. The building blocks I have created include a photo, name, phone number, and position/title of the respective person effectively signing the document. Drop-downs don’t seem to be able to get the job done. Does anyone have a work-around for this that doesn’t require transferring a building block file to everyone any time there is an update (a new person added to the list of possible signers or a change in the document)?
Excel Pivot Table: Missing sum of month totals???
I am using 3 different pivot tables that link to 3 different sets of data (all configured in an identical format). For two of the PTs, I am getting the monthly totals (circled in red in the 2nd photo), but the middle one (2023) does not bring back a value. When I hover over the cell, it shows value: 0. Please help. What am I doing wrong?
Solidarity in Action: LBV Calls for Volunteers and Donations for Victims in Rio Grande do Sul
At the Legião da Boa Vontade (LBV), we have mobilized 42 collection points in solidarity with the victims of the recent tragedy in Rio Grande do Sul. We invite society to join us, whether by contributing donations or by volunteering to receive and sort the donated items. In São Paulo, our collection point is located at Avenida Rudge, 763 – Bom Retiro.
We are grateful to have already gathered 380 tons of donations, and we remain committed to making a difference in the lives of those affected by this devastating event.
https://lbv.org/lbv-envia-mais-de-130-toneladas-de-doacoes-e-abre-42-postos-de-arrecadacao/
Office of Management and Budget (OMB) Uniform Guidance re-write: What you need to know
The federal Office of Management and Budget (OMB) recently released a major rewrite to the Uniform Guidance, the common rules governing most federal grantmaking to charitable nonprofits, and others, effective on Oct. 1, 2024: The Biden-Harris Administration Finalizes Guidance to Make Grants More Accessible and Transparent for Families, Communities, and Small Businesses | OMB | The White House
The rewrite addresses longstanding problems in covering nonprofits’ actual costs, advances equity by making grants accessible to more nonprofits, and makes other significant reforms that will reduce bureaucratic barriers and costs of seeking, performing, and reporting on grants using federal funds.
The National Council of Nonprofits is hosting a special, nationwide, free webinar, OMB Uniform Guidance: What the Updates Mean for Nonprofits, on Thursday, May 30th from 3:30 to 4:30pm ET to ensure charitable organizations understand the significant improvements to the Uniform Guidance and what the changes mean for their missions.
Register here: https://www.councilofnonprofits.org/form/omb-uniform-guidance-webinar
Entra Free – Allow Signin to Application
I have a WPF application into which I have integrated Entra. I can sign in and see that login in the logs on Azure. What I would like to do is distribute this program to anyone who wants it. I want them to sign in to it, and I want to be able to see their information in the logs. I do not want to have to charge them a fee, such as an Entra subscription, for it. Is this possible?
I tried a different email address that I own and was unable to get it to work.
Memory Protection for AI ML Model Inferencing
This article was originally posted on Confidential Container Project’s blog by Suraj Deshmukh & Pradipta Banerjee. Read the original article here and the source for this content can be found here.
Introduction
With the rapid strides in artificial intelligence and machine learning, and with businesses integrating these technologies into their products and operations, safeguarding sensitive data and models is a top priority. That’s where Confidential Containers (CoCo) comes into the picture. Confidential Containers:
Provides an extra layer of protection for data in use.
Helps prevent data leaks.
Prevents tampering and unauthorized access to sensitive data and models.
By integrating CoCo with model-serving frameworks like KServe, businesses can create a secure environment for deploying and managing machine learning models. This integration is critical in strengthening data protection strategies and ensuring that sensitive information stays safe.
Model Inferencing
Model inferencing typically occurs on large-scale cloud infrastructure. The following diagram illustrates how users interact with these deployments.
Importance of Model Protection
Protecting both the model and the data is crucial. The loss of the model leads to a loss of intellectual property (IP), which negatively impacts the organization’s competitive edge and revenue. Additionally, any loss of user data used in conjunction with the model can erode users’ trust, which is a vital asset that, once lost, can be difficult to regain.
Additionally, reputational damage can have long-lasting effects, tarnishing a company’s image in the eyes of both current and potential customers. Ultimately, the loss of a model can diminish a company’s competitive advantage, setting it back in a race where innovation and trustworthiness are key.
Attack Vectors against Model Serving Platforms
Model serving platforms are critical for deploying machine learning solutions at scale. However, they are vulnerable to several common attack vectors. These attack vectors include the following:
Data or model poisoning: Introducing malicious data to corrupt the model’s learning process.
Data privacy breaches: Unauthorized access to sensitive data.
Model theft: Proprietary or fine-tuned models are illicitly copied or stolen.
Denial-of-service attacks: Overwhelming the system to degrade performance or render it inoperable.
The OWASP Top 10 for LLMs paper provides a detailed explanation of the different attack vectors.
Among these attack vectors, our focus here is “model theft” as it directly jeopardizes the intellectual property and competitive advantage of organizations.
Traditional Model Protection Mechanisms
Kubernetes offers various mechanisms to harden the cluster in order to limit access to data and code. Role-Based Access Control (RBAC) is a foundational pillar regulating who can interact with the Kubernetes API and how, ensuring that only authorized personnel have access to sensitive operations. API security mechanisms complement RBAC and act as gatekeepers, safeguarding the integrity of interactions between services within the cluster. Monitoring, logging, and auditing further augment these defences by providing real-time visibility into the system’s operations, enabling prompt detection and remediation of any suspicious activities.
Additionally, encrypting models at rest ensures that data remains secure even when not in active use, while using Transport Layer Security (TLS) for data in transit between components in the cluster protects sensitive information from interception, maintaining the confidentiality and integrity of data as it moves within the Kubernetes environment.
These layered security measures create a robust framework for protecting models against threats, safeguarding the valuable intellectual property and data they encapsulate.
But, is this enough?
Demo: Read Unencrypted Memory
This video showcases how one can read a pod’s memory when it is run using the default runc runtime or Kata Containers. By using Kata’s confidential computing support, however, we can avoid exposing the memory to the underlying worker node.
Confidential Containers (CoCo)
The Confidential Containers (CoCo) project aims at integrating confidential computing into Kubernetes, offering a transformative approach to enhancing data security within containerized applications. By leveraging Trusted Execution Environments (TEEs) to create secure enclaves for container execution, CoCo ensures that sensitive data and models are processed in a fully isolated and encrypted memory environment. CoCo not only shields the memory of applications hosting the models from unauthorized access but also from privileged administrators who might have access to the underlying infrastructure.
As a result, it adds a critical layer of security, protecting against both external breaches and internal threats. The confidentiality of memory at runtime means that even if the perimeter defenses are compromised, the data and models within these protected containers remain encrypted, ensuring the integrity and confidentiality of sensitive information crucial for maintaining competitive advantage and user trust.
KServe
KServe is a model inference platform on Kubernetes. By embracing a broad spectrum of model-serving frameworks such as TensorFlow, PyTorch, ONNX, SKLearn, and XGBoost, KServe facilitates a flexible environment for deploying machine learning models. It leverages Custom Resource Definitions (CRDs), controllers, and operators to offer a declarative and uniform interface for model serving, simplifying the operational complexities traditionally associated with such tasks.
Beyond its core functionalities, KServe inherits all the advantageous features of Kubernetes, including high availability (HA), efficient resource utilization through bin-packing, and auto scaling capabilities. These features collectively ensure that KServe can dynamically adapt to changing workloads and demands, guaranteeing both resilience and efficiency in serving machine learning models at scale.
KServe on Confidential Containers (CoCo)
In the diagram below we can see that we are running the containers hosting models in a confidential computing environment using CoCo. Integrating KServe with CoCo offers a transformative approach to bolstering security in model-serving operations. By running model-serving containers within the secure environment provided by CoCo, these containers gain memory protection. This security measure ensures that both the models and the sensitive data they process, including query inputs and inference outputs, are safeguarded against unauthorized access.
Such protection extends beyond external threats, offering a shield against potential vulnerabilities posed by infrastructure providers themselves. This layer of security ensures that the entire inference process, from input to output, remains confidential and secure within the protected memory space, thereby enhancing the overall integrity and reliability of model-serving workflows.
Takeaways
Throughout this exploration, we’ve uncovered the pivotal role of Confidential Containers (CoCo) in fortifying data protection, particularly for data in use. CoCo emerges as a comprehensive solution capable of mitigating unauthorized in-memory data access risks. Model-serving frameworks, such as KServe, stand to gain significantly from the enhanced security layer provided by CoCo, ensuring the protection of sensitive data and models throughout their operational life cycle.
However, it’s essential to recognize that not all components must operate within CoCo’s protected environment. A strategic approach involves identifying critical areas where models and data are most vulnerable to unauthorized access and focusing CoCo’s protective measures on these segments. This selective application ensures efficient resource utilization while maximizing data security and integrity.
Further
In the next blog we will see how to deploy KServe on Confidential Containers for memory protection.
This blog is a transcription of the talk we gave at Kubecon EU 2024. You can find the slides on Sched and the talk recording on YouTube.
New Blog | Securing access to any resource, anywhere
Zero Trust has become the industry standard for safeguarding your entire digital estate. Central to Zero Trust is securing identity and access, which is essential for protecting resources, enforcing security policies, and ensuring compliance in today’s dynamic digital landscape.
With Microsoft Entra, we help our customers create a trust fabric that securely connects any trustworthy identity with anything, anywhere. Driven by the adoption of multicloud strategies in the era of AI, customers are encountering more challenges in securing access, not just across multiple public and private clouds, but also for business apps and on-premises resources. Unlike securing access for humans or within a single environment, where customers have established methods to address challenges, securing access anywhere is more complicated due to the dynamic nature of today’s digital estate, and the tools to address emerging challenges need further development. To support our customers, we unveiled our vision for securing access in any cloud at this year’s RSA conference. Today, we’re excited to dive deeper into our future investment aimed at securing access to cloud resources from any identity across diverse cloud environments.
Managing multicloud complexity in a rapidly evolving digital environment
Organizations are grappling with substantial challenges in navigating cloud access complexities, often citing issues like fragmented role-based access control (RBAC) systems and compliance violations. These challenges are compounded by the growing use of cloud services from various cloud service providers, and several notable breaches have been attributed to over-permissioned identities. Our customer engagements reveal that organizations are currently using 7 to 8 products, including privileged access management (PAM) and identity governance and administration (IGA) solutions, to tackle multicloud access challenges. Despite their efforts, such as toggling across multiple solutions and increasing their workforce, many organizations still struggle to achieve full visibility into their cloud access.
Our 2024 State of Multicloud Security Risk Report underscores these ongoing challenges arising from over-permissioned human and workload identities. Analysis of past year usage data from Microsoft Entra Permissions Management confirms that the complexities in multicloud environments primarily stem from rapid identity growth and over-provisioned permissions (learn more), including:
Over 51,000 permissions that can be granted to identities – 50% of which are identified as high-risk permissions.
Only 2% of those 51,000 permissions were used.
Of the 209M identities discovered, more than 50% are identified as super identities that have all permissions to access all resources.
Read the full post here: Securing access to any resource, anywhere
By Joseph Dadzie