Month: October 2024
Boost digital event engagement with Voting and Archiving in Teams Q&A
Managing and delivering effective town halls is key to enabling two-way dialogue with leaders in today’s hybrid world. Microsoft Teams Q&A makes meetings like town halls, webinars, or training sessions more interactive and productive. We’re excited to announce two new features that help town hall admins better manage the Q&A experience.
Voting: Prioritize Key Questions
Voting lets attendees upvote important questions, moving them to the top of the Q&A feed. Here are the options for using this feature:
Automatic: Voting is on by default but can be turned off.
Fair Voting: Each attendee gets one vote per question.
Sorting: Both attendees and organizers can sort questions by votes or activity.
Archiving: Keep the Feed Clean
In recurring town halls, old questions can clutter the Q&A feed. Archiving lets organizers move older or irrelevant questions to a separate feed, keeping the current feed focused while voting highlights the most important questions.
Key capabilities:
Selective: Organizers can archive all or specific questions.
Visibility: Both attendees and organizers can view archived questions via the filter.
Conclusion
Voting and Archiving in Teams Q&A ensure meetings are interactive, focused, and productive. Use these features to keep your audience engaged and your Q&A organized. Learn more about these features at https://aka.ms/GetQnA
Securing Hardware and Firmware Supply Chains
In the modern cloud data center, ensuring the authenticity, integrity, and security of hardware and firmware is paramount. Firmware is the lowest level software that runs on every chip in a server, e.g., CPU, GPU, storage controller. Since firmware provides programming interfaces that higher-level software builds upon, one could think of the hardware as bedrock and the firmware as the foundation upon which the rest of the stack is built.
Microsoft works with industry partners through the Open Compute Project (OCP) to define open hardware and firmware specifications that benefit the entire industry. One recent example, Caliptra, provides an open and transparent implementation of a Root of Trust for any ASIC. The Caliptra Root of Trust provides an unforgeable unique identity for each device, as well as a way to validate the authenticity of all the firmware running on the device. As industry partners start to deliver products with Caliptra as the root of trust next year, their customers will have increased confidence in the security and trustworthiness of the hardware they deploy.
With the Caliptra effort well underway, Microsoft and the OCP security community turned to improving the trustworthiness of the firmware serving as the foundation for the software environment. The result of this effort was the OCP Security Appraisal Framework and Enablement (SAFE) program launched in October of 2023 at the OCP Global Summit. This framework ensures security compliance for cloud hardware and firmware. Simply put, the goal of SAFE is to build a better foundation.
OCP SAFE
The OCP SAFE program defines a comprehensive framework to standardize security reviews of the code and hardware designs that modern-day computers run on. The SAFE framework defines three scopes, each providing greater assurance against increasingly sophisticated adversaries. The third scope assumes a well-funded adversary with a sophisticated lab.
The result of a SAFE review, in addition to more secure firmware and hardware, is a cryptographically signed Short Form Report (SFR). This report, published in OCP’s GitHub, is a JSON document containing hashes of the reviewed firmware, a list of any remaining security issues, and metadata identifying the vendor and review provider. The schema is designed to be easily understood and inspected by both humans and programs. The SFR format allows an organization (a Cloud Service Provider, an enterprise, or an end user) to easily encode their own security policy into a simple program, such as: “only allow firmware that has been security reviewed and contains no security issues of medium or higher severity”. This allows automated and consistent enforcement of security policy at deployment, boot, and runtime.
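To make this concrete, below is a minimal sketch of such a policy program in Python. The SFR field names used here (firmware_hashes, open_issues, severity) are illustrative assumptions rather than the published OCP SAFE schema, and signature verification of the report is omitted for brevity.
import json

# Hypothetical policy: only allow firmware whose Short Form Report (SFR)
# covers the firmware hash and lists no unresolved issue of medium or
# higher severity. Field names are assumptions for illustration only.
ALLOWED_SEVERITIES = {"none", "low"}

def firmware_allowed(sfr_path: str, firmware_hash: str) -> bool:
    with open(sfr_path) as f:
        sfr = json.load(f)
    # The SFR is expected to bind to specific firmware images by hash.
    if firmware_hash not in sfr.get("firmware_hashes", []):
        return False
    # Reject if any remaining issue exceeds the allowed severity levels.
    for issue in sfr.get("open_issues", []):
        if issue.get("severity", "").lower() not in ALLOWED_SEVERITIES:
            return False
    return True

# Example call (assumes a locally downloaded report file):
# firmware_allowed("sfr.json", "sha384:...")
A policy like this can run at deployment, at boot, or periodically at runtime, which is what enables the automated and consistent enforcement described above.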
Since the launch of SAFE in 2023, the community has grown. The SAFE Technical Advisory Committee (TAC) has approved five additional Security Review Providers (SRPs), bringing the total to eight review providers. Before being accepted as an SRP, all firms must demonstrate their security expertise, independence, and commitment to improving security. All five of the newly added review providers joined at the request of their existing customers, who had already been actively working with security firms to ensure their products were secure. The SAFE program gives these device vendors a way to publicly demonstrate the security work they have been doing for years, bringing it to a common basis of quality that can be accepted across the industry.
“As our industry moves from ‘one and done’ security testing into a state of continuous security validation throughout the full lifecycle of a device, OCP SAFE’s holistic and transparent approach uniquely validates device vendors are using secure development and build processes, consistently adhering to regulatory requirements, and by engaging experienced security reviewers and contributors like IOActive, assure the physical device, firmware, drivers, and software components down to its source code have thoroughly met or exceeded OCP SAFE’s modern cloud security standards.” - Gunter Ollmann, CTO, IOActive
Security Assurance
While Caliptra enabled devices and SAFE reviewed firmware each improve security independently, when combined they improve the transparency and trustworthiness of the system. This is achieved by having cryptographically verifiable measurements (hashes) of each layer of firmware running in the system linked to the attestations made by an open source (and SRP reviewed) silicon root of trust. The combination allows end users to independently verify that the firmware in their computing environment has undergone a rigorous security audit. This flow is illustrated below.
Figure 1 Flow for verification of firmware configuration and security assurances
There are multiple ways to expose the runtime attestations from a Caliptra enabled device, but the simplest is to share the measurements from an SPDM query or, in the case of Confidential Computing, a Trusted Security Manager (TSM) report. This query returns a listing of the hashes of the firmware loaded into the system.
After the user has obtained the attestation report, they can compare the hashes in the report with the Reference Integrity Manifest (RIM) file provided by the device vendor. The RIM file contains measurements for a collection of firmware the vendor has validated as genuine, suitable for the device, and compatible with the other firmware listed in the RIM file.
Once the attestation report has been validated against the RIM file, the user has confidence that their device is only running firmware developed by the device provider. The next step is to verify that the runtime measurements match those in the published SAFE Short Form Reports. The report’s authenticity can be verified by checking the cryptographic signature on the report. If the measurements match, the user knows that the firmware they are running was reviewed by the author of the Short Form Report. The user should review any remaining issues in the published report to ensure it meets their own security requirements. In addition to cryptographically binding the SFR to a specific firmware version, the SFR acts as a secondary signature on the firmware. This additional layer provides protection against a vendor’s code signing process being compromised.
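The checks in this flow can be summarized in a short sketch. This is a simplified illustration that assumes the attestation report, the RIM, and the already signature-verified Short Form Reports have been parsed into plain Python structures; it is not the actual Caliptra or OCP SAFE tooling, and the field names are assumptions.
from typing import Iterable, Mapping

def verify_firmware_stack(attested: Iterable[str],
                          rim_measurements: set,
                          sfr_by_hash: Mapping) -> bool:
    """Check attested firmware measurements against the RIM and SAFE reports.

    attested         -- measurement hashes from the SPDM / TSM attestation query
    rim_measurements -- hashes published in the vendor's RIM file
    sfr_by_hash      -- signature-verified Short Form Reports keyed by firmware hash
    """
    for measurement in attested:
        # Step 1: the firmware must be one the vendor validated as genuine.
        if measurement not in rim_measurements:
            return False
        # Step 2: the firmware must be covered by a security-reviewed SFR.
        sfr = sfr_by_hash.get(measurement)
        if sfr is None:
            return False
        # Step 3: apply the local policy to any issues remaining in the report.
        if any(issue.get("severity", "").lower() not in ("none", "low")
               for issue in sfr.get("open_issues", [])):
            return False
    return True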
Hardware Supply Chain Provenance
The previous flow combined SAFE with Caliptra’s measurement capabilities to improve overall system security. Microsoft has developed another flow that leverages the cryptographic identity capabilities provided by Caliptra to ensure only authentic hardware is delivered to Azure. This flow tracks the unique identity of every device through the entire lifecycle, beginning with chip manufacturing and continuing through assembly, system integration, deployment, operation, and secure decommissioning in Azure.
Azure manages hardware identities in a management system known as Hardware Key Management Services (HKMS). At each stage in the manufacturing process, HKMS collects the public portion of the device identities generated inside Caliptra; these identities are IDEV and LDEV Certificate Signing Requests (CSRs). By collecting the public portion of these cryptographic identities, HKMS can validate the Hardware Bill of Materials (HBOM) as well as verify device provenance and manufacturing records before signing the CSRs and endorsing the device as fit for deployment in Azure. Then, throughout the operational lifecycle of the device, the LDEV identity is renewed or revoked. Additional details on Caliptra identities can be found in the Caliptra specification.
Figure 2 Hardware Key Management Service
The backing store for HKMS is an Azure Confidential Ledger, which leverages the Azure confidential computing platform and the Confidential Consortium Framework. These technologies ensure the security of the backing store and provide non-repudiation and immutable auditability of HKMS-managed hardware identities.
Supply Chain Transparency
Recognizing the widespread need for improving supply chain security, Microsoft engaged with industry partners to leverage our experience developing supply chain assurance processes (including the Caliptra and OCP SAFE technologies described above) into a broader framework that could be widely adopted. The result of this ongoing effort has been captured in the Supply Chain Integrity, Transparency, and Trust (SCITT) Internet Engineering Task Force initiative. This proposal describes the processes for managing the compliance and transparency of goods and services across supply chains. SCITT supports the ongoing verification of services and devices, where the authenticity of entities, evidence, policy, and artifacts can be assured and the actions of entities can be guaranteed to be authorized, non-repudiable, immutable, and auditable.
Figure 3 SCITT Framework
The SCITT framework builds on technologies like Caliptra and OCP SAFE to track devices and firmware throughout their chains of custody – in effect, securing the entire supply chain. By using transparent roots of trust, like Caliptra, and incorporating evidence such as manufacturer SBOMs, Reference Integrity Manifests (RIMs), and OCP SAFE audit reports, device integrity can be verified across the hardware supply chain and throughout its operational lifecycle. The combination of these transparent security technologies ensures that hardware and firmware are always authorized, non-repudiable, and immutably auditable.
The Confidential Consortium Framework (CCF) is used to provide the SCITT eNotary immutable ledger. Coupling this with a confidential signing service that signs only artifacts with valid claims on the ledger enforces non-repudiable, immutable, and auditable end-to-end supply chain claims.
Figure 4 Hardware Transparency Services
The SAFE framework, combined with Caliptra-enabled devices, significantly enhances the security, transparency, and trustworthiness of our systems. By leveraging the cryptographically-signed Short Form Reports of the SAFE framework, organizations can automate and consistently enforce security policies at deployment, boot, and runtime. Building upon the identity provided by Caliptra we can further secure the entire lifecycle of our devices, from manufacturing to deployment and end of use. As Microsoft continues to innovate and improve our security measures, we remain committed to providing robust and reliable solutions that meet the evolving needs of our customers.
Liquid Cooling in Air Cooled Data Centers on Microsoft Azure
With the advent of artificial intelligence and machine learning (AI/ML), hyperscale datacenters are increasingly accommodating AI accelerators at scale, demanding higher power at higher density than is customary in traditionally air-cooled facilities.
As Microsoft continues to expand our growing datacenter fleet to enable the world’s AI transformation, we are faced with the need to develop methods for utilizing air-cooled datacenters to provide liquid cooling capabilities for new AI hardware. Additionally, increasing per-rack density for AI accelerators necessitates the use of standalone liquid-to-air heat exchangers to support legacy datacenters that are typically not equipped with the infrastructure for direct-to-chip (DTC) liquid cooling.
A solution: standalone liquid cooling heat exchanger units.
Microsoft’s Maia 100 platform marked the first introduction of a liquid cooling heat exchanger in existing air-cooled data centers for direct-to-chip liquid cooling. Since that time, we have continued to invest in novel cooling techniques to accommodate newer, more powerful AI/ML processors. Today at OCP 2024, we are sharing contributions for designing advanced liquid cooling heat exchanger units (HXU). By open sourcing our design approach through the Open Compute Project, we hope to share our HXU development work to enable closed-loop liquid cooling in AI datacenters across the entire computing industry.
Heat Exchanger Unit Design Principles
Our designs for HXUs focus on enabling advanced cooling capacity for modern AI processors, improving operating efficiency to reduce power demand, and enabling AI accelerator racks to operate in traditionally air-cooled data centers.
Microsoft’s vision for enhanced effectiveness centers on using the same chilled air that legacy datacenters are already providing for air-cooled platforms. Our engineering spec for HXUs targets the relative liquid and air flow rates required to supply the cooling liquid at the required temperature to the IT equipment.
The design principles for HXUs are the result of a close partnership with Delta and Ingrasys. Working with these partners has helped us evolve our approach, including a double-wide rack design to increase heat dissipation capacity and specialized packaging to ensure leak-free transport. Envisioning HXUs with a modular design allows field servicing of key components, including pumps, fans, filters, printed circuit board assemblies, and sensors. Quick disconnects and strategically placed leak detection ropes, along with drip pans that guide liquids to the base of an HXU, help mitigate and contain liquid leaks. Fans are placed at the rear to avoid pre-heating within an HXU and to eliminate entrainment issues in the cold aisle. The modular fluid connections between HXUs and server racks allow for various configurations.
We welcome further collaboration from the broader OCP community in enabling the future of datacenter power and cooling innovation with state-of-the-art infrastructure engineering capabilities.
blog about: A Beginner’s Guide to Using Microsoft Kaizala
In today’s digital age, organizations are seeking powerful tools to streamline communication and collaboration. Microsoft Kaizala is one such tool, designed to meet the needs of modern work environments, especially in scenarios where team members are spread across various locations, including on-the-go workers, frontline employees, and distributed teams. Launched as part of Microsoft’s Office 365 ecosystem, Kaizala offers an easy-to-use, mobile-first platform that facilitates instant communication, task management, and data sharing.
In this blog, we will explore how to use Microsoft Kaizala to improve collaboration and streamline workflows in your organization.
https://dellenny.com/a-beginners-guide-to-using-microsoft-kaizala/
How to assign a task with a follow up date which triggers a notification in Loop?
Hi – I’m creating an internal Wiki / Knowledge base for my team using Loop.
I’ve inserted a table similar to the below and would like a task to be assigned to the owner when the review date passes. Ideally I would like them to be notified via email / teams. Please can anyone advise how I do this?
Article | Owner | Review date
Azure Private DNS Resolver – Need Help
Hi All,
We are planning to implement Azure DNS Private Resolver to replace our DNS forwarder and have a few questions before we do:
1. Does Azure Private DNS Resolver work with an SD-WAN / vWAN network model?
2. Does it require creating an Azure DNS zone for the private resolver?
We require the Azure DNS Private Resolver for forwarding purposes only. Our current DNS forwarder VM running BIND works as follows:
1. By default, all the VNets’ DNS IPs point to the DNS forwarder VM (BIND server) for DNS resolution. 2. The DNS forwarder in the region forwards traffic to the appropriate domain controllers based on the queried domain; there are specific rules for each domain controller. We need similar behavior from Azure Private DNS Resolver. Will this work using the DNS Private Resolver? Appreciate any help with this.
Query insights
Question: How can I identify unused data in a modern data platform built with Azure Synapse and the medallion architecture using Log Analytics?
I’m working with a client who has built a modern data platform based on the medallion architecture, leveraging Azure Synapse and Azure Storage Accounts. Users access the data in various ways within Synapse workspaces: some through Python scripts, others through serverless SQL endpoints, and others via dedicated SQL pools (utilizing views and stored procedures).
We log a significant amount of information via Log Analytics, which means that all select statements executed on the data are essentially logged. The client now wants to identify which data is not actively used, in order to reduce storage costs by removing unused datasets. In a traditional SQL data warehouse, the Query Store could be used for this, but in this platform, we only have access to the log data stored in Log Analytics.
My question is: How can we, based on the logs in Log Analytics, determine which data (tables, views, etc.) is processed through the various layers of the medallion architecture but not actually used?
The goal is to remove unused data to save costs.
Some additional questions:
Is there a way to analyze usage patterns of datasets based on raw logs in Log Analytics?
Are there any existing tools or KQL queries that could help identify which datasets have been inactive over a certain period?
Could a metastore tool, such as Azure Purview, play a role in identifying unused datasets? If so, how can this be integrated with our existing platform?
Any suggestions or insights would be greatly appreciated!
Azure SQL Managed Instance Cross Subscription Database Restore using Azure Data Factory
Azure Data Factory (ADF) can be used to set up automated, continuous, or on-demand restoration of Azure SQL Managed Instance (MI) databases between two separate subscriptions.
Before you start the database restore process, make sure to turn off TDE. For those who need TDE enabled, check out the ABC blog for guidance.
Prerequisites
Azure SQL Managed Instances located in two distinct subscriptions
Azure Blob Storage in the same subscriptions where the SQL Managed Instances are located
Azure Data Factory (ADF) instance
Permissions required
To perform backup and restore operations, the SQL Managed Instance managed identity needs the Contributor and Storage Blob Data Contributor roles on the blob storage.
To transfer backup files between the two storage locations, the ADF managed identity needs the Storage Blob Data Contributor role on the blob storage.
To carry out backup and restore operations, the ADF managed identity needs ‘sysadmin’ permissions on the SQL Managed Instances.
Note: We utilized Managed Identity for permission granting. Should you employ a different ID, ensure it has the same permissions assigned.
Step: 1
Create a server-level credential. A credential is a record that contains the authentication information required to connect to a resource outside SQL Server.
USE master
GO
CREATE CREDENTIAL [https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>]
WITH IDENTITY = 'Managed Identity'
GO
Validate that the credential was created successfully by querying sys.credentials in the master database.
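If you prefer to script this check, a minimal sketch using Python and pyodbc is shown below; the connection string and login are placeholders, and the query simply confirms that a credential named after the container URL now exists on the instance.
import pyodbc

# Placeholder connection string -- substitute your managed instance endpoint
# and an authentication method appropriate for your environment.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<your-mi-name>.<dns-zone>.database.windows.net;"
    "DATABASE=master;UID=<user>;PWD=<password>"
)
cursor = conn.cursor()
# Server-level credentials are listed in sys.credentials; the one created
# above should appear with the container URL as its name.
for name, identity in cursor.execute(
        "SELECT name, credential_identity FROM sys.credentials"):
    print(name, identity)
conn.close()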
Step: 2
Create ADF linked service connections for both the SQL Managed Instances and the storage accounts.
Create ADF datasets using the SQL Managed Instance and storage account linked services.
Step: 3
If you’re utilizing a private endpoint, make sure to set up an ADF integration runtime and a managed private link; follow Create Azure Data Factory Managed Private Links.
Step: 4
Create an ADF pipeline to take a database backup from the source.
Split the backup into multiple files for a faster backup.
Use the script below to take a copy-only database backup.
Use a Script activity to execute the backup script using the source SQL MI linked service.
USE master
GO
BACKUP DATABASE [@{pipeline().parameters.source_database_name}]
TO URL = N'https://<storageaccountname>.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_01.bak',
URL = N'https://<storageaccountname>.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_02.bak',
URL = N'https://<storageaccountname>.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_03.bak',
URL = N'https://<storageaccountname>.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_04.bak',
URL = N'https://<storageaccountname>.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_05.bak',
URL = N'https://<storageaccountname>.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_06.bak',
URL = N'https://<storageaccountname>.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_07.bak',
URL = N'https://<storageaccountname>.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_08.bak'
WITH COPY_ONLY, MAXTRANSFERSIZE = 4194304, COMPRESSION, STATS = 10
Allow a minute for the backup to transfer to blob storage, adjusting the duration to meet your specific needs.
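Rather than relying on a fixed delay, one option is to poll the container until all backup stripes are present. Below is a minimal sketch using the azure-storage-blob and azure-identity packages; the account, container, and database names are placeholders, and the identity used must have read access to the container.
import time
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Placeholder values -- substitute your storage account, container, and
# source database name.
ACCOUNT_URL = "https://<storageaccountname>.blob.core.windows.net"
CONTAINER = "databaserefresh"
DATABASE = "<source_database_name>"
EXPECTED_STRIPES = 8  # the backup above is striped across 8 .bak files

service = BlobServiceClient(ACCOUNT_URL, credential=DefaultAzureCredential())
container = service.get_container_client(CONTAINER)

while True:
    stripes = [b.name for b in container.list_blobs(name_starts_with=DATABASE + "_")]
    if len(stripes) >= EXPECTED_STRIPES:
        print("All backup stripes present:", stripes)
        break
    print(f"{len(stripes)}/{EXPECTED_STRIPES} stripes found, waiting...")
    time.sleep(30)
The same approach can be applied after the copy in Step 5 by pointing at the target storage account instead.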
Step: 5
Create an ADF pipeline to copy the database backup files from the source storage account to the target storage account.
Use a Copy activity to copy the backup files from the source storage account to the target storage account.
Allow a minute for the backup to transfer to blob storage, adjusting the duration to meet your specific needs.
Step: 6
Create an Azure Data Factory pipeline to restore the database backup from the designated storage account to the target SQL Managed Instance.
Use the script below to restore the database from the designated storage account.
Use a Script activity to execute the restore script using the target SQL MI linked service.
USE [master]
RESTORE DATABASE [@{pipeline().parameters.target_database_name}] FROM
URL = N'https://<storageaccountname>.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_01.bak',
URL = N'https://<storageaccountname>.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_02.bak',
URL = N'https://<storageaccountname>.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_03.bak',
URL = N'https://<storageaccountname>.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_04.bak',
URL = N'https://<storageaccountname>.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_05.bak',
URL = N'https://<storageaccountname>.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_06.bak',
URL = N'https://<storageaccountname>.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_07.bak',
URL = N'https://<storageaccountname>.blob.core.windows.net/databaserefresh/@{pipeline().parameters.source_database_name}_08.bak'
Step: 7
Set up an additional pipeline to remove orphaned database users, provision user access, or carry out any extra tasks needed after a restore, using the suitable activity.
Step: 8
Create an ADF pipeline to execute Step 4 > Step 5 > Step 6 > Step 7 in sequence.
Set up parameters for both the source_database_name and target_database_name to enable dynamic operation of the pipeline across different databases.
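As an illustration, the Step 8 pipeline can also be started on demand from code with the Azure Data Factory management SDK (azure-mgmt-datafactory); the subscription, resource group, factory, pipeline, and database names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Placeholder names -- substitute your own subscription, resource group,
# data factory, and pipeline names.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
FACTORY_NAME = "<data-factory-name>"
PIPELINE_NAME = "<restore-orchestration-pipeline>"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Pass the database names as pipeline parameters so the same pipeline can
# restore different databases on demand.
run = client.pipelines.create_run(
    RESOURCE_GROUP,
    FACTORY_NAME,
    PIPELINE_NAME,
    parameters={
        "source_database_name": "<source database>",
        "target_database_name": "<target database>",
    },
)
print("Pipeline run id:", run.run_id)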
360 – Are Admins Able to Add Raters On Behalf of Subjects?
Prior to the migration to Viva, the documentation for Glint 360 assessments suggested it was possible to administratively add raters. I don’t see a way to do that in the program set up. Is this possible?
We had been looking forward to letting subjects search for and find their own raters, unlike our former tools. We did one test with the Entra/Azure search, and we decided it was going to frustrate our leaders.
Unit and Work question
I am an electrical contractor using Microsoft Project to build schedules for upcoming work. Is there a way to add units and a labor rate to it?
For example, I have 2600′ of conduit to install in an area of work, and the labor rate for my conduit task is 0.95 hrs per 1′ of conduit. Can I set up a column to hold the total footage (units) and have the labor rate feed into Work, to give me a rough duration in days per area we work in?
Hope this all makes sense; I'm new to Microsoft Project and still learning.
Adding a range of cells based on a criteria and date range
Hi,
Can anyone please guide me on summing the cell range AA2:AA100 when cells I2:I100 have the value “ROBOT” and cells AD2:AD100 contain dates in the range 01/07/2020 to 31/07/2020? The date format is DD/MM/YYYY.
I used the following formula but it is not working:
=SUMIFS(AA2:AA100, I2:I100, "ROBOT", AD2:AD100, ">=01/07/2022", AD2:AD100, "<=31/07/2022")
Is it possible to access Dataverse and Microsoft Graph api by single token using AAD auth?
Azure AI for an API Platform
We are managing an API Platform in our company.
We collect metrics in ELK (deployed on-premises), and we have a Confluence wiki page with our public documentation.
We would like a chatbot that can be trained on the structured metrics (JSON, CSV, Prometheus) stored in ELK and on the unstructured content in our wiki page.
The Chatbot should be able to answer questions like:
(Source: WIKI page)
– How can I publish an API with mTLS enabled
– How can I authorize a client certificate for my API
– How can I specify the RPS for my API
– and so on and so forth
(Source: ELK)
– Plot the request of the API “XX” for the last month
– Can you predict the API calls trend of the next month for the API “XXX”
– Give me a list of the client IP that accessed the API “YYY” yesterday
– and so on and so forth
In the future, we may want the chatbot to be able to perform some basic automated actions (by contacting our self-service API), such as:
– Allow this client certificate to access this API
– Change the RPS for the API ZZZ to 10 rps/s
– and so on and so forth
– What Azure AI Services would you recommend to start looking into? Which AI models? Which Azure resource?
– How can we train and feed the model from ELK? Do we have to export the data from ELK daily and store it in an Azure Storage account, or can we instruct the specific Azure AI service to connect to our ELK endpoint (or a proxy API) to fetch the data?
Thank you
Cannot select Oracle Database in Excel
In Excel, I have always had the option to get data from an Oracle database, but as of today that option no longer appears in the database list.
Why has this option disappeared, and how can I get it back?
How can I make an Azure Logic App only execute during a certain window of time?
I have a logic app that retrieves all security incidents from Microsoft Sentinel and sends them to a Teams channel. Notifications in Microsoft Teams are always received between 9 PM and 5 AM. I want to configure the logic app in Microsoft Azure so that it sends incidents only within the time range of 7 AM to 5 PM, and only on working days, from Monday to Friday.
I used the Recurrence trigger itself. As shown below, I have it set for every 15 minutes, Monday to Friday, between 7 AM and 5 PM.
I even tried putting the Recurrence trigger in different positions in my logic app, but I get the same error.
Guidance Needed on Microsoft 365 Subscription for Teams App Development
I am currently developing an app for Teams and have set up accounts in the Azure portal, joined the developer program, and accessed the Teams developer portal.
However, when attempting to log in to the Teams admin center, an error occurs:
Internal calls to PLS Service have failed. We can’t find the tenant region for tenantId. Please try again later.
When trying to join the developer program, the message indicated: `Thank you for joining. You don’t currently qualify for a Microsoft 365 Developer Program sandbox subscription.` Research revealed that Microsoft discontinued the free subscription.
A subscription is needed to access the Microsoft Teams admin center and for Teams app development. I can see various Microsoft 365 plans for businesses, like Basic and Standard, as well as licenses available in the Microsoft 365 admin center. I am considering the Microsoft Teams Essentials license. However, there is confusion regarding whether to buy a subscription or a specific license, especially since active users in the 365 admin center are showing as unlicensed.
Could anyone advise on which plan or license is necessary for Teams app development?
3 Innovative Ways Developers Are Building with AI
AI is reshaping industries and unlocking new opportunities for startups and enterprises to grow. Azure AI is at the forefront of this transformation, offering a comprehensive suite of tools that empowers you to build, manage, and deploy AI applications at scale. Programs like Microsoft for Startups Founders Hub which include access to Azure AI models and tools enable founders to turn innovative ideas into reality.
This past April, Microsoft hosted a multimodal-themed Generative AI Hackathon on Devpost, where developers showcased their projects built with Azure AI. Participants of the hackathon used Azure AI to create innovative solutions across education, accessibility, fashion, media, and more. There was a wide range of use cases addressing real-world challenges responsibly:
1. AI as a Catalyst for Enhanced Learning
Project: ChatEDU | Industry: Education
ChatEDU exemplifies how AI can transform education by creating personalized, interactive learning experiences. The project moves beyond simple automation, offering a dynamic copilot that evolves with students, fostering curiosity and skill-building.
“Choosing Azure AI was a game-changer for us. Its robust suite of tools enabled us to transform diverse educational data materials into an intuitive and interconnected product. This has been pivotal in creating ChatEDU, a personalized, multimodal educational tool that enhances student learning experiences” – Jason Hedman | Founder | ChatEDU
The solution uses Azure AI Services like Azure Document Intelligence to understand and process knowledge files uploaded by students. This information from multiple data sources (plain text, images, videos, PDFs) is stored in Azure Cosmos DB. These files are then analyzed to extract topics and connections to create an interactive learning pathway that students can utilize to work towards learning objectives.
2. AI for Accessibility and Inclusion
Projects: GARVIS, Gino.AI | Industries: Technology, Media
GARVIS redefines accessibility by empowering users to navigate visual tasks without relying on verbal descriptions. The app utilizes models from Azure OpenAI Service to detect and analyze objects and environments, replacing text-based instructions with intuitive visual demonstrations. Using the Unity Engine paired with Microsoft Copilot to enhance coding productivity, this application makes advanced mixed reality assistants available to all. A complex space simplified with the ease of using multiple Microsoft products together.
Another example of seamless integration is Gino.AI, which makes information accessible and consumable in a way that caters to your learning style. Azure AI Search is used to index and efficiently query information from the ‘Mind Base’ knowledge space, and with Azure AI Vision, the system can extract text from images, enriching the content available for users. To ensure responsible AI use, Azure AI Content Safety filters out any harmful material from chatbot responses, summaries, podcast scripts, and other sources.
3. Leveraging AI to Maximize Existing Data
Projects: FarmFundAI, Winewiz, Docsy, Therapute | Industries: Agriculture, Retail, Healthcare
FarmFundAI demonstrates how AI can unlock new value from existing information available, helping farmers secure funding by showcasing sustainability and crop yield potential. By analyzing satellite and drone imagery via an app built with Azure AI Studio and Azure Functions, users can uncover insights like crop health that are crucial for investment decision-making.
This concept of utilizing existing publicly available data is also shown through Winewiz which simplifies the process of selecting and gifting wines by offering AI-powered conversational consultations and gift card creation. Leveraging Azure AI Speech, it provides interactive communication through the Speech-to-Text service, enhancing the user experience and making the selection process intuitive. Similarly, Docsy saves time for healthcare professionals and enhances the accuracy of patient records via audio data that is processed through Azure AI Speech for transcription. Within the same healthcare realm, Therapute is a physiotherapy platform that uses Azure Machine Learning to offer personalized rehabilitation guidance 24/7.
Building Multimodal and Cross-Domain AI Applications
As shown in the use cases above, Azure AI is versatile, supporting multimodality across industries and domains that require diverse data types, such as text, speech, images, and video. Leveraging the Azure AI model catalog is one way to make the most of this flexibility, where you can select the right AI models for specific needs and constraints such as price, performance, and customization. Multimodal models like GPT-4o via Azure OpenAI Service and Microsoft’s very own Phi-3-vision are also part of the model catalog that you can try out today.
It’s exciting to see that developers are utilizing Azure AI in conjunction with other Microsoft products. Talk to Listen is an example of this, where developers used Azure AI with GitHub Copilot to create a high-quality app bringing characters to life, all while saving time. We are thrilled to see what developers will continue to build and innovate.
Feeling inspired to start your own project? Start your journey by exploring helpful tutorials and resources:
Learn
Get started with our Generative AI for Beginners course
Get started with Phi-3 using our cookbook
Learn about generative AI use cases on Azure
Learn how to use Azure AI Studio on Microsoft Learn
Connect
Join a passionate community of builders on our Azure AI Discord
Learn, Connect, and Build with us on Microsoft Reactor
Have a startup? Sign up for Microsoft for Startups for exclusive access to mentors, Azure credits, and more.
Build
Explore and experiment with Azure AI in GitHub Models
Get started faster with templates
Board International and Transmit Security offer transactable partner solutions in Azure Marketplace
Microsoft partners like Board International and Transmit Security deliver transact-capable offers, which allow you to purchase directly from Azure Marketplace. Learn about these offers below:
Board – The Enterprise Planning Platform: The Board Enterprise Planning platform turns complex data into better decisions by using AI, analytics, and customized solutions for supply chains, retail, manufacturing, and consumer packaged goods. Board integrates with Microsoft Azure for robust hosting, secure data, and advanced AI.
Transmit Security: Enable dynamic anti-fraud and identity security with Transmit Security, enhancing Microsoft Entra ID (formerly Azure Active Directory). Transmit Security performs continuous risk and trust analysis powered by machine learning while also automating photo ID and selfie analysis to verify identities.
Feedback on Managing Town Halls as Organizer/Co-Organizer
Below are some suggestions for improving the management experience for organizers and co-organizers in Town Hall events:
Queuing Option for Screen Sharing
There should be a controlled queuing system for screen sharing to prevent anyone from starting to share their screen randomly.
End Town Hall Button
An option to end the event while keeping event members (organizers, presenters, and co-organizers) connected for post-event discussions.
Separate Attendee Joining Time
There should be an option to set a different joining time for attendees, independent of the event group’s start time, to allow the event group some preparation time.
Q&A Visibility and Permissions
Questions under review should be visible to presenters as well.
Attendees should have view-only access to published questions and should not be able to respond to them.
Presenter Management
Co-organizers should have permission to add or remove presenters during the event.
Control Rights for Organizers and Co-Organizers
Only organizers and co-organizers should have control over actions like bringing presenters on/off-screen, switching views, and managing event controls, similar to the Presenter/Producer views in Live Events.
Improved Recording Quality
The recording quality needs significant improvement to match the standard of Live Events.
These enhancements will greatly improve the overall efficiency and experience of managing Town Hall events.
PL-900 exam
I recently cleared my PL-900 certification within 1 week of preparation on my first attempt with a score of 930. I hope my experience can help you in your preparation journey.
I took an online course and watched several YouTube videos related to this certification. I also reviewed various FAQs to gain a deeper understanding.
The most significant help in my preparation was practicing exam questions from ITExamsTest, which I highly recommend. These tests contain verified answers that help you understand the concepts in depth, and they are very similar to the actual exam. I found that around 80% of the questions appeared in my real exam. These PL-900 questions come with detailed explanations for each question, which was invaluable for my preparation. They also offer exam notes highlighting important topics, which is also quite helpful.