Tag Archives: microsoft
Beginner – VBA Copying data from sheet & range from several workbooks to a master via Teams
Hi experts!
I have read several forum posts on using macros to copy a specific range from one workbook to another. However, nothing I have read seems to match my requirements.
The scenario is I have 14 workbooks all identical in structure hosted in different MS Teams Channels.
I have one master workbook located on my OneDrive.
I want to be able to extract a specific sheet's data in a range into my master workbook. Although the 14 workbooks have the exact same structure, the names are different, i.e. Planner_100.xls, Planner_200.xls etc.
To be more specific, in each of the 14 workbooks I wish to copy Sheet “Data”, range A1:C10 to my master workbook sheet “Summary” as a continuous list.
What is the best approach? Can this even be done when hosting the 14 workbooks in different MS Teams channels?
Should the macro be hosted in the master workbook? Or does each of the 14 workbooks host its own macro?
Thank you for your advice
Any risks to enabling Password Writeback?
Hi Everyone,
I've been trying to configure password writeback in Entra ID so as to enable Azure SSPR. Would enabling writeback in Entra ID Connect (it currently uses password hash sync) introduce any service disruption or risk, either in Azure or on-prem?
How to render column charts in Logic Apps using Run query and visualize results connector
Team,
I have been trying to run a KQL query and render the results in a column chart, but I couldn't, as currently the "Run query and visualize results" connector supports only bar, line and pie charts. Is there any solution for this?
5 ways to dramatically speed up your cloud application teams
Working with application teams and partners developing cloud-native apps on Azure, you quickly learn that developer time is valuable, and that enthusiasm and flow state are critically important.
Whenever an application team has to wait for an environment, for access, for a ServiceNow ticket, a support case, or admin access to install tooling, productivity is dramatically affected; projects can take 2x longer and be of lower quality.
Equally, it's important to have a designed, governed, secure environment when using the public cloud, so your workload teams start right and stay right! This covers all the normal design pillars of a well-architected solution: reliability, security, cost and performance.
Application teams work best when they can select their preferred platform services, tooling, languages and libraries, and, most importantly, reduce their dependencies on external requests and constraints that limit their selection of services. To this end, a platform team's number-one priority should be to work towards a self-service model, removing themselves from the process and constantly unblocking friction points.
Where Platform teams should focus
You don't need to have everything automated from day 1, nor have tooling for everything, but focusing on these 5 crucial elements will result in more impact for everyone in your organization, while avoiding unnecessary tickets/cases and bottlenecks, something I have seen depressingly often.
1. Environment Provisioning
When an application team starts work on a new product, timely access to an environment is important while enthusiasm is high. You should be targeting giving teams access to an environment they can use within 30 minutes of the initial request: vend a resource group to the application team with all the access they need to immediately start deploying their solution designs (more on this later). The resource group naming and tagging, the subscription sharing model and the level of access can all be determined based on the environment requested.
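As an illustration of that vending step, here is a minimal Python sketch that derives a resource group name and tags from an environment request. The naming convention and tag keys are my own assumptions for the example, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentRequest:
    team: str
    workload_type: str   # e.g. "Sandbox" or "Production"
    networking: str      # e.g. "Connected" or "Non-connected"

def resource_group_name(req: EnvironmentRequest) -> str:
    # A common (hypothetical) convention: rg-<team>-<workload>
    return f"rg-{req.team.lower()}-{req.workload_type.lower()}"

def resource_group_tags(req: EnvironmentRequest) -> dict:
    # Tags drive cost tracking and policy assignment downstream.
    return {
        "workloadType": req.workload_type,
        "networking": req.networking,
        "owner": req.team,
    }

req = EnvironmentRequest(team="Fabrikam", workload_type="Sandbox", networking="Non-connected")
print(resource_group_name(req))  # rg-fabrikam-sandbox
```

A vending pipeline would feed these derived values into the deployment parameters so every vended group lands with consistent names and tags.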
Subscriptions in Azure can now support hundreds of developers; they have granular role-based controls and mature cost-tracking services, and you can track subscription limits and usage very effectively. We recently had 300 developers across 35 resource groups, deploying resources across the globe, all working happily in a single sandbox subscription.
During the vending process, it will be important to capture:
Workload type (e.g. Production vs Sandbox): this drives the policies that are applied to control what can be deployed and the levels of access, and keeps the separation between these environments. Separation of any Production and Sandbox environments should be at the subscription level.
Required networking (e.g. Connected or Non-connected): this determines whether private IP connectivity is required by the workload, or whether ingress/egress to the workload needs to be privately routed.
The first, simplest, most unconstrained environment you should offer is a Non-connected Sandbox. This gives application teams the most flexibility to experiment with multiple services, with full access to the environment in the portal so the team can rapidly get ideas to a POC stage; typically there are few or no restrictions on access or on the resources that can be provisioned. The most constrained and complex environment will be a Connected Production subscription, which will have policies to ensure production guardrails are followed, plus networking to allow private IP connectivity and ingress/egress routing controls (if needed).
The new Subscription Vending Bicep Verified Module is an excellent starting point for vending these environments, from the simplest to the most complex, with a module- and parameter-driven approach. You can collect the required information from the application team, then call the vending module directly from the az CLI to start with, or create a pipeline/action in your favourite DevOps tool, maybe triggering a GitHub workflow from an Issue template:
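As a sketch of what that automation could look like, the snippet below assembles the az CLI call from the captured answers. The management group id, template and parameter file names are hypothetical placeholders for your own vending module setup.

```python
import shlex

def vending_command(alias: str, workload_type: str, connected: bool) -> list[str]:
    # Hypothetical parameter file layout -- adapt to your vending module's schema.
    param_file = f"params/{'connected' if connected else 'nonconnected'}-{workload_type}.bicepparam"
    return [
        "az", "deployment", "mg", "create",       # subscription vending runs at management-group scope
        "--management-group-id", "contoso-landing-zones",
        "--location", "westeurope",
        "--template-file", "main.bicep",          # wraps the subscription vending verified module
        "--parameters", param_file,
        "--parameters", f"subscriptionAliasName={alias}",
    ]

cmd = vending_command("sandbox-team-a", "sandbox", connected=False)
print(shlex.join(cmd))
```

The same function can back a GitHub workflow: parse the Issue template fields, build the command, and run it with a federated workload identity.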
Hot take: I'd recommend Bicep over Terraform when automating environment provisioning or application deployment on Azure, even if you are multi-cloud. It's a simple, powerful, performant first-class experience, without the complexities of a state file: the state is whatever is deployed in Azure, and templates can be re-run with only the changes being deployed.
2. Environment Permissions
So you have vended an environment, and the app team tried to provision their first internally authenticated web app that calls a gpt-4o model using identity-based access, deployed using GitHub Actions… error, error, error, 4 tickets in 5 minutes. Now the team are googling for workarounds, not delivering their project, wasting valuable time and enthusiasm. What's the problem?
No permissions to create a Role Assignment on the webapp managed identity
OpenAI resource not registered in subscription
Cannot create an application registration in Entra ID
Require Admin consent for application permissions.
Cannot create federated access from GitHub to deploy to Azure
When building cloud-native apps, managed identity and role-based access are a crucial part of the application architecture, and 100% the best and most secure way of building cloud-native applications.
Platform teams must provide the appropriate level of access to the application team to allow these solution architectures. I've seen this single issue waste tens or hundreds of hours of skilled people's time.
Recommendation #1
When assigning roles to the application team, Contributor is not enough to create identity-based solution architectures! Consider granting the team Contributor and Role Based Access Control Administrator; the latter role can be scoped to the resource group, and can be further limited to only assign selected roles to selected principals.
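To illustrate, here is a hedged sketch that assembles an `az role assignment create` call scoped to a resource group, with an ABAC condition limiting which role-definition GUIDs may be delegated. The principal id and subscription id are placeholders, and the condition grammar should be checked against the Azure ABAC documentation before use.

```python
def scoped_rbac_admin_command(principal_id: str, subscription_id: str,
                              resource_group: str, allowed_role_ids: list[str]) -> list[str]:
    """Grant RBAC Administrator scoped to one resource group, limited by an
    ABAC condition to assigning only the listed role-definition GUIDs."""
    guid_set = "{" + ", ".join(allowed_role_ids) + "}"
    condition = (
        "(!(ActionMatches{'Microsoft.Authorization/roleAssignments/write'})) OR "
        "(@Request[Microsoft.Authorization/roleAssignments:RoleDefinitionId] "
        f"ForAnyOfAnyValues:GuidEquals {guid_set})"
    )
    return [
        "az", "role", "assignment", "create",
        "--assignee", principal_id,
        "--role", "Role Based Access Control Administrator",
        "--scope", f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}",
        "--condition", condition,
        "--condition-version", "2.0",
    ]

# Contributor's well-known role-definition GUID; principal and subscription ids are hypothetical.
cmd = scoped_rbac_admin_command("app-team-group-object-id",
                                "00000000-0000-0000-0000-000000000000",
                                "rg-team-a-sandbox",
                                ["b24988ac-6180-42a0-ab88-20f7382dd24c"])
print(" ".join(cmd[:4]))  # az role assignment create
```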
Recommendation #2
Ensure resource provider registrations have been done as part of the vending process, so they are not blocking the application teams from creating their resources.
Recommendation #3
Many applications will need users to authenticate, and the best way of doing that is with Entra ID. These apps need application registrations in Entra ID. If your organization blocks self-service creation of new application registrations, and/or has restrictive consent granting, ensure the team knows the process for requesting a new application registration. Also, unless you want a new ServiceNow ticket every time the app team wants to add a new callback URI, make them an owner of the app registration in the process.
Recommendation #4
Lastly, yes, ensure the use of identity and automation for deployments in Production environments, but don't take away portal access from your application teams! Grant your application team's corporate identities roles in the environment. Not granting this access, especially in the lower environments, will make things much harder for the teams.
3. A little less documentation & a little more sample repos
Our environment provisioning provides the application teams a blank slate at this stage; it doesn't make any assumptions about the application team's solution architecture. This allows the application team to select the optimal services for their use cases: that could be a microservices app, an integration workflow, or a simple static web app. Selecting the appropriate service for the use case makes the best use of the public cloud, optimizing your cloud costs while minimizing the operations required to support the application. Equally, it doesn't assume the structure or number of repos that the application team will use.
However, we should be providing the application teams more support than just a blank canvas, we should be looking to share successful architecture patterns, example applications that have already been approved for use within your organization.
Rather than documents, start to foster an inner-source repo of samples that can be simply provisioned into the vended environment, to show what a static web app, a simple microservices app, an event-driven process, or an integration workflow could look like. This can give new teams a starting point with built-in approved patterns to accelerate their journey to production. These examples, with good READMEs, can also show the teams how to structure their application repos with infrastructure-as-code and automated deployment workflows.
Look at the Azure Developer CLI templates as a good example of this. I'm not saying you should use this tool, but azd template list shows a list of sample application patterns with well-documented, structured repos. You can start to create a curated list of getting-started repos for each of the application solution categories in your organisation, even starting with some of these samples where relevant.
Infrastructure-as-code modules
Another thing to notice (and adopt): in these sample repos' /infra folders, the main Bicep file is just composing a number of modules. These modules represent the 'right' way of configuring each service for your organization, for example pre-configured with private endpoints, RBAC-based access and so on. You can build a repo of these modules, approved for use in your organization, to again accelerate your application teams. You can also get started by using Azure Verified Modules, or build your own Bicep module library, inner-sourcing and sharing it between the application teams.
Kubernetes namespace vending
4. Tooling / Local loop development
Application teams experiment in the portal, develop locally, and provision from their local machine; then they add the automation and the managed identity to perform automatic deployments via source control in the later environments.
Ensure the teams can install/configure VS code / VS code extensions / command line tools / docker locally, and they have connectivity from these tools to the public cloud APIs they need.
The @azure/identity libraries are now brilliant! For many dependencies, there is no longer any need to use API keys or credentials that have to be stored in key vaults and rotated periodically; just use your Entra ID corporate identity or a managed identity with RBAC. Using these identity libraries, if a developer wants to run their code locally and connect to a database or message service in Azure, the locally running app will operate with the developer's corporate identity (obtained through az login), and as long as the dev has the appropriate RBAC on the database, all good. If they deploy the app to an Azure PaaS service, without any code changes, the code will access the database using the service's managed identity. This makes apps secure and resilient, and they can be promoted up to production securely.
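To make that fallback behaviour concrete, here is a simplified pure-Python model of the credential chain. This is not the real @azure/identity implementation, just an illustration of the try-each-source-in-order pattern that lets the same code work locally and in Azure.

```python
class CredentialUnavailableError(Exception):
    pass

class ManagedIdentityCredential:
    """Stands in for the platform-assigned identity when running in Azure."""
    def __init__(self, available: bool):
        self.available = available
    def get_token(self) -> str:
        if not self.available:
            raise CredentialUnavailableError("no managed identity endpoint")
        return "token-for-managed-identity"

class AzureCliCredential:
    """Stands in for the developer's `az login` identity on a local machine."""
    def __init__(self, logged_in: bool):
        self.logged_in = logged_in
    def get_token(self) -> str:
        if not self.logged_in:
            raise CredentialUnavailableError("run `az login` first")
        return "token-for-developer-identity"

class ChainedCredential:
    # The real DefaultAzureCredential works similarly: try each source in
    # order and return the first token that succeeds.
    def __init__(self, *credentials):
        self.credentials = credentials
    def get_token(self) -> str:
        for cred in self.credentials:
            try:
                return cred.get_token()
            except CredentialUnavailableError:
                continue
        raise CredentialUnavailableError("no credential succeeded")

# Local dev box: no managed identity endpoint, but the developer ran `az login`.
local = ChainedCredential(ManagedIdentityCredential(False), AzureCliCredential(True))
print(local.get_token())  # token-for-developer-identity

# Deployed to an Azure PaaS service: the managed identity endpoint answers first.
deployed = ChainedCredential(ManagedIdentityCredential(True), AzureCliCredential(False))
print(deployed.get_token())  # token-for-managed-identity
```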
Without these tools and this access, the application teams will not be coding their app in the most secure way.
5. Track Metrics
Track anything that causes friction. Any time the application team is waiting on something (a case, access to a service, a bug being resolved), track it, dashboard it, and constantly prioritize securely removing that friction. Promote the creation of issues on the platform team's repo, keep a prioritized backlog, and hold monthly feedback sessions.
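A tracking sketch, with made-up categories and hours, showing how blocked-time data can be turned into a prioritized backlog:

```python
from collections import defaultdict

# (friction category, hours the application team spent blocked) -- hypothetical sample data.
friction_events = [
    ("role-assignment-permission", 6.0),
    ("app-registration-request", 12.0),
    ("role-assignment-permission", 4.5),
    ("tooling-install-approval", 2.0),
]

def prioritized_backlog(events):
    totals = defaultdict(float)
    for category, hours in events:
        totals[category] += hours
    # Highest total blocked-hours first: these are the friction points to remove next.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for category, hours in prioritized_backlog(friction_events):
    print(f"{category}: {hours:g}h blocked")
```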
If application teams are held up, they will try to work around issues to ship their product; this can mean using the wrong environment, or a less-than-ideal service or configuration. Removing friction therefore results in better, more secure use of the public cloud.
Wrapup
Let me know what you think of these recommendations. If you are in a platform team supporting Azure, I'd love to hear your experiences. If you are in an application team deploying to Azure, have a chat with the team providing your environment, show them this blog, and set up a regular call; it's important that the teams collaborate to get your company's products out the door securely, reliably and on time.
Microsoft Tech Community – Latest Blogs –Read More
Create a hyperlink to a website using a specific cell in a spreadsheet as what to look for at the website
I want to create a link to a website that will search within the site using a specific cell in a workbook as the basis for the search. For example, I want a stock symbol inserted into a cell on a page in a workbook to be linked to a site like https://finance.yahoo.com, so it shows the specific page on the Yahoo website related to that stock symbol. I think the hyperlink looks something like https://finance.yahoo.com/quote//analysis, where the cell with the inserted stock symbol is somehow inserted between the 2 forward slashes, i.e. //. Please help if you know how to create this link.
Windows 2K16 Standard recognized as Essential/SBS
Hello,
I would like some advice on an issue: we have 2 Windows Server machines:
SRV1, Windows Server 2016, as PDC
SRV2, Windows Server 2016, as data server.
For a few weeks now, SRV2 (not the PDC) has just gone to sleep every 7 days. When I analyzed the logs, I saw the problem:
But I'm in trouble; could someone explain this to me, please?
Why does Windows detect itself as an SBS/Essentials version? (this log is typical) Why does only SRV2 have this log/issue?
SCCM Bitlocker – will not start encryption
Good morning, all.
I've run through the following setup guides and both are giving the same results:
– https://msendpointmgr.com/2020/04/02/goodbye-mbam-bitlocker-management-in-configuration-manager-part-1/
– https://www.systemcenterdudes.com/sccm-mbam-integration/
We are on version 2403
I’m specifically getting the error
Unable to connect to the MBAM recovery and hardware service
Error Code -2147024809
Details : the parameter is incorrect
Looking at Microsoft's documentation here: https://learn.microsoft.com/en-us/mem/configmgr/protect/tech-ref/bitlocker/client-event-logs#18-coreservicedown
This error occurs if the website isn’t HTTPS, or the client doesn’t have a PKI cert.
We do not have a PKI infrastructure, MECM is in EHTTP mode, and the website is HTTPS-enabled, as I can reach the site on the computer that is throwing this error.
– I’ve verified the laptop is in an OU with absolutely no bitlocker policies enabled
– checked RSOP to verify there is nothing rogue
– opened the firewall completely up for this machine
– nothing glaring in either bitlocker logs under the CCM logs folder
Unsure where else to check; I've been googling for the last day and cannot find much about this specific error message when HTTPS is enabled.
LAB VM Hardening losing connectivity
Hi, I need some help here,
I am working on a project in an Azure Lab to automate the installation of a Privileged Access Management solution (CyberArk). The problem I am encountering is that the Vault VM (containing passwords) needs drastic hardening.
Everything works until I restart the VM from the Azure Lab after the hardening process (I am able to restart it from Windows without a problem). The start button never finishes, and after 10 minutes the VM is disconnected. However, during those 10 minutes I am still able to use it as if it worked completely fine.
My only clue: I suppose that the Azure Lab uses a specific utility behind the scenes to check whether the VM has actually started, which is blocked by my hardening?
Has anyone encountered a similar problem?
Any help would be appreciated, thanks.
New Project – Regex on Project Name – Limit Special Characters
Greetings,
I haven’t been able to find any reference to this anywhere online…
When creating a new Project in PoL/PWA, is there a way to apply RegEx to the ‘Name’ (Project Name) field?
Basically: I would like to limit it to Alpha-Numeric, Spaces and Dashes…and definitely prevent ‘&’ and brackets ‘()[]’. Either make it required or prevent ‘finish’ button (Create project) from functioning until it is valid.
What might be a strategy to go about this validation?
(Note: I am somewhat surprised this has never been discussed before…especially considering that PWA creates a sub-site & use the ‘Name’ field as the URL. Frankly, it is surprising that PWA even allows ‘&’.)
Much appreciated,
-TR
Advanced hunting does not return network protection logs
Hello,
I am able to find network protection logs in event viewer:
However, I can’t retrieve network protection logs using advanced hunting and KQL query:
https://help.redcanary.com/hc/en-us/articles/8265764276375-Turn-on-Microsoft-Network-Protection
DeviceNetworkEvents
| where ActionType in ('ExploitGuardNetworkProtectionAudited', 'ExploitGuardNetworkProtectionBlocked')
Am I missing something?
Thank you
Matching the row numbers between two tables in two different Fabric Lakehouse
Hi,
I've implemented two lakehouses in MS Fabric, the first at the Bronze layer and the second at the Silver layer.
In the two layers I've created the same table, e.g. Customers, doing some controls at the Silver layer.
I need to check that the number of Customers rows at the Bronze layer is equal to the number of Customers rows at the Silver layer.
Is it possible to implement such a control in Microsoft Purview for Fabric? Thanks
Persistent volume with Azure Files on AKS
Based on the documentation (https://learn.microsoft.com/en-us/azure/aks/azure-csi-files-storage-provision#create-a-kubernetes-secret), when we statically create a persistent volume with Azure Files integration, we need to create a Kubernetes Secret to store the storage account name and access key, and reference that secret in the PV YAML file.
However, this mechanism allows anyone with read permission on the AKS cluster to easily read the Kubernetes Secret and obtain the storage account key.
Our customer is concerned about this and wants to know if there is another mechanism that can prevent this risk (for example, fetching the account key from Key Vault first, rather than putting the storage account key directly into a Kubernetes Secret).
Azure Functions triggers and bindings for building intelligent apps with Azure OpenAI
Over the last year, we have seen a large interest from customers to bring intelligence to their existing and new applications based on the innovation that has recently occurred in artificial intelligence and particularly Azure OpenAI.
After working with customers, a few common challenges have shown up when building these intelligent applications.
Developers often feel that they need to become AI engineers to integrate the capabilities into their application.
Most of the samples are in languages like Python, which is often not the expertise of the existing developers or what the organization's applications are built with.
Developers also find it difficult to predict how successful their OpenAI integration will be, so right-sizing their application in this new space can be very challenging.
Lastly, although Microsoft provides multiple SDKs for working with OpenAI, sophisticated applications still seem to require quite a lot of external tools and SDKs for which clear support is not present.
To help with these challenges, Azure Functions now supports a new extension for OpenAI that contains a set of triggers and bindings that make it easier to build applications requiring the following capabilities:
Retrieval Augmented Generation (Bring your own data for semantic search)
Data ingestion with Functions bindings.
Automatic chunking and embeddings creation.
Store embeddings in vector database including AI Search, Cosmos DB for MongoDB, and Azure Data Explorer.
Binding that takes prompts, retrieves documents, sends to OpenAI LLM, and returns to user.
Text completion for content summarization and creation
Input binding that takes prompt or content and returns response from LLM.
Chat assistants
Input binding to chat with LLMs.
Output binding to retrieve chat history from persisted storage.
Skills trigger to extend capabilities of OpenAI LLM through natural language.
These bindings are available in preview for C#, Java, Python, Node, and PowerShell. You can see the documentation for more information.
Use your own data with Azure OpenAI
A typical architecture when bringing your own data for semantic search using Retrieval Augmented Generation is shown below.
In this example, customers' documents are uploaded by a client and stored for later retrieval. These documents are broken into chunks and sent to Azure OpenAI to create embeddings of the content. The embeddings are then stored in a vector database such as AI Search, Cosmos DB for MongoDB or Azure Data Explorer. These are the vector stores currently supported by the Azure Functions OpenAI extension, but more will be added in the future.
Once the documents are successfully stored in a vector store, the client can now ask questions of this content to enhance the application with organization data driven by natural language with Azure OpenAI.
This is performed by sending the prompt to the Azure Function binding, where it will be sent to Azure OpenAI to create embeddings, and then these embeddings will be used to do semantic search against the vector store to retrieve relevant content. This content is then sent to Azure OpenAI along with the initial prompt so that an answer can be sent back to the client to enhance the application.
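To make the mechanics concrete, here is a toy end-to-end sketch of what the extension automates: chunking, embedding, and similarity search. The bag-of-characters "embedding" is a deliberate stand-in for a real Azure OpenAI embedding call, and the documents are invented sample data.

```python
import math

def chunk(text: str, size: int = 40) -> list[str]:
    # The extension chunks documents automatically; fixed-size character
    # chunks are the simplest possible stand-in.
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text: str) -> list[float]:
    # Stand-in for an embedding call: a 26-dimensional letter-count vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - 97] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# "Vector store": embeddings are computed once, at ingestion time.
docs = ["expense reports are filed monthly", "the cafeteria opens at eight"]
store = [(d, embed(d)) for d in docs]

# Query time: embed the prompt, retrieve the closest content, send it to the LLM.
prompt = "when do I file expense reports?"
best = max(store, key=lambda item: cosine(embed(prompt), item[1]))
print(best[0])  # expense reports are filed monthly
```

In the real flow, the retrieved content plus the original prompt are forwarded to Azure OpenAI so the model can answer with the organization's own data.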
Extending Azure OpenAI chat experience with function calling
A second common scenario we see customers tackling is extending the Azure OpenAI chat experience to perform additional actions that the LLM cannot perform itself. These can be things like sending an email on the findings, looking up support ticket information, performing timecard reporting, or any other action that makes sense in a chat experience tailored for the user.
This capability in Azure OpenAI is made available through function calling (https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/function-calling) using the Assistants API.
A typical flow for this scenario is described below:
In this example, the client starts a chat session with Azure OpenAI, and interactions are automatically saved to a table storage account for history, for auditing as required, and so chats can be resumed later. The Azure Functions assistant trigger's capabilities are automatically sent with each interaction, so when a question arrives that OpenAI cannot answer by itself, OpenAI can direct the Azure Functions bindings to call the trigger and perform those custom actions for the user.
This capability is automatically delivered by the Functions triggers and bindings to enable faster development and to give the developer more time to focus on the business integration of the application.
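As an illustration of the function-calling handshake, the sketch below shows a tool definition in the OpenAI function-calling JSON shape and a host-side dispatcher. The `send_email` tool and the dispatcher are hypothetical examples of what a skills trigger would surface; they are not the extension's actual API.

```python
import json

# Illustrative tool definition in the OpenAI function-calling schema; the
# extension generates something equivalent from your function's metadata.
send_email_tool = {
    "type": "function",
    "function": {
        "name": "send_email",
        "description": "Send a summary email to a recipient.",
        "parameters": {
            "type": "object",
            "properties": {
                "to": {"type": "string", "description": "Recipient address"},
                "subject": {"type": "string"},
                "body": {"type": "string"},
            },
            "required": ["to", "subject", "body"],
        },
    },
}

def dispatch(tool_call: dict) -> str:
    # When the model decides it cannot answer directly, it returns a tool
    # call; the host (here, a Functions trigger) executes it.
    if tool_call["name"] == "send_email":
        args = json.loads(tool_call["arguments"])
        return f"email sent to {args['to']}"
    raise ValueError(f"unknown tool {tool_call['name']}")

result = dispatch({"name": "send_email",
                   "arguments": json.dumps({"to": "ops@contoso.com",
                                            "subject": "Findings",
                                            "body": "..."})})
print(result)  # email sent to ops@contoso.com
```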
Content summarization and generation
One of the most common use cases for customers using Azure OpenAI is to integrate the ability to create content, improve existing content, or convert content to another form.
In this scenario, the client makes a prompt request with the content and instructions, and the Azure Functions text completion binding takes this prompt, sends it to OpenAI, and returns the response to the client.
Different parameters are supported on the binding to help enhance the response.
As you can see from these three scenarios, the Azure Functions bindings and triggers for Azure OpenAI deliver built-in capabilities for developers to integrate intelligence into their new and existing applications. The bindings are available in all supported languages in Azure Functions, including .NET, Java, Python, Node, and PowerShell.
Given the serverless nature of Azure Functions, the OpenAI integration will automatically scale based on customer demand so organizations can experiment and know that the underlying platform can meet any usage scenarios. For applications that might need orchestration and workflow capabilities in these intelligent function applications, the native support for durable functions enables developers to take full advantage of the service to deliver the right solution for their users.
Lastly, for customers who need real-time RAG (retrieval augmented generation) over their organizational data, existing triggers and bindings in the Azure Functions ecosystem enable automatic processing of data in files, databases, messaging services, or any of the systems natively supported for integration: https://learn.microsoft.com/en-us/azure/azure-functions/functions-triggers-bindings?#supported-bindings
If you would like to get started building intelligence into your applications with Azure Functions and Azure OpenAI, please visit the documentation or you can see an end-to-end demo solution that you can deploy to see all of the capabilities.
The Azure Functions OpenAI triggers and bindings are currently in public preview, and we would love to hear feedback from you on improvements or issues that you experience. You can file them in the Issues section of the Azure/azure-functions-openai-extension repository on GitHub.
Microsoft Tech Community – Latest Blogs –Read More
Enhance productivity with Microsoft Teams certified devices
Microsoft Teams is the hub for teamwork, enabling effortless communication and collaboration. With Microsoft Teams certified devices, you can elevate your meeting and calling experience. These devices are carefully tested and certified to ensure they complement the Teams environment and make every interaction more engaging and productive.
Why use Teams certified devices?
Teams certified devices are specifically designed to enhance your Teams experience. Let’s explore some of the benefits below:
Quality and Compatibility: These devices undergo thorough testing and certification to ensure they meet the highest standards of quality and reliability, delivering high-fidelity audio and HD video to ensure clear and effective communication. You can easily get started without any configuration required for these devices to work with Teams.
Firmware Updates: All devices support firmware updates to ensure you have access to the latest features and performance improvements.
Easily access Teams features: Personal peripheral devices are equipped with the Microsoft Teams button, which is designed to streamline your workflow by providing quick access to essential Teams functions. Let’s explore the functionality below:
Bring up the Teams App.
Join a Meeting.
Raise Your Hand within a meeting.
Optimized performance and reliable calling with Teams certified phone devices
Teams certified phone devices deliver reliable and high-quality calling experiences with Teams, making it easy to make and receive calls. We’re committed to supporting reliable experiences on Teams phone devices and have made the following improvements to support uninterrupted experiences for our users. See the full list of updates here.
Simplified user experience
We continue to invest in new capabilities that create easy to use and consistent experiences for Teams phone devices users. The features below are only a few of the investments we’ve made to help users enjoy a unified experience that makes communication and collaboration easier.
Enhanced user experience: We have made updates to the user interface on the Calls app and the Dialpad to make it easier and faster for you to navigate and access the features you need. You can now switch between the Calls app and the home screen with ease and enjoy a Dialpad-only view in both portrait and landscape modes, to avoid typing errors.
New call handling capabilities: We’ve introduced several new capabilities and improvements to help you manage your calls with fewer clicks. You can now set up call forwarding from the phone home screen, send incoming calls to voicemail, and update your caller ID to make a call on behalf of a call queue phone number.
Performance, reliability, and stability enhancements
We recognize the critical importance of device performance and reliability for our customers using Teams certified phone devices. We are dedicated to delivering calling and meeting experiences that work when you need them and have made several investments to ensure reliable and consistent communications for our customers.
Improved performance and reliability: We’re continuously monitoring reliability incidents and have addressed the top issues based on customer feedback. We have made improvements to the Teams app by organizing and updating its building blocks and resources. These updates have noticeably improved app performance, making the app faster to use and load.
OS upgrade: In collaboration with our OEM partners, we are advancing support for Android OS 12 on phone devices, to ensure users have the latest security updates available.
While Microsoft Teams phone devices offer the most immersive Teams experience, we understand that numerous customers have prior investments in SIP devices. SIP Gateway allows these customers to utilize their existing telephony equipment as they transition to Teams Phone, ensuring that the fundamental calling features of Teams are accessible. Learn more about SIP Gateway and see the full list of supported SIP devices here.
Learn more
Explore the comprehensive portfolio of Teams certified devices here. Easily find and buy certified Teams devices through the Teams admin center or within the new device store in the Teams app.
Stay up to date on the latest feature announcements for Teams certified peripherals and phone devices.
Announcing GA of Advance Notifications for Azure SQL Managed Instance
What are advance notifications?
Why should I configure advance notifications?
Affected region
Affected service
List of impacted resources
Status of maintenance event
How do advance notifications work?
How do you set up advance notifications?
Conclusion
Is there some way to resolve a DLP false positive on a Sharepoint file?
We received an email from SharePoint indicating a file that’s shared outside our organization has a credit card number on it. The CAD drawing has been fully inspected and confirmed to not have a credit card number in it.
Our DLP policy is otherwise sound. Is there some way to just update this one file to confirm it’s ok?
Selecting REPORT AN ISSUE doesn’t route the report to anyone in our organization.
Clicking on Manage on a group Member’s Properties page sends me to a completely different device
Recently I sent a wipe to a computer (Device A): Groups – Group A – Members – Device A – Properties – Manage – Wipe. But when I navigated to the Manage tab, it sent me to a completely different device (Device X). Device X is not in Group A, or any other group. When I sent the wipe, it went to Device X, not Device A, and it is currently still pending.
We found out afterwards that this glitch started happening with every member of every group in our system. Clicking Manage on any device in any group sent us to the Manage page of Device X. If we search for devices under Devices instead of Groups, the glitch does not happen.
What is going on? Is there a possibility that all of our devices could be wiped because of this glitch?
Parameterized function in cross workspace queries
Hi,
I’m looking to get some input on a query I’m working on.
The thought is to create a query for each customer in our Lighthouse tenant, then be able to query a function named for the customer, so for example,
CustomerA(“SigninLogs”)
| where Identity contains “someperson”
However, when calling the function above, I’m getting the following error.
Is there some limitation with the workspace() function, or what am I doing wrong?
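For context, one common limitation in this area is that table references inside workspace() generally must be constant: you cannot pass the table name in as a function parameter and resolve it dynamically. A sketch of a per-customer function that hard-codes both workspace and table instead (the function and workspace names below are hypothetical):

```
// Hypothetical saved function "CustomerA_SigninLogs":
// hard-codes the workspace and the table, since a table name
// usually cannot be supplied as a dynamic parameter here.
let CustomerA_SigninLogs = () {
    workspace("customer-a-workspace").SigninLogs
};
CustomerA_SigninLogs()
| where Identity contains "someperson"
```

Under this assumption, one function per customer per table is needed rather than a single function taking the table name as a string.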
Attendance tab not showing for co-organisers for Webinars
When setting up some recent webinars that are open to the public, we have added a number of co-organisers across our organisation to allow them to check who’s registered and keep tabs on registration numbers for each webinar.
This used to be visible in Teams on the webinar menu bar for easy access to everyone. But despite checking the settings in meeting options and ensuring that ‘Allow attendance report’ is set to ‘Yes,’ only the main webinar organiser can see the attendance report tab.
Any suggestions?
Problems with Group Header Formatting in Sharepoint List
I’m having a couple of issues with formatting the grouped headers of a SharePoint list, where the JSON formatting seems to be having an odd effect. First, when I apply any JSON formatting at all, I lose the ability to click on the headers to filter by that group, despite having the defaultClick custom row action included. Second, when I change the “hideSelection” field to true, the group heading jumps from its normal position to halfway across the width of the list. I’ve tried setting different padding and position fields, but it always applies from this new position in the middle of the screen. Any advice would be appreciated!
{
  "$schema": "https://developer.microsoft.com/json-schemas/sp/v2/row-formatting.schema.json",
  "hideSelection": true,
  "groupProps": {
    "headerFormatter": {
      "elmType": "div",
      "style": {
        "padding-left": "12px",
        "font-size": "16px",
        "font-weight": "400",
        "cursor": "pointer",
        "outline": "0px",
        "white-space": "nowrap",
        "text-overflow": "ellipsis"
      },
      "customRowAction": {
        "action": "defaultClick"
      },
      "children": [
        {
          "elmType": "div",
          "children": [
            {
              "elmType": "span",
              "style": {
                "padding": "5px 5px 5px 5px"
              },
              "txtContent": "@group.fieldData.displayValue"
            }
          ]
        },
        {
          "elmType": "div",
          "children": [
            {
              "elmType": "div",
              "style": {
                "display": "flex",
                "flex-direction": "row",
                "justify-content": "center"
              },
              "children": [
                {
                  "elmType": "div",
                  "txtContent": "=' (' + @group.count + ')'"
                }
              ]
            }
          ]
        }
      ]
    }
  }
}