Month: October 2024
Microsoft names Jay Parikh as a member of the senior leadership team
Satya Nadella, Chairman and CEO, shared the below communication with Microsoft employees this morning.
When I look to the next phase of Microsoft, both in terms of our scale and our massive opportunity ahead, it’s clear that we need to continue adding exceptional talent at every level of the organization to increase our depth and capability across our business priorities – spanning security, quality, and AI innovation.
With that context, I’m excited to share that Jay Parikh is joining Microsoft as a member of the senior leadership team (SLT), reporting to me. Jay was the global head of engineering at Facebook (now Meta) and most recently was CEO of Lacework. He has an impressive track record, with a unique combination of experiences building and scaling technical teams that serve both commercial customers and consumers. His deep connections across the start-up and VC ecosystems, coupled with his leadership roles at Akamai and Ning, will bring valuable perspective to Microsoft.
Over the years I’ve known Jay, I’ve admired him as a technology leader and respected engineer with a deep commitment to driving innovation and striving for operational excellence. His focus extends beyond technology, with his passion for and dedication to developing people, fostering a strong culture, and building world-class talent, all in service of delivering faster value to customers and driving business growth. In fact, there are very few leaders in our industry with Jay’s experience in leading teams through the rapid growth and scale required to support today’s largest internet businesses.
As he onboards, Jay will immerse himself in learning about our company priorities and our culture and will spend time connecting with our senior leaders and meeting with customers, partners, and employees around the world. We will share more on his role and focus in the next few months.
Please join me in welcoming Jay to Microsoft.
Satya
Programmatically Access a Quarantined File
Hello,
We would like to run additional analysis on quarantined files as part of a custom workflow. Is there a way to programmatically access quarantined files without restoring them from quarantine? We'd like to leave the files in quarantine, but we want to copy them to another location within our organization, outside of Defender, for deeper malware analysis. Ideally, we'd like to use an MS Defender API for this post-quarantine action rather than the MpCmdRun.exe utility, as we don't want to restore the file.
Thanks!
Laurel
Recovering Quarantined File without Restoring
Hello Microsoft Community,
I have been exploring the Defender for Endpoint API and noticed that it mentions the ability to fetch copies of files associated with alerts using a Live Response request (GetFile). However, I've observed that for some alerts, Microsoft Defender quarantines the associated files. Is there a way to obtain a copy of a quarantined file, or the file itself, without restoring it? Additionally, is there a way to determine through the API whether a file associated with an alert has been quarantined, rather than manually logging into the Microsoft Defender for Endpoint portal?
I understand there are two common methods for restoring a file from quarantine: through the Microsoft Defender for Endpoint portal or via the command line. Both methods are detailed here: https://learn.microsoft.com/en-us/defender-endpoint/respond-file-alerts#restore-file-from-quarantine. My concern is that restoring the file will cause Defender to quarantine it again, resulting in a new alert for the same file.
In summary, is there a way to retrieve a copy of a quarantined file, or the file itself, without restoring it? And how can I determine whether a file has been quarantined, using the Microsoft Defender for Endpoint API or another Microsoft API?
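For reference, below is a rough, illustrative sketch of the Live Response GetFile flow mentioned above, using the public Defender for Endpoint API (runliveresponse and GetLiveResponseResultDownloadLink). The machine ID, token acquisition, and file path are placeholders, and the exact payload and polling behavior should be verified against the current API documentation.

# Illustrative only: collect a file from a device via a Live Response GetFile command.
# machine_id, the access token, and the file path are placeholders.
import time
import requests

API = "https://api.securitycenter.microsoft.com/api"
token = "<access token for the Defender for Endpoint API>"  # placeholder
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

machine_id = "<device id from the alert>"  # placeholder
body = {
    "Commands": [
        {"type": "GetFile", "params": [{"key": "Path", "value": r"C:\path\to\suspect-file.bin"}]}
    ],
    "Comment": "Collect file for offline analysis",
}

# Start the Live Response machine action.
action = requests.post(f"{API}/machines/{machine_id}/runliveresponse", headers=headers, json=body).json()
action_id = action["id"]

# Poll the machine action until it finishes.
while True:
    status = requests.get(f"{API}/machineactions/{action_id}", headers=headers).json()
    if status["status"] in ("Succeeded", "Failed", "Cancelled"):
        break
    time.sleep(15)

# Fetch a download link for the collected file (command index 0).
link = requests.get(
    f"{API}/machineactions/{action_id}/GetLiveResponseResultDownloadLink(index=0)",
    headers=headers,
).json()
print(status["status"], link.get("value"))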
Thank you!
Folders Delayed on Mac
When using Outlook on Mac in either Chrome or Safari, my folders are delayed in showing up when I click on "Move To". After several seconds or clicks the folders finally show up. Anyone else experiencing this?
Why are SharePoint list items duplicating automatically?
My list acts as a task list and is populated through an MS Form and then Power Automate. This process works as it should and allows users to add up to 20 different tasks (items) within the list at once. However, I have set up a custom form within the list that has been edited through Power Apps and allows users to add a single task if they want. Tasks are grouped by area. If I add a single task to an area through the Power Apps form, this task doesn't appear; instead it just duplicates all the tasks that were already under that area and creates a new group for that area. So I then delete the duplicated items, and the original ones then automatically get removed as well, leaving me with the individual task added… Any idea what's causing this? I have an identical setup for a separate part of the factory and do not have this problem (that one was set up first). However, I checked both against each other in terms of the Power Apps setup and cannot find the problem. Any suggestions?
Defender Vulnerability Management Baseline Assessment
Looking for some assistance: we have onboarded 2012 through 2022 servers via MDE into Defender for Servers, and the devices are all visible within the Device list in the security portal. The issue I'm having is that if we try to create a baseline assessment policy against any of the server groups, even with no filtering in place, it only picks up a small handful of the devices. I've extracted all the devices for all the OS versions and run various pivots to try to get a match on numbers, to see if there was any common ground as to what was being detected.
Any suggestions? I don't want to just enable it without knowing which servers are being looked at.
How to get money back after a wrong transaction on PhonePe…..
If a wrong transaction happens on PhonePe, how do you get the money back from PhonePe… – In the app, go to the "Transactions" ((,+91-629O-348-172) or "History" section. Select the failed transaction. Choose the "Withdraw" or "Request a refund" option. If you do not receive a refund, contact your bank.
Announcing the General Availability of the Microsoft 365 Document Collaboration Partner Program
Empowering Seamless Collaboration for Modern Workforces
We are thrilled to announce the general availability of the Microsoft 365 Document Collaboration Partner Program (MDCPP). This new program allows eligible independent software vendors (ISVs) who provide cloud communication and collaboration platforms the opportunity to offer customers a collaboration experience inside and outside meetings. ISVs who join the program can enable customers on their platform to view, collaborate, and coauthor documents within Microsoft apps. We’re excited to expand this experience available in Microsoft Teams to new platforms.
Enable Live Microsoft 365 Document Collaboration
The Microsoft 365 Document Collaboration Partner Program allows eligible ISVs to include collaborative Microsoft 365 app experiences in their preferred communication and collaboration platform. And we're bringing the same quality customers expect from Microsoft for accessibility, security, and privacy. The Microsoft 365 Document Collaboration Partner Program includes:
Web app option for real-time co-authoring
Now eligible ISVs can provide simultaneous collaboration and editing of documents integrated into their platform. This includes the ability to integrate Word, Excel, and PowerPoint documents from SharePoint and OneDrive for Business. Available today.
Live app option in meetings
Soon, customers will be able to enjoy interactive and engaging experiences with PowerPoint Live and Excel Live in their preferred meeting solution. With PowerPoint Live and Excel Live, meeting participants can interact and collaborate with documents in real time, including editing and exploring documents directly in the meeting window. Presenters can also see rich presenter view, speaker notes, and other available presentation tools. Coming soon.
Partnering for Success with Zoom
We have collaborated with Zoom to leverage their expertise in communication and collaboration in the development of MDCPP.
“We continue the work with Microsoft as announced earlier this year to bring this to market for Zoom Workplace customers,” said Brendan Ittelson, chief ecosystem officer, Zoom.
Getting Started
We invite all eligible ISVs to take advantage of the Microsoft 365 Document Collaboration Partner Program and transform their document collaboration process. To get started, visit our website for more information and resources.
Join the MDCPP Today
The Microsoft 365 Document Collaboration Partner Program is available to all eligible Microsoft partners and customers. Join us in this exciting journey and experience the future of document collaboration.
We are excited for an easier, more collaborative future for our customers.
Stay ahead of cyber threats with security skill building
As organizations around the world contend with mounting cyber threats, one thing is clear: building security skills is everyone’s responsibility. And cybersecurity isn’t just for October—it’s a year-round concern. Sophisticated attacks are increasingly common and costly; combatting them requires a comprehensive strategy that invests not only in cutting-edge technology, but in the knowledge and abilities of your team.
Your team’s security skill level can be the difference between significant financial and reputational losses and successfully defending against malicious actors. Unfortunately, there is a significant and growing security skills gap that can limit an organization’s ability to respond.
That’s why we published the paper Stay ahead of cyberthreats with security skill building from Microsoft Learn, to share how prioritizing security skill building in your organization can help you strengthen your defenses against cyber risks and remain resilient when threats arise. Resiliency requires leaders to address a critical reality—that security isn’t just a technology issue, it’s a human issue.
At Microsoft, we’ve weathered prominent security threats, evolved with the changing security landscape, and confronted the challenges that come with maintaining security skills in a world where technology never stops changing. This lived experience has prompted our own security transformation and a renewed commitment to corporate accountability. And we’ve learned it’s critical to achieve the right balance of investments in security across people and technology.
So how do you ensure your team has sufficient levels of competency, especially when 76% of organizations believe that "security skills are the most difficult abilities to recruit for and retain" and 78% say they "lack the in-house skills needed to fully achieve their cybersecurity objectives"?
That’s not an easy question, but we found there are two important actions that stand out as critical to your security skilling plans:
Build a learning-first culture—leaders must position skills development as a condition of collective and individual success. At Microsoft, for example, we encourage leaders to create time and space for their teams to learn, develop role-based skilling plans, and emphasize the value of learning from each other.
Place security skill-building initiatives at the center of your cybersecurity strategy—security skill-building for both technical and non-technical employees should be championed from the top of the organization and include a clearly defined skilling path for designated teams.
Your skilling efforts will likely need to focus on building baseline skillsets as well as deeper bodies of knowledge across teams. Start by examining the structure of your organization to determine the skills each team needs to learn or expand. It’s also a good idea to keep in mind that teams need to work together to strengthen security measures across your organization.
Because security skills are not one size fits all, it’s helpful to designate accountabilities and requisite skills by team—including business groups, IT, and data specialists. To succeed, everyone in the organization must cultivate a security mindset.
By setting the expectation that everyone in your organization is responsible for digital security, you will be well-positioned to adopt and implement security skill-building plans as a central component of your cybersecurity strategy.
Wondering how Microsoft Learn can help accelerate your security skill-building journey? Check out our Security hub—learn.microsoft.com/security
Microsoft now a Leader in three major analyst reports for SIEM
We’re excited and honored to be positioned in the Leaders Category in the IDC MarketScape: Worldwide SIEM (security information and event management) for Enterprise 2024 Vendor Assessment (doc #US51541324, September 2024)—our third major analyst report in SIEM to name Microsoft as a Leader. We were recognized in the most recent reports as a Leader in the 2024 Gartner® Magic Quadrant™ for Security Information and Event Management and as a Leader in The Forrester Wave™: Security Analytics Platforms, Q4 2022. We believe this position validates our vision and continued investments in Microsoft Sentinel, making it a best-in-class, cloud-native SIEM solution. It’s always a rewarding experience when trusted analysts recognize the continued work we’ve put into helping our customers modernize their operations, improve their security posture, and work more efficiently.
A Leader in the market with an innovative solution for the SOC
Microsoft Sentinel provides a unique experience for customers to help them act faster and stay safer while managing the scaling costs of security. Customers choose our SIEM in order to:
Protect everything with a comprehensive SIEM solution. Microsoft Sentinel is a cloud-native solution that supports detection, investigation, and response across multi-cloud and multi-platform data sources with 340+ out-of-the-box connectors. A strength of Microsoft's offering is its breadth, which includes user entity and behavior analytics (UEBA), threat intelligence, and security orchestration, automation, and response (SOAR) capabilities, along with native integrations into Microsoft Defender threat protection products.
Enhance security with a unified security operations platform. Customers get the best protection when pairing Microsoft Sentinel with Defender XDR in Microsoft’s unified security operations platform. The integration not only brings the two products together into one experience but combines functionalities across each to maximize efficiency and security. One example is the unified correlation engine which delivers 50% faster alerting between first- and third-party data, custom detections and threat intelligence.3 Customers can stay safer with a unified approach, with capabilities like automatic attack disruption—which contains attacks in progress, limiting their impact at machine speed.
Address any scenario. As the first cloud-native SIEM, Microsoft Sentinel helps customers observe threats across their digital estate with the flexibility required for today's challenges. Our content hub offerings include over 200 Microsoft-created solutions and over 280 community contributions. The ability to adapt to the unique use cases of an organization is something called out in both the Forrester and Gartner reports.
Scale your security coverage with cloud flexibility. Compared with legacy, on-premises SIEM solutions, Microsoft Sentinel customers see up to a 234% return on investment (ROI).1 This makes it an attractive option for customers looking for a scalable offering to meet the evolving needs of their business while managing the costs of data. We've recently launched a new, low-cost data tier called Auxiliary Logs to help customers increase the visibility of their digital environment, while keeping their budgets in check. In addition, Microsoft's SOC Optimizations feature, a first-of-its-kind offering, provides targeted recommendations to users on how to better leverage their security data to manage costs and maximize their protection, based on their specific environment and using frameworks like MITRE ATT&CK.
Respond quickly to emergent threats with AI. Security Copilot is a GenAI tool that can help analysts increase the speed of their response, uplevel their skills, and improve the quality of their work. 92% of analysts reported that using Copilot helped make them more productive, and 93% reported an improvement in the quality of their work.
What’s next in Microsoft Security
Microsoft is dedicated to continued leadership in security through ongoing investment to provide customers with the intelligence, automation, and scalability they need to protect their businesses and work efficiently. New and upcoming enhancements include more unified features across SIEM and XDR, exposure management and cloud security in the unified security operations platform, our SIEM migration tool (which now supports conversion of Splunk detections to Microsoft Sentinel analytics rules), and additional Copilot skills to help analysts do their jobs better.
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.
Dynamically fill Execute Flow – Flow ID from column value
I’m hoping to have a Microsoft List that has a list of Flows with a button to launch the flow. I have a column where I would enter the ID of the flow to then be able to input the value dynamically into the JSON on the button column.
Then I format the JSON on the button launch column to include the following:

"customRowAction": {
  "action": "executeFlow",
  "actionParams": "{\"id\": \"[$FlowID]\"}"
}

FlowID is the SharePoint name of the column storing the flow ID. When I click the button to launch, I get the following error: "The provided flow name '[$FlowID]' contains invalid characters." If I enter the ID value directly into the JSON, the flow triggers fine. Is this possible? What am I doing wrong?
Deploying an Azure Stream Analytics Job to an Azure IoT Edge Device
In real-world environments, IoT devices can generate large volumes of data. To reduce the amount of data uploaded, or to lower the latency of control decisions, it is sometimes necessary to analyze or process the data in real time on the device itself. The Azure Stream Analytics service is a good solution for this: you can create an Azure Stream Analytics job from the Azure Portal, configure it in Azure IoT Hub as an IoT Edge module, and deploy it to an Azure IoT Edge device. This article demonstrates how to create an Azure Stream Analytics job and deploy it to an IoT Edge device.
1. Create a storage account
First, in the Azure portal, go to "New", type "storage" in the search box, and select "Storage account – Blob, File, Table, Queue".
Then, in "Create storage account", enter a name for the storage account, choose the same region as your IoT Hub (East Asia in this example), and select "Create". Note the name down for later use.
Next, go to the storage account you just created and select "Blob Service". Create a new container for the Azure Stream Analytics module to store its data in, set the access level to "Container", and select "OK".
2. Create an Azure Stream Analytics job
First, in the Azure portal, go to "Create" > "Internet of Things" and select "Stream Analytics job".
Then, in "New Stream Analytics job", do the following: enter a name in the "Job name" box; under "Hosting environment", select "Edge"; use the default values for the remaining fields.
Then, under "Job topology" in the job you just created, select "Inputs" > "Add". In the "Input alias" box, enter temperature. In the "Source type" box, select "Data stream". Use the default values for the remaining fields.
Next, under "Job topology", select "Outputs" > "Add". In the "Output alias" box, type alert, and use the default values for the remaining fields. Then select "Create".
Finally, under "Job topology", select "Query" > "Add", add the following SQL statement, and save:
SELECT
    'reset' AS command
INTO
    alert
FROM
    temperature TIMESTAMP BY timeCreated
GROUP BY TumblingWindow(second, 30)
HAVING Avg(machine.temperature) > 70
3. Deploy the Stream Analytics job
First, on the IoT Hub page in the Azure Portal, go to "IoT Edge" and open the details page for your IoT Edge device.
Select "Set Modules" and make sure the tempSensor module has already been added as described in the earlier steps, because the Azure Stream Analytics module performs real-time analysis on the data produced by the tempSensor module.
On the "Add Modules" page, select "Import Azure Stream Analytics IoT Edge Module".
On the Edge Deployment page that follows, select the Stream Analytics Edge job created earlier. Note that you must select the storage account and container created in part 1, then click Save.
After that, copy the code below into the routes, replacing {moduleName} with the module name you copied:
{
  "routes": {
    "telemetryToCloud": "FROM /messages/modules/tempSensor/* INTO $upstream",
    "alertsToCloud": "FROM /messages/modules/{moduleName}/* INTO $upstream",
    "alertsToReset": "FROM /messages/modules/{moduleName}/* INTO BrokeredEndpoint(\"/modules/tempSensor/inputs/control\")",
    "telemetryToAsa": "FROM /messages/modules/tempSensor/* INTO BrokeredEndpoint(\"/modules/{moduleName}/inputs/temperature\")"
  }
}
Select Next and then Submit. You will be returned to the "Device details" page; select "Refresh". The new Stream Analytics module should now appear in the list, although its status will still show as pending deployment.
A little later, once the module has been deployed to the device, refresh the list and you should see that EdgeASA is in the running state.
Back in PuTTY, run "docker logs -f {moduleName}" (replacing {moduleName} with the name of the Stream Analytics module you just deployed) to view the Stream Analytics log output.
Help with LogAnalytics in Azure
I have a Log Analytics workspace and a workbook. I want to create custom tables. When I try to create a custom DCR-based table, on the 2nd screen it wants me to import a .json file. WTF? I have no .json file, I just want to create a few tables and then use PowerShell to write data to them.
Why is this asking for a .json file? How do I create this .json file? And what is the format of this .json file if I have 5 fields (or 5 tables I want to create): T1, T2, T3, T4, T5?
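To illustrate what the wizard is asking for: when you create a DCR-based custom table in the portal, the .json file it wants is a sample of the records you plan to send, which it uses to infer the table schema and build the transformation. A minimal sketch of producing such a sample file, assuming fields named T1 through T5 and the usual TimeGenerated timestamp column, might look like this (field names and values are illustrative only):

# Illustrative sketch: write a sample-data .json file for the DCR-based custom table wizard.
# The field names T1-T5 mirror the ones mentioned above; adjust to your real schema.
import json
from datetime import datetime, timezone

sample_records = [
    {
        "TimeGenerated": datetime.now(timezone.utc).isoformat(),
        "T1": "example",
        "T2": "example",
        "T3": "example",
        "T4": "example",
        "T5": "example",
    }
]

with open("sample.json", "w") as f:
    json.dump(sample_records, f, indent=2)

Once the table and its data collection rule exist, data can be sent to it through the Logs Ingestion API (for example from PowerShell, or from Python with the azure-monitor-ingestion package).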
Also, is there any really good YouTube vids on creating workbooks? What I really want to do is collect data from systems using PowerShell and then write that data to my workbook. Any detailed instruction on doing that would be nice.
Thanks.
Research Drop: How Work Arrangement Policies Shape Leaders’ Views of Their Organization
Four years post pandemic, organizations continue to refine their work arrangement policies—balancing employee feedback, industry trends, and leadership preferences across in-person, hybrid, and remote models. As they navigate this evolving landscape, one thing is clear: there’s no one-size-fits-all solution. To help our customers better navigate the world of hybrid work, Viva People Science dove into our data pool of insights from global leaders and people managers to better understand whether work arrangement policies impacted leaders’ perceptions of their organization. The Viva People Science team’s High Performing Organizations survey was used for this analysis, which studied 1,100 global leaders and people managers across 10+ industries.
For this month’s research drop, we aimed to gain a deeper understanding of how leaders viewed their ability to meet performance indicators, their evaluation of their organization’s effectiveness, and their perceived top barriers to high performance.
We grouped leader perceptions based on how they described their organization’s work arrangement policy. There were four categories:
Fully flexible – these organizations require no days onsite (21.5% of the sample)
Partially onsite – these organizations require 1-2 days onsite per week (31.3% of the sample)
Predominantly onsite – these organizations require 3-4 days onsite per week (40.7% of the sample)
Fully onsite – these organizations require 5 days onsite per week (6.9% of the sample)
It’s important to note that this categorization allows us to compare leader perceptions from organizations with various work arrangement policies. It doesn’t provide context on how employees at these organizations manage their time and location based on the policies or how co-located the teams are at these organizations.
Our data revealed an interesting trend! Leaders' perceptions of their organization didn't shift in a straightforward or linear way as the level of flexibility in work arrangement policies increased or decreased. In fact, we found that leaders from organizations with fully flexible or predominantly onsite policies reported higher organizational performance and effectiveness than leaders from organizations with partially onsite or fully onsite policies. This trend persisted throughout our analysis – suggesting that it's not a battle of "remote vs. in-person" but that there is a level of nuance to work arrangement policies.
Top performance ratings come from leaders within fully flexible and predominantly onsite organizations
When exploring leaders’ perceptions of their organization’s performance, leaders at fully flexible and predominantly onsite organizations reported achievement of performance indicators more consistently than leaders at partially onsite and fully onsite organizations.
This pattern of leader perceptions indicates that we can’t assume that flexibility and weekly in-person time are the only variables at play in the impact of work arrangement policy on overall organizational performance. For example, more in-person time may be thought to connect to higher levels of collaboration1. However, while we do see that leaders at predominantly onsite organizations rate their collaboration the highest, this isn’t the case for all work arrangement policies with an in-person element. When crafting and implementing a work arrangement policy, consider more than just the number of days onsite.
Finding the right balance of work routines may influence perceptions of work practice effectiveness
Taking a deeper look into fully flexible and predominantly onsite organizations, we saw that these organizations may offer a sense of stability and balance in work routines and environments, driving higher leader perceptions. When asked to evaluate their organization on a set of organizational effectiveness factors, the pattern of stronger feedback from leaders at fully flexible and predominantly onsite organizations remained consistent.
We’ve seen in previous research that being intentional with how and where you plan certain in-person work activities can facilitate effective work and productivity by increasing high-quality connections between employees2. But when we look at the above organizational effectiveness indicators, leaders from partially onsite and fully onsite organizations reported lower levels of vision, collaboration, innovation, and efficiency.
So, what is causing lower perceptions of these imperative work conditions? When considering how work arrangement policies may play a part, it may be that leaders who are onsite 1-2 days feel their work week is fragmented and leaders who are required to be onsite five days feel overwhelmed by in-person expectations. We’ve also seen in research that even as work arrangement policies evolve to include more in-person time, not all work routines are evolving accordingly. In fact, regardless of work arrangement policy, the volume of virtual meetings has not reduced3. This suggests that being in-person doesn’t reduce time spent collaborating in a virtual work environment, highlighting a need for effective collaboration habits regardless of where the work happens, such as effective meetings, intentional communications, and productive asynchronous work.
Unique strengths of fully flexible and predominantly onsite policies
Fully flexible and predominantly onsite policies both relate to higher leader perceptions and evaluations, but these policies each have certain areas where they stand out.
Perceived manager effectiveness and alignment are higher in fully flexible environments.
Leaders at fully flexible organizations rate their organizations higher on manager effectiveness and goal alignment, suggesting that those with fully flexible policies have created an environment that deepens the impact that leaders can have for their teams, such as providing great clarity and alignment in the flow of work. Research shows that virtual work can reduce the impact of traditional power dynamics, facilitating an environment where employees are more confident in bringing up sensitive topics and breaking down departmental silos4.
The perceived impact of “face time” is not consistent for in-person environments.
Between work arrangement policies with an in-person element, leaders at predominantly onsite organizations report the highest levels of confidence that their organization’s employees know what is expected of them and how to grow within the organization. It’s interesting that this benefit of being in-person doesn’t extend to partially onsite or fully onsite leaders’ perceptions, suggesting that there may be a bell curve of in-person impact.
Leader ratings suggest that being in-person only 1-2 days may not provide enough opportunities to network with cross-functional teams and identify performance goals for internal mobility, but that being fully onsite may introduce location-specific limitations on internal mobility, such as location restrictive job openings. Consider the moments that matter most to being in-person when crafting your work arrangement policy and the impact that you want those moments to have, such as generating team cohesion, supporting employees through onboarding, or facilitating effective project kick-offs2.
Overburdened managers and inefficient cultures stand in the way of high performance, regardless of work arrangement policy
Leaders also reported the barriers they feel slow their organization’s progress toward high performance. All four categories of work arrangement policies shared a common barrier in their top three: overburdened managers.
Leaders with different work arrangement policies reported the same barriers to high performance.
Inefficient work culture ranked as the top barrier for all flexible policies, suggesting that getting the processes and culture right is still a work in progress.
For predominantly onsite organizations, accumulation of tedious tasks and inefficient organizational practices ranked second. It may be that consistently being onsite comes with the greater expectation for leaders to perform routine and tedious tasks while they are in-office.
For fully onsite organizations, inefficient tools and resources ranked third. It may be that being fully onsite limits leaders’ ability to perform necessary focus work and leverage their time most effectively.
This research advances our understanding of how work arrangement policies shape leaders' views on organizational performance and effectiveness. A key takeaway is that these policies drive noticeable differences in some areas of work but have little effect in others. This suggests that regardless of work arrangement policy, leaders should focus on getting foundational priorities right: 1) support overburdened managers, 2) set up collaboration norms that are location-agnostic, and 3) invest in the moments that matter.
While these takeaways reflect leaders' perspectives, the employee viewpoint is also needed to shape effective work policies. Recent research shows that US employees expect their organizations to allow them to work from home around 2.3 days per week in the coming year, with employees on average wanting to be able to work up to 3 days per week from home5. Connecting leader and employee perceptions is essential to understanding the holistic experience of your organization and critical when strategizing work arrangement policies to facilitate high-quality employee experiences and expectations.
Stay tuned for our November Research Drop to keep up with what the Viva People Science team is learning!
1 – Gallup, How Important Is Time in the Office? March 2, 2023.
2 – Microsoft WorkLab, In the Changing Role of the Office, It’s All about Moments That Matter. 2024.
3 – Harvard Business Review, Hybrid Work Has Changed Meetings Forever. June 17, 2024.
4 – Deloitte, Inclusive or isolated? New DEI considerations when working from anywhere. May 25, 2023.
5 – Barrero, Jose Maria, Nicholas Bloom, & Steven J. Davis, 2021. “Why working from home will stick,” National Bureau of Economic Research Working Paper 28731.
New controls for model governance and secure access to on-premises or custom VNET resources
New enterprise security and governance features in Azure AI for October 2024
At Microsoft, we’re focused on helping customers build and use AI that is trustworthy, meaning AI that is secure, safe and private. This month, we’re pleased to highlight new security capabilities that support enterprise-readiness, so organizations can build and scale GenAI solutions with confidence:
Enhanced model governance: Control which GenAI models are available for deployment from the Azure AI model catalog with new built-in and custom policies
Secure access to hybrid resources: Securely access on-premises and custom VNET resources from your managed VNET with Application Gateway for your training, fine-tuning, and inferencing needs
Below, we share more information about these enterprise features and guidance to help you get started.
Control which GenAI models are available for deployment from the Azure AI model catalog with new built-in and custom policies (public preview)
The Azure AI model catalog offers over 1,700 models for developers to explore, evaluate, customize, and deploy. While this vast selection empowers innovation and flexibility, it can also present significant challenges for enterprises that want to ensure all deployed models align with their internal policies, security standards, and compliance requirements. Now, Azure AI administrators can use new Azure policies to restrict select models for deployment from the Azure AI model catalog, for greater control and compliance.
With this update, organizations can use pre-built policies for Model as a Service (MaaS) and Model as a Platform (MaaP) deployments or create custom policies for Azure OpenAI Service and other AI services using detailed guidance:
1) Apply a built-in policy for MaaS and MaaP
Admins can now leverage the “[Preview] Azure Machine Learning Deployments should only use approved Registry Models” built-in policy within Azure Portal. This policy enables admins to specify which MaaS and MaaP models are approved for deployment. When developers access the Azure AI model catalog from Azure AI Studio or Azure Machine Learning, they will only be able to deploy approved models. See the documentation here: Control AI model deployment with built-in policies – Azure AI Studio.
2) Build a custom policy for AI Services and Azure OpenAI Service
Admins can now create custom policies for Azure AI Services and models in Azure OpenAI Service using detailed guidance. With custom policies, admins can tailor which services and models are accessible to their development teams, helping to align deployments with their organization’s compliance requirements. See the documentation here: Control AI model deployment with custom policies – Azure AI Studio.
Together, these policies provide comprehensive coverage for creating an allowed model list and enforcing it across Azure Machine Learning and Azure AI Studio.
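To make the enforcement step concrete, here is a minimal, illustrative sketch of assigning a policy of this kind at resource-group scope through the ARM REST API from Python. The policy definition ID and the parameter name for the allowed-model list are placeholders; copy the real values from the built-in policy definition ("[Preview] Azure Machine Learning Deployments should only use approved Registry Models") before adapting anything like this.

# Illustrative sketch: assign an Azure Policy (e.g., an allowed-registry-models policy)
# at resource-group scope via the ARM REST API. Definition ID and parameter name are placeholders.
import requests
from azure.identity import DefaultAzureCredential

subscription = "<subscription-id>"
resource_group = "<resource-group>"
scope = f"/subscriptions/{subscription}/resourceGroups/{resource_group}"

# Placeholder: look up the real built-in policy definition ID in the Azure portal.
policy_definition_id = "/providers/Microsoft.Authorization/policyDefinitions/<built-in-policy-guid>"

assignment = {
    "properties": {
        "displayName": "Allow only approved registry models",
        "policyDefinitionId": policy_definition_id,
        # Placeholder parameter name and model ID; check the policy definition for the real schema.
        "parameters": {"allowedModels": {"value": ["azureml://registries/azureml/models/<model-name>"]}},
    }
}

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com{scope}/providers/Microsoft.Authorization/"
    "policyAssignments/allow-approved-models?api-version=2022-06-01"
)
resp = requests.put(url, json=assignment, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json()["id"])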
Securely access on-premises and custom VNET resources from your managed VNET with Application Gateway (public preview)
Virtual networks keep your network traffic securely isolated in your own tenant, even when other customers use the same physical servers. Previously, Azure AI customers could only access Azure resources from their managed virtual network (VNET) that were supported by private endpoints (see a list of supported private endpoints here). This meant hybrid cloud customers using a managed VNET could not access machine learning resources that were not within an Azure subscription, such as resources located on-premises, or resources located in their custom Azure VNET but not supported with a private endpoint.
Now, Azure Machine Learning and Azure AI Studio customers can securely access on-premises or custom VNET resources for their training, fine-tuning, and inferencing scenarios from their managed VNET using Application Gateway. Application Gateway is a load balancer that makes routing decisions based on the URL of an HTTPS request. Application Gateway will support a private connection from a managed VNET to any resources using an HTTP or HTTPs protocol. With this capability, customers can access the machine learning resources they need from outside their Azure subscription without compromising their security posture.
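As an illustration of the configuration involved, the sketch below adds a private endpoint outbound rule that points a workspace's managed VNET at an Application Gateway, using the azure-ai-ml SDK. It is a sketch only: the resource IDs are placeholders, the subresource target shown is an assumption, and the exact update semantics for outbound rules should be confirmed against the how-to documentation linked under "Get started with Application Gateway" below.

# Illustrative sketch (assumptions noted inline): add an outbound rule from an Azure ML
# managed VNET to an Application Gateway so the workspace can reach resources behind it.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import PrivateEndpointDestination
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-or-hub-name>",
)

rule = PrivateEndpointDestination(
    name="appgw-outbound",
    # Placeholder resource ID of the Application Gateway fronting the on-premises or custom VNET resource.
    service_resource_id="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/applicationGateways/<appgw-name>",
    subresource_target="appGwPrivateFrontendIpIPv4",  # assumption: verify the expected subresource name in the docs
    spark_enabled=False,
)

ws = ml_client.workspaces.get("<workspace-or-hub-name>")
ws.managed_network.outbound_rules = [rule]  # assumption: the update may merge or replace existing rules
ml_client.workspaces.begin_update(ws).result()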
Supported scenarios for Azure AI customers using hybrid cloud
Today, Application Gateway is verified to support a private connection to JFrog Artifactory, Snowflake Database, and Private APIs, supporting critical use cases for enterprise:
JFrog Artifactory is used to store custom Docker images for training and inferencing pipelines, store trained models ready to deploy, and for security and compliance of ML models and dependencies used in production. JFrog Artifactory may be in another Azure VNET, separate from the VNET used to access the ML workspace or AI Studio project. Thus, a private connection is necessary to secure the data transferred from a managed VNET to the JFrog Artifactory resource.
Snowflake is a cloud data platform where users may store their data for training and fine-tuning models on managed compute. To securely send and receive data, a connection to a Snowflake database should be entirely private and never exposed to the Internet.
Private APIs are used for managed online endpoints. Managed online endpoints are used to deploy machine learning models for real-time inferencing. Certain private APIs could be required to deploy managed online endpoints and must be secured through a private network.
Get started with Application Gateway
To get started with Application Gateway in Azure Machine Learning, see How to access on-premises resources – Azure Machine Learning | Microsoft Learn. To get started with Application Gateway in Azure AI Studio, see How to access on-premises resources – Azure AI Studio | Microsoft Learn.
How to use Microsoft Cost Management to analyze and optimize your Azure OpenAI Service costs
One more thing… As organizations increasingly rely on AI for core operations, it has become essential to closely track and manage AI spend. In this month’s blog, the Microsoft Cost Management team does a great job highlighting tools to help you analyze, monitor, and optimize your costs with Azure OpenAI Service. Read it here: Microsoft Cost Management updates.
Build secure, production-ready GenAI apps with Azure AI Studio
Ready to go deeper? Check out these top resources:
Azure security baseline for Azure AI Studio
5 Ways to Implement Enterprise Security with Azure AI Studio
Bicep template – Azure AI Studio basics
Whether you’re joining in person or online, we can’t wait to see you at Microsoft Ignite 2024! We’ll share the latest from Azure AI and go deeper into enterprise-grade security capabilities with these sessions:
Keynote: Microsoft Ignite Keynote
Breakout: Trustworthy AI: Future trends and best practices
Breakout: Secure and govern custom AI built on Azure AI and Copilot Studio
Breakout: Build secure GenAI apps with Azure AI Studio (in-person only)
Demo: Secure your GenAI project in 15 minutes with Azure AI Studio (in-person only)
Announcing Healthcare AI Models in Azure AI Model Catalog
The healthcare industry is undergoing a transformation, driven by the power of artificial intelligence (AI). Last week at the HLTH conference, Microsoft announced advanced healthcare AI models in Azure AI Studio.
Developed in collaboration with Microsoft Research, our strategic partners, and leading healthcare institutions, these AI models are specifically designed for healthcare organizations to rapidly build and deploy AI solutions tailored to their specific needs, all while minimizing the extensive compute and data requirements typically associated with building multimodal models from scratch. With the healthcare AI models, healthcare professionals have the tools they need to harness the full potential of AI to assist patient care.
Modern medicine encompasses various data modalities, including medical imaging, genomics, clinical records, and other structured and unstructured data sources. Understanding the intricacies of this multimodal environment, Azure AI onboards specialized healthcare AI models that go beyond traditional text-based applications, providing robust solutions tailored to healthcare’s unique challenges.
Introducing the New Healthcare AI Models in the Azure AI Model Catalog:
The Azure AI model catalog has a new industry “Health and Life Sciences” filter with a new array of state-of-the-art, open source, healthcare models. This includes Microsoft’s first-party models as well as models by strategic partners:
MedImageInsight (paper) This embedding model enables sophisticated image analysis, including classification and similarity search in medical imaging. Researchers can use the model embeddings and build adapters for their specific tasks, streamlining workflows in radiology, pathology, ophthalmology, dermatology and other modalities. For example, researchers can explore how the model can be used to build tools to automatically route imaging scans to specialists, or flag potential abnormalities for further review, enabling improved efficiencies and patient outcomes. These models must be thoroughly tested, validated, and, in some cases, further fine-tuned to ensure their applicability in specific use cases. Furthermore, the model can be leveraged for responsible AI such as out-of-distribution (OOD) detection and drift monitoring, to maintain stability and reliability of AI tools and data pipelines in dynamic medical imaging environments.
Example Jupyter notebooks deployable in Azure Machine Learning:
https://aka.ms/healthcare-ai-examples-mi2-deploy
https://aka.ms/healthcare-ai-examples-mi2-zero-shot
https://aka.ms/healthcare-ai-examples-mi2-adapter
https://aka.ms/healthcare-ai-examples-mi2-exam-parameter
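As a rough illustration of how one of these models might be called once deployed, the sketch below posts an image to an Azure ML managed online endpoint and prints the response. The endpoint URL and key are placeholders, and the request body shown is an assumed shape; the example notebooks linked above show the actual input schema expected by each model.

# Illustrative sketch: score a deployed healthcare model (e.g., MedImageInsight) via its
# managed online endpoint. URL, key, and the payload layout are placeholders/assumptions.
import base64
import json
import requests

scoring_uri = "https://<endpoint-name>.<region>.inference.ml.azure.com/score"  # placeholder
api_key = "<endpoint key>"  # placeholder

with open("sample_image.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# Assumed request shape; consult the example notebooks for the model's real input schema.
payload = {"input_data": {"columns": ["image"], "data": [[image_b64]]}}

resp = requests.post(
    scoring_uri,
    data=json.dumps(payload),
    headers={"Content-Type": "application/json", "Authorization": f"Bearer {api_key}"},
)
resp.raise_for_status()
print(resp.json())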
MedImageParse (paper) Designed for precise image segmentation, this model covers various imaging modalities, including X-Rays, CT scans, MRIs, ultrasounds, dermatology images, and pathology slides. It can be fine-tuned for specific applications such as tumor segmentation or organ delineation, allowing developers to test and validate the ability to leverage AI for highly targeted cancer and other disease detection, diagnostics and treatment planning.
Example Jupyter notebooks deployable in Azure Machine Learning:
https://aka.ms/healthcare-ai-examples-mip-deploy
https://aka.ms/healthcare-ai-examples-mip-examples
CXRReportGen (paper) Chest X-rays are the most common radiology procedure globally. They're crucial because they help doctors diagnose a wide range of conditions—from lung infections to heart problems. These images are often the first step in detecting health issues that affect millions of people. By incorporating current and prior images, along with key patient information, this multimodal AI model generates report findings from chest X-rays, highlighting AI-generated findings directly on the images to align with human-in-the-loop workflows. Researchers can test this capability and the potential to accelerate turnaround times while enhancing the diagnostic precision of radiologists. This model has demonstrated exceptional performance on the industry-standard MIMIC-CXR benchmark (paper).
Example Jupyter notebooks deployable in Azure Machine Learning:
https://aka.ms/healthcare-ai-examples-cxr-deploy
Partner models: Paige.ai, Providence Healthcare, NVIDIA, and M42 contributed foundational models to the catalog, spanning areas including pathology, 3D medical imaging, biomedical research, and medical knowledge sharing. Developed under a core set of shared AI principles, these models provide a powerful starting point for organizations as they launch their own AI projects, while embedding responsible practices across the industry.
The open access to AI models on the catalog and modular approach allows healthcare organizations to customize solutions, maintain control over their data, and build trust through shared development and oversight. This approach aligns with our commitment to responsible AI, ensuring our technologies meet ethical standards and earn the trust of the medical community.
Microsoft is dedicated to responsibly scaling artificial intelligence and continuously improving our tools by listening and learning. Importantly, Microsoft does not use customer data to train AI models without explicit customer permission or in undisclosed ways. We collaborate with organizations to help them harness their data, enabling the development of predictive and analytical solutions that may drive their competitive advantage.
Azure AI Studio: Empowering Health and Life Sciences with Seamless AI Integration
Azure AI Studio offers healthcare professionals a comprehensive platform to develop, fine-tune, deploy, and continuously monitor AI models tailored to their unique needs. With the introduction of the new healthcare AI models, Azure AI Studio simplifies the integration of AI into healthcare workflows which allows professionals to focus on improving patient outcomes. Here’s how Azure AI Studio delivers value:
Bring your data and fine-tune models: Azure AI Studio and healthcare AI models complement the healthcare data solutions available in Microsoft Fabric, creating a unified environment to bring multimodal proprietary data to enable a wide range of use cases. Healthcare professionals can leverage the models as-is using the playground in Azure AI Studio or fine-tune pre-trained models with their data in Azure Machine Learning to adapt models for their specific clinical needs.
Rapid Development and Deployment: Azure AI Studio provides an intuitive interface and a comprehensive set of generative AI operations (GenAIOps) toolchains that enable professionals to quickly develop, test, and deploy AI applications. This streamlined process can accelerate the adoption of AI in healthcare, empowering organizations to integrate sophisticated diagnostic and analytical tools into their existing workflows. With built-in support for deploying models in cloud, on-premises, or hybrid environments, healthcare professionals can optimize their AI solutions for various clinical settings.
Supporting Safety and Compliance: Trust is crucial in healthcare, where AI can impact patient care. The model cards in the model catalog share details about the training and evaluation datasets used, including fairness testing where applicable. The platform supports hybrid deployment options for enhanced control over sensitive healthcare data. Additionally, Azure AI aligns with healthcare regulations such as HIPAA, helping organizations maintain high standards of data security, patient confidentiality, and overall compliance.
Real-world Impact: Customer Success Stories
Healthcare organizations are already leveraging these models to transform their workflows with Azure AI:
Mass General Brigham: MGB is using Microsoft's MedImageInsight model to surface additional relevant information during clinical research, streamline radiologist workflows, alleviate the administrative workload on clinical staff, and enhance the speed of patient care.
University of Wisconsin-Madison: UW is targeting advanced report generation from medical imaging analysis using Microsoft’s CXRReportGen. With ever-increasing imaging volumes colliding with the ongoing combination of radiologist burnout and shortages, a state-of-the-art medical imaging model can be used to build an application that can transform a medical image into a draft note, supporting better outcomes for patients while helping clinicians focus on the hands-on components of their roles.
Sectra: Sectra is working with Microsoft to build on top of foundational models like MedImageInsight to automate the process of understanding the types of exams coming through the Sectra Vendor Neutral Archive (VNA) system for better routing and display.
Mars PETCARE: Mars PETCARE is exploring the use of the healthcare AI models for veterinary medicine applications, such as data evaluation in radiology and pathology teams. By combining veterinary expertise with advanced AI models, Mars PETCARE is setting new standards in animal health. This collaboration has the potential to transform veterinary diagnostics, improve the quality of care for pets, and showcase the versatility of healthcare AI models in non-human medical contexts.
Paige: In life sciences, Paige is working to combine radiology, pathology, and genomic insights for a more comprehensive approach to disease diagnosis, aimed at accelerating the discovery of new treatments.
Join Us in Shaping the Future of Healthcare
Join us at Microsoft Ignite to witness these models in action and learn how they can transform your healthcare practice. Visit the documentation and AI Studio to explore these cutting-edge healthcare AI models and start your journey toward a data-driven, AI-empowered future.
Medical device disclaimer: Microsoft products and services (1) are not designed, intended or made available as a medical device, and (2) are not designed or intended to be a substitute for professional medical advice, diagnosis, treatment, or judgment and should not be used to replace or as a substitute for professional medical advice, diagnosis, treatment, or judgment. Customers/partners are responsible for ensuring solutions comply with applicable laws and regulations.
Generative AI does not always provide accurate or complete information. AI outputs do not reflect the opinions of Microsoft. Customers/partners will need to thoroughly test and evaluate whether an AI tool is fit for the intended use and identify and mitigate any risks to end users associated with its use. Customers/partners should thoroughly review the product documentation for each tool.
What’s New in Microsoft Teams | October 2024
I hope that you’ve had the chance to try out some of the time-and-effort-saving new features we launched in September, like one of the favorites for Teams Premium and Copilot users, ‘Copilot in Teams meetings can now source responses from meeting chat’. It’s incredibly helpful that Copilot can now process the content from chat, as well as the transcript of the meeting, to make sure you don’t miss anything.
This month, we have even more updates to share that bring intelligence, convenience, and productivity together in Teams. A few I’m most excited for you to try are: the highly anticipated Queues app, which makes handling and monitoring customer calls easier for call center agents and leads; voting, filtering, sorting, and archiving in Teams Q&A for town halls and webinars, which lets attendees upvote the questions they find most compelling; and expanded cross-platform meetings via SIP join, which lets you use Microsoft Teams Rooms to join meetings hosted on other services like Google Meet, Zoom, Cisco Webex, Amazon Chime, RingCentral, and others.
Have a look at those, and all the others, and let us know what you think!
Chat and Collaboration
Manage your teams and channels easily with the refreshed teams and channels view
This new page view allows you to easily access and manage your teams and channels. Simply click on the ellipsis at the top of your teams list and select “Your teams and channels.” This new hub lists all the teams you are a member of, allowing you to search, triage, and manage them efficiently. Here, you can create a new team, use search and filters like ‘teams you own’ or ‘archived teams’ to find what you need, respond to pending channel invites, and utilize the analytics tab to learn about your teams’ engagement. When you select a team, you can view all its associated channels and personalize your workspace by choosing to show only the channels of interest in Teams.
Chat details information pane
Now you can easily get an overview of all the key details in your 1:1 and group chats with a new information pane. Just click the ‘open chat details’ button located at the top right of Teams, next to the participant list, to see and access important information about the chat, including participants, shared files, pinned messages, and the option to start a search within the chat. With the information pane, you can access key chat information without losing sight of activity happening in the chat.
Meetings, Webinars and Town Halls, & Mesh
Recap notifications in Teams Activity feed
Users will receive a notification in their Teams Activity feed when the intelligent meeting recap for a meeting they have been invited to is ready. Clicking on the notification in Activity feed will take users directly to the recap.
Meeting organizers can manage who can admit attendees from the lobby
This feature helps meeting organizers keep meetings secure by controlling who can admit attendees from the lobby. On the Meeting Options page, organizers can choose to allow only the organizer and co-organizers to admit attendees, or to allow organizers, co-organizers, and presenters to admit attendees.
Voice isolation for Teams on MacOS
With voice isolation, you can enjoy clear and uninterrupted calls or meetings, no matter where you are. Voice isolation is an AI-based advanced noise suppression feature that eliminates unwanted background noise, including other human voices. The technology recognizes your voice profile and ensures only your voice is transmitted. Voice isolation can be enabled for calls and meetings, making it easy to work from anywhere. It is now available on macOS in addition to Windows desktops.
Manage Teams Town Hall and Webinar attendee communication
Organizers and co-organizers can now enable external email automation platforms to manage event communications with their Town Hall and Webinar attendees and registrants. This allows your organization to streamline your communications efforts and aggregate data from all aspects of your campaigns from within the main email automation platform of your choice.
Voting, filtering, sorting, and archiving in Teams Q&A
These enhancements to Teams Q&A can boost digital event engagement and make your town halls, webinars, and training sessions more interactive and organized. With voting, each attendee can upvote important questions, ensuring they rise to the top of the Q&A feed. Both attendees and organizers can sort questions by votes or activity, surfacing the most relevant questions, those receiving the most interaction. Archiving allows organizers to move older or irrelevant questions to a separate feed, keeping the current Q&A focused and clutter-free. Learn more at https://aka.ms/GetQnA
Teams webinars in GCCH
Teams webinars are now available in the Government Community Cloud High (GCCH) environment, providing a secure and compliant platform for hosting large-scale virtual events.
Microsoft Mesh developer environment update with Mesh Pavilion sample
Introducing the Mesh Pavilion sample, a versatile environment designed specifically for developers to build and customize interactive scenarios. This sample includes a variety of pre-built elements such as a bean bag toss, fire pit, waterfall, and screenshare, all constructed from objects found in the Toybox package and other Mesh Toolkit features. It simplifies the development process by providing ready-to-use assets and activities, enabling developers to create engaging and high-performance environments efficiently. More info.
Expanded Avatar Catalog
Enhance your avatar creation with 51 new professional wardrobe items and 19 diverse hair options, allowing for greater self-expression and personalization. These updates allow you to create avatars that better reflect your unique identity and preferences. The expanded options are available through the catalog to all Teams customers with avatars enabled.
Teams Phone
Queues app for Microsoft Teams
The new Queues app is a solution for collaboratively handling customer calls natively in Teams. Call queue members can easily handle inbound calls without ever leaving Teams, whether the call is made via PSTN or VoIP. Agents and supervisors can make outbound calls on behalf of the call queues or auto attendants they are assigned to, and they can review call queue statistics while seamlessly collaborating with their colleagues, all within the flow of work. Members of the queue can easily opt in and out depending on availability and business need.
Supervisors can monitor their call queues and auto attendants in real-time, generate reports on queue and agent performance, and access historical reporting. Call queue and auto attendant configuration is intuitively designed, with admin delegated rights that allow leads to manage members as well as call queue and auto attendant settings, all within Teams.
Additional silent coaching controls such as monitor, whisper, barge, and take over will be available after general availability. The Queues app for Microsoft Teams requires a Teams Premium license and is now generally available.
Teams Rooms and Devices
Digital signage in Teams Rooms on Windows
IT Admins can now configure Teams Rooms on Windows to display dynamic, engaging, and relevant content for users to view on the Teams Rooms front-of-room display when the device is not in use.
Admins can configure tenant-wide and room-specific settings from the Teams Rooms Pro Management portal. Integrations with selected third-party digital signage providers and content management systems are supported, with initial partners Appspace and XOGO. This feature is available in Teams Rooms Pro. Learn more about digital signage in Teams Rooms.
Manage digital signage in the Teams Rooms Pro Management portal
IT Admins can now use Teams Rooms Pro Management to manage digital signage, configure device settings, set custom URLs as sources, and enable trusted third-party integrations. This feature is available in Teams Rooms Pro. Learn more about the Pro Management portal.
Teams Rooms expanded cross-platform meetings via SIP join
Microsoft Teams Rooms on Windows can now join meetings hosted on services such as Google Meet, Zoom, Cisco Webex, Amazon Chime, RingCentral, and others, provided the conferencing service supports SIP (Session Initiation Protocol) join. This functionality creates a seamless collaboration experience that mirrors the native Teams interface and supports features such as 1080p video, dual screens, various layout options, and HDMI input, contingent on the third-party platform’s in-meeting controls. This feature requires a Teams Rooms Pro license and a SIP plan from a CVI partner (currently limited to Pexip). Learn more about third-party interoperability in Teams Rooms.
Enhanced admin controls for Cloud IntelliFrame in Teams Rooms on Windows
Admins can now override the default settings of Cloud IntelliFrame and support cameras even if not on the supported camera list. This feature is available in Teams Rooms Pro. Learn more about Cloud IntelliFrame.
Enhanced shared display mode with room audio auto-detect in a BYOD room
For your smallest rooms, or when a Teams Room is not available, shared display mode in a bring your own device (BYOD) room is automatically activated when you connect your laptop to a room’s audio peripheral via USB. Teams detects the device and recommends joining with room audio on the pre-join screen, streamlining content sharing and enhancing hybrid meetings. On devices already identified by Microsoft as shared devices, users are seamlessly transitioned to room audio and shared display mode. Learn more about bring your own device (BYOD) rooms.
Automatically set workplace presence in a BYOD room
Sharing your work location makes it easier to connect with co-workers who are in the office. Now, when you connect to peripherals in a BYOD room, your workplace presence will be automatically set to ‘In the office’ (if you and your admin have given permission). This feature supports intelligent booking solutions that are coming soon.
Participant roster grouping in Teams BYOD rooms
Teams will now automatically list you as an ‘in-room participant’ on the meeting roster when you join a Teams meeting with your laptop in a BYOD room. Grouping in-room participants enables individual identification and meeting intelligence capabilities. Now in public preview, with general availability later this year.
Devices certified for Teams
Find certified for Teams devices for all types of spaces and uses at aka.ms/teamsdevices.
MeetUp 2 for Teams Rooms on Windows (with Lenovo Core)
Newly certified for Teams, the Logitech MeetUp 2 is an all-in-one USB conference camera designed for huddle and small rooms. It brings simple setup, a selection of AI-enabled audio and video features, and easy management to USB-based deployments, either with an in-room compute device or in BYOD mode when connected to a laptop. It is easy to set up in small spaces, thanks to multiple mounting options, clean cabling, and a manual privacy shutter. Learn more.
Poly Blackwire 8225 Stereo USB-C+A
Stay focused with the Poly Blackwire 8225 headset. The noise-canceling microphone enhanced with Acoustic Fence technology makes sure users sound clear in any environment. And the advanced hybrid active noise canceling (ANC) lets users adjust the setting to fit their environment, so you can enjoy high-quality audio for meetings, calls, and stereo music. Learn more
Poly Blackwire 3310 Monaural USB-C+A
This intuitive and simple to use headset is built with style and priced for enterprise deployment. It’s comfortable and reliable, with signature audio quality. The monaural over-ear headset includes a boom-arm microphone and convenient controls on the connection wire.
Epos Impact 430T
With plug-and-play connectivity and features like lift-to-mute, the Epos Impact 430T helps hybrid workers manage calls with less effort. This monaural wired headset with a boom-arm microphone lets you hear and be heard on any call, and advanced noise-filtering technology transmits your voice instead of unwanted noise. And the ultralight, comfortable design and small noise-dampening ear pad help you stay productive throughout the day.
Epos Impact 460T
The Epos Impact 460T provides all the quality, comfort, and features of the Impact 430T, with a binaural over-ear design for stereo sound.
Frontline Worker solutions
Working Time & Quiet Time in Teams
Rest, recharge, and reduce stress with Working Time and Quiet Time in Teams. Receiving notifications while off the clock adds unnecessary strain for frontline workers and their admins and makes it harder to achieve work-life balance.
Working Time links clock-in status with access to Teams, safeguarding the valuable time employees have off work. Organization administrators can set the Teams app to be active only when employees are clocked in, and to either show a warning screen or be muted and inaccessible when they are clocked out.
Quiet Time allows administrators to ensure that employees can wind down after hours and when off the clock without worry. Admins can use Quiet Time to restrict pop-up notifications, remove notification badges from the Teams app icon, and indicate that conversation notifications are muted during hours when employees are off the clock. If permitted, Quiet Time can still allow access to necessary communications when an employee opens Teams while off the clock.
Managed Home Screen Re-design
Managed Home Screen (MHS) is an application used on Android Enterprise dedicated devices enrolled in Intune and on fully managed, user-affiliated devices. It gives admins the ability to customize and control the user experience on enrolled devices. This update to the MHS app provides a more streamlined user experience and more robust support for organizations. One key addition is a configurable navigation bar at the top of the page, which enables easier navigation and quick access to device-identifying information. Admins can configure different information display options for this navigation bar and set display formats based on sign-in status. This added flexibility provides clear device and user identification and simplifies device control and management. We’ve also added a few other new features to MHS that are only available as part of this latest update. For more details, see Updates to the Managed Home Screen experience on the Microsoft Community Hub.
Smart HR: Leveraging AI for Better People Management
Welcome to our latest blog post in the AI Empowerment series! In a recent webinar, we explored the world of AI in Human Resources. The webinar was packed with insights, practical examples, and a whole lot of fun! Here’s a recap of the key points discussed and some practical tips on how you can start using AI in your everyday HR tasks.
The current state of AI in HR
We began the conversation by discussing where HR as a profession is with AI adoption today. While we are seeing general optimism and acknowledgement that organizations need to keep up with AI, there are also concerns around responsible AI use, manager enablement, and data privacy. Here are some key findings from a recent study by Gartner (2024):
Optimism and Adoption: 61% of HR professionals are optimistic about AI’s potential to enhance HR practices, particularly in HR analytics, employment law, and learning and development. However, only 38% of HR leaders are currently piloting or implementing AI solutions today.
Common use cases today: Recruitment, chatbots and virtual assistants, and reducing administrative load are the top areas where AI is making an impact.
Challenges and Concerns
Data privacy and security: Ensuring robust data handling practices and protecting sensitive information are paramount.
Professional caution: HR practitioners need to navigate the complexities of using new technology while keeping up with evolving laws and maintaining a human touch.
Using Microsoft 365 Copilot in everyday HR tasks
Using the Microsoft Copilot HR Scenario Library as inspiration, we explored a few use cases to demonstrate where and how Copilot could be leveraged in various HR tasks.
Conducting Market Research for Recruitment
Example Copilot prompt: “Generate a table of key skills and experience required for software engineers at large enterprise technology companies based on market trends and leading technology companies.”
Creating Job Descriptions
Example Copilot prompt: “Draft a job description for a software engineer role, including key responsibilities, required skills, and company culture.”
Updating Policy Documents
Example Copilot prompt: “Scan the latest version of our remote work policy and create an FAQ document for employees.”
Writing effective prompts for your daily HR tasks
During the webinar, we spoke about the importance of experimenting with prompt writing and iterating to create prompts that produce the desired outcome. We wrote some prompts together and reviewed the resulting Copilot-generated output! Here are a few examples that stood out:
1. Writing pre-survey communication to employees
– Example Copilot prompt: “Help me draft an email to our employees about the upcoming global employee survey. The email should include information about when the survey is open and the closing deadline, reasons employees should participate and how they can provide their feedback.”
2. Encouraging employee participation in the survey
– Example Copilot prompt: “We are in the middle of our global employee engagement survey period and have only received 20 responses across the company. Help me write a reminder email to employees encouraging them to participate and the importance of sharing their feedback. Use a tone that is encouraging yet friendly as employees are very busy at this time of year.”
3. Helping leaders re-draft internal communications about a Return to Office policy using appropriate style and tone
– Example Copilot prompt: “Please provide a new version of the below email draft. I’d like the tone to be professional, but ensure the language used is empathetic and compassionate.”
Looking to the future
The potential for AI to revolutionize human resources is boundless. Imagine a world where HR professionals can focus more on strategic initiatives and employee engagement, while AI handles the repetitive and time-consuming tasks. The integration of AI into HR processes not only enhances efficiency but also opens up new avenues for innovation and growth in the profession. Here are some further resources to help you navigate your HR AI journey:
Viva Community Call (Oct 2024): How Microsoft HR is using Viva and M365 Copilot to Empower Employees
Microsoft Copilot Academy now available to all Microsoft 365 Copilot users
Empowering responsible AI practices
If you missed our live webinar, you can also watch the recording and access the presentation below.
What’s New In Microsoft 365 Copilot | October 2024
Welcome to the October 2024 edition of What’s new in Microsoft 365 Copilot! Every month, we highlight new features and enhancements to keep Microsoft 365 admins up to date with Copilot features that help your users be more productive and efficient in the apps they use every day.
Let’s take a closer look at what’s new this month:
Admin and management capabilities:
See web search queries in the citation section of your Copilot response
Expanded controls for managing web searches in Copilot
More languages supported in Microsoft 365 Copilot
End-user capabilities:
New industry prompt collections in Copilot Lab
Expanded file summarization with Copilot in Loop
Prepare for meetings with Copilot in Outlook
Improved rewrites and tables with Copilot in Word
Unlocking complex problem-solving and code reviewing in BizChat
See web search queries in the citation section of your Copilot response
To help users understand what search queries were used to enhance Copilot responses, the exact web search queries derived from users’ prompts are shown in the linked citation section of Copilot responses.
Bing web search query citations give users valuable feedback on exactly how their prompts are used to generate the web queries sent to Bing search. This transparency gives users the information they need to improve prompts and use Copilot more effectively. This feature is rolling out in November.
Expanded controls for managing web searches in Copilot
Allowing Copilot to reference web content via web search improves the quality of Copilot responses by grounding them in the latest information from the web via the Bing search service. To give IT admins more ways to manage web search in Microsoft 365 Copilot and Microsoft Copilot, we’re introducing the Allow web search in Copilot policy.
This new control lets you manage web search separately from other optional connected experiences for Microsoft 365. It also supports turning web grounding off in the Work tab while keeping it on in the Web tab.
Both the Allow web search in Copilot policy feature and the optional connected experiences policy can be managed at the user and group levels from the Microsoft 365 Apps admin center. This feature is rolling out in November.
More languages supported in Microsoft 365 Copilot
We’ve added twelve more languages to the list supported for Copilot prompts and responses: Bulgarian, Croatian, Estonian, Greek, Indonesian, Latvian, Lithuanian, Romanian, Serbian (Latin), Slovak, Slovenian, and Vietnamese. Microsoft 365 Copilot now supports a total of 42 languages. These additional languages rolled out in October.
In early October, we also introduced support for Welsh and Catalan. Additionally, the rollout of Indonesian and Serbian, which began in mid-October, will reach all customers by early November. Until the issue is resolved, users working in Serbian will see Teams meeting transcripts in Cyrillic rather than Latin script; support documentation will be updated with progress toward providing Serbian meeting transcripts in Latin script. Learn more about supported languages for Microsoft Copilot.
We are always improving and refining Copilot’s language capabilities. We are also continuing to expand the list of supported languages in the coming months.
New industry prompt collections in Copilot Lab
New industry-specific prompt collections are coming to Copilot Lab! These collections offer users customized prompts that cater to specific tasks in sustainability, consumer goods, nonprofit, mobility, and retail sectors. These prompts are designed to assist users in addressing industry-specific challenges and enhancing their workflows. These will roll out in November.
Here are some example prompts users can find in these new collections:
Sustainability: “What’s the best way for a company to analyze its emissions data and identify potential strategies to reduce its overall carbon footprint?”
Consumer Goods: “Draft an email informing channel partners about a new planogram for Food and Beverages and its validity period”
Nonprofit: “Create a major donor profile template”
Mobility: “Create a table that highlights important differences in rules and regulations for a trip between California and Washington State with a class 7 vehicle. Include all intermediate states.”
Retail: “Create a training template for employee training on [upselling items during customer conversations]”
Expanded file summarization with Copilot in Loop
With Copilot in Loop, users can now attach a Word, Excel, or PowerPoint file, or a Loop page to a prompt. Users can then issue commands about the file, such as summarizing or extracting key points. Copilot will review the file to provide context, and the response will be displayed directly on the Loop page. This feature rolled out in October.
Prepare for meetings with Copilot in Outlook
With Copilot, users can now get ready for their next meeting in minutes. When a user has an upcoming meeting, Copilot proactively shows a “Prepare” button at the top of the inbox, which helps the user quickly get context for the meeting by creating a meeting agenda summary and showing and summarizing relevant files. This feature is rolling out in December on web and new Outlook.
Improved rewrites and tables with Copilot in Word
When users select text in a Word document and then select Copilot, they now have more options to fine-tune the text, including auto rewrite, coaching, visualizing the text as a table, or adding their own prompts. This feature is available now on web, desktop, and Mac.
A popular Copilot in Word feature is the ability to select text and generate a table. Now users can take that a step further with additional support for formatting including font styling and cell styling. This feature rolled out in October on web.
Unlocking complex problem-solving and code reviewing in BizChat
Copilot can now tackle complex tasks when a user writes a prompt in natural language, making it easier to solve math problems, analyze data, and create visualizations. Users can also review the Python code generated by Copilot to deepen their understanding of the output, and can copy the code by clicking the </> Code button and pasting it into their preferred location. This feature started rolling out in October.
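To give a feel for the kind of Python users might see and copy, here is a hypothetical sketch of generated code for a simple data-analysis prompt; the data, column names, and chart are illustrative only and are not actual Copilot output.

# Hypothetical example of the kind of Python a Copilot response might include
# for a prompt like "analyze monthly sales and chart the trend".
# The data and column names are illustrative, not actual Copilot output.
import pandas as pd
import matplotlib.pyplot as plt

sales = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr", "May", "Jun"],
    "revenue": [120, 135, 128, 150, 162, 171],
})

# Basic descriptive statistics for the revenue column
print(sales["revenue"].describe())

# Month-over-month growth rate
sales["growth_pct"] = sales["revenue"].pct_change() * 100
print(sales[["month", "growth_pct"]])

# Simple trend visualization
sales.plot(x="month", y="revenue", marker="o", title="Monthly revenue")
plt.ylabel("Revenue (thousands)")
plt.tight_layout()
plt.show()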
Did you know? The Microsoft 365 Roadmap is where you can get the latest updates on productivity apps and intelligent cloud services. Please note that the dates mentioned in this article are tentative and subject to change. Check back regularly to see what features are in development or coming soon.
Customize the Phi-3.5 family of models with LoRA fine-tuning in Azure
The Phi model collection represents the latest advancement in Microsoft’s series of Small Language Models (SLMs). Back in August 2024, we welcomed the latest additions, Phi-3.5-mini and Phi-3.5-MoE, a Mixture-of-Experts (MoE) model:
Phi-3.5-mini: This 3.8B parameter model enhances multi-lingual support and reasoning capability, and offers an extended context length of 128K tokens.
Phi-3.5-MoE: Featuring 16 experts and 6.6B active parameters, this model delivers high performance, reduced latency, multi-lingual support, and robust safety measures, surpassing the capabilities of larger models while maintaining the efficacy of the Phi models.
From Generalist to Custom SLMs
The results from our benchmarks underscore the remarkable efficiency and capability of Phi-3.5-mini and Phi-3.5-MoE. Even so, the models can be further customized for your unique needs to match the performance of larger models on a given task.
There are three powerful techniques that can be used to customize language models for your organization’s specific needs:
Prompt Engineering
Retrieval Augmented Generation (RAG)
Fine-tuning
Let’s delve into each of these techniques.
Prompt Engineering is about providing clear instructions directly within the prompt, often in the system prompt, to guide the model’s responses. This method falls under the category of “giving more information in the prompt” and can be particularly useful for shaping the model’s behavior and output format.
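As a rough illustration of prompt engineering, the minimal sketch below shows a system prompt that constrains a chat model’s behavior and output format. The message structure is the common chat-completions shape; the organization name and instructions are purely illustrative.

# Minimal prompt-engineering sketch: a system prompt that constrains
# behavior and output format. The organization and rules are examples only.
system_prompt = (
    "You are an HR assistant for Contoso. "   # hypothetical organization
    "Answer only questions about company policy. "
    "Respond in at most three bullet points and name the policy you cite."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "How many days of parental leave do we offer?"},
]

# `messages` can be passed to any chat-completions style API;
# the system message shapes every subsequent response.
print(messages)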
Next, we have Retrieval Augmented Generation, or RAG. This technique is employed when you want to incorporate organizational data and knowledge into the model’s responses. RAG allows you to provide the model with reliable sources for answers through additional documents. It retrieves the relevant information and augments the prompt, enhancing the model’s ability to generate informed and contextually accurate responses. RAG also belongs to the category of “giving more information in the prompt”.
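The sketch below illustrates the RAG pattern just described: retrieve relevant documents, then augment the prompt with them. The documents and the naive keyword retriever are stand-ins; a production system would use a search index or vector store.

# Minimal sketch of Retrieval Augmented Generation (RAG):
# retrieve relevant documents, then augment the prompt with them.
# The documents and the keyword-based retriever are illustrative only.
documents = [
    "Remote work policy: employees may work remotely up to three days per week.",
    "Expense policy: meals over 50 USD require manager approval.",
]

def retrieve(query: str, docs: list[str]) -> list[str]:
    # Naive keyword overlap; a real system would use a search index
    # or vector store instead.
    terms = set(query.lower().split())
    return [d for d in docs if terms & set(d.lower().split())]

question = "How many days per week can I work remotely?"
context = "\n".join(retrieve(question, documents))

augmented_prompt = (
    "Answer using only the sources below.\n"
    f"Sources:\n{context}\n\n"
    f"Question: {question}"
)
print(augmented_prompt)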
Fine-tuning is the process of customizing a model with labeled training data, which often leads to better performance and reduced computational resource requirements. In fact, fine-tuning a smaller model with the appropriate training data can make it outperform a larger model on a specific task. In particular, Low-Rank Adaptation (LoRA) fine-tuning is an excellent approach for adapting language models to specific use cases, for several reasons. First, LoRA significantly reduces the number of trainable parameters, making the fine-tuning process more efficient and saving time and cost. This reduced demand on resources allows for quicker iterations, making it easier to experiment with fine-tuning tasks. LoRA also keeps the original model weights mostly unchanged, which helps preserve the general capabilities of the pre-trained model while adapting it to specific tasks.
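For readers who want to see what LoRA looks like in code, here is a minimal sketch using the Hugging Face transformers and peft libraries. The checkpoint ID, target module names, and hyperparameters are illustrative assumptions, not the exact configuration Azure AI uses for its managed fine-tuning.

# Minimal LoRA sketch with Hugging Face peft. The checkpoint, target module
# names, and hyperparameters are illustrative assumptions; adjust for your task.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "microsoft/Phi-3.5-mini-instruct"           # assumed Hugging Face checkpoint ID
tokenizer = AutoTokenizer.from_pretrained(base_id)    # used later to tokenize training examples
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_config = LoraConfig(
    r=16,                                   # rank of the low-rank update matrices
    lora_alpha=32,                          # scaling factor for the updates
    lora_dropout=0.05,
    target_modules=["qkv_proj", "o_proj"],  # assumed attention projection names; adjust per model
    task_type="CAUSAL_LM",
)

# Wrap the base model: only the small LoRA adapter weights are trainable,
# while the original model weights stay frozen.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Training then proceeds with a standard loop or transformers.Trainer over
# your labeled examples; Azure AI's managed fine-tuning runs the equivalent
# steps for you as a serverless job.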
LoRA fine-tuning for Phi-3.5 models
Today, we are proud to announce the availability of LoRA fine-tuning for the Phi-3.5-mini and Phi-3.5-MoE models in Azure AI, starting November 1, 2024.
Serverless fine-tuning for the Phi-3.5-mini and Phi-3.5-MoE models enables developers to quickly and easily customize the models for cloud scenarios without having to manage compute. The fine-tuning experience is available in Azure AI Studio, and it adopts a pay-as-you-go approach, ensuring you only pay for the actual training time your fine-tuning requires.
Once fine-tuned, the models can be deployed in Azure for inference, with the option to enable Content Safety. Deploying your fine-tuned model is a streamlined process with our pay-as-you-go service. The billing for fine-tuned model deployments is based on the number of input and output tokens used, along with a nominal hosting charge for maintaining the fine-tuned model. Once deployed, you can integrate your fine-tuned model with leading LLM tools like prompt flow, LangChain, and Semantic Kernel, enhancing your AI capabilities effortlessly.
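As a rough sketch of calling a deployed fine-tuned model from Python, the example below uses the azure-ai-inference client. The endpoint URL and key are placeholders read from environment variables, and the message content is illustrative; your deployment’s details may differ.

# Sketch of calling a fine-tuned model deployed as a serverless endpoint in
# Azure AI, using the azure-ai-inference package. The endpoint and key are
# placeholders; substitute the values from your own deployment.
import os
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],   # placeholder environment variable
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_KEY"]),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a concise support assistant."),
        UserMessage(content="Summarize the key steps to reset a user's password."),
    ],
    temperature=0.2,
    max_tokens=256,
)
print(response.choices[0].message.content)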
Fine-tuning with the Managed Compute option is also available. You may use the Azure Machine Learning Studio user interface or follow our notebook example to create your custom model. Using the notebook offers greater flexibility in the configurations used by the fine-tuning job. You also have the option to download the fine-tuned model and deploy it using Azure managed compute resources, your own on-premises infrastructure, or your edge devices. The fine-tuned model is licensed under the MIT license.
Closing remark
The Phi-3.5 family of models represents a significant advancement in the realm of SLMs. With the introduction of Phi-3.5-mini and Phi-3.5-MoE, we have pushed the boundaries of performance, efficiency, and customization. The availability of LoRA fine-tuning in Azure AI further empowers developers to tailor these models to their specific needs, ensuring optimal performance for a wide range of applications. As we continue to innovate and refine our models, we remain committed to providing cutting-edge solutions that drive progress and enhance user experiences. Thank you for joining us on this journey, and we look forward to seeing the incredible things you will achieve with the Phi-3.5 models.