Month: September 2024
Unlock Analytics and AI for Oracle Database@Azure with Microsoft Fabric and OCI GoldenGate
The strategic partnership between Oracle and Microsoft has redefined the enterprise cloud landscape. Oracle Database@Azure seamlessly integrates Oracle’s database services with Microsoft’s Azure cloud platform, empowering businesses to maintain the performance, security, and reliability of Oracle databases while modernizing with Azure’s extensive cloud services.
As companies strive to accelerate their digital transformation, reduce complexity, and optimize their cloud strategies, data remains central to their success. High-quality data underpins effective business insights and serves as the foundation for AI innovation. Now in public preview, an integration lets customers use OCI GoldenGate—a database replication and heterogeneous data integration service—to sync their data estates with Microsoft Fabric. This integration unlocks new prospects for data analytics and AI applications by unifying diverse datasets, allowing teams to identify patterns and visualize opportunities.
A Unified Platform for Data and AI
Microsoft Fabric is an AI-powered real-time analytics and business intelligence platform that consolidates data engineering, integration, warehousing, and data science into one unified solution. By simplifying the complexity and cost of integrating analytics services, Microsoft Fabric provides a seamless experience for data professionals across various roles.
Microsoft Fabric integrates tools like Azure Synapse Analytics and Azure Data Factory into a cohesive Software as a Service (SaaS) platform, featuring seven core workloads tailored to specific tasks and personas. This platform enables organizations to manage their entire data lifecycle within a single solution, streamlining the process of building, managing, and deploying data-driven applications. With its unified architecture, Microsoft Fabric reduces the complexity of managing a data estate and simplifies billing by offering a shared pool of capacity and storage across all workloads. It also enhances data management and protection with robust governance and security features.
A key highlight of Microsoft Fabric is its integration with native generative AI services, such as Copilot, which enables richer insights and more compelling visualizations. This AI-driven approach can significantly impact business growth by improving decision-making and collaboration across teams. With Power BI and Synapse workloads built in and native integration with Azure Machine Learning, you can accelerate the deployment of AI-powered solutions, making it an essential tool for organizations looking to advance their data strategies.
OCI GoldenGate integration with Microsoft Fabric
OCI GoldenGate is a real-time data integration and replication solution that ensures high availability, disaster recovery, and transactional integrity across diverse environments. When integrated with Microsoft Fabric, OCI GoldenGate adds significant value by enabling seamless, real-time data synchronization between Oracle databases and the AI-powered analytics platform of Fabric. This ensures that data professionals can work with the most up-to-date information across their data ecosystem, enhancing the accuracy and timeliness of insights.
OCI GoldenGate’s ability to support complex data transformations and migrations allows organizations to leverage Microsoft Fabric’s advanced analytics and AI capabilities without disruption, driving faster, more informed decision-making and enabling businesses to unlock new levels of innovation.
Get started
Enhance your data strategy and drive more informed decision-making by integrating Oracle Database@Azure with Microsoft Fabric, leveraging your existing Microsoft and Oracle investments. Get started today through the Azure Marketplace!
Read the Oracle CloudWorld blog: https://aka.ms/OCWBlog24
Learn more about Microsoft Fabric at https://aka.ms/fabric
Learn more about Oracle Database@Azure: https://aka.ms/oracle
Technical documentation: Overview – Oracle Database@Azure | Microsoft Learn
To set up OCI GoldenGate, refer to the documentation: Implement OCI GoldenGate on an Azure Linux VM – Azure Virtual Machines | Microsoft Learn
Get skilled: https://aka.ms/ODAA_Learn
Microsoft Tech Community – Latest Blogs – Read More
Announcing availability of Oracle Database@Azure in Australia East
Microsoft and Oracle are excited to announce that we are expanding the general availability of Oracle Database@Azure for the Azure Australia East region.
Customer demand for Oracle Database@Azure continues to grow – that’s why we’re announcing plans to expand regional availability to a total of 21 regions around the world. Oracle Database@Azure is now available in six Azure regions – Australia East, Canada Central, East US, France Central, Germany West Central, and UK South. To meet growing global demand, the service will soon be available in more regions, including Brazil South, Central India, Central US, East US 2, Italy North, Japan East, North Europe, South Central US, Southeast Asia, Spain Central, Sweden Central, United Arab Emirates North, West Europe, West US 2, and West US 3. In addition to the 21 primary regions, we will also add support for disaster recovery in a number of other Azure regions including Brazil Southeast, Canada East, France South, Germany North, Japan West, North Central US, South India, Sweden South, UAE Central, UK West, and West US.
As part of the continued expansion of Oracle services on Azure, we have new integrations with Microsoft Fabric and Microsoft Sentinel and support for Oracle Autonomous Recovery Service. Visit our sessions at Oracle CloudWorld and read our blog to learn more.
Learn more: https://aka.ms/oracle
Technical documentation: Overview – Oracle Database@Azure | Microsoft Learn
Get skilled: https://aka.ms/ODAA_Learn
Day zero support for iOS/iPadOS 18 and macOS 15
With Apple’s recent announcement of iOS/iPadOS 18.0 and macOS 15.0 Sequoia, we’ve been working hard to ensure that Microsoft Intune can provide day zero support for Apple’s latest operating systems so that existing features work as expected.
We’ll continue to upgrade our service and release new features that integrate elements of support for the new operating system (OS) versions.
Apple User Enrollment with Company Portal
With iOS/iPadOS 18, Apple no longer supports profile-based User Enrollment. Due to these changes, Intune will end support for Apple User Enrollment with Company Portal shortly after the release of iOS/iPadOS 18, and you’ll need to use an alternate management method for enrolling devices. We recommend enrolling devices with account-driven User Enrollment for similar functionality and an improved user experience. For those looking for a simpler enrollment experience, try the new web-based device enrollment for iOS/iPadOS.
Please note, device enrollment with Company Portal will remain unaffected by these changes.
Impact to existing devices and profiles:
After Intune ends support for User Enrollment with Company Portal:
Existing enrolled devices are not impacted and will continue to be enrolled.
Users won’t be able to enroll new devices if they’re targeted with this enrollment type profile.
Intune technical support will only be provided for existing devices enrolled with this method. We won’t provide technical support for any new enrollments.
New settings and payloads
We’ve continued to invest in the data-driven infrastructure that powers the settings catalog, enabling us to provide day zero support for new settings as they’re released by Apple. The Apple settings catalog has been updated to support all of the newly released iOS/iPadOS and macOS settings for both declarative device management (DDM) and mobile device management (MDM) so that your team can have your devices ready for day zero. New settings for DDM include:
Disk Management
External Storage: Control the mount policy for external storage
Network Storage: Control the mount policy for network storage
Safari Extension Settings
Allowed Domains: Control the domain and sub-domains that the extension can access
Denied Domains: Control the domain and sub-domains that the extension cannot access
Private Browsing: Control whether an extension is allowed in Private Browsing
State: Control whether an extension is allowed, disallowed, or configurable by the user
Software Update Settings
Allow Standard User OS Updates: Control whether a standard user can perform Major and Minor software updates
Software Update Settings > Automatic updates
Allowed: Specifies whether automatic downloads of available updates can be controlled by the user
Download: Specifies whether automatic downloads of available updates can be controlled by the user
Install OS Updates: Specifies whether automatic install of available OS updates can be controlled by the user
Install Security Update: Specifies whether automatic install of available security updates can be controlled by the user
Software Update Settings > Deferrals
Combined Period In Days: Specifies the number of days to defer a major or minor OS software update on the device
Major Period In Days: Specifies the number of days to defer a major OS software update on the device
Minor Period In Days: Specifies the number of days to defer a minor OS software update on the device
System Period In Days: Specifies the number of days to defer system or non-OS updates. When set, updates only appear after the specified delay, following the release of the update
Notifications: Configure the behavior of notifications for enforced updates
Software Update Settings > Rapid Security Response
Enable: Control whether users are offered Rapid Security Responses when available
Enable Rollback: Control whether users are offered Rapid Security Response rollbacks
Recommended Cadence: Specifies how the device shows software updates to the user
New settings for MDM include:
Extensible Single Sign On (SSO) > Platform SSO
Authentication Grace Period: The amount of time after a ‘FileVault Policy’, ‘Login Policy’, or ‘Unlock Policy’ is received or updated that unregistered local accounts can be used
FileVault Policy: The policy to apply when using Platform SSO at FileVault unlock on Apple Silicon Macs
Login Policy: The policy to apply when using Platform SSO at the login window
Non Platform SSO Accounts: The list of local accounts that are not subject to the ‘FileVault Policy’, ‘Login Policy’, or ‘Unlock Policy’
Offline Grace Period: The amount of time after the last successful Platform SSO login a local account password can be used offline
Unlock Policy: The policy to apply when using Platform SSO at screensaver unlock
Extensible Single Sign On Kerberos
Allow Password: Allow the user to switch the user interface to Password mode
Allow SmartCard: Allow the user to switch the user interface to SmartCard mode
Identity Issuer Auto Select Filter: A string with wildcards that can be used to filter the list of available SmartCards by issuer, e.g. “*My CA2*”
Start In Smart Card Mode: Control if the user interface will start in SmartCard mode
Restrictions
Allow ESIM Outgoing Transfers
Allow Personalized Handwriting Results
Allow Video Conferencing Remote Control
Allow Genmoji
Allow Image Playground
Allow Image Wand
Allow iPhone Mirroring
Allow Writing Tools
System Policy Control
Enable XProtect Malware Upload
With the upcoming Intune September (2409) release, the new DDM settings will be:
Math
Calculator
Basic Mode
Add Square Root
Scientific Mode – Enabled
Programmer Mode – Enabled
Input Modes – Unit Conversion
System Behavior – Keyboard Suggestions
System Behavior – Math Notes
New MDM settings for Intune’s 2409 (September) release include:
System Extensions
Non Removable System Extensions
Non Removable System Extensions UI
Web Content Filter
Hide Deny List URLs
More information on configuring these new settings using the settings catalog can be found at Create a policy using settings catalog in Microsoft Intune.
Updates to ADE Setup Assistant screens within enrollment policies
With Intune’s September (2409) release, there’ll be six new Setup Assistant screens that admins can choose to show or hide when creating an Automated Device Enrollment (ADE) policy. These include three iOS/iPadOS and three macOS Skip Keys that will be available for both existing and new enrollment policies.
Emergency SOS (iOS/iPadOS 16+)
The IT admin can choose to show or hide the iOS/iPadOS Safety (Emergency SOS) setup pane that is displayed during Setup Assistant.
Action button (iOS/iPadOS 17+)
The IT admin can choose to show or hide the iOS/iPadOS Action button configuration pane that is displayed during Setup Assistant.
Intelligence (iOS/iPadOS 18+)
The IT admin can choose to show or hide the iOS/iPadOS Intelligence setup pane that is displayed during Setup Assistant.
Wallpaper (macOS 14+)
The IT admin can choose to show or hide the macOS Sonoma wallpaper setup pane that is displayed after an upgrade. If the screen is hidden, the Sonoma wallpaper will be set by default.
Lockdown mode (macOS 14+)
The IT admin can choose to show or hide the macOS Lockdown Mode setup pane that is displayed during Setup Assistant.
Intelligence (macOS 15+)
The IT admin can choose to show or hide the macOS Intelligence setup pane that is displayed during Setup Assistant.
For more information, refer to Apple’s SkipKeys | Apple Developer Documentation.
Updates to supported vs. allowed versions for user-less devices
We previously introduced a new model for enrolling user-less devices (or devices without a primary user) for supported and allowed OS versions to keep enrolled devices secure and efficient. The support statements have been updated to reflect the changes with the iOS/iPadOS 18 and upcoming macOS 15 releases:
Support statement for supported versus allowed macOS versions for devices without a primary user.
If you have any questions or feedback, leave a comment on this post or reach out on X @IntuneSuppTeam. Stay tuned to What’s new in Intune for additional settings and capabilities that will soon be available!
LLM Load Test on Azure (Serverless & Managed-Compute)
Introduction
In the ever-evolving landscape of artificial intelligence, the ability to efficiently load test large language models (LLMs) is crucial for ensuring optimal performance and scalability. llm-load-test-azure is a powerful tool designed to facilitate load testing of LLMs running in various Azure deployment settings.
Why Use llm-load-test-azure?
The ability to load test LLMs is essential for ensuring that they can handle real-world usage scenarios. By using llm-load-test-azure, developers can identify potential bottlenecks, optimize performance, and ensure that their models are ready for deployment. The tool’s flexibility, comprehensive feature set, and support for various Azure AI models make it an invaluable resource for anyone working with LLMs on Azure.
Some scenarios where this tool is helpful:
You set up an endpoint and need to determine the number of tokens it can process per minute and the latency expectations.
You implemented a Large Language Model (LLM) on your own infrastructure and aim to benchmark various compute types for your application.
You intend to test the real token throughput and conduct a stress test on your premium PTUs.
Key Features
llm-load-test-azure is packed with features that make it an indispensable tool for anyone working with LLMs on Azure. Here are some of the highlights:
Customizable Testing Dataset: Generate a custom testing dataset tailored to settings similar to your use case. This flexibility ensures that the load tests are as relevant and accurate as possible.
Load Testing Options: The tool supports customizable concurrency, duration, and warmup options, allowing users to simulate various load scenarios and measure the performance of their models under different conditions.
Support for Multiple Azure AI Models: Whether you’re using Azure OpenAI, Azure OpenAI Embedding, Azure Model Catalog serverless (Maas), or managed-compute (MaaP), llm-load-test-azure has you covered. The tool’s modular design enables developers to integrate new endpoints with minimal effort.
Detailed Results: Obtain comprehensive statistics like throughput, time-to-first-token, time-between-tokens, and end-to-end latency in JSON format, providing valuable insights into the performance of your models.
Getting Started
Using llm-load-test-azure is straightforward. Here’s a quick guide to get you started:
Generate Dataset (Optional): Create a custom dataset using the generate_dataset.py script. Specify the input and output lengths, the number of samples, and the output file name.
python datasets/generate_dataset.py --tok_input_length 250 --tok_output_length 50 --N 100 --output_file datasets/random_text_dataset.jsonl
--tok_input_length: The length of the input (minimum 25).
--tok_output_length: The length of the output.
--N: The number of samples to generate.
--output_file: The name of the output file (default is random_text_dataset.jsonl).
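For a rough sense of what this step produces, here is a minimal Python sketch that writes a similar JSONL dataset. The field names and the naive one-word-per-token approximation are assumptions for illustration; the real generate_dataset.py uses its own tokenizer and schema.

```python
import json
import random

def generate_dataset(tok_input_length, tok_output_length, n, output_file):
    """Write n JSONL records with roughly tok_input_length 'tokens' of text.

    Sketch only: one whitespace-separated word stands in for one token,
    which is not how a real tokenizer counts tokens.
    """
    words = ["lorem", "ipsum", "dolor", "sit", "amet", "consectetur"]
    with open(output_file, "w") as f:
        for _ in range(n):
            text = " ".join(random.choice(words) for _ in range(tok_input_length))
            record = {
                "text": text,                            # prompt text sent to the endpoint
                "tok_input_length": tok_input_length,    # target input size
                "tok_output_length": tok_output_length,  # requested completion size
            }
            f.write(json.dumps(record) + "\n")

generate_dataset(250, 50, 100, "random_text_dataset.jsonl")
```

Keeping input and output lengths fixed across samples (as in the command above) makes the latency numbers comparable across requests.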
Run the Tool: Execute the load_test.py script with the desired configuration options. Customize the tool’s behavior using a YAML configuration file, specifying parameters such as output format, storage type, and warmup options.
load_test.py [-h] [-c CONFIG] [-log {warn,warning,info,debug}]

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
                        config YAML file name
  -log {warn,warning,info,debug}, --log_level {warn,warning,info,debug}
                        Provide logging level. Example: --log_level debug (default: warning)
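As an illustration of what such a configuration might contain, the sketch below writes a minimal YAML file covering the options discussed above (dataset, concurrency, duration, warmup, output). The exact key names are assumptions; consult the repository's sample config for the authoritative schema.

```python
# Write a minimal, illustrative config for load_test.py.
# NOTE: the key names below are assumptions based on the options
# described in this post, not the tool's documented schema.
config_yaml = """\
dataset: datasets/random_text_dataset.jsonl
load_options:
  type: constant        # hold concurrency constant for the whole run
  concurrency: 8        # parallel in-flight requests
  duration: 20          # seconds
warmup: true            # send warmup requests before measuring
output:
  format: json
  file: results.json
"""

with open("config.yaml", "w") as f:
    f.write(config_yaml)

# The tool would then be invoked as:
#   python load_test.py -c config.yaml -log info
```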
Results
The tool produces comprehensive statistics like throughput, time-to-first-token, time-between-tokens, and end-to-end latency in JSON format, providing valuable insights into the performance of your Azure LLM endpoint.
Example of the JSON output:
{
  "results": [ # stats on a request level
    ...
  ],
  "config": { # the run settings
    ...
    "load_options": {
      "type": "constant",
      "concurrency": 8,
      "duration": 20
      ...
    }
  },
  "summary": { # overall stats
    "output_tokens_throughput": 159.25729928295627,
    "input_tokens_throughput": 1592.5729928295625,
    "full_duration": 20.093270540237427,
    "total_requests": 16,
    "complete_request_per_sec": 0.79, # number of completed requests / full_duration
    "total_failures": 0,
    "failure_rate": 0.0,
    # time per output token
    "tpot": {
      "min": 0.010512285232543946,
      "max": 0.018693844079971312,
      "median": 0.01216195583343506,
      "mean": 0.012808671338217597,
      "percentile_80": 0.012455177783966065,
      "percentile_90": 0.01592913103103638,
      "percentile_95": 0.017840550780296324,
      "percentile_99": 0.018523185420036312
    },
    # time to first token
    "ttft": {
      "min": 0.4043765068054199,
      "max": 0.5446293354034424,
      "median": 0.46433258056640625,
      "mean": 0.4660029411315918,
      "percentile_80": 0.51033935546875,
      "percentile_90": 0.5210948467254639,
      "percentile_95": 0.5295632600784301,
      "percentile_99": 0.54161612033844
    },
    # inter-token latency
    "itl": {
      "min": 0.008117493672586566,
      "max": 0.01664590356337964,
      "median": 0.009861880810416522,
      "mean": 0.010531313198552402,
      "percentile_80": 0.010261738599844314,
      "percentile_90": 0.013813444118403915,
      "percentile_95": 0.015781731761280615,
      "percentile_99": 0.016473069202959836
    },
    # time to ack
    "tt_ack": {
      "min": 0.404374361038208,
      "max": 0.544623851776123,
      "median": 0.464330792427063,
      "mean": 0.46600091457366943,
      "percentile_80": 0.5103373527526855,
      "percentile_90": 0.5210925340652466,
      "percentile_95": 0.5295597910881042,
      "percentile_99": 0.5416110396385193
    },
    "response_time": {
      "min": 2.102457046508789,
      "max": 3.7387688159942627,
      "median": 2.3843793869018555,
      "mean": 2.5091602653265,
      "percentile_80": 2.4795608520507812,
      "percentile_90": 2.992232322692871,
      "percentile_95": 3.541854977607727,
      "percentile_99": 3.6993860483169554
    },
    "output_tokens": {
      "min": 200,
      "max": 200,
      "median": 200.0,
      "mean": 200.0,
      "percentile_80": 200.0,
      "percentile_90": 200.0,
      "percentile_95": 200.0,
      "percentile_99": 200.0
    },
    "input_tokens": {
      "min": 2000,
      "max": 2000,
      "median": 2000.0,
      "mean": 2000.0,
      "percentile_80": 2000.0,
      "percentile_90": 2000.0,
      "percentile_95": 2000.0,
      "percentile_99": 2000.0
    }
  }
}
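Once a run finishes, the summary section is easy to post-process. A small sketch, assuming the results were saved as results.json with the structure shown above, pulls out the headline numbers and converts throughput to tokens per minute (a common unit for sizing endpoints):

```python
import json

def summarize(path):
    """Print headline metrics from an llm-load-test-azure JSON result file."""
    with open(path) as f:
        data = json.load(f)
    s = data["summary"]
    # throughput is reported in tokens/second; scale to tokens/minute
    tokens_per_min = s["output_tokens_throughput"] * 60
    print(f"requests:       {s['total_requests']} ({s['total_failures']} failed)")
    print(f"output tok/min: {tokens_per_min:.0f}")
    print(f"median TTFT:    {s['ttft']['median'] * 1000:.0f} ms")
    print(f"p95 TPOT:       {s['tpot']['percentile_95'] * 1000:.1f} ms")
    return tokens_per_min

# summarize("results.json")
```

With the sample output above, for instance, 159.26 output tokens/second works out to roughly 9,555 output tokens per minute from the endpoint under an 8-way constant load.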
Conclusion
llm-load-test-azure is a powerful and versatile tool that simplifies the process of load testing large language models on Azure. Whether you’re a developer or AI enthusiast, this repository provides the tools you need to ensure that your models perform optimally under various conditions. Check out the repository on GitHub and start optimizing your LLMs today!
Bookmark this GitHub link: maljazaery/llm-load-test-azure (github.com)
Acknowledgments
Special thanks to Zack Soenen for code contributions, Vlad Feigin for feedback and reviews, and Andrew Thomas, Gunjan Shah and my manager Joel Borellis for ideation and discussions.
The llm-load-test-azure tool is derived from the original load test tool, openshift-psap/llm-load-test (github.com). Thanks to the creators.
Disclaimer
This tool is unofficial and not a Microsoft product. It is still under development, so feedback and bug reports are welcome.
Microsoft at Open Source Summit Europe 2024
Join Microsoft at Open Source Summit Europe, from September 16 to 18, 2024. This event gathers open source developers, technologists, and community leaders to collaborate, share insights, address challenges, and gain knowledge—advancing open source innovation and ensuring a sustainable ecosystem. Open Source Summit features a series of events focused on the most critical technologies, topics, and issues in the open source community today.
Register for Open Source Summit Europe 2024 today!
Attend Microsoft sessions
Attend a Microsoft session at Open Source Summit Europe to learn more about Microsoft’s contributions to open source communities, gain valuable insights from industry experts, and stay up to date on the latest open source trends. Be sure to add these exciting sessions to your event schedule.
Monday, September 16, 2024
Session
Speakers
Time
The Open Source AI Definition is (Almost) Ready
Justin Colannino, Microsoft
Stefano Maffulli, Open Source Initiative
2:15 PM to 2:55 PM CEST
Tuesday, September 17, 2024
Session
Speakers
Time
Keynote: OSS Security Through Collaboration
Ryan Waite, Open Source Strategy and Incubations, Microsoft
9:50 AM to 10:05 AM CEST
Linux Sandboxing with Landlock
Mickaël Salaün, Senior Software Engineer, Microsoft
11:55 AM to 12:35 PM CEST
Danielle Tal, Microsoft; Mauro Morales, Spectro Cloud; Felipe Huici, Unikraft GmbH; Richard Brown, SUSE; Erik Nordmark, Zededa
11:55 AM to 12:35 PM CEST
Wednesday, September 18, 2024
Session
Speakers
Time
Panel: Why Open Source AI Matters for Europe
Justin Colannino, Microsoft; Sachiko Muto, OpenForum; Stefano Maffulli, Open Source Initiative; Cailean Osborne, The Linux Foundation
11:55 AM to 12:35 PM CEST
Open-Source Software Engineering Education
Stephen Walli, Principal Programmer Manager, Microsoft
3:10 PM to 3:50 PM CEST
Visit us at the Microsoft booth and experience exciting sessions and demos
Come visit us at booth D3 to engage with fellow open source enthusiasts at Microsoft, experience live demos on the latest open source technologies, and discuss the future of open source. You can also catch exciting sessions in the booth to learn more about a wide range of open source topics, including the following and more:
.NET 9
Azure Kubernetes Service
Flatcar Container Linux
Headlamp
Inspektor Gadget and eBPF observability
Linux on Azure
PostgreSQL
WebAssembly
We hope to see you in Vienna next week!
Learn more about Linux and open source at Microsoft
Open Source at Microsoft — explore the open source projects, programs, and tools at Microsoft.
Linux on Azure — learn more about building, running, and deploying your Linux applications in Azure.
Purview eDiscovery’s Big Makeover
New Purview eDiscovery Due “by end of 2024”
eDiscovery is probably not where most Microsoft 365 tenant administrators spend a lot of time. Running eDiscovery cases is quite a specialized task. Often, large enterprises have dedicated compliance teams to handle finding, refining, analyzing, and understanding the material unearthed during eDiscovery, along with liaising with outside legal counsel and other experts.
Starting with Exchange 2010, Microsoft recognized that eDiscovery was a necessity. SharePoint Server had its own eDiscovery center, and these elements moved into Office 365. In concert with their own work, Microsoft bought Equivio, a specialized eDiscovery company, in January 2015 to acquire the technology that became the eDiscovery premium solution.
Over the last few years, Microsoft has steadily added to the feature set of the eDiscovery premium solution while leaving the eDiscovery standard and content search solutions relatively unchanged. The last makeover that content search received was in 2021, and it wasn’t very successful. I thought it was slow and unwieldy. Things have improved since, but content searches have never been a great example of snappy performance and functionality, even if some good changes arrived, like the KQL query editor in 2022. (Microsoft has now renamed the keyword-based query language KeyQL to differentiate it from the Kusto Query Language used with products like Sentinel.)
Time marches on, and Microsoft has decided to revamp eDiscovery. In an August 12, 2024, announcement, Microsoft laid out its plans for the next generation of eDiscovery. The software is available in preview, but only in the new Microsoft Purview portal.
The new portal handles both Purview compliance and data governance solutions. Microsoft plans to retire the current Purview compliance portal by the end of 2024 (Figure 1). Whether that date is achieved is quite another matter. As reported below, there’s work to be done to perfect the new portal before retirement is possible.
Big Changes in the New Purview eDiscovery
Apart from a refreshed UI, the big changes include:
Rationalization of eDiscovery into a single UI. Today, Purview includes content searches, eDiscovery standard, and eDiscovery premium, each with their own UI and quirks. In the new portal, a single eDiscovery solution covers everything, with licensing dictating the functionality revealed to users. If you have an E5 license, you get premium eDiscovery with all its bells and whistles. If you have E3, you’ll get standard eDiscovery.
Better data source management: Microsoft 365 data sources span many different types of information. In the past, eDiscovery managers picked individual mailboxes, sites, and OneDrive accounts to search. A new data source picker integrates all sources.
Support for sensitivity labels and sensitive information types within queries: The query builder supports looking for documents and messages that contain sensitive information types (SITs, as used by DLP and other Purview solutions) or protected by sensitivity labels. Overall, the query builder is much better than before (Figure 2).
The output of queries is handled differently too. Statistics are presented after a query runs (Figure 3), and the ability to test a sample set to determine if the query finds the kind of items that you’re looking for still exists.
Exporting query results doesn’t require downloading an app. Everything is taken care of by a component called the Process manager that coordinates the retrieval of information from the various sources where the query found hits. The export arrives as a compressed file containing individual SharePoint files, PSTs for messages found in Exchange mailboxes, and a folder called “LooseFile” that appears to hold Copilot for Microsoft 365 chats and meeting recaps.
Not Everything Works in the New Purview eDiscovery
Like any preview, not everything is available in the software available online. For instance, I could not create a query based on sensitivity labels. More frustratingly, I could find no trace of content searches in the new interface, despite Microsoft’s assertion that “users still have access to all existing Content Searches and both Standard and Premium eDiscovery cases on the unified eDiscovery case list page in the Microsoft Purview portal.” Eventually, after this article was first posted, a case called Content Searches appeared at the bottom of the case list. Navigating to the bottom of a case list (which could be very long) isn’t a great way to find content searches, and it seems unnecessarily complicated. Perhaps a dedicated button to open content searches would work better?
Many administrators have created content searches in the past to look for data. For instance, you might want to export selective data from an inactive mailbox. In the new eDiscovery, content searches are created as standard eDiscovery cases, a change that Microsoft says improves security control by allowing the addition or removal of users from the case. Given that I have 100+ content searches in one case, I think that the new arrangement overcomplicates matters (how can I impose granular security on any one of the content searches if they’re all lumped together into one case?). It’s an example of how the folks developing the eDiscovery solution have never considered how tenant administrators use content searches in practice.
Interestingly, Microsoft says that the purge action for compliance searches can now remove 100 items at a time from an Exchange mailbox. They mention Teams in the same sentence, but what this really means is that the purge can remove compliance records for Teams from the mailbox; those records later synchronize with Teams clients to remove the actual messages.
Much More to Discover
Leaving aside the obvious pun, there is lots more to investigate in the new eDiscovery. If you are an eDiscovery professional, you’ll be interested in understanding how investigations work and whether Copilot (Security and Microsoft 365) can help, especially with large review sets. If you’re a tenant administrator, you should make sure that you understand how content searches and exports work. Microsoft has an interactive guide to help, but more importantly, we will update the eDiscovery chapter in the Office 365 for IT Pros eBook once the new software is generally available.
Learn how to exploit eDiscovery and the data available to Microsoft 365 tenant administrators through the Office 365 for IT Pros eBook. We love figuring out how things work.
Using Guest Accounts to Bypass the Teams Meeting Lobby
And Why You Might Need to Change Account to Attend a Teams Meeting
Earlier this week I discussed a change made in how Teams copies text from messages that reduces user irritation. Let me balance the books by explaining a different aspect of Teams that continues to vex me.
I’m waiting to be accepted into a Teams meeting and wondering why I’m forced to wait in the lobby. I know that the organization wants people to use their guest accounts when attending meetings because of concerns about data leakage, so it’s annoying to have to twiddle my thumbs in the virtual lobby as the minutes tick by. And then the answer strikes: I’m attempting to join the meeting using my account rather than a guest account. After exiting, I rejoin after selecting my guest identity and enter the meeting without pausing in the lobby.
The UI to Change User Accounts
All of this happens because of what seems to be a major (to me) UI flaw in Teams. Figure 1 is the screen that appears when attempting to join a Teams meeting in a host tenant. By default, the user account from the home tenant is selected. If other accounts are available, the Change option appears to allow the user to select a different account. Teams knows if you have a guest account for the host tenant because it is listed under Accounts and Orgs in Teams settings.
Figure 1: The option to change account to attend a Teams meeting in another tenant
You can switch to the account by selecting it from the list (Figure 2).
Because the meeting is limited to tenant and guest accounts, a connection request using the guest account sails through without meeting any lobby restrictions.
I can appreciate what the Teams UI designers were trying to do when they placed the Change button on the dialog. It makes sense to offer users the choice to switch accounts. The problem is that the option is just a tad too subtle, which leads to it being overlooked. I know I'm not the only one in this situation because it has happened to plenty of people who ought to know better.
Managing Access to Confidential Calls
MVPs are members of the Microsoft Most Valuable Professional program. Among the benefits of being an MVP are product briefings about new features or plans Microsoft has to improve its software, including Teams. All such briefings are under a strict Non-Disclosure Agreement (NDA), and people are required to join meetings using the guest account created for them by Microsoft. The restriction is enforced by setting the meeting lobby to allow only tenant accounts and guests to bypass it. It is a reasonable restriction because Microsoft needs to know who they’re talking to, and a guest account is a good indication that an external person has been vetted for access to a tenant.
I commonly attend several product briefings each week. And on a regular basis, I fail to switch to my guest account before attempting to join calls. The result is that I spend time waiting in the lobby thinking that it would be nice if someone started the call soon before I realize what’s going on or a presenter recognizes my name in the lobby and lets me in. I’ve been known to become distracted while waiting to be admitted from the lobby and miss the entire call.
Automatic Switching Would Help
Teams knows what the meeting setting is for lobby bypass. It knows if the person joining a call can bypass the lobby with one or more accounts. It would be terrific if Teams could apply some intelligence to the situation and prompt the user to switch if their current account can’t bypass the lobby. I might make it into more calls on time.
Make sure that you’re not surprised about changes that appear inside Microsoft 365 applications by subscribing to the Office 365 for IT Pros eBook. Our monthly updates make sure that our subscribers stay informed.
Copilot’s Automatic Summary for Word Documents
Automatic Document Summary in a Bulleted List
Last week, I referenced the update for Word where Copilot for Microsoft 365 generates an automatic summary for documents. This is covered in message center notification MC871010 (Microsoft 365 roadmap item 399921). Automatic summaries are included in Copilot for Microsoft 365 and Microsoft Copilot Pro (the version that doesn’t ground prompts using Graph data).
As soon as I published the article where I referred to the feature, it turned up in Word. Figure 1 shows the automatic summary generated for a document (in this case, the source of an article).
The summary is the same output as the bulleted list Copilot generates if you open the Copilot pane and ask Copilot to summarize the document. Clicking the Ask a question button opens the Copilot pane with the summary prepopulated, ready for the user to delve deeper into it.
The summary is only available after a document is saved and closed. The next time someone opens the document, the summary pane appears at the top of the document and Copilot generates the summary. The pane remains at the top of the document and doesn’t appear on every page. If Copilot thinks it necessary (for instance, if more text is added to a document), it displays a Check for new summary button to prompt the user to ask Copilot to regenerate the summary.
Apart from removing the Copilot license from an account (in which case the summaries don’t appear), there doesn’t seem to be a way to disable the feature. You can collapse the summary, but it’s still there and can be expanded at any time.
Summarizing Large Word Documents
When Microsoft launched Copilot support for Word, several restrictions existed. For instance, Word couldn’t ground user prompts against internet content. More importantly, summarization could only handle relatively small documents. The guidance was that Word could handle documents with up to 15,000 words but would struggle thereafter.
This sounds like a lot, and it’s probably enough to handle a large percentage of the documents generated within office environments. However, summaries really come into their own when they extract information from large documents such as contracts and plans. The restriction, resulting from the size of the prompt that could be sent to the LLM, proved to be a big issue.
Microsoft responded in August 2024 with an announcement that Word could now summarize documents of up to 80,000 words. In their text, Microsoft says that the new limit is four times greater than the previous limit. The new limit is rolling out for desktop, mobile, and browser versions of Word. For Windows, the increased limit is available in Version 2310 (Build 16919.20000) or later.
Processing Even Larger Word Documents
Eighty thousand words sounds like a lot. At an average of 650 words per page, that’s 123 pages filled with text. I wanted to see how Copilot summaries coped with larger documents.
According to this source, the maximum size of a text-only Word document is 32 MB. With other elements included, the theoretical size extends to 512 MB. I don’t have documents quite that big, but I do have the source document for the Office 365 for IT Pros eBook. At 1,242 pages and 679,800 characters, including many figures, tables, cross-references, and so on, the file size is 29.4 MB.
Copilot attempted to generate a summary for Office 365 for IT Pros but failed. This wasn’t surprising because the file is so much larger than the maximum supported.
The current size of the Automating Microsoft 365 with PowerShell eBook file is 1.72 MB and spans 113,600 words in 255 pages. That’s much closer to the documented limit, and Copilot was able to generate a summary (Figure 2).
Although the bulleted list contains information extracted from the file, it doesn’t reflect the true content of the document because Copilot was unable to send the entire file to the LLM for processing. The bulleted list comes from the first two of four chapters and completely ignores the chapters dealing with the Graph API and Microsoft Graph PowerShell SDK.
Summaries For Standard Documents
Microsoft hasn’t published any documentation that I can find for Copilot’s automatic document summary feature. When it appears, perhaps the documentation will describe how to disable the feature for those who don’t want it. If not, we’ll just have to cope with automatic summaries. At least they will work for regular Word documents of less than 80,000 words.
So much change, all the time. It’s a challenge to stay abreast of all the updates Microsoft makes across the Microsoft 365 ecosystem. Subscribe to the Office 365 for IT Pros eBook to receive monthly insights into what happens, why it happens, and what new features and capabilities mean for your tenant.
Teams Improves Text Pasting and Mic Pending
Who Thought that Including Metadata in Teams Pasted Text Was a Good Idea?
In an example of finally listening to user feedback, Microsoft announced in MC878422 (30 August 2024) that Teams no longer includes metadata in messages copied from chats or channel conversations. The change is effective now and means that instead of having Teams insert a timestamp and the name of the person who created the text, only the text is pasted. This is exactly the way the feature should have worked since day zero. Quite why anyone thought it was a good idea to insert additional information into copied text is one of the great mysteries of Teams development.
MC878422 notes: “Many users have voiced frustrations over copying messages in Teams, particularly the inclusion of metadata like names and timestamps. Customer feedback has been clear, signaling that this feature was adding more noise than value to user workflow.”
Copying Metadata is An Old Lync Feature
It seems likely that inserting the timestamp and author name is an idea that came to Teams from Lync Server 2013 and Skype for Business. A support article from the time describes how to change the default setting of copying message, name, and time to copying just the message. Nearly eight years after Teams entered preview in November 2016, an equivalent setting never appeared in Teams. The net result is that Teams users had to manually remove the unwanted metadata from copied text after pasting it into another app. Thankfully, the change “helps maintain focus and reduces unnecessary noise.”
I’ve no idea about how many of the 320 million monthly active Teams users found this aspect of the product annoying, but it’s been high up on my list along with in-product advertising and a constant stream of irritating pop-up messages.
Mic Pending is a Feature You Probably Never Knew Existed
On a more positive note, Juan Rivera, Corporate Vice President at Microsoft for Teams Calling, Meetings & Devices Engineering, posted on LinkedIn about a feature called the Mic Pending state, which is apparently now rolled out to all tenants.
I have never thought much about the process required to implement the mute/unmute button in a call, but apparently Microsoft has done the work to make sure that when users hit the mic button (Figure 1), the action occurs immediately. If something gets in the way to prevent mute/unmute happening, Teams displays a “pending” icon if it notices that the action has taken more than 100 milliseconds.
Figure 1: The Teams mute mic button now works with 99.99+% reliability
The issue being addressed is confidence: people need to trust that Teams will mute their microphone as soon as they press the button and unmute it just as promptly. It seems some folks have been caught out by a delay in muting: the button displayed in a Teams meeting showed that the microphone was off when it was still live. You can see how this could end up with something being heard or captured on a Teams recording that people would have preferred not to have been captured. Calling your boss a flaming idiot over an open microphone that you thought was muted is possibly not a good thing to do.
According to the post, Microsoft believes that Teams delivers 99.99+% reliability for the mute/unmute toggle, which should mean that the status for the microphone shown on screen can be trusted. Of course, the paranoid amongst us will always give a microphone two or three seconds before we consider it to be truly off.
Two Good Changes
The one thing about Teams is that it’s always changing. People like the Office 365 for IT Pros writing team have no shortage of topics to cover when it comes to Teams. Thankfully, the two topics covered here are both positive, even if mic pending hasn’t come to our attention before.
Insight like this doesn’t come easily. You’ve got to know the technology and understand how to look behind the scenes. Benefit from the knowledge and experience of the Office 365 for IT Pros team by subscribing to the best eBook covering Office 365 and the wider Microsoft 365 ecosystem.
10 more AI terms you need to know
Read the English version here
Jakarta, 4 September 2024 – Since generative artificial intelligence (AI) surged in popularity in late 2022, most of us have gained a basic understanding of the technology and how it uses everyday language to make interacting with computers easier. Some of us have even dropped jargon like “prompt” and “machine learning” into relaxed conversations over coffee with friends. In late 2023, Microsoft compiled 10 AI terms you should know. As AI develops, however, its terminology keeps evolving too. Do you know the difference between large and small language models? Or what the “GPT” in ChatGPT stands for? Here are ten more advanced AI vocabulary terms you need to know.
Reasoning/planning
Computers using AI can now solve problems and complete tasks by drawing on patterns they have learned from historical data to make sense of information, a process similar to reasoning, or logical thinking. The most advanced AI systems can go a step further, tackling increasingly complex problems through planning: devising a sequence of actions to carry out in order to reach a particular goal.
For example, imagine asking an AI program to help plan a trip to a theme park. You write: “I want to visit six different rides at theme park X, including the water ride during the hottest part of the day, on Saturday, October 5.” Based on that goal, the AI system can break it down into smaller steps to build a schedule, using reasoning to make sure you don’t visit the same ride twice and that you get to the water ride between 12 pm and 3 pm.
Training/inference
There are two steps in creating and using an AI system: training and inference. Training is the process of educating the AI system: it is given a dataset and learns to perform tasks or make predictions based on that data. For example, an AI system might be given a list of house prices recently sold in a neighborhood, complete with the number of bedrooms and bathrooms in each house and many other variables. During training, the system adjusts its internal parameters: values that determine how much weight to give each variable and how each one influences a house’s sale price. Inference is when the AI system uses those learned patterns and parameters to predict a price for a house that comes onto the market in the future.
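The training/inference split described above can be sketched in a few lines of Python. This is a toy illustration only, using made-up house data and ordinary least squares in place of a real training pipeline:

```python
import numpy as np

# Toy training data: [bedrooms, bathrooms] per house, with sale prices.
features = np.array([[2, 1], [3, 2], [4, 2], [3, 1], [5, 3]], dtype=float)
prices = np.array([200_000, 290_000, 360_000, 260_000, 450_000], dtype=float)

# Training: fit internal parameters (weights plus intercept) by least squares.
X = np.hstack([features, np.ones((len(features), 1))])  # add intercept column
weights, *_ = np.linalg.lstsq(X, prices, rcond=None)

# Inference: use the learned parameters to predict the price of a new house.
new_house = np.array([4, 3, 1], dtype=float)  # 4 bedrooms, 3 bathrooms, intercept term
predicted_price = new_house @ weights
print(round(predicted_price))
```

The same two-phase pattern holds for models of any size: training adjusts parameters against known data, and inference applies the frozen parameters to unseen inputs.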
Small language model (SLM)
Small language models, or SLMs, are pocket-sized versions of large language models (LLMs). Both use machine learning techniques to recognize patterns and relationships so they can generate realistic responses in everyday language. While LLMs are enormous and demand substantial computing power and memory, SLMs such as Phi-3 are trained on smaller, curated datasets and have fewer parameters, making them more compact and even usable offline, without an internet connection. That makes them a good fit for devices such as laptops or phones, where you might want to ask a simple question about pet care but don’t need detailed information on how to train a guide dog.
Grounding
Generative AI systems can compose stories, poems, and jokes, and answer research questions. But sometimes they struggle to separate fact from fiction, or their training data is out of date, so they can give inaccurate responses, an occurrence known as hallucination. Developers work to help AI interact with the real world accurately through a process called grounding: connecting and anchoring their models to real data and examples to improve accuracy and produce output that is more contextually relevant and personalized.
Retrieval Augmented Generation (RAG)
When developers give an AI system access to a grounding source to help it be more accurate and current, they use a method called Retrieval Augmented Generation, or RAG. The RAG pattern saves time and resources by supplying extra knowledge without retraining the AI program.
It’s as if you were the detective Sherlock Holmes: you have read every book in the library but still can’t crack a case, so you climb up to the attic, unroll a few ancient scrolls, and voilà, you find the missing piece of the puzzle. As another example, if you run a clothing company and want to build a chatbot that can answer questions specific to your products, you can apply the RAG pattern over your product catalog to help customers find the perfect green sweater from your store.
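The clothing-catalog example can be sketched as a minimal RAG loop in Python. Everything here is a stand-in: the retrieval step is naive keyword matching over a toy catalog rather than vector search, and the model call is a stub:

```python
# A toy RAG loop: retrieve relevant snippets, then augment the prompt with them.
CATALOG = [
    "Green wool sweater, sizes S-XL, machine washable.",
    "Blue denim jacket, sizes M-XXL.",
    "Green cotton sweater, lightweight, summer fit.",
]

def retrieve(query: str, documents: list[str]) -> list[str]:
    """Naive keyword retrieval standing in for vector search."""
    terms = query.lower().split()
    return [d for d in documents if any(t in d.lower() for t in terms)]

def generate(prompt: str) -> str:
    """Stub standing in for a call to a language model."""
    return f"Answer based on: {prompt}"

def rag_answer(question: str) -> str:
    context = retrieve(question, CATALOG)
    # Augment the user's question with the retrieved grounding data.
    prompt = f"Context: {' '.join(context)}\nQuestion: {question}"
    return generate(prompt)

print(rag_answer("green sweater"))
```

The key point is that the extra knowledge arrives in the prompt at request time; the model itself is never retrained.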
Orchestration
AI programs have many things to do while processing a user’s request. To make sure an AI system performs all of these tasks in the right order to produce the best response, they are coordinated by an orchestration layer.
For example, if you ask Microsoft Copilot “who is Ada Lovelace” and then ask “when was she born” in the next prompt, the AI orchestrator stores your chat history so it can see that “she” in the second prompt refers to Ada Lovelace.
The orchestration layer can also follow the RAG pattern by searching the internet for fresh information to add to the context, helping the model produce a better answer. It’s like a maestro cueing the violins, then the flutes and oboes, while following the sheet music to produce the sound the composer had in mind.
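The chat-history behavior can be sketched as a tiny orchestration loop in Python. This is a toy illustration: the model call is a stub, and the "pronoun resolution" happens only because prior turns are passed along as context:

```python
# Toy orchestrator: keep chat history and prepend it to each model request.
history: list[tuple[str, str]] = []  # (question, answer) pairs

def model(prompt: str) -> str:
    """Stub standing in for a real language model call."""
    return f"[answer given context: {prompt!r}]"

def ask(question: str) -> str:
    # The orchestration layer supplies prior turns so the model can
    # resolve references like "she" in a follow-up question.
    context = " ".join(f"Q: {q} A: {a}" for q, a in history)
    answer = model(f"{context} Q: {question}")
    history.append((question, answer))
    return answer

ask("who is Ada Lovelace")
followup = ask("when was she born")
print("Ada Lovelace" in followup)
```

Because the first turn is carried forward, the context handed to the model for the follow-up already contains "Ada Lovelace", which is exactly the role the orchestrator plays in the Copilot example.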
Memory
Today’s AI models technically have no memory. But AI programs can be given instructions that help them “remember” information by following specific steps with each interaction, such as temporarily storing previous questions and answers in a chat and including that context in the current request to the model, or using grounding data from the RAG pattern to make sure a response draws on the latest information. Developers are experimenting with orchestration layers to help AI systems work out whether they only need to remember details briefly, short-term memory, like jotting something on a sticky note, or whether it would be more useful to remember something for longer by storing it in a more permanent location.
Transformer models and diffusion models
People have been teaching AI systems to understand and generate language for decades, but one of the breakthroughs that accelerated recent progress is the transformer model. Among generative AI models, transformers are the ones that understand context and nuance best and fastest. They are fluent storytellers, attending to patterns in data and weighing the importance of different inputs, which helps them quickly predict what comes next and so generate text. The transformer is even the T in ChatGPT: Generative Pre-trained Transformer. Diffusion models, generally used for image generation, add a different twist by working more gradually and methodically, diffusing the pixels of an image from random positions until they are distributed in a way that forms the image requested in the prompt. Diffusion models keep making small changes until they produce output that matches the user’s need.
Frontier models
Frontier models are large-scale systems that push the boundaries of AI and can perform a wide variety of tasks with broad new capabilities. They can be so advanced that we are sometimes surprised by what they manage to accomplish. Technology companies, including Microsoft, formed the Frontier Model Forum to share knowledge, set safety standards, and help everyone understand these powerful AI programs so they are developed safely and responsibly.
GPU
A GPU, which stands for Graphics Processing Unit, is essentially a turbocharged calculator. GPUs were originally designed to render stunning graphics in video games and have since become the muscle of computing. The chips contain many small cores, networks of circuits and transistors, that tackle mathematical problems together, a technique known as parallel processing. That is essentially what AI does: solve huge numbers of calculations at scale to be able to communicate in human language and recognize images or sounds. AI platforms therefore depend heavily on GPUs, both for training and for inference. In fact, today’s most advanced AI models are trained on vast arrays of interconnected GPUs, sometimes tens of thousands of them spread across giant datacenters, like those Microsoft operates in Azure, which rank among the most powerful computers ever built.
Learn more about the latest AI news on Microsoft Source and our news from Indonesia through this page.
-END-
Transferring Reusable PowerShell Objects Between Microsoft 365 Tenants
The Graph SDK’s ToJsonString Method Proves Its Worth
One of the frustrations about using the internet is when you find some code that seems useful, copy the code to try it out in your tenant, and discover that some formatting issue prevents the code from running. Many reasons cause this to happen. Sometimes it’s as simple as an error when copying code into a web editor, and sometimes errors creep in after copying the code, perhaps when formatting it for display. I guess fixing the problems is an opportunity to learn what the code really does.
Answers created by generative AI solutions like ChatGPT, Copilot for Microsoft 365, and GitHub Copilot compound the problem by faithfully reproducing errors in their responses. This is no fault of the technology, which works by creating answers from what’s gone before. If published code includes a formatting error, generative AI is unlikely to find and fix the problem.
Dealing with JSON Payloads
All of which brings me to a variation on the problem. The documentation for the Graph APIs used to create or update objects usually includes an example of a JSON-formatted payload containing the parameter values for the request. The Graph APIs interpret the JSON content in the payload to extract the parameters needed to run a request. By comparison, Microsoft Graph PowerShell SDK cmdlets use hash tables and arrays to pass parameters. The hash tables and arrays mimic the elements of the JSON structure used by the underlying Graph APIs.
Composing a JSON payload is no challenge if you can write perfect JSON. Like any other set of rules for programming or formatting, it takes time to become fluent with JSON, and who can afford that time when other work exists to be done? Here’s a way to make things easier.
Every object generated by a Graph SDK cmdlet has a ToJsonString method to create a JSON-formatted version of the object. For example:
$User = Get-MgUser -UserId Kim.Akers@office365itpros.com
$UserJson = $User.ToJsonString()
$UserJson
{
"@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users/$entity",
"id": "d36b323a-32c3-4ca5-a4a5-2f7b4fbef31c",
"businessPhones": [ "+1 713 633-5141" ],
"displayName": "Kim Akers (She/Her)",
"givenName": "Kim",
"jobTitle": "VP Marketing",
"mail": "Kim.Akers@office365itpros.com",
"mobilePhone": "+1 761 504-0011",
"officeLocation": "NYC",
"preferredLanguage": "en-US",
"surname": "Akers",
"userPrincipalName": "Kim.Akers@office365itpros.com"
}
The advantage of using the ToJsonString method instead of PowerShell’s ConvertTo-Json cmdlet is that the method doesn’t output properties with empty values. This makes the resulting output easier to review and manage. For instance, the JSON content shown above is a lot easier to use as a template for adding new user accounts than the equivalent generated by ConvertTo-Json.
Transferring a Conditional Access Policy Using ToJsonString
The output generated by ToJsonString becomes very interesting when you want to move objects between tenants. For example, let’s assume that you use a test tenant to create and fine tune a conditional access policy. The next piece of work is to transfer the conditional access policy from the test tenant to the production environment. Here’s how I make the transfer:
Run the Get-MgIdentityConditionalAccessPolicy cmdlet to find the target policy and export its settings to JSON. Then save the JSON content in a text file.
$Policy = Get-MgIdentityConditionalAccessPolicy -ConditionalAccessPolicyId '1d4063cb-5ebf-4676-bfca-3775d7160b65'
$PolicyJson = $Policy.toJsonString()
$PolicyJson > PolicyExport.txt
Edit the text file to replace any tenant-specific items with equivalent values for the target tenant. For instance, conditional access policies usually include an exclusion for break glass accounts, which are listed in the policy using the account identifiers. In this case, you need to replace the account identifiers for the source tenant in the exported text file with the account identifiers for the break glass account for the target tenant.
Disconnect from the source tenant.
Connect to the target tenant with the Policy.ReadWrite.ConditionalAccess scope.
Create a variable ($Body in this example) containing the conditional policy settings.
Run the Invoke-MgGraphRequest cmdlet to import the policy definition into the target tenant.
$Uri = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"
Invoke-MgGraphRequest -uri $uri -method Post -Body $Body
The Other Way
Another way to create a conditional access policy with PowerShell is to run the New-MgIdentityConditionalAccessPolicy cmdlet, which takes a hash table as its payload. It’s easy to translate the JSON into the format used for parameter values stored in the hash table, but it’s even easier to run Invoke-MgGraphRequest and pass the edited version of the JSON exported from the source tenant. Why make things hard for yourself?
This tip is just one of the hundreds included in the Automating Microsoft 365 with PowerShell eBook (available separately, as part of the Office 365 for IT Pros (2025 edition) bundle, or as a paperback from Amazon.com).
undefined symbol xcb_shm_id when trying to startup MatLab
When trying to start up MatLab, I get
> ./bin/matlab
MATLAB is selecting SOFTWARE rendering.
/home/pblase/.MathWorks/ServiceHost/clr-df9a0cbb6bd34e079ef626671d1a7b7c/_tmp_MSHI_5363-9225-767d-e56f/mci/_tempinstaller_glnxa64/bin/glnxa64/InstallMathWorksServiceHost: symbol lookup error: /usr/lib64/libcairo.so.2: undefined symbol: xcb_shm_id
/home/pblase/.MathWorks/ServiceHost/clr-df9a0cbb6bd34e079ef626671d1a7b7c/_tmp_MSHI_5363-9225-767d-e56f/mci/_tempinstaller_glnxa64/bin/glnxa64/InstallMathWorksServiceHost: symbol lookup error: /usr/lib64/libcairo.so.2: undefined symbol: xcb_shm_id
Unexpected exception: 'N7mwboost10wrapexceptINS_16exception_detail39current_exception_std_exception_wrapperISt13runtime_errorEEEE: Error loading /home/pblase/matlab/bin/glnxa64/matlab_startup_plugins/matlab_graphics_ui/mwuixloader.so. /usr/lib64/libXt.so.6: undefined symbol: SmcModifyCallbacks: Success: Success' in createMVMAndCallParser phase 'Creating local MVM'
libcairo MATLAB Answers — New Questions
Intro to matlab lab and I have no idea how this works
<</matlabcentral/answers/uploaded_files/1765134/Screenshot%202024-09-02%20at%206.21.33%E2%80%AFPM.png>>
I don’t know what I am supposed to do with the second part of question 3 and also don’t know what to do with #4. This is my first time ever taking a class about coding so I’m super lost.
vector, vectors, variable MATLAB Answers — New Questions
Poor performance of linprog in practice
I have to solve a dynamic programming problem using a linear programming approach. For details, please see this paper. The LP that I want to solve is:
min c'*v
s.t.
A*v >= u,
where c is n*1, v is n*1, A is n^2*n, u is n^2*1.
The min is with respect to v, the value function of the original DP problem. I have a moderate number of variables, n=300, and m=n^2=90000 linear inequality constraints. No bound constraints on v.
I use the MATLAB function linprog, which in turn is based on the solver HiGHS (since R2024a). The code is slow for my purposes (i.e. a brute-force value iteration is much faster). Moreover, linprog gives correct results only if I set the option 'Algorithm','dual-simplex-highs'. With other algorithms, it gets stuck.
After profiling the code, it turns out that the bottleneck is line 377 of linprog:
[x, fval, exitflag, output, lambda] = run(algorithm, problem);
I was wondering if there is a way to speed up the code. Any help or suggestion is greatly appreciated! I put below a MWE to illustrate the problem.
clear,clc,close all
%% Set parameters
crra = 2;
alpha = 0.36;
beta = 0.95;
delta = 0.1;
%% Grid for capital
k_ss = ((1-beta*(1-delta))/(alpha*beta))^(1/(alpha-1));
n_k = 300;
k_grid = linspace(0.1*k_ss,1.5*k_ss,n_k)';
%% Build current return matrix, U(k',k)
cons = k_grid'.^alpha+(1-delta)*k_grid'-k_grid;
U_mat = f_util(cons,crra);
U_mat(cons<=0) = -inf;
%% Using LINEAR PROGRAMMING
% min c'*v
% s.t.
% A*v>=u, where c is n-by-1, v is n-by-1, A is n^2-by-n, u is n^2-by-1
n = length(k_grid);
c_vec = ones(n,1);
u_vec = U_mat(:); % U(k',k), stacked columnwise
%% Build A matrix using cell-based method
tic
A = cell(n,1);
bigI = (-beta)*speye(n);
for i=1:n
    temp = bigI;
    temp(:,i) = temp(:,i)+1;
    A{i} = temp;
end
A = vertcat(A{:});
disp('Time to build A matrix with cell method:')
toc
%% Call linprog
% 'dual-simplex-highs' (default and by far the best)
options = optimoptions('linprog','Algorithm','dual-simplex-highs');
tic
[V_lin,fval,exitflag,output] = linprog(c_vec,-A,-u_vec,[],[],[],[],options);
disp('Time linear programming:')
toc
if exitflag<=0
    warning('linprog did not find a solution')
    fprintf('exitflag = %d\n',exitflag)
end
%% Now that we solved for V, compute policy function
RHS_mat = U_mat+beta*V_lin; % (k',k)
[V1,pol_k_ind] = max(RHS_mat,[],1);
pol_k = k_grid(pol_k_ind);
% Plots
figure
plot(k_grid,V1)
figure
plot(k_grid,k_grid,'--',k_grid,pol_k)
function util = f_util(c,crra)
util = c.^(1-crra)/(1-crra);
end
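As an aside on the assembly step only (the profile suggests the solve itself, not building A, dominates the runtime): block i of A is -beta*I plus a matrix whose i-th column is all ones, so the stacked constraint matrix can be written as two Kronecker products. This is a sketch, not benchmarked, but it should reproduce the same A as the cell loop without the loop:

```matlab
% Vectorized assembly of A (sketch). Block i of A is -beta*I + ones(n,1)*e_i',
% so stacking the n blocks gives
%   A = kron(speye(n), ones(n,1)) - beta*kron(ones(n,1), speye(n)).
n = 300;
beta = 0.95;
onesCol = sparse(ones(n,1));
A_kron = kron(speye(n), onesCol) - beta*kron(onesCol, speye(n));
% A_kron is n^2-by-n sparse and should match vertcat(A{:}) from the loop above.
```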
linprog, performance MATLAB Answers — New Questions
How to import .EEG or text or excel file to EEGlab
Hi all, I have 1-hour EEG data with a sampling frequency of 291 Hz. I’ve installed EEGLAB v14.1.1 and tried to load my data files in ‘.EEG’, text, and Excel formats, but none of them load into EEGLAB. It shows the following error. Please help me solve this issue, since I’m new to the EEGLAB software. eeg, eeglab, signal processing MATLAB Answers — New Questions
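For readers who hit the same wall: vendor ‘.EEG’ binaries usually need a format-specific import plugin, but a plain numeric text file can often be brought in with EEGLAB’s pop_importdata. A sketch, with the filename and channel count as placeholders for your own data:

```matlab
% Sketch: import an ASCII matrix (channels x samples) into EEGLAB.
% 'mydata.txt' is a placeholder; set 'nbchan' to your channel count if known.
EEG = pop_importdata('dataformat','ascii', ...
    'data','mydata.txt', ...
    'srate',291);          % sampling rate in Hz
EEG = eeg_checkset(EEG);   % validate the dataset structure
```

Excel files would first need to be exported to text (or read with readmatrix and imported via the 'array' data format).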
Conditional formating using formula
Hi,
I’m looking to apply a conditional format to a table (Table1) which highlights the row where a cell matches a cell within another table (Table2)
I’ve had a look online, the only thing I can find is a formula which works if I refer to an array of cells rather than another table in the workbook:
=MATCH(A2,Array1,0)
This only highlights a single cell, even if I try to apply the conditional format to the Table1
Can anyone help?
Thanks
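One common workaround, sketched here with a hypothetical column name ‘ID’ in Table2: conditional-formatting rules generally reject structured table references typed directly, but wrapping the reference in INDIRECT usually works, and COUNTIF with a column-locked cell reference lets the rule highlight the whole row when applied to all of Table1:

```
=COUNTIF(INDIRECT("Table2[ID]"),$A2)>0
```

The $ locks the lookup column, so every cell in a given row of Table1 tests the same key and the entire row is formatted on a match.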
Read More
New Outlook:
Can’t sign in to the new Outlook. In addition, my Hotmail account is blocked and I cannot access my mail.
Read More
Migrating to 365 with 2 domains
I have a client that has two different domains (old and new). Example: old email: email address removed for privacy reasons; new email: email address removed for privacy reasons. It looks like their provider created aliases for the new domain. The problem is they still get email sent to the old addresses, which gets forwarded(?) to the new ones. I want to migrate them to Microsoft 365. I’m pretty sure the migration will transfer their email history using the new addresses, but I’m not sure how the forwarding will work. Can I create aliases for the old addresses in 365 to do the same?
Read More
Upcoming marketplace webinars available in September
Whether you are brand new to marketplace or have already published multiple offers, our Mastering the Marketplace webinar series has a variety of offerings to help you maximize the marketplace opportunity. Check out these upcoming webinars in September:
▪ Creating your first offer in Partner Center (9/5): Learn how to start with a new SaaS offer in the commercial marketplace; set up the required fields in Partner Center and understand the options and tips to get you started faster!
▪ Creating Plans and Pricing for your offer (9/10): Learn about the payouts process lifecycle for the Microsoft commercial marketplace: how to register and the registration requirements, the general payout process from start to finish, which payment processes are supported within Partner Center, and how to view and access payout reporting.
▪ AI and the Microsoft commercial marketplace (9/12): Through the Microsoft commercial marketplace, get connected to the solutions you need—from innovative AI applications to cloud infra and everything in between. Join this session to learn what’s on our roadmap and see how the marketplace helps you move faster and spend smarter.
▪ Developing your SaaS offer (9/12): In this technical session, learn how to implement the components of a fully functional SaaS solution including how to implement a SaaS landing page and webhook to subscribe to change events, and how to integrate your SaaS product into the marketplace.
Find our complete schedule here:
https://aka.ms/MTMwebinars
#ISV #maximizemarketplace #Azure #MSMarketplace #MSPartners