Category: News
Login page of Microsoft blocked in iFrame
I’ve created an Enterprise App and App Registration in Microsoft Entra ID for the authentication of users into a third-party web application (Qlik Sense) via OIDC. This works as designed. When I access the Qlik Sense website directly in my browser, I’m redirected to login.microsoftonline.com, where I can pick an existing account or sign in with a new account.
Now, I want to create a custom web application where the Qlik Sense website is embedded on a page in an iFrame. Unfortunately, this doesn’t work, because the Microsoft login page is blocked by the browser when opened inside an iFrame. The browser console indicates that this is because of an HTTP response header ‘X-Frame-Options’ coming from Microsoft Entra ID. Is there a way to prevent this behaviour by changing the configuration in Entra ID?
FaceTime App Required notification message.
Hi,
We have FaceTime blocked in Intune, but a notification keeps popping up that says “FaceTime App Required.” I can’t seem to figure out why this keeps happening, and we would really like it to stop. I’m still pretty new to Intune, so any help would be appreciated.
App Governance Alert – How to Identify Graph Explorer Usage
We received a Defender alert about Graph Explorer making numerous Graph API calls to search for keywords in emails. I’ve been looking in App Governance (there isn’t a community hub for this), and I don’t see a way to actually review usage or identify accounts. There are zero details in the incident except that Graph Explorer was used to query email.
Any suggestions?
Looking for opinion: Unjoin Hybrid AD, or migrate to new tenant?
Hello,
So I have a bit of a conundrum, and I’m not sure which is the better option.
Situation: We currently have an Entra hybrid AD environment. Our local AD is a .lan domain and has almost 30 years of historical garbage (none of it is required anymore). All of our endpoints are already set up for Intune, and all apps/policies are being pushed from Intune. We have two servers that we will migrate to Entra-hosted VMs, so they are not a big concern.
My question is this:
1) If we disconnect our hybrid AD connection, how do we make sure all users are cloud-synced and no longer pulling from local AD?
2) Is there a way to formally disconnect our Entra tenant from our AD sync tool (so that it’s no longer expecting that domain)?
3) How do we remove our old domain from the Entra ID tenant?
4) Our current tenant was set up in a hurry, without very good governance or any real organization. Is there a good service that can help “clean up” an existing tenant?
5) Would it be easier to simply create a new tenant, set it up with best practices, migrate the users, email, OneDrive and SharePoint, and then re-join the Intune devices as the final step?
Looking for recommendations/suggestions/pitfalls to look out for while doing this.
Thank you,
Find email-enabled public folders exclusively using “Microsoft.Office.Interop.Outlook”
Is there a way to exclusively identify email-enabled public folders using Microsoft.Office.Interop.Outlook? EWS or PowerShell should not be used. The program should run on the workstation and utilize the installed Outlook.
Once I’ve found an email-enabled public folder, I should list the emails contained within it.
So far, I’m encountering difficulties. For instance, I have the following method:
// Recursive method for listing the email addresses of email-enabled public folders
static void ListPublicFolders(Outlook.Folder? folder, string indent)
{
if (folder != null)
{
foreach (object obj in folder.Folders)
{
if (obj is Outlook.Folder)
{
Outlook.Folder? subFolder = obj as Outlook.Folder;
if (subFolder != null && subFolder.DefaultItemType == Outlook.OlItemType.olMailItem)
{
Outlook.MAPIFolder? parentFolder = subFolder.Parent as Outlook.MAPIFolder;
string parentName = parentFolder != null ? parentFolder.Name : "Parent folder not found";
Console.WriteLine($"{indent}- {subFolder.Name}: {parentName}");
if (parentFolder != null)
{
Marshal.ReleaseComObject(parentFolder);
}
}
ListPublicFolders(subFolder, indent + " ");
if (subFolder != null)
{
Marshal.ReleaseComObject(subFolder);
}
}
}
}
}
The query
if (subFolder != null && subFolder.DefaultItemType == Outlook.OlItemType.olMailItem)
fails because subFolder.DefaultItemType returns the value Outlook.OlItemType.olPostItem, even though the public folder was set up as an email-enabled folder in Exchange.
Specifically, this is in Microsoft 365. In the Exchange admin center, when creating the folder, I explicitly checked the box for “Email-enabled.” This action revealed two additional options: “Delegation” and “Email properties.” In “Email properties,” I can specify an alias and a display name. By default, both fields are set to “Orders.” So I expect the public folder to be email-enabled, with the email address (removed for privacy reasons).
I don’t understand why Outlook is treating the folder incorrectly (I can only create posts and not send emails).
Perhaps someone can help me figure this out.
Thank you and best regards,
René
Three columns of a database in another sheet and always updated
Hi. I’m new to this forum and I have the following need.
I have a database where I collect student data when they enroll.
I would like three of these fields (surname, first name and tax code, respectively) to also appear in another sheet and to be updated whenever they are modified in the main database.
Furthermore, when I add or remove rows in the main database (i.e. when I add or remove students), I would like the same changes to also occur in the other sheet showing the surnames, first names and tax codes. I tried the Power Query option, but it doesn’t seem to work.
Thank you.
The LLM Latency Guidebook: Optimizing Response Times for GenAI Applications
Co-authors: Priya Kedia, Julian Lee, Manoranjan Rajguru, Shikha Agrawal, Michael Tremeer
Contributors: Ranjani Mani, Sumit Pokhariyal, Sydnee Mayers
Generative AI applications are transforming how we do business today, creating new, engaging ways for customers to interact with applications. However, these LLM models require massive amounts of compute to run, and unoptimized applications can run quite slowly, leaving users frustrated. Creating a positive user experience is critical to the adoption of these tools, so minimising the response time of your LLM API calls is a must. The techniques shared in this article demonstrate how applications can be sped up by as much as 100x* through clever prompt engineering and a small amount of code!
Previous work has identified the core principles for reducing LLM response times. This article expands upon these by providing practical examples coupled with working code to help you accelerate your own applications and delight customers. It is primarily intended for software developers, data scientists and application developers, though any business stakeholder managing GenAI applications should read on to learn new ideas for improving their customer experience.
Understanding the drivers of long response times
The response time of an LLM can vary based on four primary factors:
the model used.
the number of tokens in the prompt.
the number of tokens generated.
the overall load on the deployment & system.
You can imagine the model as a person typing on a keyboard, where each token is generated one after another. The speed of the person (the model used) and the amount they need to type (the number of generation tokens) tend to be the largest contributors to long response times.
Figure 1 – The response generation step typically dominates the overall response time. Not to scale.
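The intuition above can be sketched as a simple latency model: a fixed overhead (network plus prompt processing) plus the number of generated tokens divided by the model's generation throughput. The throughput and overhead figures below are illustrative assumptions, not benchmarks:

```python
# Rough latency model: fixed overhead plus per-token generation time.
# The throughput figures are illustrative assumptions, not measured values.
TOKENS_PER_SECOND = {"gpt-35-turbo": 60.0, "gpt-4-32k": 15.0}
FIXED_OVERHEAD_S = 0.5  # network + prompt processing, assumed constant here

def estimate_response_time(model: str, generated_tokens: int) -> float:
    """Estimate end-to-end response time in seconds."""
    return FIXED_OVERHEAD_S + generated_tokens / TOKENS_PER_SECOND[model]

# A long answer from a slower model dominates the total time:
print(estimate_response_time("gpt-4-32k", 900))    # long report
print(estimate_response_time("gpt-35-turbo", 50))  # short confirmation
```

Under these assumptions, trimming the response length or moving to a faster model changes the dominant term directly, which is why the techniques below focus on those two levers.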
Techniques for improving LLM response times
The table below contains a range of recommendations you can implement to improve the response times of your generative AI application. Where applicable, sample code is included so you can see these benefits for yourself and copy the relevant code or prompts into your application.
1. Generation token compression (potential speedup: up to 2-3x or more, 20s -> 8s). Prompt the LLM to return the shortest response possible. A few simple phrases in your prompt can speed up your application. Few-shot prompting can also be used to ensure the response includes all the key information.
2. Avoid using LLMs to output large amounts of predetermined text (up to 16x or more, 310s -> 20s). Rather than rewriting documents, use the LLM to identify which parts of the text need to be edited, and use code to make the edits. For RAG, use code to simply append documents to the LLM response.
3. Implement semantic caching (up to 14x or more, 19s -> 1.3s). By caching responses, LLM outputs can be reused rather than calling Azure OpenAI, saving cost and time. The input does not need to be an exact match; for example, “How can I sign up for Azure” and “I want to sign up for Azure” will return the same cached result.
4. Parallelize requests (up to 72x or more, 180s -> 2.5s). Many use cases (such as document processing, classification, etc.) can be parallelized.
5. Use GPT-3.5 over GPT-4 where possible (up to 4x, 17s -> 5s). GPT-3.5 has a much faster token generation speed. Certain use cases require the more advanced reasoning capabilities of GPT-4, but sometimes few-shot prompting or fine-tuning may enable GPT-3.5 to perform the same tasks. Generally only recommended for advanced users, after attempting other optimizations first.
6. Leverage translation services for certain languages (up to 3x, 53s -> 16s). Certain languages have not been optimised, leading to long response times. Generate the output in English and leverage another model or API for the translation step.
7. Co-locate cloud resources (1-2x). Ensure the model is deployed close to your users, and ensure Azure AI Search and Azure OpenAI are located as closely as possible (in the same region, firewall, vNet, etc.).
8. Load balancing (up to 2x, 58s -> 31s). Having an additional endpoint for handling overflow capacity (for example, a PTU deployment overflowing to a pay-as-you-go endpoint) can save latency by avoiding queuing when retrying requests.
9. Enable streaming (speedup figures coming soon). Streaming improves the perceived latency of the application by returning the response in chunks as soon as they are available.
10. Separation of workloads (speedup figures coming soon). Mixing different workloads on the same endpoint can negatively impact latency: short completions batched with longer ones have to wait before being sent back, and mixing the calls can reduce your cache hit rate, as both are competing for the same space.
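Technique 4 (parallelizing requests) delivers the largest gain in the list, because independent calls no longer wait in line for each other. A minimal sketch using asyncio is shown below; `classify_document` simulates an LLM call with a sleep, and in a real application it would await an async Azure OpenAI client instead:

```python
import asyncio
import time

async def classify_document(doc: str) -> str:
    # Stand-in for an async LLM call; a real app would await the
    # Azure OpenAI client here instead of sleeping.
    await asyncio.sleep(0.1)
    return f"label-for-{doc}"

async def classify_all(docs: list[str]) -> list[str]:
    # Fan the independent requests out concurrently instead of
    # awaiting them one by one in a loop.
    return await asyncio.gather(*(classify_document(d) for d in docs))

docs = [f"doc{i}" for i in range(20)]
start = time.perf_counter()
labels = asyncio.run(classify_all(docs))
elapsed = time.perf_counter() - start
print(len(labels), round(elapsed, 2))
```

Twenty sequential 0.1-second calls would take roughly two seconds; run concurrently they finish in about the time of the slowest single call. Mind your deployment's rate limits when fanning out real requests.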
Putting it into practice through case studies
This section includes an overview of a set of case studies that represent typical GenAI applications; perhaps one is similar to yours! The linked code repositories show the original speed of the application, and then walk you step by step through the process of implementing different combinations of the techniques in this document. Implementing these recommendations achieved an improvement in response time ranging from 6.8x to 102x!
Case study 1: Document processing. Rewrite a document to correct spelling errors and grammar. This example can be extended with custom logic to adapt to more specific document processing use cases.
1. Base case: 1x (315s)
2. Avoid rewriting documents: 8.3x (38s)
3. Generation token compression: 15.8x (20s)
4. Parallelization: 105x (3s)
Case study 2: Retrieval Augmented Generation (RAG). Help a user troubleshoot a product which is not working.
1. Base case: 1x (23s)
2. Generation token compression: 2.3x (9.8s)
3. Avoid rewriting documents: 6.8x (3.4s)
Case study 3: Retrieval Augmented Generation (RAG). Provide general product information.
1. Base case: 1x (17s)
2. Semantic caching: 17x (1s)
Conclusion
With Generative AI transforming how people interact with applications, minimising response times is essential. If you’re interested in improving your GenAI application’s performance, select a few of these recommendations, clone the repository, and implement them in your application’s next release!
*Disclaimer: The results depicted are merely illustrative, emphasizing the potential benefits of these techniques. They are not all-encompassing and are based on a single test. Response times may differ with each run, thus the main goal is to demonstrate relative improvement. The tests are performed using the powerful, but slower, GPT-4 32k model, with a focus on improving response times. The effectiveness of techniques like error correction through document rewriting varies depending on the input; a document with many errors might take longer to correct than to rewrite entirely. Therefore, these techniques should be tailored to your application.
Microsoft Tech Community – Latest Blogs –Read More
New capabilities to help you manage Microsoft Teams channels from creation to archival
Successfully navigating a fast-paced workplace requires teams to collaborate closely and share the same information. Channels facilitate this by bringing people together for common functions, projects, or interests. Designed for enduring collaboration, channels maintain their structure and purpose, even as members change, ensuring organizational continuity, transparency, knowledge sharing, and the preservation of vital conversations and decisions.
When setting up a channel, you can choose from three channel types: standard, private and shared. This enables you to bring in the right collaborators while controlling access to shared resources and avoiding oversharing, without needing to create multiple similar teams. We also raised the limit on the number of channels in a team to 1,000, so you can manage a large project in one team.
However, channels’ persistent nature and scalability can present usability challenges if not managed as priorities and projects shift. Disorganized, redundant, or outdated channels can hinder finding relevant conversations, detracting from their role in supporting effective communication and collaboration.
We are investing in enhancements that will streamline how people work and collaborate within channels from start to finish. These improvements will make creating and joining teams and channels quick and straightforward, allowing for efficient workspace setup. Collaboration within channels has been enhanced, helping everyone stay on top of the information that matters most and contribute effectively. Additionally, retiring channels that have served their purpose will help maintain focus on current, relevant channels, reducing clutter. Our new channel-related features aim to minimize distractions, simplify information retrieval, and foster effective collaboration.
Create and join teams and channels with ease
Channels are the hubs for teamwork within a team, allowing for more targeted discussions and collaboration. We have reduced the number of steps needed to create a new team by defaulting to “create a team from scratch.” If you would like to create a team from a template, select “more create team options” and pick from the template library. This is generally available.
Not every collaboration requires a new team. So, we’ve made it easier to create a channel from the same menu you use to create a new team. Now you can avoid creating unnecessary team structures and clutter when you only need a channel. This is generally available.
Lack of awareness of existing teams can lead to duplicate teams being created. Users are now able to discover public and private teams and join them as needed. Admins can now use a new setting to control whether users can find private teams or not. The new experience combines privacy and collaboration, empowering admins to ensure that only the right people can request access to private teams, without compromising security or control. This is generally available.
When joining a team, you will be able to select to show only the channels that are relevant to you, including from the channels the team owners have recommended. This will help you easily organize your list of channels to prioritize those you care about most, helping filter through the noise. This feature will be generally available by the end of 2024.
Collaborating with channels effectively
When you have many channels across teams, it can be hard to keep track of all of them. The Discover feed is a personalized, AI-powered feed that surfaces the most relevant content from channels you are not showing in your channel list. It surfaces channel posts you might otherwise have missed, bringing you relevant content based on the people you work with or topics that might interest you. Scroll through your feed, easily catch up on news, and like, comment on, or share a post from the Discover feed, just like any other channel post. The Discover feed is generally available.
To allow for a more organized and personalized channel list, you can select to hide or show the general channel of a team, just like other channels. By hiding less relevant channels you can declutter your channel list. Hide general channel is generally available. Coming later in the year, team owners will be able to rename the general channel to a name that better reflects its purpose for everyone in the team, helping you navigate your channel list with greater ease.
Is your teams and channels list cluttered with channels you have no interest in? Teams will automatically detect inactive channels you haven’t interacted with over the past 45 days and hide them for you. You have the option to review the list of channels and keep showing some or all of them, or opt out of automatic clean-up. This feature will roll out in Q3 of calendar year 2024.
Notification settings experience is streamlined to allow better customization, enabling you to tailor it to your needs. You can choose to see your notification in the activity feed, display a banner, or turn them off. In your activity feed, you are now able to clear notifications with a single click, marking all your notifications as read at once, helping you keep up with the quick pace of conversations and notifications. These features are generally available.
Soon, you will also have the option to mute all notifications for a specific channel post. You can also customize the sound of your notifications to help you stay focused, prioritize quickly, and avoid distractions: assign different sounds to different kinds of notifications, such as urgent messages, or mute notification sounds when you are busy or in a meeting. This will be rolling out during Q2 and Q3 of the calendar year.
Invite your colleague to participate in a discussion by sharing a link to the channel, a post or a reply. This way, they can navigate more easily and faster to specific content and conversations, without searching through many messages and files. This is generally available.
Retiring channels to reduce clutter and improve focus
When active collaboration on a project ends but you need to keep the collaboration context for future reference, channel owners and administrators can archive a channel they own or manage. Archiving turns off the option to have additional conversations and hides channels that are no longer used or relevant from the channels list. Archiving a channel preserves channel content, including messages, files, and tabs, but removes it from the active list of channels in your or channel members’ left rail. Once a channel has been archived, actions such as messaging, reacting, and commenting are no longer available. In the event you need to revive an archived channel, you can restore it from the “Manage teams” menu. Archive channel is generally available.
Get more from your Teams channels
Channels are where teamwork happens, bringing internal and external members together to collaborate and share. Take advantage of these tools to help manage your channels across the lifecycle, for improved focus, connection, and productivity.
Learn more about channels in Microsoft Teams, and keep watching this blog for additional features and strategies to help you get the most from your Teams channels.
Global AI Bootcamp 2024 with MVP Communities
The Global AI Bootcamp, an annual event series organized by the Global AI Community, took place in February and March 2024. This event drew participants worldwide, offering a unique opportunity to delve into Microsoft AI technologies. Featured technologies included Azure OpenAI Service, ChatGPT, Copilot for Microsoft 365, Microsoft Copilot Studio, GitHub Copilot, Microsoft Fabric, Semantic Kernel, and the principles of responsible AI, capturing the attention of many engineers and users.
Global AI Bootcamp 2024 featured Microsoft AI technologies at over 100 events worldwide. Microsoft MVPs, Regional Directors, Microsoft Learn Student Ambassadors, and AI community leaders led these events and engaged with local enthusiasts eager to learn the latest in AI through technical sessions, workshops, and mentorship. Amazingly, communities in India organized nearly 20 community events and supported learners of all ages.
The Microsoft MVP Award Program recognizes community leaders in 12 Award Categories covering a wide range of Microsoft products and services, including AI. Global AI Bootcamp 2024 benefited from diverse MVPs, not just the ones in the AI Award Category. These MVPs, who focus on their own technology areas, are improving their AI skills and spreading their expertise, making the Global AI Community more valuable.
Global AI Bootcamp 2024 Ahmedabad in India
Global AI Bootcamp 2024 – São Paulo in Brazil
Global AI Bootcamp Seoul 2024 in Korea
You can find more information about all the events on the Global AI Bootcamp event page. Many events have recorded videos that you can watch later, giving you a great opportunity to continue learning about AI. You can also look at social media posts with the hashtag #GlobalAIBootcamp on X and LinkedIn to see the attendees’ excitement.
In addition to the Global AI Bootcamp, the Global AI Community also showcases information about other community-led events in various formats including in-person, online, and hybrid. Those interested in learning alongside the community are encouraged to check out these events: Events – Global AI Community
Furthermore, this year’s Microsoft Build is scheduled for May 23 and 24, 2024. For those curious about the latest AI technologies that will be unveiled, registration is highly recommended.
Impact Summit: What happens when you get Nonprofits in a room together? Progress.
There was special energy in the air at LinkedIn’s Impact Summit 2024. As our nonprofit community and social impact professionals came together, we felt the collective sigh made possible by a space created at the intersection of global challenges, inspiration, and an unmatched bias for action.
Aptly themed Progress at Work, the Impact Summit in New York delivered a multi-layered dialogue around the future of work and of the sector, fueled by the distinct kind of actionable inspiration only made possible when our communities come together.
Shonda Rhimes, Founder and Chief Storyteller of Shondaland, described new ways to tell cause-related stories. YearUp CEO and President Ellen McClain highlighted how her organization is rooting their work on training, recruiting and career matching in a critical sense of belonging. Aneesh Raman, Vice President, Workforce Expert at LinkedIn, talked about the soft skills that tomorrow’s jobs will require, and encouraged a “pro-human” view in how we think about skills-based hiring.
This sentiment was echoed by Kate Behncken, Global Head of Microsoft Philanthropies, who noted in an interview with Kelly Fukai, C.M., M.B.A., COO of WTIA, that we should not rush the adoption of new technologies until we’ve trained ourselves on how and when to apply them. This was followed by a riveting Copilot demo by Erin McHugh Saif, Chief Product Officer of Tech for Social Impact for Microsoft Philanthropies, who showed us where to go, how to prompt, and how to save time and energy with generative AI in grant writing and impact reports.
I had the honor of presenting with Brandolon Barnett, Head of Innovation and Philanthropy at Giving Compass, on the ways in which AI is redefining productivity for nonprofits, as the sector finds itself managing a complex polycrisis, with headwinds around generosity, staffing, and widespread burnout even as the demand for nonprofit services continues to increase.
As a group, we discussed how AI can be used to lift the burden of day-to-day tasks by letting AI help with document searches, meeting summaries and emails. It can elevate our work by helping us create first drafts and edits. All of this can then save time for what nonprofits do best; the uniquely human work of creating connections, building relationships and serving the community.
At its core, we hope our audience walked away with the understanding that a culture of trial and error is critical to nonprofit AI adoption. Use cases abound in impact. We need to fundamentally change the way we approach our work, from holding the pen to becoming editors. Rethinking how we ask questions. Challenging our own assumptions around what barriers are real and what are perceived. We have only scratched the surface of this work.
Fundamentally, our role in the broader scope of innovation, purpose and impact is that we believe that the nonprofit sector can do the most with our technology. Nonprofits are on the frontlines of every major challenge we face, so it is mission-critical for all of us that they get much closer to the solutions we build.
The Impact Summit ended with Sal Khan, founder of Khan Academy, reminding us that we are all “just protagonists in an epic adventure.” And if that’s so, then AI will be OK when used as “an amplification of human intent.”
And finally, let’s use our energy for what we are uniquely positioned to solve. Save the rest.
Continue the conversation by joining us in the Nonprofit Community! Want to share best practices or join community events? Become a member by “Joining” the Nonprofit Community. To stay up to date on the latest nonprofit news, make sure to Follow or Subscribe to the Nonprofit Community Blog space!
Microsoft Tech Community – Latest Blogs –Read More
Is it possible to make a MatlabEngine class object persistent in Java?
Dear Matlab specialists!
My colleague wrote a simple Java class (code below) to create and use a MatlabEngine "eng" object in the run() function. It works normally.
However, when one makes this object a member of the class and then tries to reuse it, the whole associated MATLAB session seems to disappear after the first use.
I am interested in creating the MatlabEngine object/session "eng" in such a way that it can be called from other Java classes to which the class where it is created is visible. In particular, I’d like to create a MATLAB class object within "eng", do some setup (which requires quite long calculations), and then call the "Do(current_data)" function of this MATLAB object on my current data whenever/wherever needed in Java. For that, obviously, this "eng" object and its session must be persistent. So my question is: is this possible at all?
Best wishes,
Y.
import com.mathworks.engine.EngineException;
import com.mathworks.engine.MatlabEngine;
import com.mathworks.engine.MatlabSyntaxException;
import java.util.concurrent.ExecutionException;
import java.util.logging.Level;
import java.util.logging.Logger;
public class javaFevalFcnMulti implements Runnable{
String[] args;
int num1;
int num2;
public javaFevalFcnMulti(String[] args_in, int num1_in, int num2_in){
args = args_in;
num1 = num1_in;
num2 = num2_in;
}
@Override
public void run() {
MatlabEngine eng;
try {
System.out.println("MLT started");
eng = MatlabEngine.startMatlab();
Object[] results = eng.feval(3, "gcd", num1, num2);
Integer G = (Integer)results[0];
Integer U = (Integer)results[1];
Integer V = (Integer)results[2];
eng.close();
System.out.println("Greatest common divisor of "+Integer.toString(num1)+" and "+Integer.toString(num2)+" is "+G.toString());
} catch (EngineException | InterruptedException | IllegalArgumentException
| IllegalStateException | MatlabSyntaxException | ExecutionException ex) {
Logger.getLogger(javaFevalFcnMulti.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
matlabengine, java MATLAB Answers — New Questions
How can I stop the axtoolbar from hiding?
I want the axtoolbar within a 2D plot to be shown always, not only when I put the mouse in the upper-right part of the plot. Can this dynamic hiding behavior be disabled? If there is a solution, would it also work in App Designer?
Regards,
Matthias
axtoolbar, visibility, dynamic, static MATLAB Answers — New Questions
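A possible direction, under the assumption (unverified) that the hover behavior is tied to the toolbar's Visible state:

```matlab
% Sketch: make the axes toolbar's visibility explicit instead of hover-driven.
ax = axes;
plot(ax, 1:10);
tb = axtoolbar(ax, 'default');  % create a toolbar with the default buttons
tb.Visible = 'on';              % try pinning visibility; 'off' hides it entirely
```

In App Designer, a uiaxes also exposes its toolbar (via the Toolbar property), so if this works at all it should carry over there.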
Force scientific notation in axes
Sometimes, when the "raw" values in the yticks are very small, the MATLAB y-axis automatically toggles to scientific notation, whereby the power of ten giving the order of magnitude appears in the top-left corner and the yticks are given in units of that power.
The threshold for this behavior seems to be 1e-3, but I can’t find a property for forcing it on larger yticks.
I have found a few questions roughly on the same topic, but none of the relevant answers seem to apply to my case. Some people simply wanted to get rid of the scientific notation; others wanted it directly in the tick labels.
I like the "order of magnitude" format, but I am unable to force it (for instance, I’d like to have it for yticks of the order of 1e-2, for graphical homogeneity with a different plot).
Before downloading or creating ad hoc code, I wanted to ask whether any of you knows a (perhaps undocumented?) way of toggling the "order of magnitude" notation.
Thanks a lot
Francesco
axes, ticks, scientific notation MATLAB Answers — New Questions
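The "order of magnitude" multiplier can be forced through the numeric ruler's Exponent property, which appears to be exactly what is being asked for here:

```matlab
% Force the y-axis multiplier to 10^-2 regardless of the automatic threshold.
x = linspace(0, 1, 50);
plot(x, 0.05*sin(2*pi*x));
ax = gca;
ax.YAxis.Exponent = -2;   % tick labels are now shown in units of 1e-2
```

Setting Exponent to 0 conversely suppresses the multiplier, which covers the opposite requests mentioned in the question.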
The write channel data is the same as the read channel data. Why is the data not processed?
We collect real-time data from a read channel and want to process it before sending it to a write channel, but with the following code the values in the write channel replicate the read channel values. Could anyone suggest how to get the processed data?
% Source ThingSpeak Channel ID for reading data
readChannelID = 2522808; % Replace with your source channel ID
% Destination ThingSpeak Channel ID for writing data
writeChannelID = 2522810; % Replace with your destination channel ID
% ThingSpeak Read API Key for the source channel
readAPIKey = 'XXX'; % Replace with your read API key
% ThingSpeak Write API Key for the destination channel
writeAPIKey = 'XXX'; % Replace with your write API key
%% Read Data %%
% Read all available data from the source channel
data = thingSpeakRead(readChannelID, 'ReadKey', readAPIKey, 'Fields', [1, 2]);
% Extract the values from the read data
values1 = data(:, 1); % Values from field 1, i.e. Estimated Voltage
values2 = data(:, 2); % Values from field 2, i.e. Input Current
% Determine the number of data points retrieved
numPoints = size(data, 1); % Number of rows in 'data'
% Generate timestamps for the data
timeStamps = datetime('now') - minutes(numPoints:-1:1);
% Initialize Y arrays
Y1 = zeros(size(values1)); % Same size as 'values1', filled with zeros
Y2 = zeros(size(values2)); % Same size as 'values2', filled with zeros
% Perform the operation Y(i) = i * I(i) for each value in the read data
for i = 1:length(values1)
disp(['Processing index ', num2str(i)]);
disp(['values1(', num2str(i), ') = ', num2str(values1(i))]);
disp(['values2(', num2str(i), ') = ', num2str(values2(i))]);
Y1(i) = i * values1(i);
Y2(i) = i * values2(i);
disp(['Y1(', num2str(i), ') = ', num2str(Y1(i))]);
disp(['Y2(', num2str(i), ') = ', num2str(Y2(i))]);
end
% Write the data to the destination channel with timestamps
thingSpeakWrite(writeChannelID, 'WriteKey', writeAPIKey, 'Values', [Y1', Y2'], 'Fields', [1, 2], 'Timestamps', timeStamps);
thingspeak, read channel, write channel, data processing MATLAB Answers — New Questions
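One thing worth checking (an assumption, since the channel contents aren't shown): thingSpeakRead without a 'NumPoints' or 'NumMinutes' option returns only the most recent entry, so the loop runs once with i == 1 and Y(i) = 1 * value, i.e. the output is identical to the input. Requesting a window of points would make the index scaling visible:

```matlab
% Sketch: read the last 10 points so that Y(i) = i * value actually
% differs from the raw value for i > 1.
data = thingSpeakRead(readChannelID, 'ReadKey', readAPIKey, ...
    'Fields', [1, 2], 'NumPoints', 10);
```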
Split cell array directly at the function output
Hi,
I have a function returning two 1×n cell arrays as output, which I want to append to two existing cell arrays (variable1 and variable2). However, I don’t want to end up with nested cell arrays, so my code looks like this:
variable1 = { 'var11', 'var12' };
variable2 = { 'var21', 'var22' };
[ output1, output2 ] = myFunc();
variable1 = [ variable1, output1{:} ];
variable2 = [ variable2, output2{:} ];
In this case, is it possible to append the outputs to variable1 and variable2 and explode the cells in one line instead of three? Something like this (which obviously does not work, but just to be sure I am clear enough):
variable1 = { 'var11', 'var12' };
variable2 = { 'var21', 'var22' };
[ [ variable1, output1{:} ], [ variable2, output2{:} ] ] = myFunc();
Thanks for your help.
cell array, function MATLAB Answers — New Questions
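MATLAB has no syntax for splicing function outputs in-place on the left-hand side, but the three lines can be collapsed to one call site with a small helper (appendOutputs is a hypothetical name, not a built-in):

```matlab
% Sketch: one call site instead of three lines, via a helper function.
[variable1, variable2] = appendOutputs(variable1, variable2, @myFunc);

function [a, b] = appendOutputs(a, b, f)
    [o1, o2] = f();       % the two 1-by-n cell array outputs
    a = [a, o1{:}];       % splice, avoiding nested cells (same idiom as above)
    b = [b, o2{:}];       % same for the second output
end
```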
Can’t change section background after selecting one of the new images
After the update https://www.microsoft.com/en-US/microsoft-365/roadmap?filters=&searchterms=378647 I changed a section background to one of the new images. I can now no longer change back to a solid colour. When I choose it and publish, I see the solid colour, but when I refresh the page it reverts to a white background. I have tried incognito in case of caching issues.
Editing the page again I can see the solid colour is selected despite the white background.
The only backgrounds that persist are the new images and gradients.
Anyone else seeing this issue?
Read More
Edit Microsoft 365 profile picture modal from web part
Hi there!
I’m currently working on a SharePoint web part. From this web part I want to open the edit-profile-picture modal from Microsoft 365, just like the one described here: https://support.microsoft.com/en-us/office/add-your-profile-photo-to-microsoft-365-2eaf93fd-b3f1-43b9-9cdc-bdcd548435b7
If you open SharePoint, this functionality is already there: if you click on the user icon in the top right and edit your profile picture, you will see this modal open. I want to create a button that links directly to this modal.
Does anyone have any experience with this? I can’t find any info on this.
Thanks in advance!
Read More
Moving AD from 2012 to 2022 server
Hi All
I have a new server to put in to replace our 2012 R2 server, which was previously migrated from a 2003 server. The 2012 server will eventually go, and the new 2022 server will run AD. Is there anything I need to update on the 2012 server for this to go smoothly? If I remember correctly, when we went from 2003 to 2012 I needed to run something on the old 2003 server to allow it all to happen.
It’s been a while since I have done a migration, so I’m just trying to make sure I have everything in place.
Read More
SharePoint List Link Inventory Items
Don’t know if I’m searching for the right thing or making this overcomplicated. We have a SharePoint list with our assets. Some of the assets have two parts (mainframe, module). We can switch modules between mainframes in case one goes down. Can I link assets together somehow?
Thanks!
Read More