Month: September 2024
Unlocking Insights: GraphRAG & Standard RAG in Financial Services
GraphRAG
GraphRAG (Graph Retrieval-Augmented Generation) is an advanced approach in natural language processing that enhances traditional Retrieval-Augmented Generation (Standard RAG) systems by incorporating knowledge graphs generated by large language models (LLMs).
This method involves constructing a structured knowledge graph from a set of documents, identifying key entities such as people, places, and concepts, and representing them as nodes in a graph structure.
These nodes are then clustered into semantic communities, allowing for the generation of more comprehensive and diverse answers to complex, multi-hop questions. By leveraging these structured knowledge graphs, GraphRAG significantly improves the quality and relevance of the generated responses.
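To make the idea concrete, here is a minimal sketch of the graph-construction and community-clustering step, written in Python with the networkx library. The entities and relationships are invented for illustration; a real GraphRAG pipeline uses an LLM to extract entities from documents and to summarize each community.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Entities (nodes) and co-occurrence relationships (edges) extracted from documents.
kg = nx.Graph()
kg.add_edges_from([
    ("Contoso Insurance", "claims fraud"),
    ("claims fraud", "repair shops"),
    ("Contoso Insurance", "climate risk"),
    ("climate risk", "hurricane losses"),
    ("AI underwriting", "Contoso Insurance"),
])

# Cluster entities into semantic communities; each community can then be
# summarized to help answer broad, multi-hop questions.
for i, community in enumerate(greedy_modularity_communities(kg)):
    print(f"Community {i}: {sorted(community)}")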
How Standard RAG and GraphRAG differ
Standard RAG and GraphRAG differ primarily in their sources and methods of information retrieval.
Standard RAG relies on vector stores to retrieve relevant documents based on a user’s query. It ranks and selects the top documents, combines them with the query, and generates a final response using a language model. In contrast, GraphRAG utilizes knowledge graphs, which include entities, relationships, and document graphs. It extracts candidate entities, relationships, and concepts from the knowledge graph, ranks and filters these candidates, and then combines them with the query to generate a response.
This approach allows GraphRAG to leverage more structured and interconnected data, providing richer and more contextually accurate responses compared to the document-centric approach of Standard RAG.
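The contrast can be sketched in a few lines of Python. This is a toy illustration, not a production pipeline: the documents, embeddings, and graph below are hand-made stand-ins for a real vector store and knowledge graph.

import numpy as np
import networkx as nx

# Standard RAG: rank documents in a vector store by similarity to the query.
docs = ["Climate events are raising policy risk.",
        "Claims fraud is increasing across carriers.",
        "AI adoption is reshaping underwriting."]
doc_vecs = np.array([[0.9, 0.1, 0.0], [0.2, 0.8, 0.1], [0.1, 0.2, 0.9]])  # toy embeddings
query_vec = np.array([0.8, 0.3, 0.1])  # toy embedding of the user's query

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

top_docs = [docs[i] for i in np.argsort([cosine(v, query_vec) for v in doc_vecs])[::-1][:2]]
print("Standard RAG context:", top_docs)

# GraphRAG: extract candidate entities and relationships from a knowledge graph.
kg = nx.Graph()
kg.add_edge("insurance industry", "climate risk", relation="faces")
kg.add_edge("insurance industry", "AI adoption", relation="invests in")
kg.add_edge("climate risk", "claims volume", relation="drives")

def graph_context(graph, entity, hops=2):
    # Collect entities within `hops` of the query entity, with their relations.
    nearby = nx.single_source_shortest_path_length(graph, entity, cutoff=hops)
    return [(u, graph[u][v]["relation"], v) for u, v in graph.edges(nearby)]

print("GraphRAG context:", graph_context(kg, "insurance industry"))
# Either context is then combined with the query and passed to a language model.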
Standard RAG and GraphRAG Example
Here is a simple example to demonstrate the difference between Standard RAG and GraphRAG. Let’s say you would like to know the latest risks in the insurance sector. You come across a McKinsey article titled “Navigating shifting risks in the insurance industry” and decide to analyze the current challenges it presents (Navigating shifting risks in the insurance industry | McKinsey). You’re interested in how Standard RAG and GraphRAG might help you extract information from this article, especially when answering a question like “What are the common themes for the insurance industry?” Here’s what you’d find using Standard RAG and GraphRAG.
Standard RAG focuses on retrieving and summarizing relevant documents, resulting in a more straightforward and concise response.
In contrast, GraphRAG reasons over the whole dataset and provides a more detailed and interconnected response by leveraging relationships between entities in a knowledge graph. This allows it to offer richer context, highlighting complex themes like interconnected risk factors and the integration of AI in the insurance industry.
How to determine a GraphRAG use case
To determine a GraphRAG use case, start by identifying areas where complex relationships and contextual understanding are crucial. GraphRAG excels in scenarios where data points are interconnected, such as knowledge management, recommendation systems, and fraud detection. Begin by mapping out the entities and their relationships within your domain. For instance, in insurance, entities could include policyholders, agents, policies, and claims, with relationships representing policy and claim transactions.
Next, evaluate the potential benefits of using a graph structure over traditional methods. GraphRAG can enhance data retrieval by leveraging these relationships, providing more accurate and contextually relevant information. This approach is particularly useful in domains requiring deep insights and nuanced understanding, such as claims fraud detection, where customer profiles and claim transactions are interlinked. By constructing a knowledge graph, you can enable more sophisticated queries and generate richer, more informative responses.
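To make the entity mapping above concrete, here is a toy Python sketch using networkx. The policyholders, claims, and repair shop are invented examples, and the query is deliberately simple; a real system would run far richer graph queries.

import networkx as nx

g = nx.MultiDiGraph()
# Entities as nodes, transactions as typed edges.
g.add_edge("policyholder:alice", "policy:P100", type="holds")
g.add_edge("policyholder:bob", "policy:P200", type="holds")
g.add_edge("policy:P100", "claim:C1", type="filed")
g.add_edge("policy:P200", "claim:C2", type="filed")
g.add_edge("claim:C1", "shop:QuickFix", type="repaired_at")
g.add_edge("claim:C2", "shop:QuickFix", type="repaired_at")

# A simple interconnection query: which repair shops appear on multiple claims?
shop_claims = {}
for u, v, data in g.edges(data=True):
    if data["type"] == "repaired_at":
        shop_claims.setdefault(v, []).append(u)

for shop, claims in shop_claims.items():
    if len(claims) > 1:
        print(f"{shop} is linked to multiple claims: {claims}")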
Use cases that can benefit from the combined capabilities of Standard RAG and GraphRAG
In the financial services sector, the integration of Standard RAG and GraphRAG can significantly enhance the depth and accuracy of insights. For instance, in the banking sector, the combined power of Standard RAG and GraphRAG can be instrumental in enhancing customer relationship management (CRM) and risk assessment. Standard RAG can pull in vast amounts of customer data, transaction histories, and market trends, providing a comprehensive view of a customer’s financial behavior. Meanwhile, GraphRAG can analyze the intricate web of relationships between customers, accounts, and transactions, identifying potential risks and opportunities. For example, it can detect unusual transaction patterns that may indicate fraudulent activity or highlight cross-selling opportunities by understanding the interconnected needs of customers. This holistic approach enables banks to offer more personalized services, improve risk management, and ultimately drive better business outcomes.
In the insurance industry, claims processing can be revolutionized by leveraging these combined capabilities. Standard RAG can efficiently retrieve relevant policy documents, historical claims data, and regulatory guidelines, while GraphRAG can map out relationships between various entities involved in a claim, such as policyholders, medical providers, and repair shops. This dual approach not only accelerates the claims adjudication process but also helps in identifying fraudulent claims by uncovering hidden connections and patterns that might be missed by traditional methods.
Developing an end-to-end copilot application using Combined RAG approach
Here’s a step-by-step guide on how to build an end-to-end copilot-style application using combined RAG, which brings together Standard RAG and GraphRAG.
1. Define Use Case and Data
Use Case: Post-disaster claims management.
Data: Historical claims, customer profiles, policy details, disaster impact data, geographical data, social networks, weather patterns.
2. Create and Populate Knowledge Graph
Data Collection: Gather data from internal and external sources.
Data Modeling: Define schema for entities and relationships.
Data Ingestion: Load data into the knowledge graph.
3. Index and Embed Data
Document Indexing: Index relevant documents.
Embedding Creation: Generate embeddings for entities and relationships.
4. Set Up Retrieval Systems
Document Retrieval: Implement system to retrieve documents from vector store.
Graph Retrieval: Implement graph queries to extract relevant entities and relationships.
5. Develop Ranking and Filtering Algorithms
Document Ranking: Rank and select top documents.
Graph Ranking: Rank and filter graph data.
6. Integrate with Language Model
Combine Data: Merge retrieved information from both sources.
Response Generation: Use a language model to generate the final response.
7. Develop User Interface
Frontend: Create user-friendly interface.
Backend: Ensure seamless communication between components.
8. Testing and Validation
Test Scenarios: Validate accuracy and relevance.
User Feedback: Refine system based on feedback.
9. Deployment and Monitoring
Deployment: Deploy in production.
Monitoring: Continuously monitor and improve.
Example Workflow
User Query: “Manage claims after a recent hurricane.”
Document Retrieval: Retrieve historical claims, policy details, and disaster impact reports.
Graph Retrieval: Extract geographical data, social networks, and real-time weather data.
Ranking and Filtering: Prioritize relevant information.
Response Generation: Combine data and generate a comprehensive claims management plan.
Output: Provide a detailed report with:
Historical claims and policy details.
Geographical impact analysis.
Social network insights to identify affected communities.
Real-time weather data for ongoing risk assessment.
Recommendations for resource allocation and expedited claims processing.
By following these steps, you can effectively implement a combined RAG approach to enhance post-disaster claims management, providing more accurate and contextually rich responses.
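As an illustration of how steps 4 through 6 fit together, here is a minimal Python orchestration sketch. Everything in it is a stub: retrieve_documents, retrieve_graph_facts, rank, and call_llm stand in for your vector store, graph queries, ranking algorithms, and deployed language model.

def retrieve_documents(query):
    # Stand-in for vector-store retrieval (step 4, Document Retrieval).
    return ["Historical hurricane claims report", "Policy coverage terms"]

def retrieve_graph_facts(query):
    # Stand-in for graph retrieval (step 4, Graph Retrieval).
    return ["(ZIP 33101) -[impacted_by]-> (Hurricane X)",
            "(policyholder group A) -[located_in]-> (ZIP 33101)"]

def rank(items, k=3):
    # Stand-in for the ranking and filtering algorithms (step 5).
    return items[:k]

def call_llm(prompt):
    # Placeholder: swap in a call to your deployed language model (step 6).
    return f"[LLM response for prompt of {len(prompt)} chars]"

def combined_rag(query):
    docs = rank(retrieve_documents(query))
    facts = rank(retrieve_graph_facts(query))
    prompt = ("Question: " + query + "\n"
              "Documents:\n" + "\n".join(docs) + "\n"
              "Graph facts:\n" + "\n".join(facts) + "\n"
              "Draft a claims management plan grounded in the context above.")
    return call_llm(prompt)

print(combined_rag("Manage claims after a recent hurricane."))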
Conclusion and Next steps
In conclusion, both Standard RAG and GraphRAG offer unique strengths that can significantly enhance information retrieval and generation tasks. Standard RAG excels in providing concise, document-based summaries, making it ideal for straightforward queries. On the other hand, GraphRAG leverages the power of knowledge graphs to deliver more detailed and interconnected insights, which are particularly valuable in complex scenarios like sales optimization and fraud detection in the financial services sector.
By understanding the distinct capabilities of each approach, organizations can better determine when to use GraphRAG for its rich contextual understanding and when to rely on Standard RAG for quick, relevant summaries. Combining these approaches in an end-to-end copilot application can unlock new levels of efficiency and insight, enabling more informed decision-making and strategic planning. As demonstrated through examples and use cases, the synergy between Standard RAG and GraphRAG can drive innovation and improve outcomes across the industry.
Learn More:
GraphRAG: Unlocking LLM discovery on narrative private data – Microsoft Research
microsoft/graphrag: A modular graph-based Retrieval-Augmented Generation (RAG) system (github.com)
A year of DAX Copilot: Healthcare innovation that refocuses on the clinician-patient connection
In 2012, a child’s drawing of her appointment with a physician was published in the Journal of the American Medical Association (JAMA). In the brightly colored sketch, the girl is sitting on the exam table accompanied by her mother and siblings. To those of us working in healthcare, her doctor is the most notable part of the image. He is sitting at his desk, typing on the computer — with his back to the patient.
This drawing, published alongside an article titled “The Cost of Technology,” is emblematic of the unintended consequences of healthcare innovation. With more tools for physicians to use and greater demands on their time, technology can easily become a barrier between the clinician and the patient. For example, in 2023 approximately 53% of physicians surveyed by Medscape reported feeling burned out, and 23% reported feeling depressed. Long working hours, bureaucratic demands and loss of work-life balance due to administrative requirements are leading causes. Our mission has always been to help clinicians turn their chairs around by providing the support they need to fully focus their attention on delivering care.
Fast-forward to today, and we’ve translated that goal into reality. After establishing the ambient technology category, we introduced DAX Copilot, the first generative AI voice-enabled solution, which has been generally available for one year, and we’re seeing remarkable momentum. Part of a proven and extensible platform, DAX Copilot leverages health systems’ existing investments in the trusted family of Dragon Medical solutions, which is used by more than 600,000 clinicians worldwide.
DAX Copilot combines proven conversational and ambient AI with generative AI to automatically convert multiparty conversations into specialty-specific standardized draft clinical summaries that integrate with existing workflows. DAX Copilot was the first ambient solution to be integrated into the Epic electronic health record (EHR) workflow and allows clinicians to seamlessly document patient visits directly within the EHR. Across the country, more than 400 organizations have already embraced its revolutionary capabilities, streamlining administrative tasks and lightening clinicians’ documentation workloads.
This transformative year has highlighted many of the ways we’re helping empower healthcare teams today and bringing joy back to practicing medicine for thousands of clinicians.
Improving Access to Care — and Enhancing Documentation Quality
DAX Copilot is helping Northwestern Medicine in Chicago improve patient access to care, with physicians using the solution in at least 50% of patient encounters and able to see an average of 11.3 additional patients per month. DAX Copilot users at Northwestern Medicine also report spending an average of 24% less time on notes and a 17% decrease in ‘pajama time,’ working on administrative tasks late into the night.
“Northwestern Medicine is committed to providing a superior work environment that promotes well-being, and implementing DAX Copilot will allow our physicians to spend more quality time with our patients, focusing on their needs rather than on paperwork and data entry,” says Dr. Gaurava Agarwal, chief wellness executive, Northwestern Medicine.
Overlake Medical Center & Clinics in Bellevue, Washington, deployed DAX Copilot to help reduce after-hours documentation time and equip its clinicians with tools to manage escalating demands. A pilot with 30 Overlake clinicians saw a significant reduction in time spent on notes outside of scheduled hours. Users also reported other important benefits: 81% said DAX Copilot had reduced their cognitive burden and 77% said the solution had improved the quality of their documentation.
At Atrium Health, “DAX Copilot is enabling a better clinician experience,” says Dr. Matthew Anderson, senior medical director of primary care at Atrium Health. “Most of our surveyed users are reporting a positive impact on their day-to-day schedule, being able to increase the number of patients they are able to see and even spending more time with them.”
Enhancing Clinician Well-being
Atrium Health isn’t alone in finding that the time saved on documentation with DAX Copilot is having a positive impact on clinicians’ quality of life. “I finally have weekends back,” says Dr. Christy Chan, a family medicine physician at Overlake. “I used to always have to worry that there was something I had to do — get back onto the EMR, log back in — but I actually have some weekends back.”
Clinicians at Novant Health are experiencing similar benefits. “DAX Copilot has done an amazing thing for the physicians and clinicians who are currently using it,” says Dr. Aram Alexanian, a family physician at Novant Health. “When you hear a comment like ‘I am now able to do things with my daughter in the evenings and weekends that I couldn’t do before,’ nothing satisfies us more than knowing the impact DAX is having on our clinicians.”
DAX Copilot can accurately identify different voices in the room during pediatric visits, differentiating between parents and children to accurately capture the patient exam note for the physician to review and approve.
Community Health Network places a major focus on clinician well-being and implementing DAX Copilot has had a major impact on its clinicians’ ability to close notes before they leave for the day. “Since we have implemented DAX Copilot, I have not left clinic with an open note,” says Dr. Patrick McGill, chief transformation officer for Community Health Network. “We have seen improved access, improved numbers of patients, but overall, it’s really the clinician satisfaction that we’ve seen. In one word, DAX Copilot is transformative. It transforms how we’re able to deliver care and how we’re able to document it. It also transforms the patient experience.”
“DAX Copilot has made my professional life easier. My patients have also benefited from my using Nuance DAX during our appointments. I can be right there with the patient and not furiously writing notes. I cannot thank you enough,” said Anita M. Kelsey, M.D., Duke Health.
Transforming the Patient Experience
Dr. Dominick Lanzo, an orthopedic surgeon at Greater Baltimore Medical Center, agrees. “Once I introduced the DAX Copilot program, it completely transformed the patient experience, and it’s turned out to be incredibly accurate with regards to the history of present illness and the physical exam,” he says. “It’s made my practice much more efficient. I can see more patients, my notes are more accurate, and they’re done in a timely fashion by the end of the afternoon.”
For Dr. Alison Pomykala, an internal medicine specialist at Baptist Medical Group, the integration of DAX Copilot with the Epic EHR is particularly valuable. “The thing I like most about DAX Copilot embedded in the Epic workflow is I’m able to focus more on the patient and I’m spending less time in the exam room typing on the computer,” she says.
“I think the interface is wonderful with Epic. It has been great to see the notes coming up basically in real time on the Epic system. That’s also helped with other things: where we needed to generate a note quickly for referrals, for insurance pre-certifications, for imaging studies, or to have a complete note ready for a patient that we’re sending to the hospital.”
“At our academic health system, integrating DAX Copilot has revolutionized patient care,” says Dr. Anthony Mazzarelli, co-president and CEO at Cooper University Health Care. “By automating clinical documentation through ambient voice technology, it has significantly reduced administrative workloads. This allows our physicians to focus on real-time patient interactions, leading to better care outcomes and increased job satisfaction. DAX Copilot has not only improved efficiency but has also empowered our team to spend more time where it matters most — caring for patients.”
It’s a privilege to see this industry-leading solution make a meaningful difference for the clinicians who are already on board — but the work never stops. The Microsoft healthcare team is determined to continue solving some of the industry’s most complex challenges, and harnessing the power and potential of AI is how we’ll achieve that. We will continue to be leaders in innovation, collaborating across our ecosystem of incredible customers, partners and Microsoft researchers to bring real impact to clinical settings.
DAX Copilot is an innovative solution that goes beyond documentation, offering unique features such as orders, problem-based charting and pre-charting capabilities. For example, recent updates to DAX Copilot include a robust set of features, such as the ability to customize documentation style and formatting, as well as automatically create referral letters, diagnostic evidence, after-visit summaries and encounter summaries. The solution also now offers AI coaching to help users improve the quality and completeness of their notes. In addition, the new Summarized Evidence capability offers a comprehensive approach that helps clinicians validate and trust the note output by combining insights that go beyond evidence linking.
Importantly, when creating technologies that can change the world, Microsoft believes organizations need to ensure that the technology is used responsibly. Microsoft is committed to creating responsible AI by design that is guided by a core set of principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency and accountability.
DAX Copilot is helping restore the human connection at the heart of medicine. We’re excited to drive this solution forward and expand its reach to more clinicians over the coming year and beyond.
Today, the solution can be used across ambulatory specialties, in-office primary care and urgent care, telehealth and emergency medicine. And we are scaling the solution’s availability and capabilities to even more care settings, such as nursing, and to more geographies.
If you’d like to learn how DAX Copilot can help transform healthcare at your organization, please visit: the DAX Copilot website.
Can I use production server to create ctf file and do I need to install matlab compiler sdk along with production server to achieve this?
I wrote a MATLAB script which contains the command ‘mcc’ and the options to create the CTF file on my local machine. The script runs correctly, with the CTF created. However, when I deployed the code on the production server, the mcc command could not be executed. My local computer has both MATLAB Compiler and MATLAB Compiler SDK installed. It looks like the production server has MATLAB Runtime running, and maybe MATLAB Compiler, but definitely no MATLAB Compiler SDK. So should I install MATLAB Compiler SDK on the same server as the production server? After installation, how do I tell the production server where to find MATLAB Compiler SDK and the "mcc" command?
production server, ctf file, mcc command, matlab compiler sdk, matlab compiler, path to mcc
GPU utilization is not 100%.
The GPU usage is only 40% when running the deep learning network. It sometimes goes up to 80% for a while but usually stays at 40%. I want to know why.
deep learning, gpu, trainnetwork
Abnormal trailing ^ on input
I have suddenly run into a strange problem. Now, when pressing ^, my command line (not the editor) enters an abnormal state, writing ^ (underscored) AFTER the last of any subsequent input characters. The trailing ^ only disappears after pressing Enter (without its usual effect). The problem persists after exiting and reopening MATLAB, and even after restarting the computer. Any solution or hint?
abnormal ^
Incidents from Custom Detection Rules never have Emails for Evidence
let ignoreAddresses = datatable(address:string) [@'email address removed for privacy reasons', @'email address removed for privacy reasons'];
let ignoreSpamSubjects = datatable(subject:string) [@'ignored subject 1', @'ignored subject 2'];
// Time range needs to be set in the UI dropdown in order for the LatestDeliveryLocation filter to work (i.e., live table vs streaming API).
EmailEvents
| where SenderFromDomain in~ (_getEXOAcceptedDomains)
| where DetectionMethods has_any('URL detonation reputation', 'URL malicious reputation') and not(RecipientEmailAddress in~ (ignoreAddresses) or SenderFromAddress in~ (ignoreAddresses))
| where not (Subject has_any (ignoreSpamSubjects))
| where (parse_json(AuthenticationDetails).DMARC =~ 'Pass' and EmailDirection =~ 'Inbound') or (EmailDirection =~ 'Intra-org')
| where (LatestDeliveryLocation in~ ('Quarantine', 'Junk folder') and not (LatestDeliveryAction =~ 'Quarantine release')) and parse_json(ConfidenceLevel).Phish in~ ('Normal', 'High')
| join kind=inner (
    EmailUrlInfo
    | summarize Urls = make_list(Url) by NetworkMessageId
) on NetworkMessageId
I’ve got the above query saved as a detection rule, which works fine except for one thing: the emails are never present in the Evidence tab of the generated incidents. Meanwhile, the Recipients show up in the Mailbox and User assets, as I’m using Entity mapping to map the RecipientEmailAddress / RecipientObjectId to those two entity types. The only thing I can find about Emails is that, for Actions to be possible on the Emails in the query results, “The columns NetworkMessageId and RecipientEmailAddress must be present in the output results of the query to apply actions to email messages.” (ref), which is being satisfied.
The Evidence available is the IP of the sender, and an empty email cluster, like this:
In the incident above there are 2 emails, and the 4 assets are the user and mailbox for each of the 2 emails’ Recipient. I can successfully just use the query manually to find and manage those emails, but a big part of the goal with these detection rules, at least in my opinion, is to be able to easily manage the evidence. In this exact case, I’m looking for inbound emails coming from our own Accepted Domains in the SenderFromAddress, which pass DMARC, but are in Quarantine, detected as Phish. The idea is to watch out for false positives due to URL detonation reputation since most of the messages fitting this criteria are coming in from various emailing services (e.g., Constant Contact, MailChimp, SendGrid, etc.) and these services tend to end up on the reputation lists a few times per month.
Just wondering if there are any tricks anyone knows about to help me populate the emails into my resulting incidents.
make available/unavailable dates more visible
My customers are having difficulty distinguishing between available and unavailable dates. Is there a way to change the color of the available or unavailable dates?
Copilot enterprise data protection
It seems that Copilot in Edge is now upgrading to Enterprise Data Protection standards. Does anyone have a complete list of what is currently not available as a feature during the upgrade, and how long will this upgrade process last?
Teams Add-On disables itself when logged into a Citrix environment
When logged into my Citrix environment, my Outlook Teams Add-On continually disables itself after I enable it. I have tried re-registering the add-on, and that did not work. I have gone into the COM add-ins and re-enabled it, but after closing Outlook and opening it back up, it is back under the Inactive Application Add-ins list. I tried adding New Teams Meeting to the Quick Access Toolbar, but it was not visible on the ribbon. I created a new ribbon group, but New Teams Meeting was grayed out.
Looking for advice on other troubleshooting options to try, and wondering if anyone else is having this issue.
Subscribe to the Topics you care about in Viva Engage
Viva Engage is key to understanding culture, knowledge, community and engagement at organizations. Topics in Viva Engage help categorize conversations, making it easier to find and follow discussions on specific subjects. Anyone can add topics to their posts, which then appear as tags, and these topics can be followed for updates connecting everyone to the things they care about most.
‘Topic subscriptions’ is a new feature available to all Viva Engage users that allows them to receive notifications for the Viva Engage topics they follow. Users will receive an Engage bell notification whenever a new post is tagged with a topic they are subscribed to or that topic is added to an existing post.
Note: Topic subscriptions are an additional step from following a topic. When a user follows a topic, they do not receive notifications but may receive content with that topic in their Engage Digest. However, by subscribing to a topic, they would receive a notification for that topic.
How to make the most of topic subscriptions
Topics can help users in your organization easily categorize information and keep employees informed and up-to-date. Here’s how to help your organization make the most out of topic subscriptions:
As a subject matter expert, subscribe to topics in your areas of expertise and encourage others to post using that topic so questions are answered quickly.
Corporate communicators can track topics most important to campaigns, leadership and strategic messaging.
Encourage employees to subscribe to work-related topics or interests, including company events, Employee Resource Groups, learning opportunities or social gatherings.
Show different teams and functions how topics are used to share insights, articles, and reports on industry trends. Employees interested in staying ahead in their field can subscribe to these topics to stay informed.
Where can users subscribe to topics?
Topic page: All Viva Engage users can subscribe to a topic on the topic page. This page is accessible from the Viva Engage search engine results page (SERP) and when a user clicks on a topic tag in Engage.
Discover more topics page: Premium Viva Engage users can subscribe to a topic on the “Discover more topics” page in the Answers tab. Find the “Discover more topics” link in the right rail at the bottom of the “Topics to follow” section.
Note: Users can subscribe to a topic after following that topic.
Unsubscribing from the topic page
To unsubscribe, click the blue bell. It then changes to an unfilled bell icon, which indicates that you are no longer subscribed to that topic.
Note: A user can unfollow a topic in the overflow menu, and that automatically unsubscribes them.
Notifications from topic subscriptions
Today, users receive notifications in Viva Engage for topics that they are subscribed to. In the future they will also receive notifications in Teams and Outlook. These workflows are currently a priority on our roadmap.
Subscribing to topics in Viva Engage helps users stay informed on things they care about, encourages collaboration, personalizes content based on topic subscriptions and fosters overall engagement. To learn more about topics in Viva Engage check out our support article.
Announcing the public preview of the new Azure FXv2-series Virtual Machines
Today, Microsoft is announcing the public preview of the new Azure FXv2-series Virtual Machines (VMs), based on the 5th Generation Intel® Xeon® Platinum 8573C (Emerald Rapids) processor. The preview includes the Compute-optimized VMs FXmsv2-series and FXmdsv2-series.
To request access to the preview, please fill out FXv2-series-Preview-Signup.
The new Azure FXv2-series VMs offer several enhancements compared to previous generation (FXv1-series) VMs:
Up to 1.5x CPU performance
Up to 2x vCPUs, with 96 vCPUs as the largest VM size
1.5x+ network bandwidth, offering up to 70 Gbps
Up to 2x local storage (read) IOPS, with up to 5,280 GiB of local SSD capacity
Up to 2x IOPS and up to 5x throughput for remote storage
Up to 400K IOPS and up to 11 GBps throughput with Premium SSD v2/Ultra Disk support
Up to 1,800 GiB of memory
FXv2-series VMs feature an all-core-turbo frequency up to 4.0 GHz. In addition to the standard sizes, FXv2-series VMs are available in constrained-core sizes, with the vCPU count constrained to one-half or one-quarter of the original VM size, giving you the flexibility to select the core and memory configuration that best fits your workloads. These VMs support up to a 21:1 memory-to-vCPU ratio with the base sizes, and an even better ratio with the constrained-core sizes. You also have the option to choose between FXv2-series VMs with and without a local disk. These VMs support the NVMe interface for both local and remote disks.
Additionally, improvements in Azure Boost enhance the networking, storage and overall CPU performance and security of FXv2-series VMs. Finally, these VMs have enhanced AI capabilities with Intel® Advanced Matrix Extension (AMX), delivering higher inference and training performance, and enhanced security capabilities with Total Memory Encryption (TME) technology.
You can learn more about the new Azure FXv2-series VMs by visiting the specification pages at FXmdsv2-series and FXmsv2-series.
Overall, the FXv2-series Azure Virtual Machines are best suited to provide a balanced solution for compute-intensive workloads, such as databases, data analytics, and EDA workloads, that also require large amounts of memory and high-performance storage and I/O bandwidth.
The FXv2-series is purpose-built to address several requirements of SQL Server workloads, particularly those that require higher computing performance, memory, storage, and I/O bandwidth, but not necessarily a higher core count. Key benefits for SQL Server workloads include:
Improved performance: The FXv2-series VMs offer higher and improved performance capabilities, ensuring SQL Server workloads can handle large amounts of data and transactions efficiently. These capabilities are crucial for applications that require high I/O operations per second (IOPS) and low latency.
High memory and storage: FXv2-series VMs provide specifications (such as 96 vCPUs with up to 1,800 GiB of memory, up to 400K IOPS, and up to 11 GBps of remote storage throughput) to cater to the needs of SQL workloads requiring high memory, storage, and I/O bandwidth.
Cost efficiency: Constrained vCPU capable VM sizes help reduce the cost of SQL Server licensing by offering an improved memory-to-core ratio while maintaining the same memory, storage, and I/O bandwidth.
Local storage: FXmdsv2-series VMs provide local storage for tempdb heavy workloads, significantly improving the performance of SQL Server workloads that need quick access to temporary data.
FXv2-series VMs also provide EDA customers with an updated Intel-based option tailored to the needs of the semiconductor community. Features like larger L3 caches, higher instructions per clock, spacious memory, and faster storage compared to the prior generation FXv1 VM combined to make FXv2-series VMs much improved for the demanding needs of EDA workloads, such as reducing chip design turn-around time and time-to-market and improving license utilization.
During the preview period, the VMs are available in Azure regions such as West US 3 and Southeast Asia for all VM sizes up to 96 vCPUs. The number of regions will continue to expand in 2024 and beyond.
The new virtual machines support all remote disk types, such as Standard SSD, Standard HDD, Premium SSD (v1), Premium SSD v2 and Ultra Disk storage. To learn more about the various disk types and their regional availability, please refer to Azure managed disk types; note that disk storage is billed separately from virtual machines. You can deploy these new VMs using existing methods including the Azure portal, SDKs, APIs, PowerShell, and the command-line interface (CLI).
The new Azure FXv2-series VMs support a wide range of OS distributions, such as Canonical Ubuntu, Red Hat Enterprise Linux, SUSE Enterprise Linux, Alma Linux, Windows Client 11 Pro and Enterprise, Windows Server and many more, for developers who want to take advantage of Azure’s highly available, scalable, and secure platform to run cloud-based workloads and test workflows. Customers can access the full list of images in the Azure Marketplace.
During the preview, Azure FXv2-series VMs will be charged at full price. To learn more about the pricing of FXv2-series VMs, please visit the Azure Virtual Machines pricing pages. Eligible new Azure customers can sign up for an Azure free account and receive $200 Azure credit.
FXv2-series VMs are tailored for compute-intensive tasks, making them perfect for high-performance data processing, complex operations, and intensive analytics, delivering excellent performance and efficiency. The new FXv2-series VMs provide a versatile solution for a wide range of computing needs. Explore the FXv2-series VMs today and discover how these VMs can enhance your performance and operational efficiency.
To request access: please fill out FXv2-series-Preview-Signup.
Have questions? Please reach us at Azure Support and our experts will be there to help you with your Azure journey.
Sharepoint list conditional formatting colour palette
Hi,
I have several lists that have some columns set with conditional formatting based on the list input. All works fine, but the colour palette is very limited. Is there a way to add colours to the options available to use?
I have the RGB Colour Codes I want to add but how do I do it?
The Future of AI: The paradigm shifts in Generative AI Operations
As generative AI technologies rapidly evolve, businesses across industries are harnessing their potential to drive innovation and transformation. However, the operational challenges of managing, scaling, and securing these applications in production environments remain significant. Microsoft’s Generative AI Operations (GenAIOps) framework addresses these complexities, offering a comprehensive approach to ensure that organizations can successfully integrate, manage, and govern generative AI applications. Customers like ASOS have utilized Azure AI tools and frameworks to streamline their GenAIOps processes, automating and optimizing the end-to-end workflow of content generation, significantly reducing the time and resources required to deliver personalized shopping experiences at scale.
This blog is the first in a series exploring the intricacies of GenAIOps, with future entries diving deeper into specific areas and Azure AI tools designed to support this framework.
Customer Challenges in Productionizing Generative AI Applications
While generative AI presents transformative opportunities, organizations face numerous operational hurdles when attempting to deploy and scale these solutions. Among the most common challenges are:
Complex Model Landscape: Selecting the right model for specific use cases from a vast array of available generative models can be overwhelming. Organizations must evaluate models not just for performance but also for integration into existing infrastructure and fit for their specific use case.
Data Quality and Quantity: Without high-quality, comprehensive datasets, generative AI models may generate biased or inaccurate outputs, undermining trust and adoption.
Operational Performance: Managing the resource-intensive nature of large-scale AI models while ensuring smooth performance can strain existing IT systems. This includes balancing token processing speed, performance optimization and resource allocation for efficient deployments.
Cost Efficiency: Enterprises need to optimize costs while maintaining high-quality outputs, which requires a fine balance between computational power and budget constraints.
Security and Compliance: Ensuring data privacy, meeting regulatory requirements, and managing the ethical implications of generative AI are critical concerns for organizations deploying these solutions.
The Paradigm Shift: From MLOps to LLMOps to GenAIOps
Traditional MLOps frameworks were designed to manage machine learning models, which are often deterministic and predictable in nature. However, generative AI introduces non-deterministic outputs and requires a new framework, leading to the evolution of LLMOps, which focuses on the lifecycle of large language models.
Generative AI Operations (GenAIOps) is a comprehensive set of practices, tools, foundational models, and frameworks designed to integrate people, processes, and platforms. GenAIOps extends beyond LLMOps to address the full spectrum of generative AI operations, including small language models (SLMs) and multi-modal models. This shift moves from merely managing large models to ensuring continuous development, deployment, monitoring, and governance of generative AI applications.
As enterprises embrace generative AI, we anticipate a transformation of traditional roles to meet new challenges. Data teams will become AI insight orchestrators, while IT operations evolve into AI infrastructure specialists. Software developers will routinely incorporate AI components, and business analysts will translate AI capabilities into strategic advantages. Legal teams will also incorporate AI governance, and executives will drive AI-first strategies. New roles will emerge, including AI ethics boards and centers of excellence, fostering responsible innovation. This shift will demand cross-functional collaboration, continuous learning, and adaptability, reshaping the enterprise AI landscape.
Azure AI Tools and Services for GenAIOps
To help developers and engineers rapidly build, deploy, and manage generative AI applications, Azure AI offers a robust suite of tools tailored to every stage of the generative AI lifecycle. These tools emphasize scalability, orchestration, and developer collaboration, enabling efficient production of innovative AI solutions.
Getting Started
Kickstarting your generative AI journey with Azure AI is straightforward thanks to its powerful tools designed for rapid setup and development. The Azure Developer CLI (AZD) AI Templates enable you to speed up resource setup with pre-configured templates, streamlining your initial development. Additionally, the Chat Playground in Azure AI Studio or GitHub Models provides a user-friendly environment for quick, no-code testing of AI models, allowing you to experiment with different models and refine interactions without diving into complex code.
Customization
Customizing models to meet specific business needs is essential for building generative AI applications. Retrieval Augmented Generation (RAG) integrates AI models with external data sources, enhancing accuracy and contextual relevance. Azure AI Search and Microsoft Fabric provide seamless access to real-time data, enabling reliable and precise AI solutions. Fine-Tuning allows developers to customize pre-trained models with domain-specific data using Azure AI Studio and Azure Machine Learning, supporting serverless fine-tuning without infrastructure management. Model versioning and management within Azure AI ensures reproducibility, easy rollbacks, and proper governance as models evolve.
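As a concrete illustration of the RAG pattern described above, here is a hedged Python sketch using the azure-search-documents and openai packages. The endpoint, key, index, deployment name, and the assumption that the index exposes a "content" field are all placeholders to adapt to your own resources.

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

search = SearchClient(
    endpoint="https://<your-search>.search.windows.net",  # placeholder
    index_name="<your-index>",                            # placeholder
    credential=AzureKeyCredential("<search-key>"),        # placeholder
)
llm = AzureOpenAI(
    azure_endpoint="https://<your-aoai>.openai.azure.com",  # placeholder
    api_key="<aoai-key>",                                   # placeholder
    api_version="2024-06-01",
)

question = "What coverage applies to hurricane damage?"
# Retrieve supporting passages; assumes the index documents have a 'content' field.
hits = search.search(search_text=question, top=3)
context = "\n".join(doc["content"] for doc in hits)

response = llm.chat.completions.create(
    model="<your-deployment>",  # placeholder deployment name
    messages=[
        {"role": "system", "content": "Answer only from the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)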
Development
During the development phase, managing prompts and evaluating model performance is crucial. Azure AI offers a variety of tools to support developers in building robust generative AI applications. Prompty allows efficient prompt management and optimization, integrating seamlessly with environments like LangChain and Semantic Kernel. Azure AI supports the entire GenAI application lifecycle, from data preparation to model fine-tuning and deployment, ensuring smooth transitions between stages. Additionally, Azure AI Services offers pre-built APIs for language understanding, speech recognition, and computer vision, enhancing the functionality and user experience of AI workflows. The Azure AI model catalog provides a wide range of foundation models from leading AI providers, optimized for tasks like text generation and image recognition. With Azure AI’s commitment to Trustworthy AI, customers can ensure safety, security, and privacy, utilizing features like evaluators, groundedness detection, and correction tools. By leveraging these tools and services, you can streamline your development process, ensure high-quality outputs, and maintain efficient workflows, ultimately driving innovation and operational excellence in generative AI applications.
Production
After the development and evaluation of the models and apps, deploying them is the next step. Azure AI provides strong deployment, automation, and monitoring features to help deploy applications to production environments seamlessly. Implementing feedback loops is crucial for the continuous improvement of generative AI applications. Azure AI supports this through:
Continuous Monitoring and Feedback: Comprehensive evaluation frameworks within Azure AI allow for performance analysis and fairness checks, while continuous monitoring tools support data drift detection and A/B testing, ensuring your AI systems remain reliable and ethical. Regularly analyze performance metrics to identify bottlenecks and optimize models. This includes fine-tuning parameters, adjusting resource allocation, and implementing efficient algorithms.
Automation with GitHub Actions: Azure AI integrates with GitHub Actions for automated continuous deployment and monitoring of GenAI applications, enabling seamless management of updates and performance metrics while reducing manual efforts.
By integrating these practices, organizations can ensure their generative AI applications remain effective, efficient, and aligned with business goals.
This blog marks the beginning of a deep dive into Microsoft’s GenAIOps concept. The challenges of managing generative AI applications require a comprehensive operational approach that spans the entire lifecycle, from experimentation to deployment. As we continue this GenAIOps Tech Blog Series, we’ll explore specific Azure AI tools and services that help organizations operationalize and scale their generative AI initiatives.
Stay tuned for future posts where we’ll provide detailed insights into key GenAIOps components, including model orchestration, prompt engineering, and real-time AI monitoring.
Case Study: Harnessing Copilot for Security in Defending Against Cyberthreats
In a typical SOC environment, Tier 1 analysts are responsible for reviewing incoming security incidents. They’re in charge of investigating and resolving non-critical alerts, including the identification of false positives. When a critical incident is detected, the Tier 1 analyst must quickly escalate it to the Tier 2 or 3 team for a more thorough investigation. In this case study, you will take on the role of a Tier 1 analyst and discover how Copilot for Security can support you throughout the investigation. It is a tool designed to enable you to assess and manage incidents with greater efficiency and effectiveness.
Let’s use a security incident in the Microsoft Defender XDR portal for our case study. As shown in the next screen capture, this high-severity incident involves three devices, four users and one email account. It consists of a correlation of 25 alerts generated by different Microsoft security solutions. This incident, titled ‘Multi-stage incident involving Initial access & Lateral movement on multiple endpoints reported by multiple sources’ appears complex, extensive, and intimidating.
As soon as you start examining this incident in the Microsoft Defender XDR portal, Copilot for Security is already present on the incident page, as shown in the next diagram. It automatically creates a summary of the incident, providing a clear overview of the attack. Additionally, it offers tailored guidance for responding to the incident through its Guided Response feature, which is also accessible on the incident page.
Rather than sifting through all 25 alerts in the incident to understand the attack, let’s review the summary provided by Copilot for Security first. You can copy the incident summary for further review using Copilot’s “Copy to clipboard” option. This feature is illustrated in the next screenshot.
For your reference, here is the full incident summary copied from Copilot:
The high severity incident ‘Multi-stage incident involving Initial access & Lateral movement on multiple endpoints reported by multiple sources’ occurred between 2024-08-06 17:15:43 UTC and 2024-08-11 00:01:42 UTC. It was tagged as Credential Phish and Lateral Movement.
InitialAccess: The incident began at 2024-08-06 17:15:43 UTC when user ‘kdickens’ clicked on a malicious URL ‘hxxps://fm06722326.mangodune-7572a721.eastus.azurecontainerapps.io/’.
Discovery: At 2024-08-06 17:16:17 UTC, a suspicious sequence of exploration activities were detected on ‘mb-winclient’ (Windows 10), involving the processes ‘whoami.exe’, ‘powershell.exe‘ (running as SYSTEM), and impacting users ‘pgustavo’ and ‘kdickens’.
DefenseEvasion: At 2024-08-06 17:16:20 UTC, a process was injected with potentially malicious code on ‘mb-winclient’ (Windows 10), involving the files ‘ContosoADFSadmincreds.ps1‘, ‘Midnight161.ps1‘, and impacting users ‘pgustavo’ and ‘kdickens’.
Execution: At 2024-08-06 17:16:20 UTC, a malicious PowerShell Cmdlet was invoked on ‘mb-winclient’ (Windows 10), involving the files ‘ContosoADFSadmincreds.ps1‘, ‘Midnight161.ps1‘, and impacting users ‘pgustavo’ and ‘kdickens’.
CredentialAccess: At 2024-08-06 17:17:07 UTC, a suspected DCSync attack (replication of directory services) was detected on ‘mb-winclient’ (Windows 10), impacting user ‘pgustavo’.
LateralMovement: At 2024-08-06 17:17:09 UTC, an ongoing hands-on-keyboard attack via Impacket toolkit was detected on ‘mb-adfs’ (Windows Server 2022), involving the processes ‘lsass.exe’ (running as SYSTEM), ‘powershell.exe‘ (running as SYSTEM), and impacting users ‘LocalSystem’ and ‘adfsadmin’.
Persistence: At 2024-08-06 17:17:10 UTC, suspicious service creation was detected on ‘mb-adfs’ (Windows Server 2022), involving the process ‘services.exe’ (running as SYSTEM), and impacting user ‘adfsadmin’.
CredentialAccess: At 2024-08-06 17:18:20 UTC, an ADFS private key extraction attempt was detected on ‘mb-adfs’ (Windows Server 2022), impacting user ‘adfsadmin’.
DefenseEvasion: At 2024-08-11 00:01:42 UTC, an attempt to hide the use of a dual-purpose tool was detected on ‘mb-adfs’ (Windows Server 2022), involving the file ‘jyglecpm.1v0.exe‘, and impacting user ‘adfsadmin’.
As seen from the incident summary, Copilot for Security provides an overview of the incident that helps you to quickly understand the scope and impact of the attack. It groups the suspicious events into distinct attack stages, outlining the attack’s progression from initial access to lateral movement and persistent access. Each stage is detailed with specific timestamps and a concise summary of the events. In a typical investigation, the security analyst would need to create a similar incident outline manually. With Copilot, this process is now automated and much faster.
After reviewing the incident summary, it’s clear that this is a highly involved incident with multiple sophisticated attacks. As a Tier 1 analyst, your goal is to quickly assess the attacks, rule out the false positives, determine the impact of the incident, and decide whether it needs to be escalated. Let’s dive into some of the incident investigations and see how Copilot for Security can assist throughout the process.
Initial Access, Discovery, and Defense Evasion
The initial access occurs when the attacker gains entry into the target system or network. In this case, the attacker employs a common attack tactic by tricking the user to click on a malicious link, to establish the initial foothold within the user’s corporate network. The following excerpt is from the incident summary generated by Copilot.
InitialAccess: The incident began at 2024-08-06 17:15:43 UTC when user ‘kdickens’ clicked on a malicious URL ‘hxxps://fm06722326.mangodune-7572a721.eastus.azurecontainerapps.io/’.
Once an attacker gains access to the corporate network, he or she needs to gather information about the environment, for instance by running the Windows process “whoami.exe” to gather details about the current user’s identity and environment. Executing “whoami /groups” shows the groups to which the current user belongs, revealing the user’s access level and permissions within the system. This activity is documented in the Discovery phase.
Discovery: At 2024-08-06 17:16:17 UTC, a suspicious sequence of exploration activities were detected on ‘mb-winclient’ (Windows 10), involving the processes ‘whoami.exe’, ‘powershell.exe’ (running as SYSTEM), and impacting users ‘pgustavo’ and ‘kdickens’.
So far, the attack techniques encountered are standard and frequently observed. For a Tier 1 analyst, these techniques are familiar and well-known, making the investigation relatively simple.
However, starting from the Defense Evasion phase, things may get a bit tricky. You see a process injection event in the alert:
DefenseEvasion: At 2024-08-06 17:16:20 UTC, a process was injected with potentially malicious code on ‘mb-winclient’ (Windows 10), involving the files ‘ContosoADFSadmincreds.ps1’, ‘Midnight161.ps1’, and impacting users ‘pgustavo’ and ‘kdickens’.
What might “a process was injected with potentially malicious code” indicate? What’s a process injection? Let’s consult with Copilot for Security.
You can switch from the embedded experience of Copilot to its standalone mode, which allows you to ask Copilot more questions using its prompt feature. In the Microsoft Defender XDR portal, you may have noticed an ellipsis right inside the Copilot pane. It offers several options, including “Open in Copilot for Security”. Selecting this option switches you from Copilot’s embedded experience to its standalone mode, as shown in the next screen capture.
The “Open in Copilot for Security” feature also allows you to continue your current Copilot session in the standalone experience, preserving any ongoing work or context from the embedded experience. In this standalone mode, you can use the prompt feature to ask additional questions or seek further information, as shown in the next screenshot:
Prompt to Copilot for Security: Can you tell me more about DefenseEvasion: a process was injected with potentially malicious code on ‘mb-winclient’ (Windows10), involving the files ‘ContosoADFSadmincreds.ps1’, ‘Midnight161.ps1’
The response from Copilot provides an overview of process injection and its characteristics:
The term “process injection” typically refers to a technique used by attackers to execute arbitrary code in the address space of a separate live process. This technique improves the stealth and evasion capabilities of malware since the execution of the malicious code does not result in the creation of a new process, making it harder to detect.
The full Copilot response is displayed in the next screen capture:
The process injection overview may still be somewhat abstract. You can follow up with Copilot for a more detailed description using a prompt seen in the next screen capture.
The next diagram presents a portion of the response from Copilot for Security, which gives you a better understanding of process injection.
The information from Copilot adds more context and insights for this alert, enabling you, the Tier 1 analyst, to start a more thorough investigation immediately.
Back in the Microsoft Defender XDR portal, analysts typically leverage the Defender for Endpoint Timeline feature to investigate the event that might have triggered the alert “A process was injected with potentially malicious code.” The next screenshot shows that in the Timeline, Defender XDR recorded a device event, “powershell.exe executed a script”. This event led Defender for Endpoint to immediately detect and report “powershell.exe process contains suspicious patterns in AMSI content related to ‘Process Injection’ and other techniques”, which in turn triggered the process injection alert.
Defender XDR also provides the detailed PowerShell process commands that you can ask Copilot to analyze, as shown in the next screen capture.
Combining Copilot’s information on process injection with the details from the device events, especially the fact that the PowerShell script was executed from the user’s Downloads directory, it’s clear that this is likely a malicious attack rather than a false positive. The use of a more advanced technique like process injection suggests that this case should be escalated to the Tier 2 team for further investigation.
In Defender XDR’s Timeline feature, security analysts typically review device events that occurred around the time the security alert was triggered to identify any additional malicious activity. This process is both time-consuming and labor-intensive. But now Copilot for Security can handle the heavy lifting for you. Let’s return to the Copilot standalone environment and use its prompt feature to accomplish this.
The prompt seen in the next screenshot instructs Copilot for Security to use a Defender KQL query to retrieve all device process events from around the time of the process injection activity. It then asks Copilot to analyze these events for any signs of suspicious activity.
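For example, the prompt might read something like the following (an illustrative paraphrase; the exact prompt appears in the screenshot):
Prompt to Copilot for Security: Run a KQL query that retrieves the DeviceProcessEvents records for device ‘mb-winclient’ around 2024-08-06 17:16 UTC, and analyze the returned process events for signs of suspicious activity.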
As shown in the next screen capture, Copilot has flagged a few events that are worth investigating further.
You can ask Copilot for Security to further investigate these events with additional prompts in Copilot’s standalone experience.
This investigation may take some time to finish. However, as a Tier 1 analyst, you have concluded that these are likely malicious events involving sophisticated attack techniques. The Tier 2 team will need to carry out a more in-depth investigation. In the meantime, you can move on to the next set of security alerts.
Credential Access
In the incident summary, Copilot for Security also mentions a suspected DCSync attack as part of Credential Access activity:
CredentialAccess: At 2024-08-06 17:17:07 UTC, a suspected DCSync attack (replication of directory services) was detected on ‘mb-winclient’ (Windows 10), impacting user ‘pgustavo’.
What is a DCSync attack? Rather than spending a lot of time researching it online, you can leverage Copilot for Security’s standalone experience to quickly get an idea of what it is and how it works.
Prompt to Copilot for Security: In your incident summary, you mentioned that “At 2024-08-06 17:17:07 UTC, a suspected DCSync attack (replication of directory services) was detected on ‘mb-winclient’ (Windows 10), impacting user ‘pgustavo’.” What’s a DCSync attack? Can you provide its TTPs?
The next screenshot displays the question you can ask Copilot, along with some of its responses.
Copilot for Security first provides an overview of the DCSync attack with some specific examples. It then walks you through the steps involved in a typical DCSync attack, as seen in the next snapshot.
After reviewing the information provided by Copilot for Security, you should have a better understanding of the key aspects of a DCSync attack and how it operates. If there are more questions or if further clarification is needed, you can always use Copilot’s prompt feature to gain additional insights.
Once you have a solid understanding of the DCSync attack, you can apply this knowledge to assist the security incident investigation. Back in the Defender XDR portal, when you examine the alert story for the “suspected DCSync attack” alert, you’ll likely notice something is off when the end user sends a replication request to a domain controller, as seen in the screen capture.
You can confirm your suspicion with Copilot by entering the following prompt in Copilot’s standalone mode:
Is it common for a regular user on a Windows 10 device to send a replication request to a DC?
As seen in the next screen capture, Copilot for Security confirms that “it is not common for a regular user on a Windows 10 device to send a replication request to a Domain Controller (DC). In a typical Active Directory environment, only Domain Controllers replicate data amongst themselves.”
At this point, you can likely confirm this is a true security incident, not a false positive. Given the nature of the attack, it will require further investigation by the Tier 2 team.
Without help from Copilot for Security, researching and confirming whether suspicious behavior is truly malicious can be time-consuming and labor-intensive. With Copilot for Security, confirming malicious activity is faster and more straightforward. Additionally, the explanations it provides offer more context, helping the Tier 1 analyst assess the impact and severity of the incident and quickly decide whether it needs to be escalated to the Tier 2 team.
Lateral Movement
In this same security incident, there is an alert for “an ongoing hands-on-keyboard attack via Impacket toolkit”:
LateralMovement: At 2024-08-06 17:17:09 UTC, an ongoing hands-on-keyboard attack via Impacket toolkit was detected on ‘mb-adfs’ (Windows Server 2022), involving the processes ‘lsass.exe’ (running as SYSTEM), ‘powershell.exe’ (running as SYSTEM), and impacting users ‘LocalSystem’ and ‘adfsadmin’.
While the description of the Impacket toolkit in the Microsoft Defender XDR portal is helpful, for more detailed insight you can turn to Copilot for Security’s standalone experience and ask:
Can you tell me more about the Impacket tool? Where is it used most often? How is it exploited by attackers?
Copilot provides a more thorough description of this toolkit:
It’s worth noting that in Copilot’s description of the toolkit, it was emphasized that while Impacket is a legitimate tool, it can also be exploited by attackers. Copilot included several simple examples to illustrate this:
While Impacket is a legitimate tool with many valid uses, it can also be exploited by attackers due to its powerful capabilities. For example, attackers can use Impacket to move laterally across a network, execute commands on remote systems, or extract sensitive information such as password hashes.
Copilot for Security then highlighted a specific attack scenario, pointing out that Impacket can be used to carry out a DCSync attack. And, as the Tier 1 analyst, you have just confirmed that a DCSync attack is involved in the same security incident.
It’s also worth mentioning that Copilot for Security captures the sequence of your prompts and responses in a session. A session is a distinct interaction period with Copilot for Security, during which Copilot retains the context of the conversation. This allows it to provide relevant answers to subsequent requests, ensuring that the conversation flows logically and that each response builds upon the previous ones.
Persistence
As we proceed through the incident summary generated by Copilot for Security, the next security alert is a suspicious service creation event that occurred during the persistence stage:
Persistence: At 2024-08-06 17:17:10 UTC, suspicious service creation was detected on ‘mb-adfs’ (Windows Server 2022), involving the process ‘services.exe’ (running as SYSTEM), and impacting user ‘adfsadmin’.
If you start examining this alert in the Microsoft Defender XDR portal, you will notice that Defender for Identity and Defender for Endpoint EDR both generated similar alerts, likely for the same suspicious event, as seen in the next screen capture:
Let’s first take a look at the “suspicious service creation” alert triggered in Defender for Identity, as shown next:
The service path listed in the alert looks suspicious and it is encoded. Let’s use the following prompt to consult Copilot and understand its purpose.
Can you decode then tell me more about this service path: %COMSPEC% /C “powershell.exe -EncodedCommand QwA6AFwAUwBjAHIAaQBwAHQAcwBcAFIAZQBuAGEAbQBlAFAAbwB3AGUAcgBzAGgAZQBsAGwALgBwAHMAMQA=”
The next screen capture displays the response from Copilot for Security:
Copilot for Security provides a detailed breakdown of the service path. It identifies the encoded command as base64 and decodes it for you. It then concludes with a summary: “this service path is used to execute the RenamePowershell.ps1 PowerShell script located in the C:\Scripts directory. The use of base64 encoding can often be a technique to obfuscate the actual command being run, which could be a sign of malicious activity.”
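If you’d like to verify the decoding independently, any language with a base64 library will do. The following is a minimal sketch in MATLAB (a verification aid only, not part of the Copilot workflow); the key detail is that PowerShell’s -EncodedCommand parameter expects base64-encoded UTF-16LE text:
% Decode the -EncodedCommand payload and interpret the bytes as UTF-16LE text
encoded = 'QwA6AFwAUwBjAHIAaQBwAHQAcwBcAFIAZQBuAGEAbQBlAFAAbwB3AGUAcgBzAGgAZQBsAGwALgBwAHMAMQA=';
bytes = matlab.net.base64decode(encoded);     % raw decoded bytes (uint8)
decoded = native2unicode(bytes, 'UTF-16LE')   % displays: C:\Scripts\RenamePowershell.ps1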
This information is helpful. Let’s continue examining the other related security alerts. The “suspicious service registration” alert triggered in Defender for Endpoint likely pertains to the same suspicious activity just examined. This alert provides a bit more information and mentions “services.exe modified service image file”, as shown next.
Using the following prompt, let’s see what Copilot for Security can find out:
Can you explain in more detail about the following event: services.exe modified service image file from to %COMSPEC% /C “powershell.exe -EncodedCommand QwA6AFwAUwBjAHIAaQBwAHQAcwBcAFIAZQBuAGEAbQBlAFAAbwB3AGUAcgBzAGgAZQBsAGwALgBwAHMAMQA=” for service WDUAPJKVADPWOQLFGQSH
The screenshot shown next displays Copilot’s response.
Copilot’s findings for this “suspicious service registration” alert closely align with its analysis of the “suspicious service creation” alert triggered in Defender for Identity. This confirms the suspicion that the two alerts from different security solutions are both reporting the same malicious activity. However, because the Defender for Endpoint alert points out “services.exe modified service image file” and this detail was included in the prompt, Copilot for Security can now provide additional insights in its analysis that weren’t present in the previous report. You now have a clear picture of this malicious activity: the services.exe process has been set to run RenamePowershell.ps1 whenever the WDUAPJKVADPWOQLFGQSH service is started. For your reference, the following snippet is taken from Copilot’s last response:
It’s worth noting that changes to service image file paths, especially those involving scripts or unusual locations, can be a sign of malicious activity. In this case, the services.exe process has been instructed to launch a PowerShell script whenever the WDUAPJKVADPWOQLFGQSH service is started, which is unusual behavior and could potentially be malicious.
This example also highlights the importance of effective prompting. The more specific information you provide to Copilot, the more relevant and precise responses you’re likely to receive.
In addition, this example highlights the Microsoft Defender XDR’s capability to integrate and correlate data across various Defender solutions. When different Defender tools detect the same malicious activity, they each generate separate alerts. Microsoft Defender XDR then aggregates these alerts into a unified incident. This correlation helps to provide a comprehensive view of the threat.
Credential Access
Let’s review one more alert from the incident summary provided by Copilot for Security: a suspicious credential access event involving an ADFS private key extraction:
CredentialAccess: At 2024-08-06 17:18:20 UTC, an ADFS private key extraction attempt was detected on ‘mb-adfs’ (Windows Server 2022), impacting user ‘adfsadmin’.
You will again use Copilot for Security to get a better understanding of the alert. In the Microsoft Defender XDR portal, the device Timeline very often shows the events that led to the alert being triggered. For this security alert, the device event “jyglecpm.1v0.exe ran an LDAP query” triggered the alert “ADFS private key extraction attempt”, as seen in the next screenshot. To further investigate this device event, one option is to use Copilot for Security to analyze the command line for jyglecpm.1v0.exe. You can perform this directly within the Microsoft Defender XDR portal leveraging the embedded experience of Copilot for Security. Alternatively, in standalone mode, you can paste the command line into the Copilot prompt and request an analysis.
As shown in the next screen capture, Copilot for Security offers a detailed analysis of the command line through its embedded experience.
From the Copilot analysis excerpt shown below, you can sense something isn’t right when jyglecpm.1v0.exe performed the LDAP query:
The provided command line script executes a file named “jyglecpm.1v0.exe” in a hidden window. It then uses the ‘-encodedCommand’ parameter to run a base64-encoded command. The base64 string decodes to a PowerShell script located at ‘C:\Scripts\ExportADFSTokenSigningCert.ps1’. This suggests that the script is likely used to execute a hidden process and then run a PowerShell script, possibly for malicious purposes.
To further investigate, you can first get a sense of what normal LDAP operations look like by prompting Copilot with the following questions in its standalone experience:
The next screenshot displays the information provided back by Copilot:
Based on Copilot’s explanation, you quickly gain an understanding of LDAP and how it operates. LDAP queries are typically initiated by client applications or services that need to access data stored in a directory service. For instance, an email client might perform an LDAP query to retrieve the email addresses of users within an organization.
Now you may be wondering, why would an unfamiliar and rarely seen process like “jyglecpm.1v0.exe” need to run an LDAP query to access data in a directory service? It is especially suspicious with its encoded command line. Let’s continue to consult Copilot with the prompt shown in the next screenshot:
Copilot for Security, leveraging its vast amount of knowledge, comments: “The command line script you provided does not appear to be a normal LDAP operation. It executes …” The full response from Copilot is displayed in the next screen capture:
Without the assistance of Copilot for Security, determining whether a behavior is truly malicious can be time-consuming and labor-intensive. Traditional methods typically involve extensive manual research, multiple clicks, and complex navigation through various user interface elements. Analysts often need to sift through many data points and conduct in-depth analyses to assess the severity of threats, which can delay the identification of true risks.
However, with Copilot for Security, this process becomes significantly more efficient and straightforward. Analysts can quickly ask specific questions about suspicious behavior rather than spending extended periods researching online. By providing additional context and explanations directly within the platform, Copilot enables Tier 1 analysts to quickly understand the nature and severity of an incident. This instant clarity allows analysts to make informed decisions and determine whether escalation to Tier 2 or 3 teams is necessary, thus accelerating threat response and reducing the overall time spent on investigation.
Microsoft Tech Community – Latest Blogs – Read More
How to propagate orbits on a non-rotating axesm globe
I used as my foundation the example found here:
https://www.mathworks.com/help/map/plotting-a-3-d-dome-as-a-mesh-over-a-globe.html
As described, I plotted a radar dome on an axesm globe.
Now I want to display satellite orbits. For that, I used the orbital propagation model found here:
https://www.mathworks.com/help/satcom/ug/comparison-of-orbit-propagators.html
This model uses a TLE file to generate a set of ephemeris tables containing XYZ coordinates for a satellite at 60-second intervals.
[positionSGP4,velocitySGP4] = states(satSGP4);
positionSGP4 = 3×2881
10^7 x
-2.5292 -2.5313 -2.5334 -2.5353 -2.5373 …
0.6924 0.6848 0.6772 0.6695 0.6619
3.1681 3.1583 3.1484 3.1384 3.1284
I took this XYZ coordinate data and rotated it sideways, and added a time column to create an ephemeris table. I converted it to latitude and longitude, then I used it to plot satellite coordinates on my axesm earth globe.
What I found was that the satSGP4 orbit propagator model has a built-in rotating Earth model, while my axesm Earth model does not. A precessing, Sun-synchronous polar orbit works its way around the satSGP4 model every 24 hours, while with my axesm Earth model, it just keeps retracing the same path in longitude. Mind you, there is nothing in the positionSGP4 data to account for orbital precession. Something else is making it happen.
I tried biasing the longitude values based on time and got a reasonable-looking, precessing plot, but it is still out of phase with the SGP4 propagator model.
Question: is there something built into the propagator model that takes into account Earth’s sidereal rotation? How do I apply that to my non-rotating Earth model?
I don’t want to go down the Satellite Communications Toolbox approach. I already have a working radar system up to this point.
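A likely explanation: states returns satellite positions in an Earth-centered inertial frame (GCRF), while a fixed axesm globe is effectively an Earth-fixed (ECEF) view, so the sidereal rotation has to be applied when converting between the two frames. Here is a minimal sketch of one approach, assuming the Satellite Communications Toolbox’s 'CoordinateFrame' name-value option for states and the Aerospace Toolbox function ecef2lla:
% Request Earth-fixed (ECEF) positions so Earth's rotation is already applied
[posECEF, velECEF] = states(satSGP4, 'CoordinateFrame', 'ecef');
% Convert ECEF XYZ (meters) to geodetic latitude/longitude for the axesm globe
lla = ecef2lla(posECEF');    % transpose: ecef2lla expects an N-by-3 matrix
lat = lla(:,1);              % geodetic latitude in degrees
lon = lla(:,2);              % longitude in degrees
plotm(lat, lon)              % draw the ground track on the existing axesm axes
With ECEF positions, the once-per-sidereal-day westward drift of the ground track falls out of the frame conversion, so no manual longitude biasing should be needed.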
orbit propagation, sgp4, rotating earth MATLAB Answers — New Questions
Connection to a database
I have a script in which data from an Oracle database are called. The problem is that the connection isn’t always successful; after 3-4 attempts, the connection succeeds. Do you know why this happens?
Thanks
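While the root cause is unknown, one pragmatic workaround is to retry the connection with a short back-off. This is a sketch only, assuming the Database Toolbox; 'OracleDSN', 'user', and 'password' are placeholders:
% Retry a flaky database connection a few times before giving up
conn = database('OracleDSN', 'user', 'password');    % placeholder data source and credentials
attempt = 1;
while ~isempty(conn.Message) && attempt < 5          % Message is empty when the connection succeeds
    pause(2);                                        % short pause between attempts
    conn = database('OracleDSN', 'user', 'password');
    attempt = attempt + 1;
end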
database, oracle, query MATLAB Answers — New Questions
Image Acquisition Toolbox Support Package for GenICam Interface does not install
Hi all,
I tried to install the Image Acquisition Toolbox Support Package for GenICam Interface, and it told me it is already installed, but it does not appear in the Add-On Manager and is not available.
How can I fix this?
Thanks a lot for any help.
Kind regards,
Norbert
PS: I use R2024a; unfortunately, I cannot set the version in the menu.
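One quick diagnostic, as a sketch: ask MATLAB itself which support packages it considers installed, and compare that against what the Add-On Manager shows:
% List installed support packages as MATLAB sees them
installed = matlabshared.supportpkg.getInstalled;
{installed.Name}'    % display the names of the installed support packages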
image acquisition MATLAB Answers — New Questions
Evidence in Compliance Manager
Hi,
I would like to do internal audits for ISO 27001:2022 using Compliance Manager. I’m having problems adding evidence, such as a security policy, when the Microsoft 365 service is active.
Read More
Missing Security option to unlock sheet with macro
Hi.
I have a sheet with macros, but I cannot find how to enable Excel to run macros on the file. It is saved on SharePoint.
My document’s Options settings do not have a Security option. Any suggestions on how to enable macros in this document?
Best Regards
Geir
Read More
Is there a cost management option for customers to see in Azure Gov CSP?
Is there a cost management option in the Azure portal for customers/partners to see in Azure Gov CSP?
Read More