Month: August 2024
How would I distribute a value among cells?
I have a sheet I’m using to calculate the number of tasks throughout the week. It is distributed based on hours worked each day. Mon-Thurs: 8.5 hours, Fri: 6 hours.
Right now I’m using a simple rounddown/up on the cells in the “Daily” row but as you can see below, they don’t always round properly so that the total at the end is the same as the actual amount I’m starting the day with.
How would I go about making sure they round properly so that the “Tasks:” each day equals the “Total” at the end of the row of days?
Any other improvements you all can think of to improve this would be appreciated. I feel like I’m getting better at this but
Introducing the MDTI Article Digest
The MDTI team is excited to introduce the MDTI Article Digest, a new way for customers to stay up to speed with the latest analysis of threat activity observed across more than 78 trillion daily threat signals from Microsoft’s interdisciplinary teams of experts worldwide. The digest, seamlessly integrated into the MDTI user interface in the threat intelligence blade of Defender XDR, shows users everything published since their last login:
Customers will see that the digest not only notifies users of the latest content but also encourages exploration through a user-friendly sidebar that lists the articles:
With the added convenience of pagination, users can now easily navigate through a wealth of information, ensuring they never miss valuable insights. The digest is also flexible, allowing users to clear notifications, thus tailoring the experience to their preferences.
The digest is a significant step forward in our commitment to delivering exceptional user experiences, and we’re excited to see how it will positively impact the MDTI community. If you’re a licensed MDTI user, log in to Defender XDR today to see the digest located on the right-hand side of the UI, to the left of the TI Copilot embedded experience sidebar.
Conclusion
Microsoft delivers leading threat intelligence built on visibility across the global threat landscape, made possible by protecting Azure and other large cloud environments, managing billions of endpoints and emails, and maintaining a continuously updated graph of the internet. By processing an astonishing 78 trillion security signals daily, Microsoft can deliver threat intelligence in MDTI that provides an all-encompassing view of attack vectors across various platforms, ensuring Sentinel customers have comprehensive threat detection and remediation.
If you are interested in learning more about how MDTI can help you unmask and neutralize modern adversaries and cyberthreats such as ransomware, or want to explore the features and benefits of MDTI, please visit the MDTI product web page.
Also, be sure to contact our sales team to request a demo or a quote. Learn how you can begin using MDTI with the purchase of just one Copilot for Security SCU here.
Microsoft Tech Community – Latest Blogs –Read More
A better Phi-3 Family is coming – multi-language support, better vision, intelligence MOEs
Since the release of Phi-3 at Microsoft Build 2024, it has received considerable attention, especially for the use of Phi-3-mini and Phi-3-vision on edge devices. In the June update, we improved benchmark results and System role support by adjusting the high-quality training data. In the August update, based on community and customer feedback, we bring multi-language support to Phi-3-mini-128k-instruct, multi-frame image input to Phi-3-vision-128k, and the newly added Phi-3 MOE for AI agents. Let’s take a look.
Multi-language support
In previous versions, Phi-3-mini had good English corpus support but weak support for non-English languages. When we tried to ask questions in Chinese, we often received incorrect answers, such as
But in the new version, with the newly added Chinese corpus support, we get much better understanding and responses
Better vision
from PIL import Image

images = []
placeholder = ""
for i in range(1, 22):
    # Collect each extracted keyframe and build its <|image_N|> placeholder tag
    images.append(Image.open("../output/keyframe_" + str(i) + ".jpg"))
    placeholder += f"<|image_{i}|>\n"
Intelligence MOEs
Faster pre-training speed than dense models
Faster inference speed than models with the same number of parameters
Requires a lot of GPU memory because all experts need to be loaded into memory
There are many challenges in fine-tuning, but recent research shows that instruction tuning for mixture-of-experts models has great potential.
sys_msg = """You are a helpful AI assistant, you are an agent capable of using a variety of tools to answer a question. Here are a few of the tools available to you:
- Blog: This tool helps you describe a certain knowledge point and content, and finally write it into Twitter or Facebook style content
- Translate: This is a tool that helps you translate into any language, using plain language as required
- Final Answer: the final answer tool must be used to respond to the user. You must use this when you have decided on an answer.
To use these tools you must always respond in JSON format containing `"tool_name"` and `"input"` key-value pairs. For example, to answer the question, "Build Multi Agents with MOE models", you must use the Blog tool like so:
{
    "tool_name": "Blog",
    "input": "Build Multi Agents with MOE models"
}
Or to translate the question "can you introduce yourself in Chinese" you must respond:
{
    "tool_name": "Translate",
    "input": "can you introduce yourself in Chinese"
}
Remember to output just the final result, in JSON format containing `"agentid"`, `"tool_name"`, `"input"` and `"output"` key-value pairs:
[
    {
        "agentid": "step1",
        "tool_name": "Blog",
        "input": "Build Multi Agents with MOE models",
        "output": "………"
    },
    {
        "agentid": "step2",
        "tool_name": "Translate",
        "input": "can you introduce yourself in Chinese",
        "output": "………"
    },
    {
        "agentid": "final",
        "tool_name": "Result",
        "output": "………"
    }
]
The user's question is as follows.
"""
We can see that by telling the model what skills it needs and how the tasks are arranged, Phi-3 MOE can assign the steps to different tools and complete the related work.
[
    {
        "agentid": "step1",
        "tool_name": "Blog",
        "input": "Generative AI with MOE",
        "output": "Generative AI with MOE (Mixture of Experts) is a powerful approach that combines the strengths of generative models and the flexibility of MOE architecture. This hybrid model can generate high-quality, diverse, and contextually relevant content, making it suitable for various applications such as content creation, data augmentation, and more."
    },
    {
        "agentid": "step2",
        "tool_name": "Translate",
        "input": "Generative AI with MOE is a powerful approach that combines the strengths of generative models and the flexibility of MOE architecture. This hybrid model can generate high-quality, diverse, and contextually relevant content, making it suitable for various applications such as content creation, data augmentation, and more.",
        "output": "基于生成AI的MOE(Mixture of Experts)是一种强大的方法,它结合了生成模型的优势和MOE架构的灵活性。这种混合模型可以生成高质量、多样化且上下文相关的内容,使其适用于各种应用,如内容创建、数据增强等。"
    },
    {
        "agentid": "final",
        "tool_name": "Result",
        "output": "基于生成AI的MOE(Mixture of Experts)是一种强大的方法,它结合了生成模型的优势和MOE架构的灵活性。这种混合模型可以生成高质量、多样化且上下文相关的内容,使其适用于各种应用,如内容创建、数据增强等。"
    }
]
Thoughts on SLMs
Resources
Microsoft Tech Community – Latest Blogs –Read More
Visualizing Data as Graphs with Fabric and KQL
Introduction
For quite a while, I have been extremely interested in data visualization. Over the last few years, I have been focused on ways to visualize graph databases (regardless of where the data comes from). Using force-directed graphs to highlight the similarities or “connected communities” in data is incredibly powerful. The purpose of this post is to highlight the recent work that the Kusto.Explorer team has done to visualize graphs in an Azure Data Explorer database, with data coming from a Fabric KQL Database.
Note: The Kusto.Explorer application used to visualize the graph is currently only supported on Windows.
Background
Azure Data Explorer (ADX) is Microsoft’s fully managed, high-performance analytics engine specializing in near real time queries on high volumes of data. It is extremely useful for log analytics, time-series and Internet of Things type scenarios. ADX is like traditional relational database models in that it organizes the data into tables with strongly typed schemas.
In September 2023, the ADX team introduced extensions to the query language (KQL) that enabled graph semantics on top of the tabular data. These extensions enabled users to contextualize their data and its relationships as a graph structure of nodes and edges. Graphs are often an easier way to present and query complex or networked relationships. These are normally difficult to query because they require recursive joins on standard tables. Examples of common graphs include social networks (friends of friends), product recommendations (similar users also bought product x), connected assets (assembly line) or a knowledge graph.
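To make the idea concrete, here is a minimal, hedged sketch of those graph operators using a small inline dataset (the people/friendships tables and their column names are illustrative examples, not part of the original post):
let people = datatable(name:string, city:string)
[
    "Alice", "Seattle",
    "Bob", "Portland",
    "Carol", "Denver"
];
let friendships = datatable(source:string, target:string)
[
    "Alice", "Bob",
    "Bob", "Carol"
];
// Build a graph from the edge list, attach node properties, then match a friend-of-a-friend pattern
friendships
| make-graph source --> target with people on name
| graph-match (a)-[e1]->(b)-[e2]->(c)
    project person = a.name, friend_of_friend = c.name
The same make-graph pattern is what the flights query later in this post uses, just with airports as nodes and individual flights as edges.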
Fast forward to February 2024, Microsoft Fabric introduced Eventhouse as a workload in a Fabric workspace. This brings forward the power of KQL and Real-Time analytics to the Fabric ecosystem.
So now, I have a large amount of data in Fabric Eventhouse that I want to visualize with a force directed graph…
Let’s get started!
Pre-Requisites
If you want to follow along, you will need a Microsoft Fabric account (Get started with Fabric for Free).
Next, for this post, I used an open dataset from the Bureau of Transportation Statistics. The following files were used:
Aviation Support Tables – Master Coordinate data
When you download this file, you can choose the fields to be included in it. For this example, I only used AirportID, Airport, AirportName, AirportCityName and AirportStateCode.
This Airport data will be loaded directly to a table in KQL.
This file does not necessarily need to be unzipped.
Airline Service Quality Performance 234 (On-Time performance data)
For this blog, I only used the “April 2024” file from this link.
This data will be accessed using a Lakehouse shortcut.
Unzip this file to a local folder and change the extension from “.asc” to “.psv” because this is a pipe-separated file.
In order to use these downloaded files, I uploaded them to the “Files” section of the Lakehouse in my Fabric Workspace. If you do not have a Lakehouse in your workspace, first, navigate to your workspace and select “New” -> “More Options” and choose “Lakehouse” from the Data Engineering workloads. Give your new Lakehouse a name and click “Create”.
Once you have a Lakehouse, you can upload the files by clicking on the Lakehouse to bring up the Lakehouse Explorer. First, in the Lakehouse Explorer, click the three dots next to “Files” and select “New subfolder” and create a folder for “Flights”. Next, click the three dots next to the “Flights” sub-folder and select “Upload” from the drop-down menu and choose the on-time performance file. Confirm that the file is uploaded to files by refreshing the page.
Next, an Eventhouse will be used to host the KQL Cluster where you will ingest the data for analysis. If you do not have an Eventhouse in your workspace, select “New” -> “More Options” and choose “Eventhouse” from “Real-Time Intelligence” workloads. Give your new Eventhouse a name and click “Create”.
Finally, we will use the Kusto.Explorer application (available only for Windows) to visualize the graph. This is a one-click deployment application, so it is possible that it will run an application update when you start it up.
Ingest Data to KQL Database
When the Eventhouse was created, a default KQL database with the same name was created. To get data into the database, click the three dots next to the database name, select “Get Data” -> “Local File”. In the dialog box that pops up, in the “Select or create a destination table”, click the “New table” and give the table a name, in this case it will be “airports”. Once you have a valid table name, the dialog will update to drag or browse for the file to load.
Note: You can upload files in a compressed file format if it is smaller than 1GB.
Click “Next” to inspect the data for import. For the airports data, you will need to change the “Format” to CSV and enable the option for “First row is column header”.
Click “Finish” to load the file to the KQL table.
The airport data should now be loaded into the table, and you can query the table to view the results.
Here is a sample query to verify that the data was loaded:
airports
| take 100;
For the On-Time performance data, we will not ingest it into KQL. Instead, we will create a shortcut to the files in the Lakehouse storage.
Back in the KQL Database explorer, at the top, click on the “+ New -> OneLake shortcut” menu item.
In the dialog that comes up, choose “Microsoft OneLake” and in the “Select a data source type”, choose the Lakehouse where the data was uploaded earlier, and click “Next”
Once the tree view of the OneLake populates the Tables and Files, open the files, and select the subfolder that was created when uploading the On-Time data, and click “Create” to complete the shortcut creation.
Once the shortcut is created, you can view that data by clicking the “Explore your data” and running the following query to validate your data.
external_table('flights')
| count;
Note: When accessing the shortcut data, use the “external_table” and the name of the shortcut that was created. You cannot change the shortcut name.
Query and Visualize with Kusto.Explorer
Now that the data is connected to an Eventhouse database, we want to start to do analytics on this data. Fabric does have a way to run KQL Queries directly, but the expectation is that the results of the query will be a table. The only way to show the graph visualization is to use the Kusto.Explorer.
To connect to the KQL database, you need to get the URI of the cluster from Fabric. Navigating to the KQL Database in Fabric, there is a panel that includes the “Database details”.
Using the “Copy URI” to the right of the Query URI will copy the cluster URI to the clipboard.
In the Kusto.Explorer application, right click the “Connections” and select “Add Connection”
In the popup, paste the Query URI into the “Cluster connection” textbox replacing the text that is there. You can optionally give the connection an alias rather than using the URI. Finally, I chose to use the AAD for security. You can choose whatever is appropriate for your client access.
At this point, we can open a “New Tab” (Home menu) and type in the query like what we used above.
let nodes = airports;
let edges = external_table('flights')
| project origin = Column7, dest = Column8, flight = strcat(Column1, Column2), carrier = Column1;
edges
| make-graph origin --> dest with nodes on AIRPORT
Note: You may need to modify the table names (airports, flights) depending on the shortcut or table name you used when loading the data. These values are case-sensitive.
The points of interest in our graph will be the airports (nodes), and the connections (edges) will be the individual flights that were delayed. I am using the “make-graph” extension in KQL to build a graph of edges from origin to destination, using the three-character airport code as the link.
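As an optional next step (not from the original post, just a hedged sketch that reuses the column names from the query above), the graph-match operator can search this same graph for two-hop itineraries that pass through a single hub airport:
let nodes = airports;
let edges = external_table('flights')
| project origin = Column7, dest = Column8, flight = strcat(Column1, Column2), carrier = Column1;
edges
| make-graph origin --> dest with nodes on AIRPORT
| graph-match (a)-[leg1]->(hub)-[leg2]->(b)
    where hub.AIRPORT == "ATL"
    project from_airport = a.AIRPORT, via = hub.AIRPORT, to_airport = b.AIRPORT, carrier = leg1.carrier
Here “ATL” (Atlanta) is just an example hub; any airport code present in the nodes table would work.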
Visualize with “make-graph”
When this query is run, if the last line of the query is “make-graph”, Kusto.Explorer will automatically pop up a new window titled “Chart” to view the data. In the image below, I chose to change the visualization to a dark theme and then colored the edges based on the “carrier” column of the flight data.
Note: I have zoomed in on the cluster of interest.
If I drag a few of the nodes around, I can start to see there are some nodes (airports) with a lot of orange connections. If I click on an orange link, I quickly learn the orange lines are Delta Flights and the three nodes I pulled out in the image below are Atlanta, Minneapolis, and Detroit.
Conclusion
I started with tables of text-based data and ended with a nice “network” visualization of my flights data. The ability to see the relationships in my data through graph visualization, rather than just reading tables, is invaluable.
Next, I am excited to start to explore visualizations of the data for supply chains and product recommendations.
Microsoft Tech Community – Latest Blogs –Read More
When is subsystem reference a better choice than model reference?
I am trying to understand the difference between "model reference" and "subsystem reference". I came across this documentation page, but no matter how many use-cases I reviewed, I never seem to end up with a "subsystem reference". From the flow chart shown in that page, the only way to conclude with "subsystem reference" is when the block does not have a defined interface, is not re-used, or does not change. These are the antithesis of the arguments for creating any type of subsystem. Can someone comment on practical situations when a "subsystem reference" will be useful over "model reference"? Are there computational costs associated with "model reference"? I am inclined to believe that in practical cases in the industry, "subsystem references" are rarely used.
reference subsystem, model subsystem MATLAB Answers — New Questions
How to combine panes, or how to redock an undocked file in main editor pane
When I am editing multiple files, I have them all open in one undocked editor window, each one with its own tab.
Sometimes I like to rearrange them by dragging the tabs. It’s very sensitive to vertical movements (that’s a bug too, it should not be so sensitive in the vertical direction) so if I move a little bit vertically, the editor creates a separate pane for the tab. I can’t figure out how to move it back to the pane with all my other files. I can undock, then dock all in editor, but the editor just puts it back into its separate pane.
How do I get back to the arrangement that I started with?
editor, undock, dock, pane MATLAB Answers — New Questions
isolated databricks cluster call from synapses or azure datafactory
How can I create a job in Databricks with the isolation (data_security_mode) parameter from Synapse or Azure Data Factory? I cannot find any option that allows me to pass this value as a parameter, and without it I have no access to my Unity Catalog in Databricks.
example:
{
    "num_workers": 1,
    "cluster_name": "…",
    "spark_version": "14.0.x-scala2.12",
    "spark_conf": {
        "spark.hadoop.fs.azure.account.oauth2.client.endpoint": "…",
        "spark.hadoop.fs.azure.account.auth.type": "…",
        "spark.hadoop.fs.azure.account.oauth.provider.type": "…",
        "spark.hadoop.fs.azure.account.oauth2.client.id": "…",
        "spark.hadoop.fs.azure.account.oauth2.client.secret": "…"
    },
    "node_type_id": "…",
    "driver_node_type_id": "…",
    "ssh_public_keys": [],
    "spark_env_vars": {
        "cluster_type": "all-purpose"
    },
    "init_scripts": [],
    "enable_local_disk_encryption": false,
    "data_security_mode": "USER_ISOLATION",
    "cluster_id": "…"
}
MS ACCESS wont read Form Variables as query parameters
I use form variables as parameters in my queries. I have been doing this for 25 years in Access. Recently I have noticed that some of my queries are showing null values for form variables that are present on the form, and I can see there is content in them. I can even see data in the Immediate window when I reference the form fields in question, but it doesn’t work anymore in the query.
What would be the reason that referencing a form variables in a query would not work (would not retain the data in the parameter that references a form variable)?
EX:
SELECT tblLoanTypes.LoanTypeCode, tblSDSTeam.ScheduledAttendancePerWeek, tblSDSTeam.StartDateThisAcademicYear, tblSDSTeam.EndDateThisAcademicYear, tblSDSTeam.AwardPeriodBeginDate, tblSDSTeam.AwardPeriodEndDate, tblSDSTeam.NumPaymentPeriodsInThisAward, tblSDSTeam.StudentProgramEnrollmentId, tblSDSTeam.StudentAcademicYearId, tblSDSTeam.StudentAwardId, tblSDSTeam.ExportDate, [Forms]![frmMain]![cRecId] AS Expr1, tblSDSTeam.ScheduledAwardAmount, tblSDSTeam.ScheduledAwardAmount, "R" AS Expr2, [Forms]![frmMain]![cSeq] AS Expr3, Left(Trim([StudentId]),9) AS Expr4, tblSDSTeam.AwardYear
FROM tblSDSTeam INNER JOIN tblLoanTypes ON tblSDSTeam.AwardType = tblLoanTypes.LoanType
WHERE (((tblSDSTeam.StudentProgramEnrollmentId)=[Forms]![frmMain]![nStudentProgramEnrollmentId]) AND ((tblSDSTeam.StudentAcademicYearId)=[Forms]![frmMain]![nStudentAcademicYearId]) AND ((tblSDSTeam.StudentAwardId)=[Forms]![frmMain]![nStudentAwardId]));
The query above worked fine until a month ago, but now none of the form variables are making it into the query with data in them.
I compacted and repaired, and re-created the tables and queries from scratch, with no luck. Any help is appreciated.
Microsoft® Access® for Microsoft 365 MSO (Version 2408 Build 16.0.17928.20066) 64-bit
Windows 11 pro with 32 GIG Ram
How to recursively go through all directories and sub directories and process files?
So I have a single directory with sub-directories, those have their own sub-directories, and then files (it’s a mess). I want to go to each folder at the end (I don’t care about the intermediate directories) and process the files in it (be it .txt, .jpg, etc.). I kind of have a way to list all folders using the dirwalk function from here – https://www.mathworks.com/matlabcentral/fileexchange/32036-dirwalk-walk-the-directory-tree – but I don’t know two things. First, how do I figure out whether a path (the function outputs all possible paths, that’s what I understood) is not a directory with sub-directories but a directory with images or text files, so I can process them? Second, after choosing a path, how do I loop through its files? Or is there an easier way I’m missing? Any advice would be appreciated!
Example of direc structure:
Main Directory>
>> Dir 1
>> fold1
>>txt files
>>fold2
>>files
>>Dir 2
…etc
directory, file processing MATLAB Answers — New Questions
How to Custom Export Excel using power automate
Hi Everyone,
I have a SharePoint list with multiple values.
I want to export it to Excel using Power Automate with custom columns: **bleep**, Name, List App (deleted).
The flow uses “Create HTML table”.
But the results obtained did not match expectations. I tried selecting 2 options: Chrome and Mozilla Firefox.
Is there something wrong with the flow, or do I have to add code to the Power App? If so, what code should I add?
Swappable boot drives with Intune
I have a situation where multiple users get hard drives assigned to them to use in our classroom lab PCs that have drive trays where they insert their assigned drive. I have been testing how Intune handles the new drive being inserted.
When the first drive is built from the on-prem deployment system, it is recognized by Intune and all is well. When the second student arrives and builds their drive, all is well. The problem arises when the first student comes back to boot their drive. Intune flags it as non-compliant and will not pull new policy. They can still use the machine but Intune freaks out a bit. When the second drive comes back, all is well again in Intune.
I realize this is a strange scenario, but I thought someone might have a clever idea of how to get Intune to recognize both builds as compliant. I’m not sure if this is just the way it registers the hardware IDs or if I’m fighting a number of issues because this is not what it is designed for.
Azure OpenAI FedRAMP High + M365 Copilot Targeting Sept 2025 for GCC High/DOD
We’re excited to share two major updates for our public sector and defense customers:
Azure OpenAI Service is now FedRAMP High authorized for Azure Government. This approval allows government agencies to securely leverage advanced AI capabilities, including GPT-4o, within their Azure Government environment.
For the first time, we’re targeting a General Availability (GA) of September 2025 for Microsoft 365 Copilot in GCC High and DOD environments (pending government authorization). Copilot will deliver powerful AI tools tailored for decision-making, automation, and enhanced collaboration, all while meeting the strict compliance and security needs of our defense and government customers.
For more information on these updates and how they can impact your workflows, check out the full blog post
Let’s discuss how you’re planning to use these AI advancements in your environments!
Outlook
Hi, recently my Outlook account was empty, all folders, emails, and information were gone. Right now I am only receiving new emails. Nothing was deleted, it was just empty. How can I restore everything?
Thank you very much.
Microsoft Copilot for Microsoft 365 GCC GA Update
Exciting news for Federal Civilian and SLG customers! Microsoft Copilot for Microsoft 365 GCC is set for General Availability on October 15, 2024 (pending US Government authorization).
Key features coming in October:
AI-powered tools in Word, Excel, Outlook, and Teams
Graph-grounded chat for quick data access
Intelligent meeting recaps
March 2025 will bring even more capabilities like real-time meeting summaries and task automation.
Security & Compliance: Fully aligned with Microsoft 365 GCC standards.
📢 Read the full blog for more details!
Feature Deep Dive: Export for OneDrive Sync Health Reports
We are excited to announce the Public Preview for exporting your Sync Health Reports data! This feature allows you to seamlessly integrate with other datasets like Azure Active Directory (AAD), Exchange, and Teams to create actionable insights and to automate your workflows.
What are the OneDrive Sync Health Reports?
When managing the health and data of hundreds or thousands of desktops in your organization, it can be challenging to know if your users are syncing their content to the cloud and that their data is protected. That’s where the Sync Health Reports come in.
The OneDrive Sync Health Reports dashboard provides insights into the health of the devices in your organization so you can proactively maintain your organization’s information and data. These health reports contain information for individual devices including if important folders are being backed up, if any sync errors have occurred, and if there are any health issues or advisories that need attention. These insights can help you easily address issues and ensure your users’ files are protected and synchronizing with the cloud.
How does export work for the OneDrive Sync Health Reports?
The data is exported via Microsoft Graph Data Connect, enabling seamless integration with other datasets such as Azure Active Directory (AAD), Exchange, and Teams data. This integration opens the door to actionable insights and automation that can transform how you manage OneDrive sync health across your organization.
Some of the valuable questions you can answer using the exported data are listed below (a sample query sketch follows the list):
How many devices have opted into Known Folder Move (KFM)?
Which folders are most selected for Known Folder Move (KFM)?
What is the breakdown of unhealthy devices by OS version?
What is the breakdown of unhealthy devices by OneDrive Sync client version?
Is the device for user X reporting as healthy?
How many devices are showing errors?
Which types of errors are making most devices unhealthy?
Which devices are showing a specific error?
What are the errors occurring on a specific device?
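As a purely illustrative, hedged sketch: if you land the exported data somewhere KQL-queryable (for example, an Azure Data Explorer cluster), the first question above could be answered with a query along these lines. The table name SyncHealthDevices and the column KFMOptedIn are assumptions for illustration only; the actual exported schema may differ.
SyncHealthDevices                                  // hypothetical table holding the exported per-device records
| summarize DeviceCount = count() by KFMOptedIn    // hypothetical boolean flag for Known Folder Move opt-in
| order by DeviceCount desc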
Benefits at a Glance
Comprehensive insights and actionable data: Get a holistic view of sync health across all devices and also join with other datasets for in-depth analysis and actionable insights.
Enhanced monitoring: Detect spikes in errors, monitor Known Folder Move (KFM) rollout, and more.
Automation potential: Leverage the power of automation to streamline your OneDrive sync health management.
Getting Started
Ready to dive in? Here’s how you can get started with the new OneDrive Sync Health Data Export feature:
Set up the OneDrive sync health dashboard: Configure the devices in your organization to report device status. Learn more.
Set up Microsoft Graph Data Connect: Ensure you have the necessary permissions and setup for Microsoft Graph Data Connect.
Configure your Azure storage account: Make sure your Azure storage account is ready to receive the data.
Initiate the export: Use the OneDrive admin center or PowerShell to start exporting the sync health data.
Analyze and act: Once the data is in your Azure storage account, you can begin analyzing it and integrating it with other datasets for deeper insights.
For detailed instructions and support, visit our guide Step-by-step: OneDrive Sync Health.
We hope this new feature empowers you to manage OneDrive sync health more effectively and keep your organization’s data secure and synchronized. As always, we appreciate your feedback and look forward to hearing how you’re using this new capability.
Microsoft Tech Community – Latest Blogs –Read More
Comprehensive coverage and cost-savings with Microsoft Sentinel’s new data tier
As digital environments grow across platforms and clouds, organizations are faced with the dual challenges of collecting relevant security data to improve protection and optimizing costs of that data to meet budget limitations. Management complexity is also an issue as security teams work with diverse datasets to run on-demand investigations, proactive threat hunting, ad hoc queries and support long-term storage for audit and compliance purposes. Each log type requires specific data management strategies to support those use cases. To address these business needs, customers need a flexible SIEM (Security Information and Event Management) with multiple data tiers.
Microsoft is excited to announce the public preview of a new data tier Auxiliary Logs and Summary Rules in Microsoft Sentinel to further increase security coverage for high-volume data at an affordable price.
Auxiliary Logs supports high-volume data sources including network, proxy and firewall logs. Customers can get started with Auxiliary Logs today in preview at no cost. We will notify users in advance before billing begins at $0.15 per GB (US East). Initially, Auxiliary Logs allow long-term storage; however, on-demand analysis is limited to the last 30 days, and queries run against a single table only. Customers can continue to build custom solutions using Azure Data Explorer; however, the intention is that Auxiliary Logs cover most of those use cases over time and are built into Microsoft Sentinel, so they include management capabilities.
Summary Rules further enhance the value of Auxiliary Logs. Summary Rules enable customers to easily aggregate data from Auxiliary Logs into a summary that can be routed to Analytics Logs for access to the full Microsoft Sentinel query feature set. The combination of Auxiliary logs and Summary rules enables security functions such as Indicator of Compromise (IOC) lookups, anomaly detection, and monitoring of unusual traffic patterns. Together, Auxiliary Logs and Summary Rules offer customers greater data flexibility, cost-efficiency, and comprehensive coverage.
Some of the key benefits of Auxiliary Logs and Summary Rules include:
Cost-effective coverage: Auxiliary Logs are ideal for ingesting large volumes of verbose logs at an affordable price-point. When there is a need for advanced security investigations or threat hunting, Summary Rules can aggregate and route Auxiliary Logs data to the Analytics Log tier delivering additional cost-savings and security value.
On-demand analysis: Auxiliary Logs supports 30 days of interactive queries with limited KQL, facilitating access and analysis of crucial security data for threat investigations.
Flexible retention and storage: Auxiliary Logs can be stored for up to 12 years in long-term retention. Access to these logs is available through running a search job.
Microsoft Sentinel’s multi-tier data ingestion and storage options
Microsoft is committed to providing customers with cost-effective, flexible options to manage their data at scale. Customers can choose from the different log plans in Microsoft Sentinel to meet their business needs. Data can be ingested as Analytics, Basic and Auxiliary Logs. Differentiating what data to ingest and where is crucial. We suggest categorizing security logs into primary and secondary data.
Primary logs (Analytics Logs): Contain data that is of critical security value and are utilized for real-time monitoring, alerts, and analytics. Examples include Endpoint Detection and Response (EDR) logs, authentication logs, audit trails from cloud platforms, Data Loss Prevention (DLP) logs, and threat intelligence.
Primary logs are usually monitored proactively, with scheduled alerts and analytics, to enable effective security detections.
In Microsoft Sentinel, these logs would be directed to Analytics Logs tables to leverage the full Microsoft Sentinel value.
Analytics Logs are available for 90 days to 2 years, with 12 years long-term retention option.
Secondary logs (Auxiliary Logs): Are verbose, low-value logs that contain limited security value but can help draw the full picture of a security incident or breach. They are not frequently used for deep analytics or alerts and are often accessed on-demand for ad-hoc querying, investigations, and search.
These include NetFlow, firewall, and proxy logs, and should be routed to Basic Logs or Auxiliary Logs.
Auxiliary Logs are appropriate when using Logstash, Cribl, or similar tools for data transformation.
In the absence of transformation tools, Basic Logs are recommended.
Both Basic and Auxiliary Logs are available for 30 days, with long-term retention option of up to 12 years.
Additionally, for extensive ML, complex hunting tasks, and frequent access to extensive long-term retention, customers have the choice of ADX. But this adds additional complexity and maintenance overhead.
Microsoft Sentinel’s native data tiering offers customers the flexibility to ingest, store and analyze all security data to meet their growing business needs.
Use case example: Auxiliary Logs and Summary Rules Coverage for Firewall Logs
Firewall event logs are a critical network log source for threat hunting and investigations. These logs can reveal abnormally large file transfers, volume and frequency of communication by a host, and port scanning. Firewall logs are also useful as a data source for various unstructured hunting techniques, such as stacking ephemeral ports or grouping and clustering different communication patterns.
In this scenario, organizations can now easily send all firewall logs to Auxiliary Logs at an affordable price point. In addition, customers can run a Summary Rule that creates scheduled aggregations and route them to the Analytics Logs tier. Analysts can use these aggregations for their day-to-day work and if they need to drill down, they can easily query the relevant records from Auxiliary Logs. Together Auxiliary Logs and Summary Rules help security teams use high volume, verbose logs to meet their security requirements while minimizing costs.
Figure 1: Ingest high volume, verbose firewall logs into an Auxiliary Logs table.
Figure 2: Create aggregated datasets on the verbose logs in Auxiliary Logs plan.
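For illustration, a summary rule over a verbose firewall table might run an hourly aggregation along the following lines; the table name Firewall_CL and its columns are hypothetical placeholders rather than the schema shown in the screenshots above:
Firewall_CL
| summarize Connections = count(),
            TotalSentBytes = sum(SentBytes_d),
            DistinctDestinations = dcount(DestinationIP_s)
    by SourceIP_s, DestinationPort_d, bin(TimeGenerated, 1h)
The aggregated results land in an Analytics Logs table, where analysts can run IOC lookups and anomaly detections against the summarized records, drilling back into the Auxiliary Logs table only when they need the underlying detail.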
Customers are already finding value with Auxiliary Logs and Summary Rules as seen below:
“The BlueVoyant team enjoyed participating in the private preview for Auxiliary logs and are grateful Microsoft has created new ways to optimize log ingestion with Auxiliary logs. The new features enable us to transform data that is traditionally lower value into more insightful, searchable data.”
Mona Ghadiri
Senior Director of Product Management, BlueVoyant
“The Auxiliary Log is a perfect fusion of Basic Log and long-term retention, offering the best of both worlds. When combined with Summary Rules, it effectively addresses various use cases for ingesting large volumes of logs into Microsoft Sentinel.”
Debac Manikandan
Senior Cybersecurity Engineer, DEFEND
Looking forward
Microsoft is committed to expanding the scenarios covered by Auxiliary Logs over time, including data transformation and standard tables, improved query performance at scale, billing and more. We are working closely with our customers to collect feedback and will continue to add more functionality. As always, we’d love to hear your thoughts.
Learn more
Log retention plans in Microsoft Sentinel | Microsoft Learn
Plan costs and understand pricing and billing – Microsoft Sentinel | Microsoft Learn
What’s new in Microsoft Sentinel | Microsoft Learn
Reduce costs for Microsoft Sentinel | Microsoft Learn
When to use Auxiliary Logs in Microsoft Sentinel | Microsoft Learn
Aggregate Microsoft Sentinel data with summary rules | Microsoft Learn
Microsoft Sentinel Pricing | Microsoft Azure
Microsoft Tech Community – Latest Blogs –Read More
Partner Case Study Series | Cloud of Things & Marketplace Rewards
Cloud of Things, a Microsoft partner empowering clients to maximize their ROI with IoT solutions
A Microsoft partner since 2018, Cloud of Things creates innovative ecosystems of IoT connected products that are manageable at scale. Based in Israel and the United States, the company works globally with product and utility companies to make their products and services smarter and more profitable by using its DeviceTone Suite on Microsoft Azure Marketplace. Leveraging adaptive, low-footprint firmware and electronics on the edge with a robust device management and configuration system in the cloud, Cloud of Things enables cost-effective edge hardware and ensures better ROI in mass-produced IoT products and implementations.
Marketplace Rewards benefits raise awareness and deliver results
Cloud of Things used Marketplace Rewards benefits to further promote its IoT solutions in the Azure Marketplace. The company wanted to create greater awareness of DeviceTone, both internally by educating and motivating Microsoft sales professionals to sell its IoT solutions and externally by rolling out Azure cloud-delivered IoT solutions to Cloud of Things’ channel partners and direct customers.
“Working with the Microsoft Marketplace Rewards team, we’ve been able to reach more prospects, meet more partners, develop new offerings in the Connected Field Service space, and drive more awareness of the benefits of our solutions running on the Azure cloud. Those benefits include a faster time to market, performance and resilience based on the Azure infrastructure, full-stack cybersecurity, and an ability to start small but scale big. Since engaging, we’ve seen a 5X increase in customer leads,” said David Chouraqui, Vice President, Business Development, Cloud of Things.
Continue reading here
Explore all case studies or submit your own
Click here for another case study on Cloud of Things using Azure IoT
Microsoft Tech Community – Latest Blogs –Read More
Harnessing the power of KQL Plugins for enhanced security insights with Copilot for Security
Overview
Copilot for Security is a generative-AI powered security solution that empowers security and IT professionals to respond to cyber threats, process signals, and assess risk exposure at the speed and scale of AI. As we build Copilot for Security, we are guided by four principles that shape the product’s vision: Intuitiveness, Customizability, Extensibility and adherence to Responsible AI principles. Plugins are a great example of how we bring the principles of customizability and extensibility alive within the product. In line with this, Copilot for Security allows customers to bring in signals from not just Microsoft solutions but also several third-party security solutions via plugins. Today, the platform supports three types of plugins: API, GPT and KQL-based plugins. KQL-based plugins can ingest insights into Copilot from three sources: Log Analytics workspaces (including data from custom tables), M365 Defender XDR, and Azure Data Explorer (ADX) clusters.
Why use KQL plugins?
To tap into vast amounts of data already available in data stores across Log Analytics, Microsoft 365 Defender XDR and Azure Data explorer clusters.
To bring in highly customized insights into Copilot for Security. Kusto is a highly versatile query language that gives you tremendous flexibility to customize the signals to bring into Copilot for Security.
To accelerate value realization from your Copilot for Security investment by tapping into data and queries you already have in your environment coupled with the low skill barrier required to build the plugins.
To tap into data from third party solutions within tables such as CommonSecurityLog.
To leverage built-in “on behalf of” authentication and authorization capabilities that align to existing RBAC setting controlling access to the target data sources.
In this blog we shall focus on how you can use a KQL-based plugin to bring in insights from Microsoft Sentinel-enabled Log Analytics workspaces.
Use case summary
To showcase how one can leverage KQL-based plugins to tap into the vast amounts of security insights contained within Sentinel-enabled Log Analytics workspaces, we will build a query based on Microsoft Sentinel’s UEBA anomaly insights. Sentinel’s UEBA engine plays a unique and valuable role in sifting through large amounts of raw data to build baselines of expected behavior within an Azure tenant across historical time horizons. Based on these baselines, anomalies can then be detected and surfaced for eventual ingestion to supplement Copilot for Security workflows. As a result, the KQL queries that one needs to build on the normalized insights generated by UEBA are typically far simpler than they would be if one were to build anomaly detection queries on top of raw data targeting similar outcomes.
Connecting to a Log Analytics workspace
To connect to the Sentinel-enabled Log Analytics workspace, you will need to specify four required connection parameters within the YAML or JSON-based plugin manifest file i.e. Tenant ID, Log Analytics Workspace name, Azure Subscription ID and the name of the Resource group that hosts your Log Analytics workspace as captured in below image:
Once the workspace parameters are defined in the Settings section under the plugin Descriptor, they are referenced in the SkillGroups section, where the additional parameter “Target” is also specified. Given that in this case we are targeting a Sentinel workspace, the target is specified as “Sentinel”. The elements within the curly brackets make it possible for these inputs to be provided in the Copilot plugin setup UI, as opposed to within the plugin manifest as was previously the case:
For KQL-based plugins user access is handled by Entra ID, and permissions will be scoped to match the existing access the user has in the Sentinel Log Analytics workspace the plugin is connecting to. In other words, authentication and authorization occur “on behalf of” the signed in user using the custom plugin.
Parameters can also be used to capture specific user input, making the plugin further customizable. In our example, we are using parameters to take in a time range and an investigation priority value from the end user.
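For example (a hedged sketch, not part of the manifest shown below), an input parameter declared under Inputs can be referenced inside the KQL Template with the same double-curly-brace placeholder syntax used for the workspace settings; here the InvestiGationPriority input is assumed to filter on the BehaviorAnalytics InvestigationPriority column:
| where InvestigationPriority >= toint('{{InvestiGationPriority}}')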
Sample use case: Detect unusual application and/or user activity within an Azure tenant
With the basics covered, let us now dive into a specific use case that will showcase how we can leverage the KQL plugin architecture to pull in synthesized insights into Copilot for Security, giving us insights about anomalous behavior detected around admin users and applications. To accomplish this use case, you will need to have the following pre-requisites in place:
An active instance of Microsoft Sentinel with UEBA enabled.
At least the following data sources ingesting into the UEBA engine: SigninLogs, Audit Logs, AzureActivity and ARM logs.
Here we are leveraging Sentinel UEBA’s built-in capability to build the expected baseline over time from large amounts of raw data, and then detect anomalies that deviate from that baseline; in this case, what has been established as the norm for periods ranging between 10 and 180 days, depending on the UEBA insight. The KQL query will then look back within a period that you specify and check for anomalies, depending on the skill invoked. This plugin defines two skills: AnomalousAppActivity, which surfaces app-related anomalies, and AnomalousAdminActivity, which surfaces admin user-related activity, as detailed below:
AnomalousAppActivity. The first time a user used an app, an uncommonly used app, an app uncommonly used among user peers, an app that is observed in a tenant for the first time or an app that is uncommonly used in the tenant.
AnomalousAdminActivity
Activity performed for the first time, uncommon by the user, uncommon among the user’s peers, uncommon in the tenant, from an uncommon country or a user connecting from a country seen for the first time, or user accessing a resource for the first time or accessing a resource that is uncommon among their peers, whether the account has been dormant, is a local admin or is a new account.
The full list of Sentinel UEBA enrichments that can be used in KQL queries are detailed in this document.
Skill description
Pay particular attention to the Description section, as this needs to be as unambiguous as possible to avoid skill collision (a situation whereby the Copilot planner selects the wrong skill because the description of one plugin is very similar to that of one or more active plugins).
Adding a second skill/query to the same KQL-based plugin manifest
An additional capability available within KQL-based plugins is the ability to add more skills by specifying an additional query that brings in a different but related set of insights, making plugin building more efficient. To do so, add a new section below the first query, starting with the name of the new skill, as shown below:
Full plugin manifest
Note: The code below has been reformatted for presentation within the blog. Copy-pasting it directly into a YAML editor may introduce formatting issues that will need to be addressed before you can upload the manifest into Copilot.
Descriptor:
  Name: AnomalousAppandAdminUserActivity
  DisplayName: Anomalous Application and Admin User Activity
  Description: Uses UEBA normalized insights in Sentinel UEBA to identify applications observed for the first time in the tenant over the last 30 days. It applies to profiled activities across ARM, Azure sign-in, and audit logs.
  Settings:
    - Name: TenantId
      Required: true
    - Name: WorkspaceName
      Required: true
    - Name: SubscriptionId
      Required: true
    - Name: ResourceGroupName
      Required: true
  SupportedAuthTypes:
    - None

SkillGroups:
  - Format: KQL
    Skills:
      - Name: AnomalousAppActivity
        DisplayName: Anomalous activity detected around application
        Description: Uses Sentinel UEBA to identify unusual or anomalous actions such as the first time a user used an app, an app uncommonly used by the user or among their peers, an app observed in the tenant for the first time, or an app uncommonly used in the tenant
        Inputs:
          - Name: fromDateTime
            Description: The start of the lookback window
            Required: true
          - Name: toDateTime
            Description: The end of the lookback window
            Required: true
        Settings:
          Target: Sentinel
          # The ID of the AAD Organization that the Sentinel workspace is in.
          TenantId: '{{TenantId}}'
          # The ID of the Azure Subscription that the Sentinel workspace is in.
          SubscriptionId: '{{SubscriptionId}}'
          # The name of the Resource Group that the Sentinel workspace is in.
          ResourceGroupName: '{{ResourceGroupName}}'
          # The name of the Sentinel workspace.
          WorkspaceName: '{{WorkspaceName}}'
        Template: |-
          let fromDateTime = datetime('{{fromDateTime}}');
          let toDateTime = datetime('{{toDateTime}}');
          BehaviorAnalytics
          | where datetime_utc_to_local(TimeGenerated, "US/Eastern") between (fromDateTime .. toDateTime)
          | project-away TenantId, Type, SourceRecordId, EventSource, TimeProcessed
          | where ActivityInsights.FirstTimeUserUsedApp == true or
              ActivityInsights.AppUncommonlyUsedByUser == true or
              ActivityInsights.AppUncommonlyUsedAmongPeers == true or
              ActivityInsights.FirstTimeAppObservedInTenant == true or
              ActivityInsights.AppUncommonlyUsedInTenant == true
      - Name: AnomalousAdminActions
        DisplayName: Anomalous administrative actions performed by user
        Description: Uses Sentinel UEBA to identify users performing activities that are performed for the first time, uncommon for the user, uncommon among the user's peers, uncommon in the tenant, or performed from an uncommon country or a country the user is connecting from for the first time, as well as users accessing a resource for the first time or accessing a resource that is uncommon among their peers
        Inputs:
          - Name: fromDateTime
            Description: The start of the lookback window
            Required: true
          - Name: toDateTime
            Description: The end of the lookback window
            Required: true
          - Name: InvestigationPriority
            Description: Calculated priority for investigation between 1 and 10
            Required: false
        Settings:
          Target: Sentinel
          # The ID of the AAD Organization that the Sentinel workspace is in.
          TenantId: '{{TenantId}}'
          # The ID of the Azure Subscription that the Sentinel workspace is in.
          SubscriptionId: '{{SubscriptionId}}'
          # The name of the Resource Group that the Sentinel workspace is in.
          ResourceGroupName: '{{ResourceGroupName}}'
          # The name of the Sentinel workspace.
          WorkspaceName: '{{WorkspaceName}}'
        Template: |-
          let fromDateTime = datetime('{{fromDateTime}}');
          let toDateTime = datetime('{{toDateTime}}');
          BehaviorAnalytics
          | where datetime_utc_to_local(TimeGenerated, "US/Eastern") between (fromDateTime .. toDateTime)
          | project-away TenantId, Type, SourceRecordId, EventSource, TimeProcessed
          | where ActivityType =~ "Administrative"
          | where isnotempty(UserName)
          | where ActivityInsights.FirstTimeUserPerformedAction == true or
              ActivityInsights.FirstTimeActionPerformedInTenant == true or
              ActivityInsights.ActionUncommonlyPerformedByUser == true or
              ActivityInsights.ActionUncommonlyPerformedAmongPeers == true or
              ActivityInsights.FirstTimeUserAccessedResource == true or
              ActivityInsights.CountryUncommonlyConnectedFromByUser == true
Using the plugin
Upload the custom plugin manifest file by following the steps documented here:
Once configured, you can invoke the plugin via natural language or by calling the skills directly, depending on how much control you want over the specificity of the prompt you provide to Copilot for Security. Note: The AnomalousAppActivity skill has a default investigation priority of 5 or higher (>= 5).
Method 1: Sample natural language prompt
Method 2: Sample direct skill invocation prompt
Sample prompts
show me which users performed an anomalous administrative activity for the first time over the past 14 days, include the investigation priority if 3 or higher. Include the blast radius
show me applications exhibiting anomalous behavior over the last 14 days
Pro tips:
Given that KQL queries must first be executed in real time on the Sentinel side, it is recommended that queries be as optimized as possible. To improve query performance, follow the existing best practices published here:
Optimize log queries in Azure Monitor – Azure Monitor | Microsoft Learn
Best practices for Kusto Query Language queries – Azure Data Explorer & Real-Time Intelligence | Microsoft Learn
Use the project-away operator to eliminate columns that you do not need ingested into Copilot, e.g. TenantId and SourceRecordId in this use case (see the sketch after these tips).
Using Azure Monitor’s ingestion-time transformation capabilities is another strategy for achieving efficiency, by minimizing real-time calculation of fields with operators such as extend, or Regex operations, at query time:
Transform or customize data at ingestion time in Microsoft Sentinel (preview) | Microsoft Learn
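As a rough sketch, an ingestion-time transformation defined in a data collection rule is itself a KQL statement over the virtual source table; the column names used here are illustrative assumptions, not taken from this use case:

// Illustrative transformationKql for a data collection rule (column names are assumptions).
// Pre-computes a lowercase UPN and drops an unneeded column at ingestion time,
// so skill queries do not have to repeat this work with extend at query time.
source
| extend UserPrincipalNameLower = tolower(UserPrincipalName)
| project-away SourceSystem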
Use a code editor such as Visual Studio Code that can help you spot formatting issues; YAML is sensitive to tabs, indentation, and hidden characters, which can prevent the plugin from uploading successfully into Copilot.
Use reasonably short lookback periods in the KQL query to narrow your search and avoid returning too many records, which could exceed the context window limits and lead to an error.
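Putting the project-away and lookback tips together, a skill template can bound the time window and trim columns up front; the 14-day window and the take limit below are just illustrative values:

// Keep the lookback short and drop columns Copilot does not need to ingest.
BehaviorAnalytics
| where TimeGenerated > ago(14d)
| project-away TenantId, Type, SourceRecordId, EventSource, TimeProcessed
| where ActivityInsights.FirstTimeAppObservedInTenant == true
| take 100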
Conclusion
KQL plugins present a relatively simple and scalable way to leverage the existing repositories of proven KQL queries within the Microsoft security ecosystem. These can then be used as a basis for bringing AI enrichment onto security data already present in Sentinel-enabled Log Analytics workspaces, while taking advantage of specialized capabilities such as UEBA for anomaly detection and other Sentinel-related use cases. Give it a go and give us your feedback so we can continuously improve the product for your benefit.
Additional resources
Get started with Microsoft Copilot for Security | Microsoft Learn
Advanced threat detection with User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel | Microsoft Learn
Kusto Query Language (KQL) plugins in Microsoft Copilot for Security | Microsoft Learn
Create your own custom plugins in Microsoft Copilot for Security | Microsoft Learn
Microsoft Tech Community – Latest Blogs –Read More
Generally Available Now: Informatica Intelligent Data Management Cloud – An Azure Native ISV Service
We are happy to launch Informatica Intelligent Data Management Cloud – An Azure Native ISV Service as a generally available offering. This is the result of a close collaboration between Microsoft and Informatica. The integration enables creation and management of Informatica organizations and serverless runtime environments within the Azure management console. With serverless runtime environments, you are not required to instantiate VMs in your Azure tenants to install Informatica secure agents; you can focus on creating your data management tasks without worrying about managing the infrastructure. You can find the Informatica announcement here.
Azure Native ISV Services enable you to easily provision, manage, and tightly integrate ISV software and services on Azure. By leveraging the power of Azure, Informatica Intelligent Data Management Cloud – An Azure Native ISV Service offers you a range of benefits, including scalability, flexibility, and cost-effectiveness. It also provides secure connectivity to Informatica’s IDMC portal using single sign-on via the Azure portal, CLI, and SDK. You can easily sign up for this service via the Azure Marketplace.
Managing Informatica Secure Agent infrastructure in your Azure tenant can be an elaborate and time-consuming task, requiring expertise in areas such as networking, security, and scaling. By using serverless runtime environments, you can focus on integration mappings while Informatica manages the underlying infrastructure.
Key Capabilities
Seamless onboarding: You can create an Informatica organization or link an existing Informatica organization in an Azure pod from the Azure portal, CLI, or SDK, like any other Azure resource. For example, you can discover the service from the search bar in the Azure portal.
Figure 1: Informatica IDMC – Azure Native ISV Service in Azure portal
Figure 2: Creating the IDMC Azure Native ISV Service from the Azure portal
Figure 3: Linking an existing IDMC Organization in Azure pod to Azure Native ISV Service from the Azure portal
Single sign-on to IDMC portal: An auto-generated SSO link securely redirects you to the IDMC portal.
Figure 4: Newly created Informatica organization resource with SSO URL on Overview page.
Figure 5: The SSO URL redirects to the newly created Informatica organization in the IDMC portal
Management of IDMC serverless runtime environments: Within the Azure portal, you can create and manage Informatica Cloud Data Integration Advanced Serverless, a service of IDMC, to eliminate the need for creating VMs to run secure agents.
Figure 6: Serverless runtime created from Azure portal with management options highlighted
Figure 7: Serverless runtime in Informatica portal
Azure SDK and CLI integration: You can easily manage IDMC resources from the Azure Java, JavaScript, Python, Go, and .NET SDKs and from command-line interfaces such as Azure CLI and PowerShell. This enables you to automate repetitive tasks and complex processes using scripts and provides greater flexibility and customization in managing Azure resources. CLIs can also easily be integrated into continuous integration/continuous deployment (CI/CD) pipelines, enabling seamless integration with DevOps practices and workflows.
Get started with Informatica Intelligent Data Management Cloud – An Azure Native ISV Service
Set up and subscribe to Informatica Intelligent Data Management Cloud – An Azure Native ISV Service from the Azure Marketplace.
Follow the documentation to create an Informatica organization and runtime environments to deploy your integration mappings.
Microsoft Tech Community – Latest Blogs –Read More
error: unknown type name ‘mxArray’
I added non-inlined S-functions to a Simulink model, built the code, and am now trying to generate an A2L file.
I am running into these errors. In what file do I define the type names and how do I define them?
Thanks,
simstruc.h:: error: unknown type name ‘mxArray’
simstruc.h:: error: unknown type name ‘mxArray’
simstruc.h:: error: unknown type name ‘_ResolveVarFcn’
error: unknown type name ‘_ssFcnCallExecArgInfo’
simstruc.h:: error: unknown type name ‘_ssFcnCallExecArgInfo’
error: unknown type name ‘_ssFcnCallExecArgs’
simstruc.h: error: unknown type name ‘SSSimulinkFunctionPtr’
MATLAB Answers — New Questions