Month: October 2024
Forward Euler function to solve ODEs
According to the assignment I’m working on, I have to:
(a) Set (tstart, tfinal, y0, f, nsteps) to be the inputs and an output vector yvec.
i. tstart is the starting time.
ii. tfinal is the final time.
iii. f is an anonymous function handle that defines the right-hand side of whatever ODE we’re studying.
iv. y0 is the initial condition for the given ODE.
v. nsteps is the number of timesteps you’re taking.
vi. Name the function forward_euler.
(b) Define a stepsize dt using tstart, tfinal and nsteps. Then use these to construct a discretized time domain vector tvec.
(c) Set t(1) = tstart and y(1) = y0.
(d) Construct a for-loop that performs the Forward Euler algorithm that was discussed in class.
(e) (Optional but very helpful for plotting): instead of returning only yvec, you could return [tvec, yvec].
MATLAB Answers — New Questions
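A minimal sketch of the requested function, assuming a scalar ODE and following parts (a)–(e) above (variable names come from the assignment; the [tvec, yvec] return from part (e) is included):

```matlab
function [tvec, yvec] = forward_euler(tstart, tfinal, y0, f, nsteps)
% FORWARD_EULER  Solve y' = f(t, y) with the Forward Euler method.
%   Sketch following parts (a)-(e) of the assignment above.
    dt   = (tfinal - tstart) / nsteps;      % (b) stepsize
    tvec = tstart + (0:nsteps)' * dt;       % (b) discretized time domain
    yvec = zeros(nsteps + 1, 1);
    yvec(1) = y0;                           % (c) initial condition
    for k = 1:nsteps                        % (d) Forward Euler update
        yvec(k + 1) = yvec(k) + dt * f(tvec(k), yvec(k));
    end
end
```

For example, [t, y] = forward_euler(0, 1, 1, @(t, y) -2*y, 100) approximates y' = -2y with y(0) = 1 on [0, 1].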
What is the difference between dissect in MATLAB and METIS_NodeND in METIS?
Hello everyone,
I find that dissect in MATLAB utilizes the METIS library, and it lets you choose the number of separators, which is also the number of subdomains.
But it seems there is no way to choose the number of separators in the METIS_NodeND interface in METIS.
So what is the difference between dissect and METIS_NodeND?
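For reference, a small sketch of the dissect interface the question refers to (the test matrix and comparison are my own illustration, not part of the question):

```matlab
% Compare Cholesky fill-in before and after nested dissection reordering.
A = delsq(numgrid('S', 20));        % sparse SPD test matrix (2-D Laplacian)
p = dissect(A);                     % nested dissection ordering (METIS-based)
nnzPlain   = nnz(chol(A));          % fill-in with the natural ordering
nnzOrdered = nnz(chol(A(p, p)));    % fill-in after reordering
fprintf('nnz before: %d, after dissect: %d\n', nnzPlain, nnzOrdered);
```

The separator count the question mentions is exposed through dissect’s name–value options (see the dissect documentation), whereas METIS_NodeND controls the recursion only indirectly through its options array.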
Why is the track2 function not working?
Hey community,
I am using the R2023b version and the track2 function does not respond correctly.
It should work as: [lati, loni] = track2(lat1, lon1, lat2, lon2, number);
The last argument defaults to 100, but if I change it, MATLAB still uses 100, even if I try 1 or 1000.
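One likely explanation (an assumption on my part, based on the documented track2 signature): the point count npts is the seventh argument, after the ellipsoid and angle units, so a scalar passed in the fifth position is not read as npts and the default of 100 remains. A sketch, where passing [] for the ellipsoid to get the default is also an assumption:

```matlab
% track2's documented signature places npts seventh, after ellipsoid
% and units, so pass placeholders for those two arguments:
lat1 = 0;  lon1 = 0;  lat2 = 45;  lon2 = 90;
npts = 10;
[lati, loni] = track2(lat1, lon1, lat2, lon2, [], 'degrees', npts);
numel(lati)   % should now reflect npts rather than the default 100
```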
Filters as Worksheet Title
I have an Excel worksheet that is an inventory list 17 columns wide. I have added filters to the column headers (row eight), and I filter by various columns to generate different lists (I tried it with slicers, but they were clunkier for me to use).
I would like the header of the worksheet to change based on the filters applied.
I picture showing the filters in cells A1, A2, and A3, or perhaps in text boxes across a fat row 1.
Does anyone have a solution?
Happy Azure Week!
Join Tech for Social Impact as we kick off Azure week. We will be posting a new item every day to the Microsoft for Nonprofits LinkedIn handle as well as TSI Facebook and X channels.
TSI Azure week LinkedIn kickoff post
Partner Blog | Partner Center Technical Corner: September 2024 edition
By Monilee Keller, Vice President, Product Management
Welcome to the September 2024 edition of Partner Center Technical Corner. This month, we are excited to highlight our actions to protect identities and secrets, along with several recently released experience improvements for our partners. As always, you can find the most up-to-date technical roadmap and essential Partner Center resources at the end of the blog.
Securing the channel
Partner Center Security workspace: The Security workspace in Partner Center provides Cloud Solution Provider (CSP) partners with a centralized view for all security actions and resources. Two new roles, Security Administrator and Security Reader, have been added, allowing detailed access management without needing the admin agent role.
Azure authentication and fraud discretionary credit criteria changes: Starting September 2024, the criteria for assessing refund eligibility for a one-time discretionary credit related to fraudulent activity on individual Azure customer tenants is now updated to comply with the Microsoft Partner Agreement, including the requirement for multifactor authentication (MFA). To be eligible for the credit, MFA must be enabled when fraudulent activity begins. Additionally, starting October 2024, as part of the Secure Future Initiative, Microsoft will enforce MFA for all users signing in to the Azure portal.
Continue reading here
Microsoft Tech Community – Latest Blogs
How can I add a version number to my standalone executable created with MATLAB Compiler?
I would like to use MATLAB Compiler to generate a Windows standalone executable that has version and file information (which can be seen by right-clicking on the executable and examining its properties, as shown in the screenshot below).
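One approach (an assumption on my part: using the compiler.build API and its ExecutableVersion option, available in newer releases, rather than plain mcc; the file name myapp.m is hypothetical):

```matlab
% Build a standalone Windows executable with embedded version metadata.
% Assumes a release with the compiler.build API (R2021a or later).
opts = compiler.build.StandaloneApplicationOptions('myapp.m', ...
    'ExecutableName',    'MyApp', ...
    'ExecutableVersion', '2.1.0.0');   % shown in the file's Details tab
results = compiler.build.standaloneApplication(opts);
disp(results.Files)                    % generated files
```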
Which one is the joystick ID?
I am trying to use the vrjoystick command in Matlab to create a joystick object. From the help documentation, I should use the command joy = vrjoystick(id). My problem is that I don’t know how to find out the id.
The description in help is as follows: "The id parameter is a one-based joystick ID. The joystick ID is the system ID assigned to the given joystick device." But when I go to Device Manager, I see a lot of "IDs" (e.g. hardware IDs, compatible IDs, etc.) under the "Details" tab of the Properties window of the Xbox 360 Controller for Windows. I tried to input every one of them, but none of them worked.
How should I get the system ID of a joystick (there is no "system ID" in the Properties window)? What should the ID look like? Thank you.
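The id is not a Device Manager hardware ID at all; it is a simple one-based index into the joysticks attached to the system (1 for the first controller, 2 for the second, and so on). A sketch:

```matlab
% The id is a one-based index of the attached joystick, not a hardware ID.
joy = vrjoystick(1);                 % first (often the only) joystick
caps(joy)                            % number of axes, buttons, and POVs
[axes, buttons, povs] = read(joy);   % poll the current state
close(joy);                          % release the device when done
```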
@mention is not working for Office document
@mention is not working for Office document that is stored in MS Teams/SharePoint Online for a couple of users when adding a new comment. No new conditional access created and all users are internal.
KNN Nearest Neighbor Question
Is KNN nearest neighbor still a part of Excel’s Analytic Solver?
Need some technical answers about AVD
Greetings. I’m considering using Azure Virtual Desktop but before I jump in too far, or even create a trial account, I’m hoping people here can answer some technical questions. I’m a total newbie when it comes to AVD, so please bear with me. One of my most important goals is keeping the total cost down.
Background: I wrote a very small desktop Windows app for the residents of the retirement community where I live. It allows them to display and search on resident demographic information. The app consists of an executable and a few ancillary files. The entire app takes less than 35MB, and when running uses extremely little CPU. However, the app only runs on Windows desktops, and some users want to access it with things like iPhones, iPads, Macs, and Android devices. I want to allow such users to access my app remotely on a “desktop” running in the cloud, which is why I’m considering using AVD.
I contacted MS Sales and unfortunately they weren’t able to answer my questions, and I’m unable to contact MS tech support, so I’ll ask them here.
1. If I use multi-session (multiple users using one VM), will each user who logs into that one VM have a separate session? Or will they all be sharing the same session? I want to make sure that each user has their own session with their own keyboard input, their own view of the app, etc. Must I use Single-Session to accomplish that?
2. Do I have to purchase Client Access Licenses (CALs), or any other type of licenses, separately, or does all the pricing (per-user charges, CPU usage, storage fees, etc.) include the licensing fees?
3. Are there charges for inactive VMs, meaning when no one is logged into them? If you’re familiar with Amazon’s AWS AppStream 2.0, it’s what they call “stopped-instance fees.”
Questions about the pricing calculator:
4. Under Per User Access Pricing is an entry called “External Users?” What are they? Do I need them or do I enter Not Applicable? I’m not an “organization.” I just have a bunch of residents who want to log in to AVD and use my app. I entered 2 for Total Named Users (meaning a maximum of 2 users at a time can log in).
5. If you click the little arrow next to “Customize the size of the OS,” it displays an item called “OS Size” and the text “amount of core required by the OS.” It defaults to 0.75 and looks like a percentage. Changing that number drastically changes the monthly cost. What is it, and how do I know what to set it to? By the way, I’m considering using an A0 instance, which has 1 core and 0.75GB of RAM.
6. Under Connectivity, I don’t understand what the “Peak Concurrency” and “Off-Peak Concurrency” percentages are, and how they affect cost. With my user base there is no “peak” time. Anyone could access my app at any time of day, but at most times of day no one will be using the app at all.
7. Further down, I also don’t understand the “off-peak concurrency” and “peak concurrency” calculations — I entered 2 for Total Named Users and 5 for Total Usage Hours — why then is it calculating off-peak concurrency using 725 hours? And is Total Usage Hours per user, or for total users?
8. Under Managed Disks, it says I need two disks. Why not just one? One would be more than large enough for my needs.
Thanks to all for any help.
Took over reports
Took over a report from a former employee and it looks like it was a pivot table?
We used to be able to just dump last year’s cancels and it would add them to the main sheet. I have tried my general troubleshooting (making sure that calculation was set to automatic, not manual, etc.).
We just want to be able to dump the data from last year’s cancels into this year’s and see if we met our retention goal instead of counting manually.
Any help or advice is greatly appreciated!
Using Copilot to analyze SharePoint document libraries and content.
I am just getting started with Copilot studio. I’ve been asked to build a prototype where we use Copilot to analyze some large document libraries across several SharePoint sites.
The short term goal of this project is to use CP to help us classify/categorize documents into clinical (medical) and non-clinical. For instance, I would want to ask CP to get me an excel list of documents with a column indicating if the document is clinical or not.
The long-term goal is to use Copilot to apply data classification/sensitivity labels to the documents if they are clinical.
We have several sites with libraries where there are multiple TBs of documents that are intermingled (clinical and nonclinical). This could really help us organize this information. By doing this, it gets us in a good position in regards to our data retention policies.
I have Copilot Studio connected to SharePoint with a delegated app registration. I can send it the name of a document and it can tell me whether it contains clinical data. I want to scale this up so it can do this at a larger scale.
My questions are: Is this even possible? If so, how would I approach this? In Studio, when I ask basic questions like “How many documents are in this SharePoint site,” it says it can’t understand and asks me to rephrase. However, I can ask “Give me the newest document that was created” and it will return a correct result.
Any help or direction for me would be huge!
Thanks!
Learn Live GitHub Universe resources
The voucher code will be entered manually during the checkout process. Below are the registration and scheduling steps:
Log into the exam registration site and choose the desired certification. This will redirect you to the registration page.
Click on “Schedule/take exam” to proceed.
Complete the registration form and select “Schedule exam” at the bottom.
This action will transmit your eligibility details to our testing vendor, PSI.
Upon submitting the registration form, you’ll be directed to the PSI testing site to finalize the scheduling of your exam.
During the checkout process on the PSI testing site, you’ll encounter a designated field where you can enter the voucher code to zero the balance.
Step-by-step: Gather a detailed dataset on SharePoint Sites using MGDC and Fabric
0. Overview
This blog shows a step-by-step guide to getting SharePoint Sites information using Microsoft Graph Data Connect for SharePoint and Microsoft Fabric. It includes detailed instructions on how to extract SharePoint and OneDrive site information and use it to run analytics for your tenant.
If you follow these steps, you will have a Microsoft Fabric Report like the one shown below, which includes number of sites by type, and total storage used by type. You can also use the many other properties available in the SharePoint Sites dataset.
To get there, you can split the process into 3 distinct parts:
Set up your tenant for the Microsoft Graph Data Connect
Get data about SharePoint Sites using Microsoft Fabric
Create a report on these sites using Microsoft Fabric
Note: Following these instructions will create Azure resources and this will add to your tenant’s Azure bill. For more details on pricing, see How can I estimate my Azure bill?
1. Setting up the Microsoft Graph Data Connect
The first step in the process is to enable the Microsoft Graph Data Connect and its prerequisites. You will need to do a few things to make sure everything is ready to run the pipeline:
Enable Data Connect in your Microsoft 365 Admin Center. This is where your Tenant Admin will check the boxes to enable the Data Connect and enable the use of SharePoint datasets.
Create an application identity to run your pipelines. This is an application created in Microsoft Entra ID which will be granted the right permissions to access MGDC.
Create an Azure Resource Group for all the resources we will use for Data Connect, like the Azure Storage account and the Azure Synapse workspace.
Create a Fabric Workspace and Fabric Lakehouse to store your MGDC data.
Add your Microsoft Graph Data Connect application in the Azure Portal. Your Microsoft Graph Data Connect application needs to be associated with a subscription, resource group, Fabric Workspace and Fabric Lakehouse.
Finally, your Global Administrator needs to use the Microsoft Admin Center to approve the Microsoft Graph Data Connect application access.
Let us look at each one of these.
1a. Enable the Microsoft Graph Data Connect
The first step is to go into the Microsoft 365 Admin Center and enable the Microsoft Graph Data Connect.
Navigate to the Microsoft 365 Admin Center at http://admin.microsoft.com/ and make sure you are signed in as a Global Administrator.
Select the option to Show all options on the left.
Click on Settings, then on Org settings.
Select the settings for Microsoft Graph Data Connect.
Check the box to turn Data Connect on.
Make sure to also check the box to enable access to the SharePoint and OneDrive datasets.
IMPORTANT: You must wait 48 hours to onboard your tenant and another 48 hours for the initial data collection and curation. For example, if you check the boxes on August 1st, you will be able to run your first data pull on August 5th, targeting the data for August 3rd. You can continue with the configuration, but do not trigger your pipeline before that.
1b. Create the Application Identity
You will need to create an Application in Microsoft Entra ID (formerly Azure Active Directory) and setup an authentication mechanism, like a certificate or a secret. You will use this Application later when you configure the pipeline.
Here are the steps:
Navigate to the Azure Portal at https://portal.azure.com
Find the Microsoft Entra ID service in the list of Azure services.
Select the option for App Registration on the list on the left.
Click the link to New Registration to create a new one.
Enter an app name, select “this organizational directory only” and click on the Register button.
On the resulting screen, select the link to Add a certificate or secret.
Select the “Client secrets” tab and click on the option for New client secret.
Enter a description, select an expiration period, and click the Add button.
Copy the secret value (there is a copy button next to it). We will need that secret value later. Secret values can only be viewed immediately after creation. Save the secret before leaving the page.
Click on the Overview link on the left to view the details about your app registration.
Make sure to copy the Directory (tenant) ID and the Application (client) ID, found on the Application’s Overview page. We will need those values later as well.
1c. Create the Azure Resource Group
You will need to create an Azure Resource Group for all the resources we will use for Data Connect, including the Storage Account and Synapse Workspace.
Here are the steps.
Navigate to the Azure Portal at https://portal.azure.com
Find the Resource Groups in the list of Azure services.
Click on the Create link to create a new resource group.
Select a name and a region.
IMPORTANT: You must use a region that matches the region of your Microsoft 365 tenant.
Click on Review + Create, make sure you have everything correctly entered and click Create.
1d. Create a Fabric Workspace and Fabric Lakehouse
Next, you will need to create a Microsoft Fabric Workspace. This is where you will store your data-related items like your pipelines and your Lakehouse.
Here are the steps:
Navigate to the Microsoft Fabric Portal at https://fabric.microsoft.com
In the main Microsoft Fabric page, select the option for “Data Engineering”.
Find the “Workspaces” icon on the bar on the left and click on it.
At the very bottom of the workspace list, select the option for “+ New workspace”. You may already have a “My workspace”, but it is generally best to have a separate workspace for MGDC so it is easier to organize your projects and collaborate.
Give your workspace a name, a description and a license mode. Then click “Apply”.
In your empty workspace, using the option for “+ New” and then select “Lakehouse”.
Give your Lakehouse a name and create it.
Please note down the name of your workspace and the name of the Lakehouse. You will need those in upcoming steps.
1e. Add your Microsoft Graph Data Connect application
Your Microsoft Graph Data Connect application needs to be associated with a subscription, resource group, application identity, Fabric workspace, Fabric Lakehouse and datasets. This will define everything that the app will need to run your pipelines.
Here are the steps:
Search for the “Microsoft Graph Data Connect” service in the Azure Portal at https://portal.azure.com or navigate directly to https://aka.ms/MGDCinAzure to get started.
Select the option to Add a new application.
Under Application ID, select the one from step 1b and give it a description.
Select Microsoft Fabric for Compute Type.
Select Copy Activity for Activity Type.
Fill the form with the correct Subscription and Resource Group (from step 1c).
Under Destination Type, select Fabric Lakehouse.
Fill the form with the correct Workspace and Fabric Lakehouse (from step 1d).
Click on “Next: Datasets”.
In the dataset page, under Dataset, select BasicDataSet_v0.SharePointSites_v1.
Under Columns, select all.
Click on “Review + Create”.
Click “Create” to finish.
You will now see the app in the list for Graph Data Connect.
1f. Approve the Microsoft Graph Data Connect Application
Your last step in this section is to have a Global Administrator approve the Microsoft Graph Data Connect application.
Make sure this step is performed by a Global administrator who is not the same user that created the application.
Navigate to the Microsoft 365 Admin Center at http://admin.microsoft.com/
Select the option to Show all options on the left.
Click on Settings, then on Org settings.
Click on the tab for Security & privacy.
Select the option for settings for Microsoft Graph Data Connect applications.
You will see the app you defined with the status Pending Authorization.
Double-click the app name to start the authorization.
Follow the wizard to review the app data, the datasets, the columns and the destination, clicking Next after each screen.
In the last screen, click on Approve to approve the app.
NOTE: The Global administrator that approves the application cannot be the same user that created the application. If it is, the tool will say “app approver and developer cannot be the same user.”
2. Run a Pipeline
Next, you will configure a pipeline in Microsoft Fabric. You will trigger this pipeline to pull SharePoint data from Microsoft 365 and land it in your Fabric Lakehouse. Here is what you will need to do:
Go to the Fabric Workspace and create a pipeline
Define the source (Dataset from MGDC)
Define the destination (Table in the Lakehouse)
Save and run the pipeline.
Monitor the pipeline to make sure it has finished running.
Let us look at each one of these.
2a. Go to the Fabric Workspace and create a pipeline
Navigate to the Microsoft Fabric Portal at https://fabric.microsoft.com
In the main Microsoft Fabric page, select the option for “Data Engineering”.
Find the “Browse” icon on the bar on the left and click on it.
Find the Fabric workspace you defined in step 1d and click on it.
In the workspace, click on “+ New” item.
Select “Data pipeline”.
Give the new pipeline a name and click on “Create”.
Select the “Copy data assistant” option.
2b. Define the source (Dataset from MGDC)
In the first step of the “Copy data assistant”, search for 365 to help you find that source.
Select the source called “Microsoft365”.
To create a new connection to the Microsoft365 data source, enter a new connection name, select “Service principal” as the Authentication kind, and enter the credentials for the Application registration you created in step 1b: Tenant ID, Service principal client ID and Service principal key.
Click on “Next” and wait for the dataset information to load.
Select “BasicDataSet_v0.SharePointSites_v1” as the table.
Keep the default scope (SharePoint datasets do not use scope filters).
Select SnapshotDate as a column filter and select a date. Since we are doing a full pull, you should use the same date for Start time and End time.
Click on “Next”.
IMPORTANT: Valid dates go from 23 days ago to 2 days ago.
IMPORTANT: You cannot query dates before the date when you enabled SharePoint dataset collection.
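The two IMPORTANT notes above define a moving window of valid SnapshotDate values. As a quick sanity check before filling in the date filter, the window can be computed like this (a plain Python sketch, not part of any MGDC tooling; the 23-day and 2-day bounds are taken from the notes above):

```python
from datetime import date, timedelta

def valid_snapshot_window(today=None):
    """Return the (earliest, latest) SnapshotDate values accepted by MGDC:
    23 days ago through 2 days ago, per the notes above."""
    today = today or date.today()
    return today - timedelta(days=23), today - timedelta(days=2)

earliest, latest = valid_snapshot_window()
print(f"Valid SnapshotDate range: {earliest} to {latest}")
```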
2c. Define the destination (Table in the Lakehouse)
Next, you will choose a data destination.
To start choosing a destination, select the Lakehouse you created in step 1d under the OneLake data hub (not the Azure blobs or ADLS destinations).
Select the option to “Load to new table”, which will show the column mappings for the Sites dataset.
You can keep the default name “BasicDataSet_v0.SharePointSites_v1” or use something simpler like “Sites”.
2d. Save and run the pipeline
The last step in the copy data definition is to review the details.
Click on the “Save + Run” button to save the new task and run it immediately.
2e. Monitor the pipeline to make sure it has finished running
After the assistant ends, you land on the definition of the pipeline, where you can see the copy data activity at the top and monitor the running pipeline at the bottom.
At this point, your pipeline state should be “queued”, “initializing” or “in progress”. Later it will go into “extracting data” and “persisting data”.
Wait for the pipeline to run. This should take around 20 minutes to run, maybe more if your tenant is large.
After everything runs, the status will show as “Succeeded”.
2f. View the newly created Lakehouse table
After your pipeline finishes running, you can see the new table.
Select “Browse” icon on the bar on the left, then click on the Lakehouse.
Create a Fabric report
The last step is to use the data you just got to build a Power BI dashboard. You will need to:
Create a semantic model
Create a new Fabric report
Add a visualization
3a. Create a new semantic model
Find the “Browse” icon on the bar on the left and click on it.
Find the Lakehouse you defined in step 1d and click on it.
In the Lakehouse, select the option to create a “new semantic model”.
Give the semantic model a name and expand the tree to find the newly created “Sites” table. Make sure you select it.
Click “Confirm” to create the semantic model.
3b. Create a new Fabric report
Find the “Browse” icon on the bar on the left and click on it.
Find the Fabric workspace you defined in step 1d and click on it.
In the workspace, click on “+ New” item and select “Report”.
Select the option to “pick a published semantic model”.
Select the semantic model and use the option to “create a blank report”.
You will end up with an empty report in Power BI, with panels on the right for Filters, Visualizations and Data (from the semantic model).
3c. Add a visualization
Click on the stacked bar chart in the Visualizations pane to add it to the report.
Resize the bar chart visualization.
Add the “Id” column to the y-axis property.
Add the “RootWeb.WebTemplate” column to the x-axis property.
Use the “File”, “Save as” menu option to name the report “Site Summary”.
The final report will show the count of sites for each web template.
Conclusion
You have triggered your first pipeline and created your first report using MGDC with Microsoft Fabric. Now there is a lot more that you could do.
Here are a few suggestions:
Investigate the many datasets in the Microsoft Graph Data Connect, which you can easily use in your Microsoft Fabric workspace.
Trigger your pipeline on a schedule, to always have fresh data in your storage account.
Use a Delta pull to get only the data that has changed since your last pull.
Extend your pipeline to do more, like join multiple data sources.
Share your Report with other people in your tenant.
You can read more about the Microsoft Graph Data Connect for SharePoint at https://aka.ms/SharePointData. There you will find links to many details, including a list of datasets available, complete with schema definitions and samples.
Microsoft Tech Community – Latest Blogs –Read More
Trouble Sending to Gmail / Google Workspace domains
Hello, we have been having ongoing issues sending to Gmail or Google Workspace addresses for months now. I know that Google tightened up their requirements so domains had to have solid SPF, DKIM and DMARC setup, which we do and always have. We’ve worked around the issue all this time by using a connector and mail flow rule in Exchange Online. This has worked OK, but every time we start dealing with a new company that uses Google for email, we need to manually add them to the rule. This has led to some problems and is disruptive, and of course I fear that it may break at some point.
We’ve registered our domain with the Google postmaster tools and it’s verified there. We’ve opened tickets with Google months ago and get no response. Microsoft support tells us everything is fine on our end and it’s Google’s issue.
This is the error we get:
Error:
550 5.7.350 Remote server returned message detected as spam -> 550 5.7.1 [2a01:111:f403:2412::72e 19] Gmail has detected that this message is likely suspicious due to the very low reputation of the sending domain. To best protect our users from spam, the message has been blocked. For more information, go to https://support.google.com/mail/answer/188131 98e67ed59e1d1-2e059080a69si2208538a91.116 – gsmtp
Message rejected by:
mx.google.com
There is no spamming or anything coming out of our domain and we have a very small volume of email. Online checks of our domain don’t show any reputation issues or blacklists.
I’ve seen dozens of other people complaining about this issue, but I don’t really see any actual solution other than the email connector workaround.
Anyone have any suggestions?
Thanks!
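When debugging reputation-based rejections like this, it is worth confirming that the published DNS records actually say what you think they do. Below is a minimal, hypothetical sketch that parses a DMARC TXT record into its tags; it assumes you have already fetched the record yourself (for example with `dig TXT _dmarc.yourdomain.com +short`):

```python
def parse_dmarc(record):
    """Split a DMARC TXT record ('v=DMARC1; p=...; ...') into a tag dict."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

# Hypothetical record for illustration -- substitute your own domain's record
record = "v=DMARC1; p=quarantine; pct=100; rua=mailto:dmarc@example.com"
tags = parse_dmarc(record)
print(tags["v"], tags["p"])
```

A missing or malformed DMARC record is a common cause of reputation blocks under Google’s sender guidelines, so verifying the `v=` and `p=` tags are present and well-formed is a cheap first check.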
Export to Excel Date and Time Fields
Hi all – we’re stumped on an issue when exporting a Document Library containing Date and Time fields to Excel. Two columns in SharePoint are of type Date and Time (see attachment). When we export to Excel, the formatting changes to string (see attached image). We can of course go into the Excel doc and change the formatting of the column, but we don’t want our customer to do that. How can we maintain the integrity of the Date and Time formatting when exporting to Excel?
New Copilot for Security Plugin Name Reflects Broader Capabilities
The Copilot for Security team is continuously enhancing threat intelligence (TI) capabilities in Copilot for Security to provide a more comprehensive and integrated TI experience for customers. We’re excited to share that the Copilot for Security Threat Intelligence plugin has broadened beyond just MDTI to now encapsulate data from other TI sources, including Microsoft Threat Analytics (TA) and SONAR, with even more sources becoming available soon.
To reflect this evolution, customers may notice the plugin’s name has changed from “Microsoft Defender Threat Intelligence (MDTI)” to “Microsoft Threat Intelligence,” signaling its broader scope and enhanced capabilities.
Since launch in April, Copilot for Security customers have been able to access, operate on, and integrate the raw and finished threat intelligence from MDTI, developed from trillions of daily security signals and the expertise of over 10,000 multidisciplinary analysts, through simple natural language prompts. Now, with the ability for Copilot for Security’s powerful generative AI to reason over more threat intelligence, customers have a more holistic, contextualized view of the threat landscape and its impact on their organization.
This broader range of information, delivered instantly and in context, further enables different security personas to defend at machine speed and scale. For example, a customer may ask “Tell me more about the threat actor Silk Typhoon” to get the latest threat intelligence from MDTI, including IoCs, data from mass collection and analysis, intelligence articles, Intel Profiles (vulnerabilities, threat actors, threat tooling), and guidance. Copilot for Security now also shows customers the impact of a threat on their organization and which assets may be vulnerable, through threat analytics and reputation information from SONAR for indicators associated with incidents and other threat activity.
It’s important to note that customers will only see threat intelligence associated with the products they are provisioned for. For example, a Copilot for Security customer that isn’t provisioned for Defender XDR will not see any threat intelligence from Threat Analytics.
Conclusion
Microsoft delivers leading threat intelligence built on visibility across the global threat landscape, made possible by protecting Azure and other large cloud environments, managing billions of endpoints and emails, and maintaining a continuously updated graph of the internet. By processing an astonishing 78 trillion security signals daily, Microsoft can deliver threat intelligence in Copilot for Security that provides an all-encompassing view of attack vectors across various platforms, ensuring customers have comprehensive threat detection and remediation.
If you are interested in learning more about MDTI and how it can help you unmask and neutralize modern adversaries and cyberthreats such as ransomware, and to explore the features and benefits of MDTI please visit the MDTI product web page. To learn more about Copilot for Security, visit the Tech Community page here.
Also, be sure to contact our sales team to request a demo or a quote. Learn how you can begin using MDTI with the purchase of just one Copilot for Security SCU here.
Please create MATLAB code: system identification, modal analysis, and estimation of vibration frequencies and damping ratios
Normalized amplitudes of the complex frequency response function of the bridge, estimated from Welch’s average power spectral density function due to vehicular movement-based excitation. The gray lines correspond to 10 segments of the measurements and the black lines show their average.
Welch’s algorithm, eigensystem realization algorithm (ERA), fast Fourier transform (FFT) MATLAB Answers — New Questions
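The question asks for MATLAB, but the underlying workflow (spectral peak picking for frequencies plus the logarithmic decrement for damping) is language-agnostic. Below is a minimal Python/NumPy sketch for a single lightly damped mode, using synthetic free-decay data since no measurement file is attached; `y` and `fs` stand in for the questioner's response signal and sampling rate:

```python
import numpy as np

def identify_modal_params(y, fs):
    """Estimate the natural frequency (Hz) and damping ratio of a
    lightly damped, single-mode free-decay signal.

    Frequency comes from the FFT peak; damping from the logarithmic
    decrement of successive positive peaks of the decaying response.
    """
    n = len(y)
    # FFT of the windowed signal -> pick the dominant spectral peak
    spectrum = np.abs(np.fft.rfft(y * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    fn = freqs[np.argmax(spectrum)]

    # Local positive maxima approximate the decay envelope
    peaks = [i for i in range(1, n - 1)
             if y[i] > y[i - 1] and y[i] > y[i + 1] and y[i] > 0]
    amps = y[peaks]
    m = len(amps) - 1                                # cycles spanned by the peaks
    delta = np.log(amps[0] / amps[-1]) / m           # average log decrement
    zeta = delta / np.sqrt(4 * np.pi**2 + delta**2)  # damping ratio
    return fn, zeta

# Synthetic free-decay response: 5 Hz mode, 2% damping, sampled at 200 Hz
fs = 200.0
t = np.arange(0, 10, 1 / fs)
wn = 2 * np.pi * 5.0
wd = wn * np.sqrt(1 - 0.02**2)
y = np.exp(-0.02 * wn * t) * np.sin(wd * t)
fn_est, zeta_est = identify_modal_params(y, fs)
print(f"fn = {fn_est:.2f} Hz, zeta = {zeta_est:.4f}")
```

For the bridge data described in the question, ERA would generalize this to multiple modes from impulse-response data, and Welch’s averaged periodogram (`pwelch` in MATLAB, `scipy.signal.welch` in Python) would replace the raw FFT to suppress noise in operational measurements.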
excel formulas are not working
My formulas are suddenly not working anymore. I copy and paste a formula and it shows the formula is there, however the value amount is wrong. It has copied the value amount from the previous cell. I can click on the formula and delete one number and then put back that same number and now it changes it to the correct value amount. Also, I created a formula for the sum of a few cells and the formula is correct. However the amount is not correct. I can highlight those cells and see the total at the bottom and it is not the total in the formula cell.