Month: August 2024
Booking times
How can I set up Microsoft Bookings so that customers can only make appointments 1 week or more before the appointment, so I can prepare myself 1 week ahead?
Deep learning architecture: can someone explain how the layers and filters connect?
Can someone explain how these connections fit each layer to the previous stage?
layers = [ imageInputLayer([28 28 1])
convolution2dLayer(5,20)
reluLayer
maxPooling2dLayer(2, 'Stride', 2)
fullyConnectedLayer(10)
softmaxLayer
classificationLayer() ]
deep learning MATLAB Answers — New Questions
Complex JSON from a REST Api with Dataflow
Hi.
I have a REST API that retrieves a complex JSON. In order to flatten it, do I have to store the JSON in a file first, or can I flatten the JSON from the REST API directly?
Do you know if there is an example? (I found videos on flattening complex JSON from a JSON file, but not directly from a REST API.)
I get this error when trying to test the connection.
This API works properly in a copy_data task.
Thank you
Unable to incorporate my own loss function in R2024a
Switching from R2023b to R2024a, I made some changes in my net (CNN): e.g., modified the input/output, replaced the RegressionLayer with a SoftmaxLayer, switched to the trainnet function, etc.
I expected better performance and forward compatibility (RegressionLayer is no longer recommended), and I have a vision of optimizing my net with the pruning approach, etc.
Contrary to the previous version, I am not able to plug in my own loss function (as was done in the previous version).
The (simplified) code is below; the syntax used was inspired by this example:
https://www.mathworks.com/matlabcentral/answers/2100631-how-can-i-define-a-custom-loss-function-using-trainnet
The error message is:
Error using trainnet (line 46)
Error calling function during training.
Error in callMyLoss (line 55)
myTrainedNet = trainnet(Y,target,net, @(Y,target) myOwnLoss(name,Y,target),options);
Caused by:
Error using myOwnLoss
The specified superclass 'nnet.layer.softmaxLayer' contains a parse error, cannot be found on MATLAB's search path, or is shadowed by another file with the same name.
Error in callMyLoss>@(Y,target)myOwnLoss(name,Y,target) (line 55)
myTrainedNet = trainnet(Y,target,net, @(Y,target) myOwnLoss(name,Y,target),options);
Error in nnet.internal.cnn.util.UserCodeException.fevalUserCode (line 11)
[varargout{1:nargout}] = feval(F, varargin{:});
classdef myOwnLoss < nnet.layer.softmaxLayer
    % own Loss
    methods
        %function layer = sseClassificationLayer(name)
        function layer = myOwnLoss(name)
            % layer = sseClassificationLayer(name) creates a sum of squares
            % error classification layer and specifies the layer name.
            % Set layer name.
            layer.Name = name;
            % Set layer description.
            layer.Description = 'my own Loss v.2024a';
        end
        function loss = forwardLoss(layer, Y, T)
            %%% function loss = forwardLoss(Yo, To)
            % loss = forwardLoss(layer, Y, T) returns the Tdiff loss between
            % the predictions Y and the training targets T.
            disp("myLoss");
            aa = 1;
            % just something very simple
            loss = sum(Y-T, 'all');
        end
        % original backwardLoss
        function dX = backwardLoss(layer, Y, T)
            numObservations = size(Y, 3);
            dX = (Y - T)./numObservations;
        end
    end
end
%=======================eof=========================
loss function, trainnet MATLAB Answers — New Questions
HELP Canonical Huffman coding
Hi, can someone share the source code for canonical Huffman coding? Thank you…
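While the request is for MATLAB source, here is a short Python sketch of the canonical code-assignment step, which is the heart of canonical Huffman coding; the symbol lengths in the example are made up for illustration.

def canonical_codes(lengths):
    """Assign canonical Huffman codes given a {symbol: bit_length} mapping."""
    # Canonical order: sort by (code length, then symbol).
    symbols = sorted(lengths, key=lambda s: (lengths[s], s))
    codes = {}
    code = 0
    prev_len = 0
    for s in symbols:
        code <<= lengths[s] - prev_len  # grow the code when the length increases
        codes[s] = format(code, f"0{lengths[s]}b")
        code += 1
        prev_len = lengths[s]
    return codes

# Example: code lengths as they might come out of a Huffman tree build.
print(canonical_codes({"a": 1, "b": 2, "c": 3, "d": 3}))
# -> {'a': '0', 'b': '10', 'c': '110', 'd': '111'}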
huffman, canonical, image compressing, entropy MATLAB Answers — New Questions
Simulink thermal coupling problem
I have a problem where I seem to have some thermal coupling between my Tank (G-TL) and my Constant Volume Chamber. Even though nothing should affect the temperature in my Constant Volume Chamber, something is definitely influencing it. When I only simulate the outflow of my Constant Volume Chamber (pink), filled with N2 at a starting pressure of 300 bar and a starting temperature of 300 K, I get a temperature of 267 K at the point where the chamber pressure has dropped to 200 bar. But if I simulate the whole circuit (screenshot), I get a different temperature after dropping to 200 bar in my Constant Volume Chamber, depending on the settings of my Tank (G-TL) (pink+yellow). It would be very nice if someone has an idea what could cause this temperature behaviour of my Constant Volume Chamber in this Simulink model.
simulink, simscape MATLAB Answers — New Questions
Storage Migration Service, keeps on transferring indefinitely
I have a 2016 source, a 2022 destination, and a 2019 WAC as the orchestration server.
I’ve used the orchestration server for 4 previous migrations, but this one never seems to finish the “transfer differences” step after the initial sync. The source is 6M files and 6 TB of data; currently processed is 83M files and 59.7 TB of data. This is the 3rd attempt: I tried restarting the source and destination first, and before the 3rd attempt I also restarted the orchestration server.
I’m not seeing anything bad in the event log; the debug log on the destination shows a lot of transfer attempts every second, which seem OK.
2 Past Accounts Cannot Be Removed From OneDrive Pull-Down Menu, Even Under a New Android Emulator
As you can tell from the following screen capture, the account with a key on the right of the pull-down menu is the past account that cannot be removed.
This is a fresh new Android emulator: no Samsung account, no other settings, completely fresh.
I opened 2 tickets with the Microsoft OneDrive team. One replied that it has nothing to do with Microsoft 365 and wants me to contact the mobile phone team; the other team escalated the situation to the development team but has not replied since.
It seems to be a cloud problem, not related to the phone. How can I get the Microsoft Cloud team to solve the problem?
Calculating bonus with multiple conditions
I need help with calculating a bonus based on the following rule:
If a user reaches 120% in any category except E, the user gets a 10% bonus in every category that reached 60%–69%.
The Bonus column is my desired result.
User  Category  Achievement  Bonus
John  A         65%          10%
John  B         80%          0%
John  C         120%         0%
John  D         68%          10%
John  E         120%         0%
Jack  A         68%          10%
Jack  B         54%          0%
Jack  C         130%         0%
Jack  D         65%          10%
Jack  E         120%         0%
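To state the rule unambiguously, here is a small Python sketch of the logic; the original context is presumably an Excel formula, so treat this purely as a restatement of the rule, checked against the desired output above.

# Each row is (user, category, achievement as a fraction).
rows = [
    ("John", "A", 0.65), ("John", "B", 0.80), ("John", "C", 1.20),
    ("John", "D", 0.68), ("John", "E", 1.20),
    ("Jack", "A", 0.68), ("Jack", "B", 0.54), ("Jack", "C", 1.30),
    ("Jack", "D", 0.65), ("Jack", "E", 1.20),
]

# A user "qualifies" if any category except E reached 120%.
qualifies = {u for u, cat, ach in rows if cat != "E" and ach >= 1.20}

for user, cat, ach in rows:
    # Qualified users get 10% in each category that landed in 60%-69%.
    bonus = 0.10 if user in qualifies and 0.60 <= ach <= 0.69 else 0.0
    print(user, cat, f"{ach:.0%}", f"{bonus:.0%}")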
Generative answers
Hi,
Can we remove the reference link from generative answers?
Securing your AI Apps on Azure: Recordings and Slides!
In July, we put on a 6-part live stream series all about securing your AI apps on Azure, covering topics like keyless authentication, user login with Microsoft Entra, RAG data access control, and private network deployment. If you missed the live streams, you can still catch up by watching the recordings, downloading the slides, and trying out the sample projects.
Using Keyless Auth with Azure AI Services
Ready to go keyless and never worry about compromised keys again? All the Azure AI services support keyless authentication using role-based access control, making it possible for you to authenticate to the services with either your logged in local user identity or your deployed app’s managed identity. We’ll show you how to use keyless authentication with Azure OpenAI, demonstrating how to set up the access controls in the Portal, with the Azure CLI, or with infrastructure-as-code (Bicep). Then we’ll connect to that Azure OpenAI service in our application code, using both the OpenAI SDK and the popular Langchain SDK. Our examples will be in Python, but you can use keyless auth with most modern OpenAI packages.
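As a preview of the pattern, here is a minimal sketch of keyless authentication with the OpenAI Python SDK; the endpoint, deployment name, and API version below are placeholders rather than values from the session.

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Request tokens for Azure AI services using the local or managed identity.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    azure_ad_token_provider=token_provider,  # no API key involved
    api_version="2024-02-01",  # assumption: any supported version works
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # placeholder deployment name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)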
Helpful links:
Slides
GitHub project: Keyless Azure OpenAI Deployment
GitHub project: Keyless OpenAI Chat App on Azure Container Apps
Add User Login to AI Apps using Built-in Auth
Building an AI app on Azure and want to know the easiest way to let users sign in? We’ll show you how to set up built-in authentication on Azure App Service and Azure Container Apps. With built-in auth, employees can sign in to a workforce tenant or, thanks to Entra External ID, consumers can sign in with a one-time passcode, username/password, or Google/Facebook login. Then your Azure app can display user details like their name, with minimal code changes. We’ll demonstrate how to set up built-in auth for your apps using either the Graph SDK or the newly released Graph Bicep provider, and provide links to samples with full code provided.
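To give a feel for what “minimal code changes” means, here is a small Flask sketch that reads the identity headers App Service built-in auth injects into each request; treat the header handling as an assumption to verify against the docs.

import base64
import json
from flask import Flask, request

app = Flask(__name__)

@app.get("/")
def whoami():
    # App Service built-in auth injects the signed-in user's name directly...
    name = request.headers.get("X-MS-CLIENT-PRINCIPAL-NAME")
    if not name:
        return "Not signed in"
    # ...and the full claim set as base64-encoded JSON.
    principal = request.headers.get("X-MS-CLIENT-PRINCIPAL", "")
    claims = json.loads(base64.b64decode(principal)) if principal else {}
    return f"Hello, {name}! ({len(claims.get('claims', []))} claims)"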
Helpful links:
Slides
GitHub project: AI Chat App with Built-in Auth
Add User Login to AI Apps using MSAL SDK
Need a user sign-in feature for your AI app? We’ll show you how to set up an OAuth2 OIDC flow in Python using the MSAL SDK with the open source identity package. You can use this approach to either enable employees to sign in to a workforce tenant or, thanks to Entra External ID, let customers sign in with a one-time passcode, username/password, or Google/Facebook login. Then your app can use user details from the Graph SDK, like their name and email. We’ll also demonstrate how to automate the creation of Microsoft Entra applications using the Graph SDK.
Helpful links:
Slides
GitHub project: AI Chat App with MSAL Auth (Python)
Handling User Auth for a SPA App on Azure
Many modern web applications use a SPA architecture: a single-page web app for the frontend and an API for the backend. In this talk, we’ll discover how you can add user authentication to a SPA app using Microsoft Entra, using the MSAL.JS SDK on the frontend and the MSAL Python SDK on the backend. Learn how to set up Entra applications correctly, one for the client and one for the server, and how to use the on-behalf-of-flow on the server for handling tokens sent from the client. Our example application will be an AI RAG application with a React frontend and Python backend, but you can apply the same principles to any SPA applications that need user authentication.
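For a flavor of that server-side flow, here is a minimal sketch using MSAL Python’s on-behalf-of call; the client IDs, secret, tenant, and scopes are placeholder assumptions.

import msal

# Confidential client registered for the server API (placeholders).
app = msal.ConfidentialClientApplication(
    client_id="<server-app-client-id>",
    client_credential="<client-secret>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)

# The access token the SPA frontend sent in the Authorization header.
client_token = "<access-token-from-frontend>"

# Exchange it for a downstream token on behalf of the signed-in user.
result = app.acquire_token_on_behalf_of(
    user_assertion=client_token,
    scopes=["https://graph.microsoft.com/.default"],  # example scope
)
graph_token = result.get("access_token")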
Helpful links:
Slides
GitHub project: RAG SPA app with optional user auth
Documentation for enabling user login
Data Access Control for AI RAG Apps on Azure
If you’re trying to get an LLM to accurately answer questions about your own documents, you need RAG: Retrieval Augmented Generation. With a RAG approach, the app first searches a knowledge base for relevant matches to a user’s query, then sends the results to the LLM along with the original question. What if you have documents that should only be accessed by a subset of your users, like a group or a single user? Then you need data access controls to ensure that document visibility is respected during the RAG flow. In this session, we’ll show an approach using Azure AI Search with data access controls to only search the documents that can be seen by the logged in user. We’ll also demonstrate a feature for user-uploaded documents that uses data access controls along with Azure Data Lake Storage Gen2.
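To make the idea concrete, here is a hedged sketch of a security-trimmed query with the azure-search-documents package; the index name and field names (oids, title) are hypothetical, not taken from the sample.

from azure.identity import DefaultAzureCredential
from azure.search.documents import SearchClient

# Placeholders (assumptions): your search service and index.
search_client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",
    index_name="<your-index>",
    credential=DefaultAzureCredential(),  # keyless, as in the earlier session
)

user_oid = "<logged-in-user-object-id>"  # hypothetical value from the user's token
# OData filter on a hypothetical "oids" collection field of allowed user ids.
security_filter = f"oids/any(g: g eq '{user_oid}')"

results = search_client.search(search_text="quarterly report", filter=security_filter)
for doc in results:
    print(doc["title"])  # hypothetical field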
Helpful links:
Slides
GitHub project: RAG app with optional data access control
Documentation for enabling data access control
Deploying an AI App to a Private Network on Azure
To ensure that your AI app can only be accessed within your enterprise network, you should deploy it to an Azure virtual network with private endpoints for each Azure service used. In this session, we’ll show how to deploy an AI RAG application to a virtual network that includes App Service, AI Search, OpenAI, Document Intelligence, and Blob storage, and we’ll do it entirely with infrastructure-as-code (Bicep) so that you can do the same deployment. Then we’ll log in to the virtual network using Azure Bastion with a virtual machine to demonstrate that we can access the RAG app from inside the network, and only inside the network.
Helpful links:
Slides
GitHub project: RAG app with optional private network deployment
Documentation for enabling private deployment
Microsoft Tech Community – Latest Blogs – Read More
Building Trust with Responsible AI: Ensuring Content Safety and Empowering Developers
Hi, I’m Chanchal Kuntal, a pre-final year student at Banasthali Vidyapith and a Beta Microsoft Learn Student Ambassador (MLSA). In the fast-paced world of technology, Artificial Intelligence (AI) has become a critical tool driving innovation across industries. From healthcare to finance, AI systems are transforming how we live and work. However, with great power comes great responsibility. As AI continues to permeate our daily lives, the need for responsible AI practices has never been more pressing. This blog delves into the concept of responsible AI, the importance of content safety, and how these practices empower developers to create trustworthy and impactful AI solutions.
Understanding Responsible AI
Responsible AI refers to the ethical development and deployment of AI systems that prioritize fairness, transparency, accountability, and inclusivity. It is about ensuring that AI technologies are designed and used in ways that respect human rights, avoid harm, and promote positive societal outcomes. As AI becomes more integrated into decision-making processes, the risks of bias, discrimination, and unintended consequences grow. Responsible AI aims to mitigate these risks by embedding ethical considerations into the AI lifecycle—from design to deployment.
Key principles of responsible AI include:
Fairness: Ensuring AI systems do not perpetuate or amplify biases present in data.
Reliability and Safety: Guaranteeing that AI systems perform consistently and safely in a wide range of scenarios, protecting users from harm.
Privacy and Security: Safeguarding sensitive data and ensuring that AI systems do not compromise user privacy.
Inclusiveness: Designing AI systems that consider the needs and perspectives of diverse groups, ensuring equitable access and outcomes.
Transparency: Providing clear explanations of how AI systems work and make decisions, making them understandable and accountable to users.
Accountability: Holding developers and organizations responsible for the outcomes of their AI systems, ensuring they can answer for the impact of their technologies.
Responsible AI in Action
These principles are not just theoretical—they are actively shaping the development and deployment of AI systems across industries. For instance, companies are increasingly using fairness auditing tools to identify and mitigate bias in their AI models. Meanwhile, reliability and safety are being enhanced through rigorous testing and the implementation of fail-safes that prevent AI from making harmful decisions. Privacy is being preserved through advanced encryption techniques, and transparency is achieved by providing users with explanations of how AI systems reach their conclusions.
Content Safety: A Critical Component of Responsible AI
Content safety is a significant aspect of responsible AI, particularly as AI plays a growing role in moderating online content, generating media, and personalizing user experiences. Content safety involves ensuring that AI systems do not produce or promote harmful, misleading, or inappropriate content. This is crucial in an era where misinformation, hate speech, and deepfakes can have serious consequences.
Developers must prioritize content safety by implementing robust safeguards and continuously monitoring AI outputs. This includes:
– Data Curation: Using high-quality, representative data sets to train AI models, minimizing the risk of biased or harmful outputs.
– Algorithmic Checks: Incorporating mechanisms to detect and filter out inappropriate content.
– Human Oversight: Combining AI-driven content moderation with human review to ensure contextually accurate decisions.
How Responsible AI and Content Safety Empower Developers
For developers, embracing responsible AI and content safety is not just a moral imperative—it’s a pathway to building better products and earning user trust. Here’s how:
Enhanced User Trust: When AI systems are transparent, fair, reliable, safe, and secure, users are more likely to trust and adopt them. This trust is essential for the long-term success of AI-driven products.
Innovation with Confidence: By embedding responsible AI practices, developers can experiment and innovate without fear of unintended harm, leading to more creative and impactful solutions.
Regulatory Compliance: As governments and organizations increasingly emphasize AI ethics, adhering to responsible AI principles helps developers stay ahead of regulatory requirements, reducing legal and reputational risks.
Broader Market Reach: AI systems that are inclusive and considerate of diverse user needs can tap into a broader market, driving adoption and success across different demographics.
Conclusion
Incorporating responsible AI and content safety into AI development is more than just a trend; it’s a necessity. As developers, the choices we make today will shape the AI systems of tomorrow. By prioritizing fairness, reliability, safety, privacy, security, inclusiveness, transparency, and accountability, we can build AI technologies that not only solve problems but also foster trust and drive positive societal change.
In the journey of AI development, let’s commit to being responsible architects of the future.
References
To dive deeper into this, Microsoft Learn offers comprehensive modules and resources that developers can use to get hands-on experience with Azure AI Content Safety and other responsible AI tools:
Collection of Responsible AI Learning Modules: Explore Here
Responsible AI YouTube Playlist: Watch Here
Learn Modules for Azure AI Content Safety:
Azure AI Content Safety Studio Workshop
Azure AI Content Safety Code Workshop
Responsible AI Dashboard Workshop
RAI Tools for Developers Blog Post: Read Here
Microsoft Tech Community – Latest Blogs – Read More
Developing a Comprehensive Analytics Platform for Daily Pandemic Updates within Fabric
In this tutorial, we will create a complete solution using Microsoft Fabric to monitor news related to the Mpox virus at a specific time. This solution is essential for healthcare professionals, as it allows for a quick response to a changing situation by sending alerts when significant negative information is identified.
What is Microsoft Fabric?
Microsoft Fabric is a comprehensive data management solution that leverages artificial intelligence to integrate, organize, analyze, and share data across a unified and multi-cloud data lake. It simplifies analytics by providing a single platform that combines various services such as Power BI, Azure Synapse Analytics, Azure Data Factory, and more into a seamless SaaS (Software as a Service) foundation. With Microsoft Fabric, you can centralize data storage with OneLake, integrate AI capabilities, and transform raw data into actionable insights for business users.
1. Exploring the Features of Microsoft Fabric Components
Each component of Microsoft Fabric offers unique features that enhance data handling:
Data Factory: Supports pipelines and data flows for seamless data integration and transformation.
Synapse Data Engineering: Includes advanced features like Lakehouse databases and shortcuts for accessing data from external sources without duplication.
Synapse Data Warehouse: Simplifies data modeling and warehouse creation with integrated pipelines for direct data loading.
Power BI: Features like Copilot and Git integration streamline report creation and collaborative development.
OneLake: Centralized storage that supports various databases, ensuring data consistency across different components.
AI – Copilot: An AI-enhanced toolset tailored to support data professionals in their workflow. It includes Copilot for Data Science and Data Engineering, Copilot for Data Factory, and Copilot for Power BI.
Purview: Microsoft Purview, a unified data governance service that helps manage and govern data.
2. Architecture of Microsoft Fabric
The architecture of Microsoft Fabric is centered around its components and their serverless compute capabilities. Serverless compute allows users to run applications on-demand without managing infrastructure. For example:
Synapse Data Warehouse: Uses T-SQL for querying tables with serverless compute.
Synapse Data Engineering: Leverages a Spark engine for notebook-based data transformations, eliminating the wait time typically seen in Azure Synapse Analytics.
Synapse Real-Time Analytics: Uses KQL for querying streaming data stored in Kusto DB.
The unified storage solution, OneLake, ensures that all data is stored in a single, cohesive location, making data management and retrieval more efficient.
3. Project Architecture Overview
Let’s break down the architecture of this project:
Data Ingestion:
Tool: Data Factory (built into Microsoft Fabric).
Process: We’ll configure the Bing API in Azure to ingest the latest news data about the Mpox virus into the Microsoft Fabric workspace. The data will be stored in OneLake as a JSON file.
Data Storage:
Tool: OneLake.
Process: The raw JSON data ingested will be stored in the Lake Database within OneLake. This unified storage solution ensures that all data is easily accessible for subsequent processing.
Data Transformation:
Tool: Synapse Data Engineering.
Process: The raw JSON file will be transformed into a structured Delta table using Spark notebooks within Synapse Data Engineering. This step includes implementing incremental loads to ensure that only new data about the Mpox virus is processed.
Sentiment Analysis:
Tool: Synapse Data Science.
Process: We’ll use a pre-trained text analytics model to perform sentiment analysis on the news data about the Mpox virus. The results, indicating whether news articles are positive, mixed, negative, or neutral, will be stored as a Delta table in the Lake Database.
Data Reporting:
Tool: Power BI.
Process: The final step involves creating a news dashboard in Power BI using the sentiment-analyzed data. This dashboard will provide insights into the latest news trends about the Mpox virus and their sentiments.
Alerting System:
Tool: Data Activator.
Process: We’ll set up alerts based on the sentiment analysis. For example, alerts can be triggered when a news article about the Mpox virus with a negative sentiment is detected. Alerts will be configured to notify via email.
Orchestration:
Tool: Data Factory Pipelines.
Process: All tasks, from data ingestion to reporting, will be orchestrated using pipelines in Data Factory. This ensures that the entire workflow is automated and connected, allowing for efficient data processing and analysis.
4. The Agenda for the Project
Here’s what you’ll learn in this tutorial:
Environment Setup: Creating and configuring the necessary resources in Microsoft Fabric.
Data Ingestion: Using Data Factory to ingest Bing News data into OneLake.
Data Transformation: Converting the raw JSON data into structured Delta tables with incremental loads using Synapse Data Engineering.
Sentiment Analysis: Performing sentiment analysis on the news data using Synapse Data Science.
Data Reporting: Building a Power BI dashboard to visualize the analyzed news data about the Mpox virus.
Orchestration: Creating pipelines in Data Factory to automate the end-to-end process.
Alerting: Setting up alerts in Data Activator to monitor and respond to critical data events.
5. Setting Up Your Environment for an End-to-End Azure Data Engineering Project with Microsoft Fabric
In this section, we’ll guide you through the environment setup required for the project. Let’s get started!
Accessing the Azure Portal
To begin, open your browser and navigate to Portal.azure.com. This will take you to the Azure portal, where you’ll perform all the necessary configurations for this project.
Creating a Resource Group
The first task is to create a dedicated Resource Group for this project. Resource groups help organize and manage your Azure resources efficiently.
1. Navigate to Resource Groups: On the Azure portal, locate the Resource Groups option at the top of the page and click on it. This will show all the resource groups currently in your subscription.
2. Create a New Resource Group:
Click the Create button.
Subscription: Choose your subscription.
Resource Group Name: Enter a meaningful name, such as rg-bing-data-analytics
to represent the project.
Region: Select the region closest to your current location (e.g., “Central US”).
Tags: Optionally, add tags to help identify resources later.
3. Review and Create: After entering the details, click Review and Create. Once validated, click Create to set up the resource group.
Setting Up the Bing Search API
Now that we have a resource group, the next step is to create the Bing Search API, which will serve as the data source for our project.
Create Bing Search API:
Inside your resource group, click Create.
In the marketplace search box, type “Bing” and hit Enter.
Select Bing Search v7 from the results and click Create.
Configure the API:
Subscription: Ensure your subscription is selected.
Resource Group: Confirm the resource group you just created is selected.
Resource Name: Enter a name like Bing-news-api.
Region: The only available option is “Global”.
Pricing Tier: Select F1 (Free Tier), which supports up to 1,000 API calls per month.
Accept Terms and Create:
Ensure you accept the terms and conditions by checking the required box.
Review the settings and click Create.
Once the API is created, you’ll be redirected to the resource page, where you can find essential details like the API key and endpoint. These will be crucial for connecting to the API later in the project.
Setting Up Microsoft Fabric in Power BI
Next, we’ll move to Microsoft Fabric within the Power BI workspace.
1. Access Power BI: Open another tab in your browser and navigate to app.powerbi.com. This is where you’ll interact with Microsoft Fabric.
2. Create a Dedicated Workspace:
Click on New Workspace at the bottom.
Workspace Name: Enter a name like News Bing.
Description: Optionally, add a brief description.
Licensing: Assign the Fabric trial license to this workspace.
Note: If you don’t have the necessary permissions to create a workspace, contact your Power BI admin to request access.
3. Enable Fabric: If you haven’t already, ensure that Microsoft Fabric is enabled in your workspace. You can do this by navigating to the settings and activating the free trial provided by Microsoft.
Creating the Lakehouse Database
Finally, we’ll set up the Lakehouse database, where all the data will be stored and processed.
1. Switch to the Data Engineering Component:
In the Power BI workspace, click the Data Engineering option in the bottom left corner.
2. Create the Lakehouse Database:
Click on Lakehouse at the top.
Database Name: Enter a name like bing_lake_db.
Click Create.
This Lakehouse database will store both the raw JSON data from the Bing API and the processed data in Delta table format.
6. Data Ingestion with Microsoft Fabric: Step-by-Step Guide
In this section, we’ll dive into the data ingestion process using Microsoft Fabric’s Data Factory component. This is a critical step where we’ll connect to the Bing API, retrieve the latest news data about the Mpox virus, and store it in our Lakehouse database as a JSON file.
Step 1: Accessing the Bing API
First, let’s revisit the Bing API resource we created earlier in Azure. This API provides the keys and endpoints necessary to connect and retrieve data.
Retrieve API Details:
Go to the Azure Portal and navigate to the Bing API resource.
Under the Keys and Endpoints section, note the key and the base URL.
2. Documentation and Tutorials:
Azure’s Bing API resource includes a tutorials section that links to the official documentation. This documentation will help us configure our API calls, including endpoints, headers, and query parameters.
Step 2: Setting Up Data Factory in Microsoft Fabric
Next, we’ll switch over to Microsoft Fabric to set up our data ingestion pipeline.
Access Data Factory: In the bottom left, click on the Power BI icon and choose the Data Factory component.
Step 3: Creating the Data Ingestion Pipeline
We’ll now create a pipeline that will handle the data ingestion process.
Create a New Pipeline:
Click on Data Pipeline and give it a meaningful name, such as News Ingestion Pipeline.
Click Create to set up the new pipeline.
2. Add Copy Data Activity:
In the pipeline workspace, click on Copy Data Activity and choose Add to Canvas.
Name this activity something descriptive, like Copy Latest News.
Move to the Source tab to configure the data source.
Step 4: Configuring the Data Source (Bing API)
Now, we’ll configure the source of our data, which is the Bing API.
Select Data Store Type:
Choose More since the Bing API is outside the Fabric workspace.
Click REST to establish a new connection.
2. Set Up API Connection:
Select REST as the data source type and click Continue.
Enter the Base URL of the Bing News API from the Azure portal.
Use Anonymous as the authentication method because we’ll handle authentication with headers.
The connection URL information is available here: Bing News Search APIs v7 Reference – Bing Services | Microsoft Learn
3. Add Headers for Authentication:
In the Source tab, expand the Advanced section.
Add a new header with the name Ocp-Apim-Subscription-Key (copied from the documentation) and paste your API key from Azure.
4. Configure Query Parameters:
Use the Relative URL field to add query parameters:
q=mpox for the search term.
count=100 to retrieve up to 100 news articles.
freshness=Day to get news from the past 24 hours.
Note that we are querying the Bing News engine for updates on the Mpox virus.
5. Preview Data:
Click Preview Data to verify the setup. You should see JSON output with the latest news articles.
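If you want to sanity-check the same request outside Data Factory, a minimal Python sketch with the requests package might look like this; the endpoint URL and key placeholder are assumptions to confirm against the Bing documentation.

import requests

# Placeholders (assumptions): copy the key and endpoint from your Bing
# resource in the Azure portal.
subscription_key = "<your-bing-api-key>"
endpoint = "https://api.bing.microsoft.com/v7.0/news/search"

headers = {"Ocp-Apim-Subscription-Key": subscription_key}
params = {"q": "mpox", "count": 100, "freshness": "Day"}

response = requests.get(endpoint, headers=headers, params=params)
response.raise_for_status()
articles = response.json()["value"]  # the "value" array holds the articles
print(f"Retrieved {len(articles)} articles")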
Step 5: Configuring the Destination (Lakehouse Database)
Now, we’ll set up the destination for the data – our Lakehouse database.
Choose Workspace Option:
In the Destination tab, select Workspace as the data store type.
Select Lakehouse and choose the bing_lake_db.
Set File Path:
Choose Files as the root folder.
Set the file name to bing-latest-news.json and select JSON as the file format.
3. Save and Run Pipeline:
Save your pipeline by clicking the Save button.
Run the pipeline by clicking Run.
Step 6: Verifying Data Ingestion
Once the pipeline runs successfully, we’ll verify that the data has been ingested correctly.
Check Lakehouse Database:
Open the Lakehouse database in the Data Engineering component.
You should see the bing-latest-news.json file listed under Files.
Review the Data:
Ensure that the JSON file contains the expected news data based on the query parameters you configured.
7. Data Transformation Process
The process involves several steps, including reading the raw JSON file, processing the data, and loading it into a Delta table.
Step 1: Creating Notebooks
From the workload switcher located at the bottom left of the screen, select Data engineering. Select Notebook from the New section at the top of the landing page of the Data Engineering experience.
Once the notebooks are created, go to the items view in your workspace to view the imported notebooks and begin the transformation process.
Step 2: Reading the Raw JSON File
The first step involves reading the raw JSON file from the Lakehouse database into a Spark DataFrame.
df = spark.read.option("multiline", "true").json("Files/bing-latest-news.json")
display(df)
Step 3: Selecting the Relevant Column
Since we’re only interested in the value column that contains the actual JSON structure of the news articles, we select this column from the DataFrame.
df = df.select("value")
display(df)
Step 4: Exploding the JSON Objects
We use the explode function to explode all the JSON objects that exist in the value column from a single row structure to multiple rows. This allows us to represent each news article as a separate row.
from pyspark.sql.functions import explode
df_exploded = df.select(explode(df["value"]).alias("json_object"))
display(df_exploded)
Step 5: Converting JSON Objects to JSON Strings
Next, we convert the exploded JSON objects into a list of JSON strings.
json_list = df_exploded.toJSON().collect()
Step 6: Parsing and Extracting Information from JSON Strings
Using the json library, we parse the JSON strings into dictionaries and extract the required information for each news article, such as title, description, category, URL, image URL, provider, and publication date.
import json

# Initialize lists to store extracted information
title = []
description = []
category = []
url = []
image = []
provider = []
datePublished = []

# Process each JSON string in the list
for json_str in json_list:
    try:
        article = json.loads(json_str)["json_object"]
        # Append None when a field is missing so all lists stay the same
        # length and zip() keeps the fields of one article aligned.
        title.append(article.get("name"))
        description.append(article.get("description"))
        category.append(article.get("category"))
        url.append(article.get("url"))
        first_provider = (article.get("provider") or [{}])[0]
        image.append(first_provider.get("image", {}).get("thumbnail", {}).get("contentUrl"))
        provider.append(first_provider.get("name"))
        datePublished.append(article.get("datePublished"))
    except Exception as e:
        print(f"Error processing JSON object: {e}")
Step 7: Creating a DataFrame with Extracted Information
We then combine all the extracted information into a structured DataFrame.
from pyspark.sql.types import StructType, StructField, StringType

# Combine the lists
data = list(zip(title, description, url, image, provider, datePublished))

# Define schema
schema = StructType([
    StructField("title", StringType(), True),
    StructField("description", StringType(), True),
    StructField("url", StringType(), True),
    StructField("image", StringType(), True),
    StructField("provider", StringType(), True),
    StructField("datePublished", StringType(), True)
])

# Create DataFrame
df_cleaned = spark.createDataFrame(data, schema=schema)
display(df_cleaned)
Step 8: Formatting the Date Column
The datePublished column, originally in timestamp format, is converted to a more readable date format.
from pyspark.sql.functions import to_date, date_format
df_cleaned_final = df_cleaned.withColumn("datePublished", date_format(to_date("datePublished"), "dd-MMM-yyyy"))
display(df_cleaned_final)
Step 9: Implementing Incremental Load in Data Transformation with Microsoft Fabric
In this section, we’ll cover how to handle the error that occurs when trying to write data to an existing table in your Lakehouse database and how to implement an incremental load using the Type 1 Slowly Changing Dimension (SCD) method. This will help ensure that only new or updated data is added to your table without unnecessarily duplicating or overwriting existing data.
Initial method: Understanding the Error and Overwrite Functionality
When you attempt to write data to an existing table without handling it correctly, you might encounter an error stating that the “table or view already exists.” This is because the system tries to create a new table with the same name as an existing one.
Overwrite Mode: One way to solve this is by using the overwrite mode, which replaces the entire content of the existing table with the new data. However, this approach can lead to performance issues, especially with large datasets, and can result in data loss if previous records are simply overwritten.
Alternative method: Understanding the Append Mode
Another approach is the append mode, where the new data is simply added to the existing table.
Append Mode: This method appends new data to the existing table without checking for duplicates. As a result, it can lead to data duplication and an unnecessary increase in table size.
Third method: Incremental Loading with Type 1 and Type 2 Merging
– Type 1 Merge Logic: When a record with the same unique identifier (in this case, the URL) exists in both the new data and the table, the system checks if any fields have changed. If there are changes, the system updates the existing record with the new data. If there are no changes, the record is ignored.
– Type 2 Merge Logic: When a record with the same unique identifier (in this case, the URL) exists in both the new data and the table, the system checks if any fields have changed. If there are changes, the system inserts a new record with the new data, and the old record is marked as expired or archived (a rough sketch of this variant follows the Type 1 code below).
We implement an incremental load using a Type 1 merge to address these issues. This method ensures that only new or changed data is added to the table, avoiding duplicates and maintaining data integrity.
from pyspark.sql.utils import AnalysisException

try:
    # Define the table name
    table_name = "tbl_latest_news"
    # Attempt to write the DataFrame as a Delta table
    df_cleaned_final.write.format("delta").saveAsTable(table_name)
except AnalysisException:
    print("Table Already Exists")
    # Merge the new data with the existing table
    df_cleaned_final.createOrReplaceTempView("vw_df_cleaned_final")
    spark.sql(f"""
        MERGE INTO {table_name} target_table
        USING vw_df_cleaned_final source_view
        ON source_view.url = target_table.url
        WHEN MATCHED AND
            source_view.title <> target_table.title OR
            source_view.description <> target_table.description OR
            source_view.image <> target_table.image OR
            source_view.provider <> target_table.provider OR
            source_view.datePublished <> target_table.datePublished
        THEN UPDATE SET *
        WHEN NOT MATCHED THEN INSERT *
    """)
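For contrast with the Type 1 merge above, a rough two-step sketch of the Type 2 variant described earlier might look like the following; the is_current and expired_on columns are hypothetical additions to the table, and this simplified version is illustrative rather than production-ready.

from pyspark.sql.functions import lit

# Step 1 (hypothetical): close out current rows whose attributes changed.
spark.sql(f"""
    MERGE INTO {table_name} target_table
    USING vw_df_cleaned_final source_view
    ON source_view.url = target_table.url AND target_table.is_current = true
    WHEN MATCHED AND source_view.description <> target_table.description
    THEN UPDATE SET target_table.is_current = false,
                    target_table.expired_on = current_date()
""")

# Step 2 (hypothetical): append incoming rows with no current record as new versions.
spark.sql(f"""
    SELECT source_view.* FROM vw_df_cleaned_final source_view
    LEFT ANTI JOIN {table_name} target_table
        ON source_view.url = target_table.url AND target_table.is_current = true
""").withColumn("is_current", lit(True)) \
    .withColumn("expired_on", lit(None).cast("date")) \
    .write.format("delta").mode("append").saveAsTable(table_name)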
Step 10: Verifying the Data
After the table is updated, you can verify the contents by running a simple SQL query to count the number of records in the table.
%%sql
SELECT count(*) FROM tbl_latest_news
8. Performing Sentiment Analysis using Synapse Data Science in Microsoft Fabric
In this section, we delve into performing sentiment analysis on the news articles we ingested and processed earlier. We’ll use the Synapse Data Science tool within Microsoft Fabric, leveraging the Synapse ML library to apply a pre-trained machine learning model for sentiment analysis. Let’s walk through the process step by step.
Step 1: Accessing Synapse Data Science in Microsoft Fabric
Navigating to Synapse Data Science:
Start by accessing the Synapse Data Science tool from the Microsoft Fabric workspace. You can do this by clicking on the Power BI icon in the bottom left and selecting the “Data Science” option.
2. Creating a Notebook:
Create a new notebook in the Synapse Data Science workspace. Rename it to something meaningful like “News-Sentiment-Analysis” for easy identification.
Step 2: Setting Up the Environment
Attach the Lakehouse Database:
To work with the data you’ve processed earlier, you need to attach the Lakehouse database to the notebook. This will allow you to access the clean table containing the news articles.
After attaching the Lakehouse database, you can load the data using the following code:
# Load the clean table into a DataFrame
df = spark.sql("SELECT * FROM tbl_latest_news")
df.show()
This code will display the data from the table, showing the columns and rows of news articles about the Mpox virus.
Step 3: Implementing Sentiment Analysis with Synapse ML
Import Synapse ML and Configure the Model:
Synapse ML (formerly ML Spark) provides various pre-built models, including those for sentiment analysis. You can use the AnalyzeText model for this task.
First, import the necessary libraries and set up the model:
import synapse.ml.core
from synapse.ml.services import AnalyzeText

# Import the model and configure the input and output columns
model = (AnalyzeText()
    .setTextCol("description")
    .setKind("SentimentAnalysis")
    .setOutputCol("response")
    .setErrorCol("error"))
2. Apply the Model to the DataFrame: Once the model is configured, apply it to the DataFrame containing the Mpox news descriptions:
# Apply the model to our dataframe
result = model.transform(df)
display(result)
3. Extract the Sentiment Value: The sentiment results are stored as a JSON object in the response column. You’ll need to extract the actual sentiment value:
# Create Sentiment Column
from pyspark.sql.functions import col
sentiment_df = result.withColumn("sentiment", col("response.documents.sentiment"))
display(sentiment_df)
4. Remove the response and error columns:
sentiment_df_final = sentiment_df.drop("error", "response")
display(sentiment_df_final)
Step 4: Writing the Results to the Lakehouse Database with Incremental Load
Perform Incremental Load with Type 1 Merge:
Similar to the data processing step, use a Type 1 merge to write the sentiment analysis results to the Lakehouse database, ensuring that only new or updated records are added.
from pyspark.sql.utils import AnalysisException

try:
    table_name = "tbl_sentiment_analysis"
    sentiment_df_final.write.format("delta").saveAsTable(table_name)
except AnalysisException:
    print("Table Already Exists")
    sentiment_df_final.createOrReplaceTempView("vw_sentiment_df_final")
    spark.sql(f"""
        MERGE INTO {table_name} target_table
        USING vw_sentiment_df_final source_view
        ON source_view.url = target_table.url
        WHEN MATCHED AND
            source_view.title <> target_table.title OR
            source_view.description <> target_table.description OR
            source_view.image <> target_table.image OR
            source_view.provider <> target_table.provider OR
            source_view.datePublished <> target_table.datePublished
        THEN UPDATE SET *
        WHEN NOT MATCHED THEN INSERT *
    """)
This code ensures that the sentiment analysis results are incrementally loaded into the tbl_sentiment_analysis table, updating only the changed records and adding new ones.
After running the merge operation, validate that the tbl_sentiment_analysis table in the Lakehouse database contains the correct data, with no duplicates and all sentiments accurately recorded.
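As one possible sanity check (a sketch assuming the same notebook session), you can count duplicate URLs, which should be zero after a Type 1 merge:

# Count URLs that appear more than once; a Type 1 merge should leave none.
dupes = spark.sql("""
    SELECT url, COUNT(*) AS n
    FROM tbl_sentiment_analysis
    GROUP BY url
    HAVING COUNT(*) > 1
""")
print("duplicate urls:", dupes.count())  # expect 0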
9. Building a News Dashboard using Power BI in Microsoft Fabric
In this section, we will create a news dashboard using Power BI within Microsoft Fabric. This dashboard will visualize the news articles and their corresponding sentiment analysis results from the tbl_sentiment_analysis table. Here’s a step-by-step guide on how to build and customize this dashboard.
Step 1: Access the Data in Power BI
Connect Power BI to the Lakehouse Database:
Navigate to the Bing Lake DB database in Microsoft Fabric.
Identify the tbl_sentiment_analysis table, which contains the news articles along with the sentiment analysis results.
2. Create a Semantic Model:
A semantic model in Microsoft Fabric acts similarly to a Power BI dataset. It holds table information from the Lakehouse database, enabling Power BI to connect and build reports.
Create a new semantic model named News Dashboard DataSet and select the tbl_sentiment_analysis table for inclusion.
Step 2: Build the Initial Report
Auto-Create Report:
Microsoft Fabric offers a feature called “Auto Create Report,” which quickly generates an initial report based on your dataset. This is a good starting point for building more customized reports.
Navigate to Power BI within your workspace, select the News Dashboard DataSet, and use the “Auto Create Report” feature to generate a report.
Edit and Customize the Report:
After the report is auto-generated, click the Edit button to modify and add your own visuals.
Create a new page in the report for building your custom visuals.
Step 3: Create and Customize Visuals
Table Visual:
Add a Table visual to display key information such as the news Title, Provider, URL, Category, and Date Published.
Adjust the column widths to ensure all information is clearly visible.
Filter Visual (Slicer):
Add a Slicer visual to filter the table based on the Date Published column. Convert this slicer to a dropdown format for a cleaner look.
Convert URL to Clickable Links:
Go back to the semantic model
Convert the URL column to a web URL format under the Data Category in the column properties.
Refresh your Power BI report to see the URLs as clickable links, allowing users to directly access the full news articles.
Apply Default Filter for Latest News:
Configure the table visual to always display the latest news by default. Use the Top N filtering option to ensure the table shows only the news articles with the most recent Date Published.
Step 4: Create Measures for Sentiment Analysis
Create Measures in the Semantic Model:
In the semantic model, create three DAX measures to calculate the percentage of positive, negative, and neutral sentiments among the news articles.
Use the following DAX code snippets to create the measures:
Negative Sentiment % =
IF(
    COUNTROWS(FILTER(tbl_sentiment_analysis, tbl_sentiment_analysis[sentiment] = "negative")) >= 0,
    DIVIDE(
        CALCULATE(
            COUNTROWS(FILTER(tbl_sentiment_analysis, tbl_sentiment_analysis[sentiment] = "negative"))
        ),
        COUNTROWS(tbl_sentiment_analysis)
    ) * 100,
    0
)
You can create other measures for mixed sentiment and neutral sentiment.
Add Card Visuals for Sentiment Scores:
In the Power BI report, add three Card visuals to display the positive, negative, and neutral sentiment percentages.
Configure each card to use the corresponding measure you created.
Apply the same Top N filter to these card visuals to ensure they display sentiment scores for the latest news articles.
Step 5: Finalize and Save the Report
Save the Report:
Name your report News Dashboard and save it within your workspace.
Ensure that the report is updated with the latest data every time new news articles are ingested.
Review and Test:
Test the functionality of the report by filtering the news articles and ensuring that sentiment scores update accordingly.
Verify that the URLs are clickable and lead to the correct news articles.
10. Setting Up Alerts in Power BI Reports Using Data Activator
In this section, we’ll go through how to set up alerts in Power BI reports using the Data Activator tool in Microsoft Fabric. This will allow you to receive notifications when certain conditions are met in your Power BI visualizations, such as when a sentiment score crosses a specific threshold.
Step 1: Access the Data Activator Tool
Open the Data Activator:
Navigate to the workspace in Microsoft Fabric.
Click on the Power BI icon at the bottom left of the screen.
Select Data Activator from the options.
Step 2: Setting Up Alerts in Power BI
Navigate to Your Power BI Report:
Open the News Dashboard Power BI report that you created.
Go to the specific page where you want to set up the alert (e.g., Page 2 where your custom dashboard is located).
Select a Visual to Monitor:
Identify the visual you want to monitor for alerts. For this example, we’ll use the Neutral Sentiment card visual.
Click on the Edit button in Power BI.
Select the Neutral Sentiment card visual, then click on the three dots (More options) at the top-right of the visual.
Configure the Alert:
Choose Set Alert from the dropdown.
On the right side, configure the alert options:
Visual: Automatically selected based on the visual you clicked.
Measure: Select the measure shown on the card (e.g., Neutral Sentiment %).
Condition: Set the condition for the alert. For example, set it to “becomes less than 50” to trigger an alert when the neutral sentiment percentage drops below 50%.
Notification Type: Choose between Email and Teams. For this project, select Email to receive alerts via email.
Workspace and Reflex: Choose the workspace where this reflex item will be saved (e.g., News Bing). Create a new Reflex item, such as Neutral Sentiment Item, and make sure the Start My Alert checkbox is selected.
Create the Alert:
After verifying your configurations, click Create Alert.
The system will create the alert and link it to a Reflex item in Data Activator.
Step 3: Managing and Monitoring Alerts
View and Manage Alerts:
Once the alert is created, click on View Alert to be redirected to the Data Activator Reflex item.
In the Reflex item, you’ll find tabs like Triggers, Properties, and Events:
Triggers: View the condition that will activate the alert.
Properties: Adjust settings related to the alert.
Events: Monitor all events triggered by this Reflex item.
Modify or Stop Alerts:
If you need to adjust any settings, you can do so directly within the Reflex item.
You can also stop or delete the alert if it is no longer needed.
Step 4: Testing the Alerts
Monitor Alerts in Email:
Once the alert is configured, test it by ingesting new data through your pipeline or waiting for the scheduled pipeline to run.
If the condition set in the alert is met, you will receive a notification by email.
11. Creating an End-to-End Pipeline Using Data Factory in Microsoft Fabric
In this section, we’ll walk through creating an automated pipeline in Data Factory that orchestrates all the tasks we’ve done in the project so far. This pipeline will handle data ingestion, transformation, sentiment analysis, and update the Power BI reports with the latest news data.
Step 1: Review and Enhance the Existing Pipeline
Open the Existing Pipeline:
Start by opening the news_ingestion_pipeline that was previously created to ingest data from the Bing API.
Enhance the Pipeline with Additional Activities:
Add Data Transformation:
Drag a Notebook activity onto the canvas.
Connect the Copy Data activity’s success output to this notebook activity using the On Success connection.
Rename the activity to Data Transformation.
In the Settings tab, select the appropriate workspace and choose the process_bing_news notebook.
Add Sentiment Analysis:
Add another Notebook activity and connect it to the success output of the Data Transformation activity.
Rename this activity to Sentiment Analysis.
In the Settings tab, select the new_sentiment_analysis notebook.
Step 2: Schedule the Pipeline for Daily Execution
1. Schedule the Pipeline:
Click on the Schedule button to set up a daily trigger.
Configure the trigger to repeat every day at a fixed time of your choice.
Set the start date to today and the end date to one year from now.
2. Run the Pipeline Manually:
To test the pipeline, run it manually by providing a search term like “sports” to ingest sports-related news articles.
Verify the pipeline execution by checking each step to ensure it completes successfully.
Step 3: Save and Finalize
Save the Pipeline and Report:
After making all changes, ensure that both the pipeline and the Power BI report are saved.
Monitor and Validate:
Once the pipeline runs on schedule, monitor the report daily to ensure it updates with the latest news and sentiment analysis correctly.
Congratulations on successfully completing this end-to-end project! You’ve covered a comprehensive workflow, integrating multiple components of Microsoft Fabric to achieve a fully automated system for data ingestion, transformation, sentiment analysis, reporting, and alerting.
Key Takeaways:
Data Ingestion: You created a pipeline in Data Factory to ingest news articles from the Bing API, storing them in a Lakehouse database as raw JSON files.
Data Transformation: You processed the raw JSON files into a structured delta table, preparing the data for further analysis.
Sentiment Analysis: Utilizing Synapse Data Science, you performed sentiment analysis on the news articles using a pre-trained Synapse ML model, storing the results in a delta table.
Reporting with Power BI: You built a Power BI dashboard that dynamically displays the latest news articles and their associated sentiments, with a focus on the most recent data.
Alert Configuration with Data Activator: You set up alerts using Data Activator to monitor changes in the Power BI visuals, specifically alerting when the neutral sentiment percentage drops below a set threshold. The alerts were configured to send notifications via email.
End-to-End Testing: You tested the entire pipeline by running it with a new search term (“Mpox”), verifying that the system correctly ingested the data, updated the dashboard, and sent the appropriate alerts.
Final Thoughts:
This project has provided a deep dive into Microsoft Fabric’s capabilities, showcasing how its various components can be integrated to build a robust, automated data processing and reporting solution. By completing this project, you’ve gained valuable hands-on experience that will be incredibly useful in real-world Azure Data Engineering scenarios.
Next Steps:
Expand the Project: You can further enhance this project by adding more features, such as custom machine learning models, additional data sources, or advanced visualizations.
Optimize Performance: Consider exploring ways to optimize the pipeline for performance, especially when dealing with large datasets.
Explore More Features: Microsoft Fabric offers many more features. Delving deeper into its capabilities, like real-time streaming or advanced data governance, could further enhance your skills.
Thank you for following through this comprehensive project. Your dedication and attention to detail will undoubtedly pay off in your future endeavors as a data engineer.
Good luck, and keep learning!
Resources
Microsoft Certified: Fabric Analytics Engineer Associate – Certifications | Microsoft Learn
Query parameters used by News Search APIs – Bing Services | Microsoft Learn
Bing News Search APIs v7 Reference – Bing Services | Microsoft Learn
Explore end-to-end analytics with Microsoft Fabric – Training | Microsoft Learn
Lakehouse end-to-end scenario: overview and architecture – Microsoft Fabric | Microsoft Learn
Microsoft Tech Community – Latest Blogs – Read More
Unique function to return last duplicate value
Hi there,
Is there a way I can use the 'unique' function to return the last duplicate value instead of the first one? Refer to the images attached for clarity. Essentially I'm using
[Sp_Au,ix] = unique(Data.SetPoint, 'stable');
DataE = Data(ix,:);
to retrieve the unique values as per the SetPoint column and save them into 'DataE', but instead of it starting at rows that correspond to the first "10", I want it to retrieve the last "10" row just before it starts increasing. Is there a way to do that?
So from table Data, instead of it taking rows (1,6,7,8,…etc), I want it to export rows (5,6,7,8,9,…etc). So from the last duplicated value. Hope that makes sense. unique, data MATLAB Answers — New Questions
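One way to get the last row of each run of equal set points, rather than the first, is to index on where the value changes; a minimal sketch, assuming SetPoint is a numeric column:
% true at the last row of every run of equal SetPoint values
lastInRun = [diff(Data.SetPoint) ~= 0; true];
DataE = Data(lastInRun, :);   % keeps rows 5,6,7,8,9,... in the example
Unlike unique, this keeps one row per run, so a set point that reappears later in the data is kept again; that matches the "just before it starts increasing" description.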
How to identify duplicate rows between tables
I'm using R2020b, and I want to set up a master table for appending new data to – and as part of this I want to identify any duplicate rows in the new, incoming table to filter them out before appending. Ideally, the master table will live in a related directory in a .mat file, and the new data will be read in directly from a set-name, set-location .csv using e.g.
fullname = fullfile('relativepath','newdata.csv');
% grab column headers from input sheet
opts = detectImportOptions(fullname);
% set all variable types to categorical
opts.VariableTypes(:) = {'categorical'};
% read in new data
T = readtable(fullname,opts);
% make any modifications to new data headers to match old data
T = renamevars(T,"NewLabel","OldLabel");
% clean new table headers to match originally-wizard-imported headers (I'd ask why these exhibit different behaviour, but that's a separate tragedy, and this current fix works – I think)
T.Properties.VariableNames = regexprep(T.Properties.VariableNames, ' ', '');
T.Properties.VariableNames = regexprep(T.Properties.VariableNames, '\(', '');
T.Properties.VariableNames = regexprep(T.Properties.VariableNames, '\)', '');
T.Properties.VariableNames = regexprep(T.Properties.VariableNames, '_', '');
I found the solution suggested here: https://au.mathworks.com/matlabcentral/answers/514921-finding-identical-rows-in-2-tables, but having done a quick test via:
foo = T(str2double(string(T.Year))<1943,:); % not my actual query, but structurally the same; this gave me ~40% of my original data
bar = T(str2double(string(T.Year))>1941,:); % similar, gave me ~70% of the original data
baz = ismember(foo,bar); % similar, gives the overlap for 1 particular year (should be about 14% of my original data)
blah = T(str2double(string(T.Year))==1942,:); % to directly extract the number of rows I am looking for
sum(baz) % What I expect here is the number of rows in the overlap
ans =
0
I found that ismember was not finding any duplicates (which were there by construction).
Note: due to categorical data I actually used T(str2double(string(T.Year))…)
Replacing
baz = ismember(foo,bar,'rows');
sum(baz)
ans =
0
results in the same not finding any duplicates. Using double quotes "rows" does not change the behaviour.
On the other hand, using the function to assess single variables gives the expected behaviour (to some degree):
testest = ismember(foo.var1,bar.var1)
sum(testest)
The sum is now non-zero, and (because single variables are repeated more often than their combinations) gives more like 30% of the original data, which seems reasonable (the number of unique entries in the original set in that variable was about 40% of the total).
I guess I could create a logical index based on the product of multiple calls of this kind, but that seems rather… inefficient… and sensitive to the exact construction of the table/variables used in the filter. I'd rather have a generic solution for full table rows that will be robust if the overall table changes over the long term (or if/when I functionalise the code and use it for other work). Whilst most of the time, a couple of key variables can be used to identify unique rows, occasionally more information is required to distinguish pathological cases. I will probably use this approach if a more elegant solution doesn't appear, though, and put some thought into which groups of variables are 100% correlated (and therefore useless for this distinction) to cut down the Boolean product.
I could also throw good coding practice to the winds and just write two nested loops (one for rows, one for variables) and exhaustively test every combination, but I suspect that would be even less efficient (although I wonder whether the scaling order would be the same given the nature of the comparisons required).
If it is pertinent, I imported all (>25) data columns from a .csv file as categorical variables. The original data before that were a mix of number and general columns from an Excel sheet; I could have used any or all of {double,string,categorical,datetime} to store the various variables, but there are some data which are best stored as categorical to avoid character trimming and consequent data cleaning / returning to original state steps.
Digging further, I also found this: https://au.mathworks.com/matlabcentral/answers/1775400-how-do-i-find-all-indexes-of-duplicate-names-in-a-table-column-then-compare-the-row-values-for-each which appears to imply that ismember should have the functionality I need here.
Similarly, methods using unique (see e.g. https://au.mathworks.com/matlabcentral/answers/1999193-find-duplicated-rows-in-matlab-without-for-loop or https://au.mathworks.com/matlabcentral/answers/1571588-table-find-duplicate-rows-double-char-datetime or https://au.mathworks.com/matlabcentral/answers/305987-identify-duplicate-rows-in-a-matrix) give:
size(unique([foo;bar],'rows'),1) == size(foo,1)+size(bar,1)
ans =
logical
1
instead of the expected 0 due to the lower number of actual full-row matches. (Same for "rows" again.)
I've also looked into outerjoin/join/innerjoin, but those don't seem to remove duplicates like I need. table, ismember, rows, duplicate MATLAB Answers — New Questions
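When ismember on whole tables comes back empty, it is often because the categorical variables differ between the two tables in subtle ways. One generic workaround is to serialise each row to a text key and compare the keys instead; a minimal sketch, assuming every variable is categorical as in the question, with Master as a hypothetical name for the master table:
% build one delimited string key per row of each table
keyNew    = join(string(T{:,:}), '|', 2);      % incoming table
keyMaster = join(string(Master{:,:}), '|', 2); % master table (hypothetical name)
% keep only incoming rows whose full-row key is not already present
isDup  = ismember(keyNew, keyMaster);
Master = [Master; T(~isDup, :)];
Because the comparison runs on strings, it is insensitive to differences in the tables' category lists, at the cost of assuming the delimiter never appears inside a value.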
PSpice MATLAB co-simulation
Can someone help me with this problem?
After I configured the co-simulation environment, I clicked the run button in Simulink, and the MATLAB crash dialog popped up. The crash log file is as follows:
--------------------------------------------------------------------------------
Access violation detected at 2024-08-23 11:04:18 +0800
--------------------------------------------------------------------------------
Configuration:
Crash Decoding : Disabled - No sandbox or build area path
Crash Mode : continue (default)
Default Encoding : UTF-8
Deployed : false
Graphics Driver : Uninitialized hardware
Graphics card 1 : NVIDIA ( 0x10de ) NVIDIA GeForce RTX 2060 Version 27.21.14.6109 (2020-12-31)
Graphics card 2 : Advanced Micro Devices, Inc. ( 0x1002 ) AMD Radeon(TM) Graphics Version 31.0.14046.0 (2023-3-29)
Java Version : Java 1.8.0_202-b08 with Oracle Corporation Java HotSpot(TM) 64-Bit Server VM mixed mode
MATLAB Architecture : win64
MATLAB Entitlement ID : 6257193
MATLAB Root : E:
MATLAB Version : 9.13.0.2049777 (R2022b)
OpenGL : hardware
Operating System : Microsoft Windows 10 Professional
Process ID : 4572
Processor ID : x86 Family 23 Model 96 Stepping 1, AuthenticAMD
Session Key : 48ca539b-11d4-4b26-8160-2f1f574fff65
Window System : Version 10.0 (Build 19044)
Fault Count: 1
Abnormal termination:
Access violation
Current Thread: '' id 17540
Register State (from fault):
RAX = 3e112e0be826d695 RBX = 0000000000000002
RCX = 00000207c4e87d50 RDX = 000000a5b8fff0f0
RSP = 000000a5b8fff0a8 RBP = 000000000000002a
RSI = 00007ffe3b362a58 RDI = 00007ffe3b345d78
R8 = 0000020724ce0110 R9 = 0000000000004000
R10 = 0000000000000000 R11 = 0000000000000246
R12 = 0000000000000001 R13 = 00007ffe3be3e2f0
R14 = 00007ffe3b3ef180 R15 = 000000000000002a
RIP = 00007ffe3be32166 EFL = 00010202
CS = 0033 FS = 0053 GS = 002b
Stack Trace (from fault):
[ 0] 0x00007ffe3be32166 E:\Cadence\SPB_17.2\tools\pspice\slps\psstub.dll+00008550
[ 1] 0x00007ffe3adfa844 E:\Cadence\SPB_17.2\tools\bin\orPSP_ENG64.dll+00501828 pspMatlabEng_eng::operator=+00022116
[ 2] 0x00007ffe3ae31c6a E:\Cadence\SPB_17.2\tools\bin\orPSP_ENG64.dll+00728170 PSpiceNewDevEqDLL+00138858
[ 3] 0x00007ffe3ad8f101 E:\Cadence\SPB_17.2\tools\bin\orPSP_ENG64.dll+00061697 InitializeDevice+00043665
[ 4] 0x00007ffe3af7dc72 E:\Cadence\SPB_17.2\tools\bin\orPSP_ENG64.dll+02088050 setHInstanceExt+00095330
[ 5] 0x00007ffe3adf3171 E:\Cadence\SPB_17.2\tools\bin\orPSP_ENG64.dll+00471409 descSetMinTerminalCount+00108705
[ 6] 0x00007ffe3adf30d9 E:\Cadence\SPB_17.2\tools\bin\orPSP_ENG64.dll+00471257 descSetMinTerminalCount+00108553
[ 7] 0x00007ffe3af592ad E:\Cadence\SPB_17.2\tools\bin\orPSP_ENG64.dll+01938093 PSpiceDoSimulinkAnalysis+00000925
[ 8] 0x00007ffe3be3155a E:\Cadence\SPB_17.2\tools\pspice\slps\psstub.dll+00005466
[ 9] 0x00007ffe3be32584 E:\Cadence\SPB_17.2\tools\pspice\slps\psstub.dll+00009604
[ 10] 0x00007ffe3be73fef E:\bin\win64\MSVCR110.dll+00147439 beginthreadex+00000263
[ 11] 0x00007ffe3be74196 E:\bin\win64\MSVCR110.dll+00147862 endthreadex+00000402
[ 12] 0x00007fff2b4d7034 C:\Windows\System32\KERNEL32.DLL+00094260 BaseThreadInitThunk+00000020
[ 13] 0x00007fff2bf026a1 C:\Windows\SYSTEM32\ntdll.dll+00337569 RtlUserThreadStart+00000033
Program State:
Most Recent Simulink Activity:
playSimulationAction : OK in editor 1 at Fri Aug 23 11:03:58 2024 pspice simulink MATLAB Answers — New Questions
how to change slope of the grid line in logarithmic scale?
x=1:10;
y=exp(.3).*x.^(1); % equation 1
loglog(x,y);
grid on
I want the y-axis grid lines to follow the slope of equation 1. grid line, slope, logarithmic MATLAB Answers — New Questions
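Axes grid lines are always parallel to the axes, so a "sloped grid" on a log-log plot has to be drawn as a family of reference lines sharing the slope of equation 1; a minimal sketch:
x = 1:10;
y = exp(.3).*x.^(1); % equation 1
loglog(x, y, 'LineWidth', 1.5)
hold on
% lines y = c*x all have the same unit slope on log-log axes
for c = logspace(-1, 1, 7)   % assumed range of offsets
    loglog(x, c.*x, ':', 'Color', [0.65 0.65 0.65], 'HandleVisibility', 'off')
end
hold off
grid on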
How to generate a triggered pulse using Simulink
I want to generate a pulse using Simulink when triggered. The output signal is initially 0, then becomes 1 when triggered for a specified time, then -1 for the same amount of time, then back to 0 and hold until triggered again. How can I do this using Simulink? trigger pulse MATLAB Answers — New Questions
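One way is a MATLAB Function block fed by the trigger signal and a Clock block; a minimal sketch of the block body, where the pulse duration T, the port names, and the trigger convention (nonzero = triggered) are all assumptions:
function y = triggeredPulse(trig, t)
% Emits 1 for T seconds after a trigger, then -1 for T seconds,
% then holds 0 until the next trigger.
T = 0.5;                        % assumed duration of each pulse level, seconds
persistent t0 active
if isempty(t0), t0 = -inf; active = false; end
if trig > 0 && ~active          % a trigger starts a new pulse
    t0 = t;
    active = true;
end
if active && (t - t0 < T)
    y = 1;
elseif active && (t - t0 < 2*T)
    y = -1;
else
    y = 0;
    active = false;             % pulse finished; wait for the next trigger
end
end
A Stateflow chart or a Triggered Subsystem with counters would work just as well; this only sketches the logic.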
figure resize behavior control: from bottom or from top
Hi, I encountered a problem when I resized a figure. The aim of resizing is to manually drag the top/bottom boundary to adjust the panel size, in order to hide the labels and show only the buttons. However, in reality, no matter where I drag, it is the top space that gets hidden, while the space between the labels and the bottom remains constant. I can only hide the buttons, or hide the buttons and labels together, but I cannot find a way to resize while keeping the bottom fixed. Is there a way to reverse the direction of resizing?
fig = figure('Name','GUI with Buttons and Labels','NumberTitle','off','Position',[100 100 400 300]);
% Create buttons
btn1 = uicontrol(fig,'Style','pushbutton','String','Button 1','Position',[50 200 100 30]);
btn2 = uicontrol(fig,'Style','pushbutton','String','Button 2','Position',[150 200 100 30]);
btn3 = uicontrol(fig,'Style','pushbutton','String','Button 3','Position',[250 200 100 30]);
% Create labels
lbl1 = uicontrol(fig,'Style','text','String','Label 1','Position',[50 150 100 30]);
lbl2 = uicontrol(fig,'Style','text','String','Label 2','Position',[150 150 100 30]);
lbl3 = uicontrol(fig,'Style','text','String','Label 3','Position',[250 150 100 30]); figure, resize MATLAB Answers — New Questions
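The behaviour comes from uicontrol positions being measured from the figure's bottom-left corner, so shrinking the window clips the top first. Re-anchoring the controls to the top edge in a SizeChangedFcn reverses this, making the bottom content disappear first; a minimal sketch matching the layout above:
fig.SizeChangedFcn = @anchorToTop;

function anchorToTop(fig, ~)
h = fig.Position(4);                        % current figure height in pixels
btns = findobj(fig, 'Style', 'pushbutton');
lbls = findobj(fig, 'Style', 'text');
for b = btns'                               % buttons stay 100 px below the top
    b.Position(2) = h - 100;
end
for l = lbls'                               % labels stay 150 px below the top
    l.Position(2) = h - 150;
end
end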
Automating a process – automatically populating cells
Hi Everyone,
I'm trying to automate a process where, using the data from an Excel file, I have created a Power BI dashboard.
Long story short, I have one column, "Column 5", which copies its data from another column, "Column 2". I'm using the following formula: =IF(G2<>"", G2, "")
All the data in the Excel file comes through an automated flow: when a form is submitted through Microsoft Forms, it populates the fields. The issue is that when there is a new answer, Column 5 does not copy the data automatically from Column 2; I have to drag the formula down manually every time. If I enter a new answer manually, then it copies automatically.
Is there a way that Column 5 can automatically copy the data from Column 2 without manually dragging the formula? Read More
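Two common fixes: format the data range as an Excel Table, since formulas in a Table column auto-fill for rows a flow appends, or enter the formula once as a dynamic-array spill so nothing needs dragging. A sketch of the spill version, assuming the answers land in G2:G1000:
=IF(G2:G1000<>"", G2:G1000, "")
With a Table, the structured-reference equivalent (Column2 as a hypothetical column name) is =IF([@Column2]<>"", [@Column2], "").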