Month: June 2024
Formula not inserting blank; instead it defaults to 12:00:00 AM
I have a spreadsheet pulling data from a couple of cells, and the issue I'm having is that when the Ship time is blank, my formula puts in a default 12 AM timestamp instead of just showing a blank cell.
The formula I'm having trouble with is in the "Live Load" tab, column K:
=IFNA(IF($B$2="","",XLOOKUP($B25,'Lavern File'!$B:$B,'Lavern File'!$K:$K)),"")
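For context, XLOOKUP returns 0 when the matched cell is empty, and a 0 in a time-formatted cell displays as 12:00:00 AM. One common workaround (a sketch, assuming a version of Excel that supports LET) is to test the lookup result before returning it:
=IFNA(IF($B$2="","",LET(t,XLOOKUP($B25,'Lavern File'!$B:$B,'Lavern File'!$K:$K),IF(t=0,"",t))),"")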
SQL Server installation error on Windows 11 Pro system
Hi All,
I'm facing a decimal error code while installing SQL Server 2022 (Developer edition) on my local system. I tried a few of the suggested methods: 1. removing all components of the previously installed instance; 2. running the setup.exe file with admin access; 3. turning off the firewall. None of these gave the expected results. It would be very helpful if someone could support me. Thanks in advance.
ERROR - exit code (decimal): 2068052377. Error description: Invalid command line argument. Consult the Windows Installer SDK for detailed command line help.
Regards,
Manu
Beginner question: permissions for sending message to a shared channel
Very new to working with APIs, apologies in advance for any amateurishness.
I’m building a Power Automate Flow, and I have a step where I need to send a message to a shared channel. Power Automate doesn’t have an action for shared channels, so I would like to try the “Send a Microsoft Graph HTTP request” action.
I’ve written exactly 1 (one) HTTP request flow, so I know the absolute barest minimum.
I’m trying it out in the Graph explorer, but I immediately got the message:
"Missing scope permissions on the request. API requires one of 'ChannelMessage.Send, Group.ReadWrite.All'. Scopes on the request: 'openid, profile, User.Read, email'."
I understand our admin needs to grant some sort of permissions, but I'm not sure what I should ask them for.
I found a few similar forum threads, but they mentioned registering an app on the Azure AD portal and granting the permissions, and I’m not sure if a Power Automate flow works the same way.
Thank you in advance!
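For reference, a channel message is posted with a Graph request shaped like the following (the team and channel IDs are placeholders, and the delegated ChannelMessage.Send permission must be consented for it to work):
POST https://graph.microsoft.com/v1.0/teams/{team-id}/channels/{channel-id}/messages
Content-Type: application/json
{
  "body": {
    "content": "Hello from my flow"
  }
}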
PostgreSQL for your AI app’s backend | Azure Database for PostgreSQL Flexible Server
Use PostgreSQL as a managed service on Azure. As you build generative AI apps, explore advantages of Azure Database for PostgreSQL Flexible Server such as integration with Azure AI services, as well as extensibility and compatibility, integrated enterprise capabilities to protect data, and controls for managing business continuity.
Charles Feddersen, product team lead for PostgreSQL on Azure, joins host Jeremy Chapman to share how Flexible Server is a complete PostgreSQL platform for enterprise and developers.
Generate vector embeddings for data and images.
Enhance search accuracy and semantic matches. Watch how to use the Azure AI extension with Azure Database for PostgreSQL here.
Leverage the Azure AI extension.
Calculate sentiment and show a summarization of reviews using PostgreSQL. See it here.
Simplify disaster recovery for enterprise apps.
Achieve multi-zone high availability, zero data loss, and planned failover with GeoDR.
QUICK LINKS:
00:00 — Azure Database for PostgreSQL Flexible Server
00:51 — Microsoft and PostgreSQL
01:40 — Open-source PostgreSQL
03:18 — Vector embeddings for data
04:32 — How it works with an app
06:59 — Azure AI Vision
08:14 — Azure AI extension using PostgreSQL
09:37 — Text generation using Azure AI extension
10:30 — High availability and disaster recovery
12:45 — Wrap up
Link References
Get started with the Azure Database for PostgreSQL flexible server at https://aka.ms/postgresql
Stay current with all the updates at https://aka.ms/AzurePostgresBlog
Unfamiliar with Microsoft Mechanics?
As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast
Keep getting this insider knowledge, join us on social:
Follow us on Twitter: https://twitter.com/MSFTMechanics
Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
Video Transcript:
– Postgres is one of the most popular open-source databases in use today, and with its built-in vector index, plays a vital role in powering natural language generative AI experiences by searching across billions of data points to find similarity matches to support the generation of more accurate responses. But did you know that you can also use Postgres as a managed service on Azure? Today, in fact, as you build generative AI apps, we're going to explore Azure Database for Postgres Flexible Server and its unique advantages, such as integration with Azure AI services, as well as extensibility and compatibility, integrated enterprise capabilities to protect your data, controls for managing business continuity and more. And to walk us through all this, I'm joined, once again, by Charles Feddersen, who leads the product team for Postgres on Azure. Welcome back to the show.
– Thanks for having me back, Jeremy. It’s great to be here.
– And it's great to have you back on. You know, before we get into this, it's probably worth explaining what Microsoft's role is as part of the Postgres community. We're not just putting an instance of Postgres on Azure, right?
– Yeah, what a lot of people don't realize actually is Microsoft is a really significant contributor to Postgres, both major contributions in open-source Postgres and the surrounding ecosystem of features. We've contributed to many of the features that you're probably using every day in Postgres, which include optimizations that speed up queries over highly partitioned tables. Perhaps the single largest contribution we're making to Postgres is to enable asynchronous and direct I/O for more efficient read and write operations in the database. We've learned a lot from running really demanding Postgres workloads in Azure, and this has inspired many of the performance optimizations that we've contributed upstream to open-source Postgres, so that everybody benefits.
– So given the pace of innovation then for the open-source community with Postgres, how do we make sure that, on Azure, we’ve got all the features and that they’re compatible with Azure Database for Postgres?
– Well, the first thing I really want to emphasize is that it's pure open-source Postgres, and that's by design. This means you can run normal tools like pgAdmin, as you can see here, and there's a really high level of compatibility with Postgres throughout the stack. We ship new major versions of Postgres on Azure within weeks of the community release, which lets you test the latest features really quickly. Flexible Server supports over 60 of the most common extensions, including PostGIS for geospatial workloads and Postgres FDW, which allows you to access data in external Postgres servers. It also supports a great community-built extension called pgvector that enables Postgres to store, index, and query embeddings. And last year, we added the Azure AI extension, which provides direct integration between Postgres and the Azure OpenAI Service to generate vector embeddings from your data. It also enables you to hook into capabilities like sentiment analysis, summarization, language detection and more. In fact, Azure AI support for Postgres is a major advantage of running Postgres on Azure. And this is in addition to several enterprise capabilities, such as built-in support for Microsoft Entra identity and access management, as well as broader security controls, like networking over private endpoints to better protect your data in transit, along with Key Vault encryption using your own keys, including managed hardware security modules, or HSMs, and more.
– Right, and this means basically that your Postgres implementation is natively integrated with your security policies for enterprise workloads, but you also mentioned that AI is a major benefit here in terms of Postgres on flexible server in Azure. So can you show us or walk through an example?
– Sure. Let me walk you through one using a travel website where the Azure AI extension has been used to generate vector embeddings for the site's data. This also works for images, where we can use the Azure AI Vision service to convert images to text and vectorize that information, all of which is stored and indexed in Postgres Flexible Server. If you're new to vectors, they're a coordinate-like way to refer to chunks of data in your database, used to search for semantic matches. So when users submit natural language searches, those, too, are converted into vector embeddings. And unlike traditional keyword searches, similarity lookups find the closest semantic meaning between the vector embeddings from the user's prompt and the embeddings stored in the database. Additionally, the travel website uses Azure OpenAI's GPT large language model itself to generate natural language responses using the data presented from Postgres as its context. So let's try this out with a real app. Here's our travel website and I'm going to book a much-needed vacation. So I'll search for San Diego and I've got over 120 accommodation options that I need to scroll through or filter. But now, I'm also traveling with my dog Mabel as well. So I need to find places where she can also stay. I'm going to add "allow small dogs" to my search and this is going to use semantic search with embeddings to find suitable accommodations. And now, we're down to about 90 results. So let's look at the code to see how this works. Now, to perform the semantic similarity searches, we first need to generate text embeddings stored in a vector type in Postgres. I'll create a new generated column of type vector and name it lodging_embedding. This is going to store the text embeddings in our lodgings table that are based on the text descriptions column. Every time a new record is inserted, the Azure AI extension will call the OpenAI embedding model ada-002, pass the description text, and return the embedding to be stored. So I'll run that query and now I'll add an index to this new column to improve query efficiency. This is a special vector index called HNSW; it's not your regular B-tree. So I'll run that and now we can do a test query against the embeddings. I'll switch to the vector similarity tab. This query does a couple of interesting things. If you look at the ORDER BY clause, you can see that we're ordering by the result of the comparison between the lodging_embedding column and the embedding we dynamically created from the search term to find the best result for "allow small dogs". Now, we're also using the PostGIS extension to add geospatial capabilities to find relevant lodging within 30 miles of a point of interest in San Diego. So I'll run this query and you can see the top six results within 30 miles of a point of interest, ranked in order of the best semantic results for my small dog.
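The queries narrated here follow this general shape (an illustrative sketch; table, column, and deployment names are placeholders, and exact azure_ai function signatures may vary by extension version):
-- Generated column that stores an embedding of each description
ALTER TABLE lodgings
  ADD COLUMN lodging_embedding vector(1536)
  GENERATED ALWAYS AS (azure_openai.create_embeddings('ada-002', description)::vector) STORED;
-- HNSW index for fast approximate nearest-neighbor lookups
CREATE INDEX ON lodgings USING hnsw (lodging_embedding vector_cosine_ops);
-- Rank rows by similarity to the embedded search phrase
SELECT name
FROM lodgings
ORDER BY lodging_embedding <=> azure_openai.create_embeddings('ada-002', 'allow small dogs')::vector
LIMIT 6;
Depending on the extension version, the embedding column may instead be populated by a trigger or default rather than a generated column.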
– So I get it, instead of creating another table or database, what you’re showing here is actually that Postgres provides a native type for embedding, so that you can actually incorporate your semantic search into your existing relational SQL workload.
– Exactly, and that's the power of it. You don't need a different database to handle embeddings. If you've got any existing Postgres apps, adding embeddings and semantic search in Flexible Server is as easy as adding a column and running a SQL function to call the Azure OpenAI service. So let's go back to our hotel booking example. We also want to book a room with a beach view. I'll add that to the search, and how this works, as I'm going to show you next, is really cool. So I'll head back over to a notebook and I've got one of the images from a property listing. Let's take a look at the notebook cell. I can use the Azure AI Vision service to extract the embeddings from this image. And if I run this, you can see the embedding has been created and I can go ahead and store that in Postgres as well. And if we check our app again, you can see that we're doing a text search for beach view, which is actually returning property images with a beach visible from the rooms. And the results are further refined with the suitability for my small dog. And as we can see on the left, it's in the right distance range, within 30 miles of San Diego, which we've specified using geospatial in Postgres. And the amazing thing is we do it all with open text search, which is infinitely flexible, and not predefined filters. So I don't need to hunt around for that often-hidden pets-allowed filter.
– And the neat thing here is, as you mentioned, all of this is happening at the database layer, because we’ve actually converted all the text and all the images into vector embeddings, as part of data ingest and that’s all using Azure AI services.
– That's right. That's exactly right. And next, I'll show you how you can make the search experience even richer by bringing in Azure AI to summarize reviews and measure sentiment on a property. One of the most time-consuming parts of finding a great place to stay is reading the reviews. Here, we can use the Azure AI extension to calculate the sentiment and show a summarization of the reviews using Postgres. This is the output for the Coastal View Cottage, with a 98% favorable sentiment and a summary of reviews. So let's take a look at the code. In this query, you can see we're calling the azure_cognitive.analyze_sentiment function and passing the review_text that we want to score. I'll run that and here you can see a positive sentiment of 98% returned. Now I'll switch to the summary example. It's a similar query pattern, except this time, we're using the summarize_abstractive function to summarize the reviews into a small amount of easily-consumable text. So I'll run this query, and here, you can see that summarized text.
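The sentiment and summarization calls follow a similar shape (again an illustrative sketch with placeholder names; exact signatures may vary):
-- Sentiment score for the review text
SELECT azure_cognitive.analyze_sentiment(review_text, 'en') FROM reviews;
-- Abstractive summary of the review text
SELECT azure_cognitive.summarize_abstractive(review_text, 'en') FROM reviews;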
– Right, and what you’ve shown here is more than just using embeddings, but also how the database can leverage other Azure capabilities to improve your app.
– That’s right. I’ve shown SQL queries that are returning results directly from the AI services, but alternatively, you could return those and store them in Postgres to reuse later. It’s really up to you, as a developer, about how you want to architect your app. Flexible server with the Azure AI extension just makes it easy to do it all using SQL. Now let’s move on to text generation, which is another area where we can use the Azure AI extension. I’m back in the website and I’ve selected the Coastal View Cottage for my stay. On the right, I can ask a freeform question about the property, but I’ve got a suggested prompt to look for hidden fees. These always seem to get me. So here, we’re using the Davinci model in the Azure OpenAI service to generate a response and it’s found a hidden fee buried in the fine print. So moving back to VS Code, I’ll run another query with the hidden fees prompt and I’ll capture those results. Now that I have the relevant context from the database, I’ll pass that to the Azure OpenAI Service Completion API and the prebuilt Davinci model to compose a response based on the results I took from the database. And this is how everything works.
– And this is a really great example of harnessing all of the AI capabilities. But something else that’s really important for an enterprise app is high availability and also disaster recovery.
– It is, and Flexible Server has all of those covered as well. This includes multi-zone high availability with zero data loss, geo-redundant backups across regions, and recently we announced the general availability of planned failover with GeoDR. Here's how you can configure that. I'm going to start in the portal on the Overview blade, and you can see I've got the Postgres flexible server called geodr running in the East US 2 region. I'll scroll down on the left nav panel and head over to Replication, where I've got two options: either create an endpoint, or create a read replica. Let's create the read replica first. I'll enter the replica server name and I'll go create that in Australia Southeast, because that's pretty much as far from East US 2 as you can get. I'll click Review and create, and that's now submitted. Once the replica is created on the other side of the planet, I need to create a virtual endpoint, which gives me a single endpoint for my application, so that when I do fail over, I don't need to make any application changes to update connection strings. This time, I'll create an endpoint. I'll head over to the right panel and give it a name, geodrvip, and you can see that the name has been appended to each of the writer and reader endpoint names below. And the reader server is the replica I just created. I'll hit Create. And now, you can see I've got my virtual endpoint. So let's test the failover using promotion. I'll click the small Promote icon next to my replica server name. Now I've got some options. I can either promote this to the primary server, which means I reverse the roles of my two servers, so that the replica becomes the writer, and the current writer becomes the replica. Or alternatively, I can promote this server to standalone. I can also select whether this is Planned, which means all data is synchronized to the replica prior to failover, or Forced, which executes immediately and doesn't wait for the asynchronous replication to finish. I'll leave everything as is and I'll click Promote. And now, once this is finished, my geodr server that was the primary is now the replica under the reader endpoint and geodrausse is now the primary.
– Okay, so now you’ve got all your enterprise-grade data protections in place. You’ve got native vector search support and also GenAI capabilities for your apps, all powered by Postgres flexible server on Azure on the backend. So what’s next?
– So I’ve shown you how Flexible Server is a complete Postgres platform for enterprise and developers, and it’s only going to get better. We’ve got really big plans for the future, so stay tuned.
– So for everyone who’s watching right now, what do you recommend for them to get started?
– So to get started with the Azure Database for Postgres flexible server, go to aka.ms/postgresql, and to stay current with all the updates that we’re constantly shipping, check out our blog at aka.ms/AzurePostgresBlog.
– Thanks so much for joining us today, Charles. Always great to have you on to share all the updates to Postgres. Looking forward to having you back on the show. Of course, keep checking back to Microsoft Mechanics. We’ll see you next time and thanks for watching.
Simulink Run button is missing?
I am using MATLAB R2023b and learning Simulink, and I am having difficulty finding the Run button. Any solution, please? Your kind help will be appreciated.
Multicomponent gas separation in hollow fiber membranes
Hi!
I'm trying to model gas separation of a gas with 4 components (H2, N2, NH3 and O2) using a hollow fiber membrane. In papers I found the following equations:
[Equation images not reproduced: the change in the retentate flow rate; the change in the mole fraction of component i in the retentate; the change in the mole fraction of component i in the permeate.]
I am solving these equations by first using the backward finite difference method, which turns the first two equations into discretized form (equation image not reproduced):
Next, I am trying to use Newton’s method to solve all 9 equations together. This is my code:
%% Gas separation in a cocurrent hollow fiber membrane
thickness_membrane = 1e-6; % thickness membrane [m]
Perm_H2 = 250*3.35e-16; % Permeability H2 [mol m/m2 s Pa]
Perm_N2 = 9*3.35e-16; % Permeability N2 [mol m/m2 s Pa]
Perm_NH3 = 830*3.35e-16; % Permeability NH3 [mol m/m2 s Pa]
Perm_O2 = 37*3.35e-16; % Permeability O2 [mol m/m2 s Pa]
R=8.314;
T = 273.15+25; % correlation temperature [K]
Per_H2 = Perm_H2/thickness_membrane; % Permeance of H2 [mol/m2 s Pa]
Per_N2 = Perm_N2/thickness_membrane; % Permeance of N2 [mol/m2 s Pa]
Per_NH3 = Perm_NH3/thickness_membrane; % Permeance of NH3 [mol/m2 s Pa]
Per_O2 = Perm_O2/thickness_membrane; % Permeance of O2 [mol/m2 s Pa]
%% Input parameters
F_feed = 1133.95/3.6; % feed [mol/s]
x_H2_F = 0.0268; % [-]
x_N2_F =0.9682; % [-]
x_NH3_F = 0.0049; % [-]
x_O2_F = 0.0001;
P_F = 35e5; % pressure feed [Pa]
P_P = 1e5; % pressure Permeate [Pa]
%% Assumptions membrane
L_fiber = 10;
r_fiber = (0.75e-3)/2;
n_fiber = 4000;
% Define the mesh
step= 80;
mesh = 0:(L_fiber/step):L_fiber; % linear space vector
nmesh= numel(mesh);
h = mesh(2)-mesh(1);
visc_H2 = 88e-6;
visc_N2 = 17.82e-6; % [Pa s]
visc_NH3 = 9.9e-6;
visc_O2 = 20.4e-6;
mu = visc_N2; % most present
syms xH2 yH2 Ff xN2 yN2 xNH3 yNH3 xO2 yO2
variables = [ Ff; xH2; xN2 ; xNH3; xO2; yH2; yN2; yNH3; yO2];
Final_results = zeros(10,10,numel(mesh));
xH2prev = x_H2_F;
xN2prev = x_N2_F;
xNH3prev = x_NH3_F;
xO2prev = x_O2_F;
%P_Pprev= P_P;
Ffprev = F_feed;
BETA = P_P/P_F;
for z = 1:nmesh
%% Equations to solve making use of the backward difference method
eq1 = (Ff-Ffprev)/h + 2*3.14*r_fiber*L_fiber*n_fiber*((Per_H2*(xH2*P_F + yH2*P_P)) + (Per_NH3*(xNH3*P_F + yNH3*P_P)) + (Per_N2*(xN2*P_F + yN2*P_P)) + (Per_O2*(xO2*P_F + yO2*P_P)) );
eq2 = (xH2- xH2prev)/h - 1/Ff * (2*3.14*r_fiber*L_fiber*n_fiber*Per_H2*(xH2*P_F + yH2*P_P) + xH2*(Ff-Ffprev)/h);
eq3 = (xN2- xN2prev)/h - 1/Ff * (2*3.14*r_fiber*L_fiber*n_fiber*Per_N2*(xN2*P_F + yN2*P_P) + xN2*(Ff-Ffprev)/h);
eq4 = (xNH3- xNH3prev)/h - 1/Ff * (2*3.14*r_fiber*L_fiber*n_fiber*Per_NH3*(xNH3*P_F + yNH3*P_P) + xNH3*(Ff-Ffprev)/h);
eq5 = (xO2- xO2prev)/h - 1/Ff * (2*3.14*r_fiber*L_fiber*n_fiber*Per_O2*(xO2*P_F + yO2*P_P) + xO2*(Ff-Ffprev)/h);
eq6 = yH2 – (Per_H2*xH2 * ((yH2/Per_H2) + (yN2/Per_N2) + (yNH3/Per_NH3) + (yO2/Per_O2) ) / (1-BETA + BETA*Per_H2*((yH2/Per_H2) + (yN2/Per_N2) + (yNH3/Per_NH3) + (yO2/Per_O2) )));
eq7 = yN2 – (Per_N2*xN2 * ((yH2/Per_H2) + (yN2/Per_N2) + (yNH3/Per_NH3) + (yO2/Per_O2) ) / (1-BETA + BETA*Per_N2*((yH2/Per_H2) + (yN2/Per_N2) + (yNH3/Per_NH3) + (yO2/Per_O2) )));
eq8 = yNH3 – (Per_NH3*xNH3 * ((yH2/Per_H2) + (yN2/Per_N2) + (yNH3/Per_NH3) + (yO2/Per_O2) ) / (1-BETA + BETA*Per_NH3*((yH2/Per_H2) + (yN2/Per_N2) + (yNH3/Per_NH3) + (yO2/Per_O2) )));
eq9 = yO2 – (Per_O2*xO2 * ((yH2/Per_H2) + (yN2/Per_N2) + (yNH3/Per_NH3) + (yO2/Per_O2) ) / (1-BETA + BETA*Per_O2*((yH2/Per_H2) + (yN2/Per_N2) + (yNH3/Per_NH3) + (yO2/Per_O2) )));
%eq10 = ((Pp - Ppprev)/h) - ( (8*R*T*mu*(F_feed-Ff))/ (3.14*r_fiber^4*n_fiber*Pp));
F = [eq1;eq2;eq3;eq4;eq5;eq6;eq7;eq8;eq9];
J=jacobian([eq1,eq2,eq3,eq4,eq5,eq6,eq7,eq8,eq9],variables);
x_0 = [10 ;0.5; 0.001; 0.1; 0.0001; 0.3; 0.01; 1; 0.001];
Final_results(1,1:9,1) = x_0';
iterations = 100;
for iter=1:iterations
%% Newton
% Substitute the initial values into the Jacobian matrix
J_subs = subs(J, variables, x_0);
F_subs = subs(F,variables,x_0);
aug_matrix = [J_subs, -F_subs];
n= 9;
for i = 1:n-1
for j = i+1:n
factor = aug_matrix(j, i) / aug_matrix(i, i);
aug_matrix(j, :) = aug_matrix(j, :) - factor * aug_matrix(i, :);
end
end
y_0 = zeros(n, 1);
y_0(n) = aug_matrix(n, n+1) / aug_matrix(n,n);
for i = n-1:-1:1
y_0(i) = (aug_matrix(i, n+1) - aug_matrix(i, i+1:n) * y_0(i+1:n)) / aug_matrix(i, i);
end
x_result = y_0 + x_0;
Final_results(iter+1,1:9,z) = x_result';
err = norm(x_result - x_0);
Final_results(iter+1,10,z) = err;
x_0 = x_result;
if err <= 1e-8
%P_Pprev = x_result(10);
disp(['Converged after ', num2str(iter), ' iterations']);
break;
end
end
xH2prev = x_0(2);
xN2prev = x_0(3);
xNH3prev = x_0(4);
xO2prev = x_0(5);
Ffprev = x_0(1);
end
solution_Ff_Per_step = [];
solution_xH2_Per_step = [];
solution_xN2_Per_step = [];
solution_xNH3_Per_step = [];
solution_xO2_Per_step = [];
solution_yH2_Per_step = [];
solution_yN2_Per_step = [];
solution_yNH3_Per_step = [];
solution_yO2_Per_step = [];
solution_P_P_Per_step = [];
for m=1:numel(mesh)
solution_Ff_Per_step = [Final_results(end,1,m) solution_Ff_Per_step];
solution_xH2_Per_step = [Final_results(end,2,m) solution_xH2_Per_step];
solution_xN2_Per_step = [Final_results(end,3,m) solution_xN2_Per_step];
solution_xNH3_Per_step = [Final_results(end,4,m) solution_xNH3_Per_step];
solution_xO2_Per_step = [Final_results(end,5,m) solution_xO2_Per_step];
solution_yH2_Per_step = [Final_results(end,6,m) solution_yH2_Per_step];
solution_yN2_Per_step = [Final_results(end,7,m) solution_yN2_Per_step];
solution_yNH3_Per_step = [Final_results(end,8,m) solution_yNH3_Per_step];
solution_yO2_Per_step = [Final_results(end,9,m) solution_yO2_Per_step];
%solution_P_P_Per_step = [Final_results(end,10,m) solution_P_P_Per_step];
end
figure(1)
plot(mesh,solution_Ff_Per_step)
xlabel('Distance along membrane module [m]')
ylabel('Flow rate retentate [mol/s]')
figure(2)
plot(mesh,solution_xH2_Per_step)
xlabel('Distance along membrane module [m]')
ylabel('Mole fraction H_{2} in retentate')
figure(3)
plot(mesh,solution_xN2_Per_step)
xlabel('Distance along membrane module [m]')
ylabel('Mole fraction N_{2} in retentate')
figure(4)
plot(mesh,solution_xNH3_Per_step)
xlabel('Distance along membrane module [m]')
ylabel('Mole fraction NH_{3} in retentate')
figure(5)
plot(mesh,solution_xO2_Per_step)
xlabel('Distance along membrane module [m]')
ylabel('Mole fraction O_{2} in retentate')
figure(6)
plot(mesh,solution_yH2_Per_step)
xlabel('Distance along membrane module [m]')
ylabel('Mole fraction H_{2} in permeate')
figure(7)
plot(mesh,solution_yN2_Per_step)
xlabel('Distance along membrane module [m]')
ylabel('Mole fraction N_{2} in permeate')
figure(8)
plot(mesh,solution_yNH3_Per_step)
xlabel('Distance along membrane module [m]')
ylabel('Mole fraction NH_{3} in permeate')
figure(9)
plot(mesh,solution_yO2_Per_step)
xlabel('Distance along membrane module [m]')
ylabel('Mole fraction O_{2} in permeate')
figure(10)
% plot(mesh,solution_P_P_Per_step)
xlabel('Distance along membrane module [m]')
ylabel('Permeate pressure [bar]')
For some reason, when the method converges it always returns the initial values of xH2prev, xNH3prev, xO2prev, xN2prev, and Ffprev, meaning that there is no change in mole fractions along the membrane. When I try to change the initial conditions, the graphs show spikes in random positions whilst being 0 in other positions along the membrane. This should not be the case. Could you please help me figure out what went wrong in my code?
Data to train RL agent (PPO)
I have 2 arrays which are 8001×2 in size; one is the input array and the other is the output array.
Now, can I use these two arrays to train my RL agent (a PPO agent)?
I saw the example of using data to train an RL agent on the MathWorks site, but their data contains states, actions, rewards, and all the other information as well. Is it not possible to train my RL agent with just the input and output arrays?
Evaluating the quality of AI document data extraction with small and large language models
Evaluating the effectiveness of AI models in document data extraction. Comparing accuracy, speed, and cost-effectiveness between Small and Large Language Models (SLMs and LLMs).
Context
As the adoption of AI in solutions increases, technical decision-makers face challenges in selecting the most effective approach for document data extraction. Ensuring high quality is crucial, particularly when dealing with critical solutions where minor errors have substantial consequences. As the volume of documents increases, it becomes essential to choose solutions that can scale efficiently without compromising performance.
This article evaluates AI document data extraction techniques using Small Language Models (SLMs) and Large Language Models (LLMs), with a specific focus on structured and unstructured data scenarios.
By evaluating models, the article provides insights into their accuracy, speed, and cost-efficiency for quality data extraction. It offers guidance both on evaluating models and on the quality of the outputs from models for specific scenarios.
Key challenges of effective document data extraction
With many AI models available to ISVs and Digital Natives, it can be challenging to determine which technique is the most effective for quality document data extraction. When evaluating the quality of AI models, key challenges include:
Ensuring high accuracy and reliability. High accuracy is crucial, especially for critical applications such as legal or financial documents. Minor errors in data extraction could lead to significant issues. Additionally, robust data validation mechanisms verify the data and minimize false positives and negatives.
Getting results in a timely manner. As the volume of documents increases, the selected approach must scale efficiently to handle large document quantities without significant impact. Balancing the need for fast processing speeds with maintaining high accuracy levels is challenging.
Balancing cost with accuracy and efficiency. Ensuring high accuracy and efficiency often requires the most advanced AI models, which can be expensive. Evaluating AI models and techniques highlights the most cost-effective solution without compromising on the quality of the data extraction.
When choosing an AI model for document data extraction on Azure, there is no one-size-fits-all solution. Depending on the scenario, one model may outperform another on accuracy at the sacrifice of cost, while another may provide sufficient accuracy at a much lower cost.
Establishing evaluation techniques for AI models in document data extraction
When evaluating AI models for document data extraction, it’s important to understand how they perform for specific use cases. This evaluation focused on structured and unstructured scenarios to provide insights into simple and complex document structures.
Evaluation Scenarios
Structured Data: Invoices
Simple: A 2-page invoice, including returns, with a clear table structure, well-defined columns, handwritten signatures, and typed text.
Complex: A 2-page invoice with a grid-based layout, handwritten signatures, overlapping content, and handwritten notes spanning multiple rows.
Unstructured Data: Vehicle Insurance
A 13-page vehicle insurance document containing both structured data in initial pages, and natural, domain-specific language on subsequent pages. This scenario focuses on extracting data by combining structured data with the natural language throughout the document.
Models and Techniques
This evaluation focused on two techniques for data extraction with the language models:
Markdown Extraction with Azure AI Document Intelligence. This technique involves converting the document into Markdown using the pre-built layout model in Azure AI Document Intelligence; a brief sketch follows this list. Read more about this technique in our detailed article.
Vision Capabilities of Multi-Modal Language Models. This technique focuses on GPT-4 Turbo and Omni models by converting the document pages to images. This leverages the models’ capabilities to analyze both text and visual elements. Explore this technique in more detail in our sample project.
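As a minimal sketch of the Markdown technique, assuming the azure-ai-documentintelligence Python SDK (the endpoint, key, and file name below are placeholders, and parameter names may differ between SDK versions):
from azure.ai.documentintelligence import DocumentIntelligenceClient
from azure.core.credentials import AzureKeyCredential

# Analyze a local file with the prebuilt layout model and request Markdown output.
client = DocumentIntelligenceClient("https://<resource>.cognitiveservices.azure.com/", AzureKeyCredential("<key>"))
with open("invoice.pdf", "rb") as f:
    poller = client.begin_analyze_document(
        "prebuilt-layout",
        f,
        content_type="application/octet-stream",
        output_content_format="markdown",
    )
markdown = poller.result().content  # grounding text passed to the language model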
For each technique, the model is prompted using a one-shot technique, providing the expected output schema for the response. This establishes the intention, improving the overall accuracy of the generated output.
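For illustration, a one-shot extraction prompt for the invoice scenarios could look along these lines (the schema fields here are hypothetical examples, not the evaluation's actual schema):
Extract the data from this invoice. If a value is not present, provide null. Use the following structure:
{ "invoice_number": "", "date": "", "signed": false, "items": [ { "description": "", "quantity": 0, "total": 0.0 } ] }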
The AI models evaluated in this analysis include:
Phi-3 Mini 128K Instruct, an SLM deployed as a serverless endpoint in Azure AI Studio
GPT-3.5 Turbo (1106), an LLM deployed with 10K TPM in Azure OpenAI
GPT-4 Turbo (2024-04-09), an LLM deployed with 10K TPM in Azure OpenAI
GPT-4 Omni (2024-05-13), an LLM deployed with 10K TPM in Azure OpenAI
Evaluation Methodology
To ensure a reliable and consistent evaluation, the following approach was established:
Baseline Accuracy. A single source of truth for the data extraction results ensures each model's output is compared against a standard. This approach, while manually intensive, provides a precise measure of accuracy.
Execution Time. This is calculated as the time from the initial request for data extraction to the response, without streaming. For scenarios utilizing the Markdown technique, the time is based on the end-to-end processing, including the request and response from Azure AI Document Intelligence.
Cost Analysis. Using the average input and output tokens from each iteration, the estimated cost per 1,000 pages is calculated, providing a clearer picture of cost-effectiveness at scale.
Consistent Prompting. Each model has the same system and extraction prompt. The system prompt is consistent across all scenarios as “You are an AI assistant that extracts data from documents and returns them as structured JSON objects. Do not return as a code block”. Each scenario has its own extraction prompt including the output schema.
Multiple Iterations. Each document is run 20 times per model and technique. Every property in the result is compared for an exact match against the standard response. This establishes the averages for accuracy, execution time, and cost.
These metrics establish the baseline evaluation. By establishing the baseline, it is possible to experiment with the prompt, schema, and request configuration. This allows you to compare improvements in the overall quality by evaluating the accuracy, speed, and cost.
For the evaluation outlined in this article, we created a .NET NUnit test project with multiple fixtures and cases. The tests take advantage of the .NET SDKs for both Azure AI Document Intelligence and Azure OpenAI.
Each model and technique combination per scenario is run independently. This is to ensure that the speed is evaluated fairly for each request.
You can find the repository for this evaluation on GitHub.
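To illustrate the per-property exact-match comparison in a language-agnostic way (a Python sketch with hypothetical property names; the evaluation project itself is .NET):
# Fraction of properties that exactly match the single source of truth.
def exact_match_accuracy(expected: dict, actual: dict) -> float:
    return sum(1 for k in expected if actual.get(k) == expected[k]) / len(expected)

expected = {"invoice_number": "INV-001", "total": 123.45, "signed": True}
actual = {"invoice_number": "INV-001", "total": 123.45, "signed": False}
print(exact_match_accuracy(expected, actual))  # 2 of 3 properties match: ~0.67
Averaging this score over the 20 runs per model and technique yields the accuracy figures reported here.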
Evaluating AI Models for Structured Data
Simple Invoice Document Structure
[Chart: accuracy, speed, and estimated cost per 1,000 pages (image not reproduced)]
The results of this scenario indicate consistency across all models in both accuracy and speed. GPT-4 Turbo, when processing Markdown, is an outlier for speed in this scenario.
GPT-4 Turbo and GPT-4 Omni for both techniques have the highest accuracy, while Phi-3 Mini has the lowest.
Phi-3 Mini and GPT-4 Turbo (Vision) are the fastest at processing. GPT-4 Turbo (Markdown) has the worst speed, almost 3x slower than all other models and techniques.
GPT-4 Omni (Vision) is significantly cheaper per 1,000 pages than other models. GPT-4 Turbo (Markdown) is almost 3x more expensive than GPT-4 Omni using Vision capabilities.
It is important to note that Markdown conversion strips away visual elements, providing only the result of OCR as text. This can often lead to potential misinterpretations, such as false positives for signatures. When using models with vision capabilities, visual elements are often interpreted correctly, resulting in a higher true positive accuracy.
For high-accuracy requirements, GPT-4 Omni with Vision capabilities is the best choice due to its excellent performance and cost-effectiveness. For simpler tasks where speed is a priority, models like Phi-3-Mini-128K-Instruct can be considered, but with the understanding that accuracy will be significantly lower.
Complex Invoice Document Structure
[Chart: accuracy, speed, and estimated cost per 1,000 pages (image not reproduced)]
The results of this scenario indicate that all models provide high accuracy for extracting structured data from invoices, but key differences in speed and cost highlight the trade-offs.
All models have 90%+ accuracy in this scenario. However, GPT-4 Omni (Vision) stands out with the highest accuracy for extraction, with minimal error. This is particularly beneficial for solutions where data integrity is critical, such as financial reporting and compliance.
Both GPT-4 Omni (Vision) and GPT-3.5 Turbo (Markdown) are the fastest for end-to-end data extraction. As in the previous example, GPT-4 Turbo (Markdown) has the worst speed, almost 6x slower than GPT-3.5 Turbo (Markdown).
Also as before, GPT-4 Omni (Vision) is cheaper per 1,000 pages than other models. However, it is closely followed by Phi-3 Mini in this scenario, which highlights how small language models can perform just as well for data extraction scenarios.
Models using vision techniques, like GPT-4 Omni (Vision), excel in interpreting visual elements of documents, particularly where content overlaps or visual clues are required to direct the output. This minimizes the false positives that occur when models interpret these elements based purely on surrounding textual clues.
For applications heavily reliant on visual data extraction, avoid extracting data using the Markdown technique and prefer vision-based models like GPT-4 Omni (Vision). For pure text-based extractions where speed is a priority, Markdown-based models like GPT-3.5 Turbo can be suitable.
However, in this specific use case, GPT-4 Omni (Vision) is the best overall technique, providing high accuracy and speed, at lower costs compared to other models.
Evaluating AI Models for Unstructured Data
Complex Vehicle Insurance Document
[Chart: accuracy, speed, and estimated cost per 1,000 pages (image not reproduced)]
The results of this scenario indicate that more advanced, multi-modal models excel in accuracy and cost efficiency for extracting data from unstructured documents. It can be assumed that the advanced nature of the language model allows it to better interpret contextual clues in the natural language of the document to infer the expected output.
Accuracy is spread across models for unstructured data, with GPT-4 Omni (Vision) providing the most accurate results with minimal error. The complexity in domain language and natural language processing for extracting values results in poor accuracy for small language models, such as Phi-3 Mini.
Speed is also varied across models, possibly due to the increase in the number of pages, as well as the complexity of the language understanding required to extract specific values from text-based rules in the contract. GPT-4 Turbo (Markdown) continues to provide the worst speed, a trend recognized across all scenarios.
In this specific scenario, GPT-4 Omni (Vision) is significantly cheaper than other model scenarios while achieving the highest accuracy. It is over 2x cheaper than the next cheapest option, and over 4x cheaper than the most expensive, GPT-4 Turbo (Markdown). This factor could drastically reduce the overall cost for large-scale document processing solutions.
In analyzing the extraction of data from complex unstructured documents, GPT-4 Omni (Vision) outperforms all other models and techniques. This superiority is seen across accuracy, speed, and cost.
While highly accurate, GPT-4 with Vision models have a limit of 10 images per request, which caps how many pages can be processed in a single request. Effective pre-processing, such as stitching pages together, is essential to maximize the accuracy of the extraction. However, avoid overloading images with too many pages, as this can reduce the overall resolution of the text, significantly degrading the model's performance. Where processing of large documents is required, adopting the Markdown extraction technique with advanced models such as GPT-4 Omni may be preferable.
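As a rough sketch of that stitching step, assuming Pillow and pre-rendered page images (file names are placeholders):
from PIL import Image

# Stack rendered page images vertically into a single image for one vision request.
pages = [Image.open(f"page_{i}.png") for i in (1, 2, 3)]
stitched = Image.new("RGB", (max(p.width for p in pages), sum(p.height for p in pages)), "white")
y = 0
for p in pages:
    stitched.paste(p, (0, y))
    y += p.height
stitched.save("stitched.png")  # keep the page count low so text remains legible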
Conclusion
Effective evaluation of AI document data extraction techniques using small language models (SLMs) and large language models (LLMs) can reveal benefits and drawbacks, guiding in the selection of the optimal approach for specific use cases.
The key findings from our analysis show:
For high-accuracy requirements, especially in critical applications such as legal or financial documents, GPT-4 Omni with Vision capabilities stands out. It consistently delivers the highest accuracy across both structured and unstructured data scenarios.
SLMs like Phi-3 Mini-128K, while cost-effective, show significant limitations in accuracy, particularly with complex and unstructured documents.
Speed varies significantly between models and techniques. GPT-4 Turbo (Markdown) consistently shows the worst performance in terms of speed, making it less suitable for time-sensitive applications.
Models using Vision techniques, such as GPT-4 Omni (Vision), offer a balance of high accuracy and reasonable speed, making them ideal for applications requiring fast and accurate data extraction.
Cost considerations are crucial when scaling to large volumes of documents. GPT-4 Omni (Vision) not only provides high accuracy but also proves to be the most cost-effective per 1,000 pages, especially in scenarios with complex unstructured data.
Recommendations for Evaluating AI Models in Document Data Extraction
High-Accuracy Solutions. For solutions where accuracy is critical or visual elements are necessary, such as financial reporting or compliance, evaluate GPT-4 Omni with Vision capabilities. Its superior performance in accuracy and cost-effectiveness justifies the investment.
Text-Based Extractions. For simpler, text-based document extractions where speed is a priority, consider models like GPT-3.5 Turbo using Markdown. It provides sufficient accuracy at a lower cost and faster processing time.
Adopt Evaluation Techniques. Implement a rigorous evaluation methodology like the one used in this analysis. Establishing a baseline for accuracy, speed, and cost through multiple iterations and consistent prompting ensures reliable and comparable results. Regularly conduct evaluations when considering new techniques, models, prompts, and configurations. This helps in making guided decisions when opting for an approach in your specific use cases.
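As a minimal sketch of such a baseline loop (not the harness used in this analysis; extractFields, the file names, and the run count are hypothetical), in MATLAB:
% Compare extracted fields against a ground-truth file over repeated runs.
expected = jsondecode(fileread('ground_truth.json'));   % hypothetical ground truth
names = fieldnames(expected);
runs = 5;                                   % repeat to smooth out variance
accuracy = zeros(1, runs); elapsed = zeros(1, runs);
for r = 1:runs
    tic;
    actual = extractFields('document.pdf'); % hypothetical: your extraction call here
    elapsed(r) = toc;
    match = cellfun(@(n) isequal(actual.(n), expected.(n)), names);
    accuracy(r) = mean(match);              % fraction of fields extracted correctly
end
fprintf('accuracy %.1f%%, median latency %.1f s\n', 100*mean(accuracy), median(elapsed));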
Read more on AI Document Intelligence
Thank you for taking the time to read this article. We are sharing our insights for ISVs and Digital Natives that enable document intelligence in their AI-powered solutions, based on real-world challenges we encounter. We invite you to continue your learning through our additional insights in this series.
Discover how to enhance data extraction accuracy with Azure AI Document Intelligence by tailoring models to your unique document structures.
Discover how Azure AI Document Intelligence and Azure OpenAI efficiently extract structured data from documents, streamlining document processing workflows for AI-powered solutions.
Evaluating the quality of AI document data extraction with small and large language models
Discover our evaluation of the effectiveness of AI models in quality document data extraction using small and large language models (SLMs and LLMs).
Further Reading
Phi-3 Open Models – Small Language Models | Microsoft Azure
Learn more about the Phi-3 small language models and their potential, including running effectively in offline environments.
Prompt engineering techniques with Azure OpenAI | Microsoft Learn
Discover how to improve your prompting techniques with Azure OpenAI to maximize the accuracy of your document data extraction.
Microsoft Tech Community – Latest Blogs – Read More
How should I set the value of u_tau?
Hello everyone, I would like to use pdepe to solve an SIR system of partial differential equations in one dimension with a time lag, which seems like a good fit. But there has been a problem setting the value of u_tau, resulting in a very different result than expected, and I don't know what's wrong with the code. Here is the relevant part of the code:
function u_tau = interpolate_history(history, x, t, tau)
% Interpolating function to get the state at time t - tau
t_target = t - tau;
if t_target < history.t(1)
% u_tau = squeeze(history.u(1, :, :)); % Out of history range, use first value
u_tau = 0;
else
% u_tau = interp1(history.t, squeeze(history.u(:, :, :)), t_target, 'linear', 'extrap');
u_tau = 0;
end
end
function u0 = SIRInitialConditions3(x)
S0 = 1.25;
I0 = 0.85;
R0 = 0.74;
u0 = [S0; I0; R0];
end
% This is the part of the main function that deals with u_tau
function [c,f,s] = SIRPDE3(x, t, u, DuDx)
global history;
S = u(1);
I = u(2);
R = u(3);
u_tau = interpolate_history(history, x, t, tau);
S_pre = u_tau(1);
end
% This is the part of the main function that deals with u_tau
function run()
xspan = linspace(0, 10, 100);
tspan = [0, 10, 50];
% Initialising the history
history = initialize_history(xspan, tspan, @SIRInitialConditions3);
sol = pdepe(0, @SIRPDE3, @SIRInitialConditions3, @SIRBoundaryConditions3, xspan, tspan);
I originally ran the simulation with each of the two commented-out assignments above, and neither produced a trend over time, so I set all u_tau to 0. The trend over time then turned out to be correct, but the initial values of I and R were still 0, which was not the expected result. I think there might be something wrong with the function u_tau = interpolate_history(history, x, t, tau); how should I change it?
Thanks to the community. :)
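One possible fix, offered here as a hedged sketch rather than a verified solution: interpolate the stored history in time at the nearest spatial node. This assumes history.t is a 1-by-nt time vector, history.x the spatial grid, and history.u an nt-by-nx-by-3 array; note that tau is currently undefined inside SIRPDE3, so it must also be passed in (for example, as a field of the history struct).
function u_tau = interpolate_history(history, x, t, tau)
% Return the 3-by-1 state at time t - tau and position x, interpolated
% from the stored history (assumed layout: nt-by-nx-by-3).
t_target = t - tau;
[~, ix] = min(abs(history.x - x));          % nearest spatial node
if t_target <= history.t(1)
    u_tau = squeeze(history.u(1, ix, :));   % before stored history: use the initial state
else
    slice = squeeze(history.u(:, ix, :));   % nt-by-3 time series at this node
    u_tau = interp1(history.t, slice, t_target, 'linear', 'extrap');
    u_tau = u_tau(:);                       % column vector, matching u0
end
end
Checking that initialize_history fills history.u(1, :, :) with the same values as SIRInitialConditions3 may also explain the zero initial values of I and R.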
pdepe, lag, diffusion equation, sir model MATLAB Answers — New Questions
Windowing a DICOM image
How can we window a CT scan image using the metadata information (metadata.WindowCenter, metadata.WindowWidth)?
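A minimal sketch (assuming the Image Processing Toolbox and a hypothetical file name): read the slice, apply the rescale slope/intercept if present, then clamp to the window and scale to [0, 1].
info = dicominfo('ct_slice.dcm');           % hypothetical file name
img  = double(dicomread(info));
if isfield(info, 'RescaleSlope')            % convert to Hounsfield units if tags exist
    img = img * double(info.RescaleSlope) + double(info.RescaleIntercept);
end
wc = double(info.WindowCenter(1));          % these tags may hold several values
ww = double(info.WindowWidth(1));
lo = wc - ww/2;  hi = wc + ww/2;
windowed = (min(max(img, lo), hi) - lo) / (hi - lo);  % clamp and scale to [0, 1]
imshow(windowed)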
windoing, dicom image, ct scans MATLAB Answers — New Questions
Convert VCF to PDF : Simplify Your Contacts Management
In our digital age, managing contacts efficiently is crucial for both personal and professional success. While VCF (vCard) files are a popular format for storing contact information, there are times when converting these files to a more universally accessible format like PDF is necessary. This guide will explore why and how to convert VCF to PDF, making your contacts more accessible and presentable.
Why Convert VCF to PDF?
Universal Accessibility
PDFs are widely recognized and can be opened on almost any device without the need for special software. Converting your contacts from VCF to PDF ensures that you can access and share your contact information easily, regardless of the recipient’s device or software capabilities.
Enhanced Presentation
PDFs provide a more visually appealing format for displaying contact information. Whether you’re preparing for a business meeting or organizing personal contacts, a well-formatted PDF can present your information in a clean and professional manner.
Easy Sharing and Printing
PDFs are ideal for sharing and printing. You can quickly send a PDF file via email or print it out for offline access, making it a versatile choice for various scenarios.
How to Convert VCF to PDF
Converting VCF files to PDF can be done using several methods, including online tools, software applications, and manual conversion processes. Here’s a step-by-step guide for each method:
Using Online Tools
Online converters are a quick and easy way to convert VCF files to PDF without installing any software. Here’s how you can do it:
1. Upload Your VCF File: Go to the chosen website and upload your VCF file. Most platforms support drag-and-drop functionality for convenience.
2. Convert the File: Select PDF as the output format and start the conversion process. This usually takes a few seconds to a minute, depending on the file size.
3. Download the PDF: Once the conversion is complete, download the PDF file to your device.
Using Software Applications
If you prefer using software applications, several programs offer VCF-to-PDF conversion capabilities:
Microsoft Outlook: Import your VCF file into Outlook, then export the contacts to a CSV file. Use a document editor like Microsoft Word to format the contacts and save or export the document as a PDF.
Adobe Acrobat: Import the VCF file into an application that supports contact files (like Google Contacts), export the contacts to a CSV, and then use Adobe Acrobat to convert the result to PDF.
Third-Party Software: Applications like vCard Wizard and ContactsMate provide direct VCF to PDF conversion features, simplifying the process.
Manual Conversion
For those who prefer a hands-on approach, manual conversion involves several steps:
1. Export VCF to CSV: Use an email client or contact management app to export the VCF file to a CSV format.
2. Format in a Document Editor: Open the CSV file in Excel or Google Sheets to organize and format the data. Copy the formatted contacts into a document editor like Microsoft Word or Google Docs.
3. Save as PDF: Once you're satisfied with the formatting, save or export the document as a PDF file.
Tips for Effective Conversion
Check Data Integrity: Ensure that all contact information is correctly transferred during the conversion process. Verify email addresses, phone numbers, and other critical details.
Format Consistently: Maintain a consistent format throughout the PDF to enhance readability. Use clear headings, bullet points, and appropriate spacing.
Protect Sensitive Information: If your contacts include sensitive information, consider encrypting the PDF or adding password protection to ensure privacy and security.
Conclusion
Converting VCF files to PDF can significantly enhance how you manage and share your contact information. Whether you opt for online tools, software applications, or manual methods, the process is straightforward and offers numerous benefits in terms of accessibility, presentation, and convenience. By following the steps outlined in this guide, you can ensure your contacts are well-organized and readily available whenever you need them.
Read More
Excel freeze when using power query
I have a workbook with 6 Power Query queries that I have been running daily for months. I have been on Insider for maybe 2 months now. The queries worked fine until yesterday. Every time I run Power Query, I get the spinning blue circle and Excel freezes. I can then force-close Excel and restart it. It will then work until I run the query again.
I restarted the PC, checked for any updates (Windows and Office), and tried the Microsoft assistant, which also froze for about 45 minutes until I closed it.
Read More
PPO convergence guarantee in RL toolbox
Hi,
I am testing my environment using the PPO algorithm in the RL toolbox. I recently read this paper: https://arxiv.org/abs/2012.01399, which lists some assumptions behind the convergence guarantee of PPO; some of them concern the environment itself (like the transition kernel…) and some concern the functions and parameters of the algorithm (like the learning rate alpha, the update function h…).
I am not sure whether the PPO algorithm in the RL toolbox satisfies the convergence assumptions on the algorithm's functions and parameters, because I did not find any direct mention of convergence on the official MathWorks website, so I wonder how the algorithm is designed with convergence in mind.
Do I need to look into the train() function to see how those parameters and functions are designed?
Thank you.
reinforcement learning, ppo, convergence MATLAB Answers — New Questions
error in ode45 – function must return a column vector
I'm trying to get MATLAB to solve and plot solution lines on top of my slope field, but it just keeps telling me that my function doesn't return a column vector.
I have tried taking the transpose with f = f(:); but it still doesn't work.
This is my code.
f = @(t,y) (3880 - 0.817*y + (731000*y.^7.88)/(116000.^7.88 + y.^7.88));
dirfield(f,0:10:100,0:1000:10000);
hold on;
y0 = 0:100:8000;
f = f(:);
[ts,ys] = ode45(f,[0,50],y0);
plot(ts,ys);
hold off
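A hedged reading of the error: ode45 treats y0 = 0:100:8000 as one 81-equation system, so f must return an 81-by-1 column vector, and f = f(:) is not valid on a function handle. Using elementwise operators makes the decoupled system work in a single call; a minimal sketch:
% Elementwise ./ so f returns a column vector when y is a column vector.
f = @(t,y) 3880 - 0.817*y + (731000*y.^7.88) ./ (116000.^7.88 + y.^7.88);
y0 = 0:100:8000;                      % 81 independent initial conditions
[ts, ys] = ode45(f, [0, 50], y0(:));  % y0(:) makes the state a column vector
plot(ts, ys)                          % one curve per initial condition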
ode45, ode45error, error, function, ode, ode23, odeargument MATLAB Answers — New Questions
Calculate the summation of the second column in both arrays
Hi, can someone help me with this question? I need to calculate the summation of the second column in both arrays. Thank you.
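Assuming two numeric arrays A and B (hypothetical data), a minimal sketch:
A = [1 2; 3 4; 5 6];                 % hypothetical first array
B = [7 8; 9 10];                     % hypothetical second array
total = sum(A(:, 2)) + sum(B(:, 2))  % adds the second column of each array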
summation, arrays MATLAB Answers — New Questions
fedorov algorithm-matlab code
How do I write MATLAB code for an optimization algorithm such as the Fedorov-Wynn algorithm? This algorithm is used for optimal input design (determining an input signal as a finite sum of sinusoids).
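As a hedged starting point rather than a full input-design solution, here is a minimal Fedorov-Wynn sketch for a D-optimal design on a candidate grid. The quadratic regression model is a placeholder; for input design, f(x) would be replaced by the frequency-domain regressor of the system.
f = @(x) [1; x; x.^2];              % placeholder regression vector f(x)
X = linspace(-1, 1, 201);           % candidate design points
w = ones(size(X)) / numel(X);       % start from the uniform design
p = 3;                              % number of model parameters
for k = 1:500
    M = zeros(p);                   % information matrix: sum_i w_i f(x_i) f(x_i)'
    for i = 1:numel(X)
        M = M + w(i) * (f(X(i)) * f(X(i))');
    end
    d = arrayfun(@(x) f(x)' * (M \ f(x)), X);   % variance function d(x, w)
    [dmax, j] = max(d);
    if dmax <= p + 1e-6             % Kiefer-Wolfowitz: optimal when max d <= p
        break
    end
    alpha = 1/(k + 1);              % Wynn step size
    w = (1 - alpha) * w;
    w(j) = w(j) + alpha;            % move mass toward the worst-predicted point
end
disp(X(w > 1e-3))                   % support points of the (near-)optimal design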
fedorov algorithm MATLAB Answers — New Questions
How long to install Updates?
I’ve noticed that updates take a long time to install, and sometimes it takes hours or even days for them to complete. I’ve tried restarting my PC, checking for updates manually, and even canceling the installation process and restarting it again, but nothing seems to work. I’ve had to wait for an extended period of time for the updates to finish installing, which can be inconvenient and affect my productivity.
I’m wondering if anyone else has experienced similar issues with the update installation process on the Windows Insider Program.
Read More
How to make a desktop shortcut to open System/Storage/temporaryfiles?
I’m trying to troubleshoot an issue with a particular software application on my Windows 11 Insider beta build, and I’ve been told that I need to turn off Memory Integrity to get it working. However, I’m having trouble finding the option to do so. I’ve checked the Settings app, the Device Manager, and even the Advanced System Settings, but I can’t seem to locate the switch to disable Memory Integrity. Can anyone help me figure out how to turn off Memory Integrity in Windows 11 Insider beta?
Read More
OneDrive lost all my files
Two days ago OneDrive stopped syncing, and a red circle with a white X appeared next to all my documents and apps. I uninstalled OneDrive and installed it again. Now I can't sign in to my personal OneDrive, and if I sign in online I can't access any files: it says something went wrong, then server error 500.
please help
Read More
bcdedit: ID Identification and Verification
Hey
How can I figure out an ID, and how can I verify an existing one?
For example, recoverysequence.
Read More