Month: July 2024
KQL query with Highlighted Web Part
I have a top-level site with multiple sub-sites in a site collection in O365. All subsites have the same list named “Participants”. This list has a column named “Status” which is a choice field of “Interested”, “Invested” or “Committed”. I want to use a Highlighted Web Part on the top-level site to roll-up content from all Participants lists based off of three different views from the Status field. How do I do this?
Using a “Custom query” and setting the Source to “All sites”, I am able to pull some information with the query text (KQL) “Title:Participants”. This pulls some data, but not the right data. When I add other parameters to the query, such as:
Title:Participants
Status=Committed
Then the query blows up. What I am looking to get are the individual contents of each Participants list based off the Status field designation. Is this possible?
Help: Creating a List Based on Two Values From a Data Set
Hi! Struggling with a rather basic issue: I need to pull the name of a class in a list based on “Active” status. Here is how the data is laid out now:
I want to be able to have a formula find all “Active” classes in the set of data above and have them be listed like so below:
Any and all help in this matter would be greatly appreciated!
AmazonPay Support
Does DFP support fraud detection for Amazon Pay?
Build Intelligent Apps with JavaScript – Integrating RAG, Azure OpenAI, and LangChain.js
On April 25 and 26, the biggest JavaScript event on the planet took place: BrazilJS Conference 2024. As always, the event was a great success, bringing together the biggest names in the market and JavaScript specialists. And, after five years, the event was back in person, with 4,000 people attending over the two days.
I was one of the speakers at the event, and I had the opportunity to talk about how to Build Intelligent Applications with JavaScript: Integrating RAG, Azure OpenAI & LangChain.js.
From here on, I'll share with you a bit of what was presented in my talk!
If you want to watch the full talk, go to:
Let's go!
What is RAG (Retrieval Augmented Generation)?
Right at the start of the talk, I explained what RAG is and why this kind of model is so important for building intelligent applications.
RAG, or Retrieval Augmented Generation, is an architecture that combines retrieval of external information with answer generation by large language models (LLMs). This approach makes it possible to search external databases in addition to the information pre-trained into the language models, producing more accurate and better-contextualized answers. The architecture is particularly useful for companies that want to work with their own specific, relevant data without exposing sensitive information.
Even though most of the examples we see are text-based, this kind of architecture can be applied to different data types, such as vectorized images, documents, and even audio.
In short: "The RAG architecture lets companies use AI to analyze and generate information from their own data, such as texts and images related to their business, in a controlled and targeted way."
If you want to learn more about RAG, I recommend reading the official Microsoft documentation on the subject: Retrieval Augmented Generation (RAG) in Azure AI Search
RAG (Retrieval Augmented Generation) Architecture
In the talk, I showed a standard RAG architecture, which is made up of three main components and follows the execution flow below (a short code sketch follows the component list):
Image source: LangChain.js presentation
Indexing: an indexing process organizes the data in a vector database so it becomes easy to search. This step is critical because it lays the groundwork for RAG to access the relevant information quickly when answering a query.
Mechanisms: the process starts by collecting documents, which a 'splitter' divides into smaller chunks. Each chunk of text is then transformed into an embedding vector by complex algorithms. These vectors are stored in the database, enabling efficient retrieval of similar information.
Retrieval: vector-similarity techniques are used to find the documents or passages most relevant to answering a query.
Mechanisms: techniques and algorithms such as sparse vector representations, dense vector embeddings, and hybrid search.
Generation: finally, with the most relevant passages retrieved, the generator's job is to produce the final answer, synthesizing and expressing that information in natural language.
Mechanisms: here the mechanisms are the language models themselves, such as GPT, BERT, Claude, or T5. They use both the query and the relevant documents identified by the retriever to generate the answer.
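To make the flow concrete, here is a minimal sketch of the three stages in LangChain.js. It is only an illustration, not the code from the talk: it assumes the langchain, @langchain/openai, and @langchain/core packages, an OPENAI_API_KEY in the environment, and an in-memory vector store; import paths and the model name may differ depending on your LangChain.js version.

import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

async function answerWithRag(question: string, rawText: string): Promise<string> {
  // Indexing: split the source text into chunks and store their embeddings in a vector database.
  const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 100 });
  const docs = await splitter.createDocuments([rawText]);
  const store = await MemoryVectorStore.fromDocuments(docs, new OpenAIEmbeddings());

  // Retrieval: find the chunks most similar to the question.
  const relevant = await store.similaritySearch(question, 4);
  const context = relevant.map((doc) => doc.pageContent).join("\n---\n");

  // Generation: let the chat model answer using only the retrieved context.
  // The model name is a placeholder choice for this sketch.
  const prompt = ChatPromptTemplate.fromMessages([
    ["system", "Answer the question using only this context:\n{context}"],
    ["human", "{question}"],
  ]);
  const chain = prompt.pipe(new ChatOpenAI({ model: "gpt-4o-mini" })).pipe(new StringOutputParser());
  return chain.invoke({ context, question });
}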
What is LangChain.js?
Continuing the talk, I introduced LangChain.js, an open-source framework for developing applications powered by language models.
LangChain.js gives us:
1. Ease of use: a simple, intuitive API that makes the library accessible both to experienced developers and to those just starting to build intelligent applications with language models.
2. Modular development: modular components and structures that let developers add and remove pieces as needed, making the code easier to customize and maintain.
3. Support for different language models: compatible with several language models, such as GPT-3, GPT-4, GPT-4o, BERT, Claude, Phi-3, and many others.
4. Components: composable tools and integrations for working with language models. The components are modular and easy to use, whether or not you use the rest of the LangChain.js framework.
5. Chain composition (the famous chains): lets you create sequences of operations, or 'chains', where the output of one language model call can be used as the input to another, enabling complex workflows (see the sketch after this list).
6. Memory: chains can also be given memory, so they keep the context across interactions, which allows a more natural and coherent dialogue with the language models.
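As a hedged illustration of item 5, the sketch below composes two small chains so that the output of the first becomes the input of the second. It assumes the @langchain/openai and @langchain/core packages, and the model name is just an example.

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({ model: "gpt-4o-mini" }); // example model name
const toText = new StringOutputParser();

// First chain: summarize an article into three bullet points.
const summarize = ChatPromptTemplate.fromTemplate(
  "Summarize the following article in three bullet points:\n\n{article}"
).pipe(model).pipe(toText);

// Second chain: translate whatever text it receives into Portuguese.
const translate = ChatPromptTemplate.fromTemplate(
  "Translate the following text into Portuguese:\n\n{text}"
).pipe(model).pipe(toText);

async function summarizeInPortuguese(article: string): Promise<string> {
  // Composition: the output of the first chain feeds the input of the second.
  const summary = await summarize.invoke({ article });
  return translate.invoke({ text: summary });
}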
There are many other advantages to using LangChain.js. I recommend reading the official documentation to learn more about the framework: LangChain.js Documentation
Integrating RAG, Azure OpenAI & LangChain.js
Finally, I showed how we can integrate RAG, Azure OpenAI, and LangChain.js to build intelligent applications with JavaScript, using a practical example:
The practical example is a Serverless AI Chat application with RAG using LangChain.js.
It uses the following technologies:
Azure OpenAI Service
Azure CosmosDB for MongoDB vCore
Azure Blob Storage
Azure Functions
Azure Static Web Apps
Lit.dev
Let's take a look at the application's architecture:
1. Azure Blob Storage:
Role: stores the PDF documents. It could be any file type, but for this example we chose PDFs.
Data flow: the PDFs are uploaded to Azure Blob Storage through the documents-get.ts API.
2. Serverless API:
Role: acts as the intermediary between the various services and the web application, which in this case uses Azure Static Web Apps with Lit (see the sketch after this list).
Data flow: receives the PDF document uploads from Blob Storage, stores and retrieves vectorized text chunks in Azure CosmosDB, and sends the vectorized text chunks to the Azure OpenAI Service to generate answers.
3. Azure CosmosDB for MongoDB vCore:
Role: stores and retrieves the vectorized text chunks.
Data flow: stores the text chunks processed by the Serverless API, enabling vector search to retrieve relevant data.
4. Azure OpenAI Service:
Role: embeds the text chunks (turns them into vectors) and generates answers.
Data flow: receives text chunks from the Serverless API and generates answers based on the retrieved data and the pre-trained language models.
5. Web App:
Role: the user interface for the chat. In this case we are using Azure Static Web Apps with Lit.
Data flow: sends HTTP calls to the Serverless API to ask questions and receive chat answers in real time.
6. PDF:
Role: the documents containing the relevant information, stored in Azure Blob Storage.
Data flow: uploaded via HTTP to Azure Blob Storage.
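To give an idea of how the Serverless API ties these pieces together, here is a minimal sketch of a chat endpoint using LangChain.js, Azure OpenAI, and the Azure Functions v4 Node.js programming model. It is not the repository's actual code: it assumes a recent @langchain/openai version, the deployment names are placeholders, the remaining Azure OpenAI settings are assumed to come from environment variables, and an in-memory vector store stands in for Azure CosmosDB for MongoDB vCore.

import { app, HttpRequest, HttpResponseInit } from "@azure/functions";
import { AzureChatOpenAI, AzureOpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Placeholder deployment names; endpoint, API key, and API version are assumed to come from environment variables.
const chatModel = new AzureChatOpenAI({ azureOpenAIApiDeploymentName: "gpt-4o" });
const embeddings = new AzureOpenAIEmbeddings({ azureOpenAIApiDeploymentName: "text-embedding-ada-002" });

app.http("chat", {
  methods: ["POST"],
  authLevel: "anonymous",
  handler: async (request: HttpRequest): Promise<HttpResponseInit> => {
    const { question } = (await request.json()) as { question: string };

    // The real application performs a vector search in Azure CosmosDB for MongoDB vCore;
    // an in-memory store over a couple of fake chunks stands in for it in this sketch.
    const store = await MemoryVectorStore.fromTexts(
      ["chunk of text extracted from an uploaded PDF", "another chunk from the same document"],
      [{}, {}],
      embeddings
    );
    const context = (await store.similaritySearch(question, 3))
      .map((doc) => doc.pageContent)
      .join("\n");

    // Generate the answer from the retrieved context.
    const chain = ChatPromptTemplate.fromMessages([
      ["system", "Answer the question using this context:\n{context}"],
      ["human", "{question}"],
    ]).pipe(chatModel).pipe(new StringOutputParser());

    return { jsonBody: { answer: await chain.invoke({ context, question }) } };
  },
});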
Below, we can see the application running:
I recommend that you all check out the project repository to learn more about the application and how you can build your own intelligent application with JavaScript.
Application repository link: Serverless AI Chat with RAG using LangChain.js – branch mongodb-vcore.
And take the opportunity to leave a star on the repository! It helps the community find the project.
Conclusion
Again, if you have not watched the full talk, go to:
In this article, I shared a bit of how the talk went. We learned what RAG is, how the RAG architecture works, what LangChain.js is, and how to integrate RAG, Azure OpenAI, and LangChain.js to build intelligent applications with JavaScript.
We are preparing a series of videos explaining the application's code in more detail, not to mention a workshop based on this application. So stay tuned for what's coming!
Additional Resources
It's always good to have more resources to learn more about the subject. Here are some links that can help you:
Retrieval Augmented Generation (RAG) in Azure AI Search
Free Course – Create serverless APIs with Azure Functions
Free Course – Publish an Azure Static Web Apps API
Free Course – Introduction to Azure OpenAI Service
Free Course – JavaScript on Azure
And if you liked the article, share it with your friends and coworkers. And if you have any questions or suggestions, leave them in the comments. I will be happy to answer!
In the next article, I will explain in detail how to use this application step by step. See you there!
Microsoft Tech Community – Latest Blogs – Read More
Collaborate confidently with Task History in Microsoft Planner
Introduction
The task history feature in Microsoft Planner helps task owners stay on top of their tasks. You can quickly find recent progress that has been made or task changes that have impacted the schedule. Edits such as adding the task to a sprint, changing its duration, or giving it a goal, as well as changes to other tasks that affect the schedule of work, all appear in the Changes pane in Task Details.
Watch this 1-minute video for a quick overview of Task History.
If you’re just getting started with Planner, learn more about what we’ve been working on in this recent blog post. Or jump in by opening the updated Planner app in Microsoft Teams.
Getting Started
Task history is available to all Planner users who have a Project Plan 3 or greater license. If you do not have a premium Project license, you can simply click on the diamond icon within the app where you can begin your free 30-day trial of advanced capabilities in Planner or request a premium license.
1. First, open a premium plan in Planner.
2. Open task details for any task. You can reach it by clicking the task details icon in the task grid, or by clicking a task card in the board view.
3. The task history icon is in the top corner of task details. Click it to open the Changes pane.
Details about the recorded changes
All the changes a user makes to a task are recorded in task history. Details for each edit include who made the change, when they made it, what property was changed, the previous value, and the new value.
History for a task includes edits such as:
Adding or removing labels
Changing the duration or effort
Editing checklists
Adding or removing attachments
Edits to any custom columns
Changes made to other tasks that impact the selected task
Planner makes it easy to track dependencies between tasks. These links mean it is crucial for task owners to understand how changes across the plan impact their work. Task history makes it easy to identify these changes and stay on track. Changes made to other tasks that impact either the start date or finish date of the selected task have a history record that shows high-level information about the edit.
In the example shown below, Diego Siciliani edited the duration of a related task called “Review organizational marketing strategy.” This task edit moved the start date of the currently selected task.
Navigating to related task edits
Clicking the task title in a history record takes you to the related task and highlights the relevant edit. In this example, clicking the task title of the record shown in the above example opens the “Review organizational marketing strategy” task and highlights the change in duration. Pressing the back button in Teams returns you to the previously selected task.
Frequently Asked Questions
Why don’t I see the task history button?
Task History is only available to users working in premium plans who have a Project Plan 3 or greater license. If you do not have a premium Project license, you can simply click on the diamond icon within the app where you can begin your free 30-day trial of advanced capabilities.
Will edits made by users without a Project Plan 3 license be shown in the Changes pane?
Yes. Edits made by all users, regardless of their license, will appear in the changes pane. Only users with a Project Plan 3 or greater license will be able to open the changes pane to view these edits.
Is task history available in basic plans?
No. Task history, along with other powerful features such as custom columns, goals, and timeline view, gives teams with more sophisticated project management requirements the tools they need to keep their plans on track. These features are only available in premium plans.
Can my team build Power BI reports using Task History data?
Yes. Task history data is stored in Microsoft Dataverse and can be queried using Power BI. Learn more about the schema by visiting our support page.
Do edits to tasks using the Project scheduling APIs appear in task history?
No. Only edits to tasks made using the grid, board, or timeline views appear in the Changes pane.
My team uses Project in Power Apps, do edits made in that context appear in task history?
Edits made in the grid, board, goals, people, and timeline views appear in Task History. Any edits to tasks using Power Apps forms as well as any edits to columns added to tables in Dataverse are not shown in task history.
My team has customized Project in Power Apps, will task history work in our environment?
Yes, but your administrator needs to ensure that they are testing their customizations with the latest release, including any customization of security roles in Dataverse.
Learn more about the new Planner
To get the inside scoop on the new Planner, watch the Meet the Makers and our AMA.
Watch the new Planner demos for inspiration on how to get the most out of the new Planner app in Microsoft Teams.
Check out the new Planner adoption website.
We’ve got a lot more ‘planned’ for the new Planner this year! Stay tuned to the Planner Blog – Microsoft Community Hub for news.
For future updates coming to the new Planner app, please view the Microsoft 365 roadmap here.
Learn about Planner and Project plans and pricing here.
Read the FAQs here.
Share your feedback
Microsoft Tech Community – Latest Blogs – Read More
Acquire Images from a Mobile Device Camera - is max resolution locked at 720p?
I am currently running 2023b update 8 with the MATLAB Support Package for Apple iOS Sensors on an iPhone 8. The maximum resolution when taking images is 720p. Is there any way to get full resolution images? Alternatively, is there a way to connect the iPhone to MATLAB over a wire for full resolution pictures, without the cloud streaming method?
sensors, mobile, device, camera, iphone, resolution, image acquisition, phone MATLAB Answers — New Questions
Solve and plot system in x and y with varying constants e and t
Hello,
I am having trouble solving the following problem:
solve and plot for x and y
x + y + e + t >= 0
and
x*y - e*t >= 0
where x and y are the two variables, while e and t are two constants whose values vary over a range: I am trying to see the effect of e and t on the system represented by x and y.
Basically, I would like to obtain, on the same graph, different curves in x and y for a fixed number of combinations of e and t.
My code so far is:
n = 21;
x = linspace(-100, 100, n);
y = linspace(-100, 100, n);
[X, Y] = meshgrid(x, y);
a = 50;
b = 5;
e = linspace(-a, a, b);
t = linspace(-a, a, b);
Z = zeros(n, n);
for k = 1:b
    for s = 1:b
        b = X + Y + e(k) + t(s);
        d = X.*Y - e(k).*t(s);
        for i = 1:n
            for j = 1:n
                if b(i,j) >= 0
                    Z(i,j) = d(i,j);
                else
                    Z(i,j) = -1;
                end
            end
            v = [0, 0];
            contour(X, Y, Z, v, 'LineWidth', 1.5)
            grid on
            hold on
        end
    end
end
Could anybody please give me any suggestions on how to improve it, as the result so far is not what I expect.
Thank you very much.
system of equations, plotting, iteration MATLAB Answers — New Questions
How to plot accuracy?
Error using trainNetwork (line 150)
Invalid training data. Sequence responses must have the same sequence length as the corresponding predictors.
Error in Untitled (line 92)
net = trainNetwork(x_train_seq, y_train_seq, layers, options);
% Load the files
file_path_e = 'sig.xlsx';
file_path_sig = 'E.xlsx';
% Read the files
data_e = readtable(file_path_e);
data_sig = readtable(file_path_sig);
% Prepare data
x = table2array(data_e);
y = table2array(data_sig);
% Ensure x and y have the same length
min_length = min(length(x), length(y));
x = x(1:min_length);
y = y(1:min_length);
% Display initial data types
disp('Initial data types:');
disp(['x type: ', class(x)]);
disp(['y type: ', class(y)]);
% Convert to numeric arrays if not already
x = str2double(x);
y = str2double(y);
% Display number of NaNs before removing them
fprintf('Number of NaNs in x before removal: %d\n', sum(isnan(x)));
fprintf('Number of NaNs in y before removal: %d\n', sum(isnan(y)));
% Handle non-numeric entries by removing NaNs
valid_indices = ~isnan(x) & ~isnan(y);
x = x(valid_indices);
y = y(valid_indices);
% Display number of valid data points after removal
fprintf('Number of valid data points after preprocessing: %d\n', length(x));
% Ensure x and y have the same length again after removing NaNs
min_length = min(length(x), length(y));
x = x(1:min_length);
y = y(1:min_length);
% Check if there are enough valid entries
if min_length <= 1
    error('Not enough valid data points after preprocessing.');
end
% Reshape data to be compatible with LSTM input (samples, timesteps, features)
x = reshape(x, [], 1);
y = reshape(y, [], 1);
% Scale data using min-max normalization
x_scaled = (x - min(x)) / (max(x) - min(x));
y_scaled = (y - min(y)) / (max(y) - min(y));
% Split data into train and test sets
cv = cvpartition(length(x_scaled), 'HoldOut', 0.2);
x_train = x_scaled(training(cv));
y_train = y_scaled(training(cv));
x_test = x_scaled(test(cv));
y_test = y_scaled(test(cv));
% Create sequences for LSTM
seq_length = 10;
[x_train_seq, y_train_seq] = create_sequences(x_train, y_train, seq_length);
[x_test_seq, y_test_seq] = create_sequences(x_test, y_test, seq_length);
% Reshape for LSTM
x_train_seq = reshape(x_train_seq, [size(x_train_seq, 1), seq_length, 1]);
x_test_seq = reshape(x_test_seq, [size(x_test_seq, 1), seq_length, 1]);
% Build the LSTM model
layers = [
    sequenceInputLayer(1)
    lstmLayer(20, 'OutputMode', 'sequence')
    dropoutLayer(0.2)
    lstmLayer(20)
    dropoutLayer(0.2)
    fullyConnectedLayer(1)
    regressionLayer];
options = trainingOptions('adam', ...
    'MaxEpochs', 300, ...
    'MiniBatchSize', 20, ...
    'InitialLearnRate', 0.001, ...
    'ValidationData', {x_test_seq, y_test_seq}, ...
    'Plots', 'training-progress', ...
    'Verbose', 0);
% Train the model
net = trainNetwork(x_train_seq, y_train_seq, layers, options);
% Plot loss curve
training_info = net.TrainingHistory;
figure;
plot(training_info.TrainingLoss, 'DisplayName', 'Train');
hold on;
plot(training_info.ValidationLoss, 'DisplayName', 'Validation');
title('Model loss');
xlabel('Epoch');
ylabel('Loss');
legend('show');
hold off;
% Function to create sequences
function [xs, ys] = create_sequences(x_data, y_data, seq_length)
    xs = [];
    ys = [];
    for i = 1:(length(x_data) - seq_length)
        x_seq = x_data(i:i+seq_length-1);
        y_seq = y_data(i+seq_length-1); % Adjust index to ensure same length
        xs = [xs; x_seq'];
        ys = [ys; y_seq'];
    end
end
accuracy, lstm MATLAB Answers — New Questions
Are co-organizers alerted to the fact they were made a co-organizer?
I can’t seem to find where (if Teams even does this) a co-organizer is notified they were made a co-organizer of a Teams meeting. Any idea where a person would proactively find this information or, better yet, is there a way for Teams to notify co-organizers that they’ve been assigned this role?
Can QueryPerformanceCounter return negative timestamps?
Dear community,
this is my first time asking a question here, so apologies if I am in the wrong place or have framed the question incorrectly. I have had a bit of trouble finding the right forum and I am willing to delete this post if I am in the wrong spot.
In our software, we use QueryPerformanceCounter and QueryPerformanceFrequency. We were under the impression that QueryPerformanceCounter could return negative, but increasing, timestamps. However, some searching online suggests that on Windows, getting negative timestamps from the high-performance timer, even if monotonically increasing, is a sign of a problem in the system, perhaps even a BIOS configuration issue. For example, see the following discussions:
https://github.com/SFML/SFML/issues/1167
https://stackoverflow.com/questions/31326115/queryperformancecounter-and-weird-results
https://cboard.cprogramming.com/cplusplus-programming/97413-queryperformancefrequency-negative-value.html
https://learn.microsoft.com/en-us/troubleshoot/windows-server/performance/programs-queryperformancecounter-function-perform-poorly
I also saw some discussion that you could get negative timestamps when combining the output of QueryPerformanceCounter with the output of QueryPerformanceFrequency, if QueryPerformanceFrequency had an uncaught exception.
My question is: -> Is any of this true?
Many thanks and best wishes,
Rob
Possible to choose an older date to set as a due date?
I have some recurring tasks that I had entered with due dates that have passed. It seems that I can no longer choose a date earlier than today. This really messes with the way I have my To Do list organized.
Does anyone know how I can choose a due date that is prior to the current date?
Pivot Tables – Not Grouping By Month, Started July 1st
Hi! I pull about 100+ reports of data per week and I pivot all of them. When I pivoted new data today and pulled the date to a column, it would not group at all (in particular, I need it by month). Even when I go into Settings > Group, nothing is there that allows me to group by month. I pull the same data every day, so there is no way any date is a different value or format. The same data I pulled last Friday works and groups by month. It looks like it could be a new update problem. My coworkers are having the same issue. Anyone else?
AI and NET: Introducing the official OpenAI library for .NET Developers
OpenAI Library for .NET
Make sure to check out the full video for an in-depth look at how you can start leveraging these tools today and propel your projects into the future of AI-driven development.
Resources
Blog: https://aka.ms/ainetopenainet
Repo: https://github.com/openai/openai-dotnet
Migration guide: https://aka.ms/openai-dotnet/v1-to-v2
Recording
Next Steps
Microsoft Tech Community – Latest Blogs – Read More
Use file name to save .mat files
Hello community.
I'd like to have your support in finding a solution.
I have several '.dat' files; I'm using mdfimport to export the variables I need and then save them in '.mat' format.
my code:
clear all;
clc;
[FileName, PathName] = uigetfile('*.dat', 'Select the .dat-file(s)', 'MultiSelect', 'on');
if class(FileName) == char('cell');
    FileName = FileName';
end
if class(FileName) == char('char');
    FileName = {FileName};
end
%========================================================================%
V = {
    ['EnvT_t']
    ['CtT_flgHeal']
    ['CtT_flgEna']
    ['Epm_nEng']
    ['CTM_Delta']
    ['CTM_Flag']
    ['CTM_Sum']
    };
V = V';
for k = 1:length(FileName)
    %--------------------------------------------------------------------
    progress = ['Working on file ' int2str(k) ' of ' int2str(length(FileName)) '...'];
    disp(progress);
    disp(FileName(k));
    LoadPath = char(strcat(PathName, FileName(k)));
    mdfimport(LoadPath, [], V, 'resample_1');
    save(['@' num2str(k) '.mat'])
end
Let's say I have the data below:
Carr1.dat
Carr2.dat
Carr3.dat
Carr4.dat
My code will go through each .dat file, extract the defined variables, and then save them as follows:
@1.mat
@2.mat
@3.mat
@4.mat
How can I use the save command to keep the original file name? I'm looking to get this:
Carr1.mat
Carr2.mat
Carr3.mat
Carr4.mat
I've tried different things but always get an error.
As always, your feedback will be highly appreciated.
matlab, data import, save MATLAB Answers — New Questions
daeFunction() automatically removes state variables in resulting function handle
Hi,
I implemented a 14-DOF dynamic two-track model with 36 state variables in the form of symbolic differential equations, and I want to create a function handle from these equations so I can solve the model numerically. I already tried the function odeToVectorField() with no success, because the model equations are non-linear. Applying reduceDifferentialOrder() and then daeFunction() worked, but the resulting function handle is missing some state variables in its function body. I'm appending the output of reduceDifferentialOrder() and daeFunction() below as a .mat and .m file, respectively.
Are there any other methods to programmatically create the function handle that I'm missing?
Best wishes,
Tim
daefunction MATLAB Answers — New Questions
Connect to external MySql database
Hello all,
I was finally convinced to write here after an hour of pointless searching.
I have an app written in .NET Framework that uses MySqlConnector to connect to an external database hosted on Aiven.
The app does not work in Azure App Service, since it complains that it cannot connect to the database.
The documentation talks about configuring a VNet in order to open the necessary TCP ports for the app to connect to the external database. Is this really necessary? The app's network configuration reports that there are no limitations.
Thanks
Getting lawyer involved after denied over and over for no reason
Process awful, see : https://techcommunity.microsoft.com/t5/partner-compliance-verification/awful-process-i-worked-at-microsoft-and-complaining-to-vps-amp/m-p/4178955#M552
Now, even though WHOIS matches exactly our “legal info”, they failed us again, and no way to appeal anymore.
“We’re unable to verify your profile information and give you full access to Partner Center.
After several unsuccessful attempts to verify your information, we’ve ended the verification process. No further action will be taken on this profile verification.
Failure to complete the verification process within 30 days may result in termination of your relationship with Microsoft.”
But there is no way to make contact anymore. This email is useless because you give no way to respond — that's a very smart way to design a system.
Getting a lawyer involved. Why is this process so bad? Who, specifically, is at fault for it?
Outlook New is resizing inline images and signature images
Since I started using Outlook New, I have had a few issues such as the missing Address Book etc. My issue today is that images are being resized no matter what I do. This makes messages look unprofessional, and the image is wider than the reading pane which is just ridiculous.
Signature images – Clearly, this should be a small logo at the bottom but when the person replies to me, I can see that my signature image takes up the entire width of the screen. That is embarrassing.
Inline images – I can set them to 25%, 50%, or best fit, or I can just drag the resize handles to the size that I want, but the recipient receives the message with images overflowing off the sides. Not only will they not want to look at the images, but it again makes it seem like I am unprofessional and do not know how to use Outlook like an adult.
I do not see an option to repair. I have not found any sensible answers online to address and fix permanently. This only happens in Outlook New and I need to know how to have this repaired. Or am I just supposed to stop using inline images and a logo in the signature.
Outlook images too big. Signature looks ridiculous. Image resize dimensions are non-functional. Fix it, please.
The new Outlook
The software frequently asks me to try the new version. I did recently and it erased my contacts. I’d be happy to try it if it did not erase my contacts.