Month: August 2024
Download Spotify Podcasts/Free Podcasts to MP3
Here is a detailed guide on how to download Spotify podcasts to MP3 without Spotify Premium via Tidabie Music Go:
STEP 1 Select Spotify as the Downloading Source
With Tidabie Music Go, you can download Spotify podcasts to MP3 format from your Spotify library, whether you are using a Spotify Free or Premium account. Once you start Tidabie on your computer, select “Spotify” as the audio source, then log in to your Spotify account to access your library.
Note: When choosing the Spotify source, you can capture Spotify podcasts from either the Spotify app or the Spotify web player, and toggle between the two by clicking the switching icon. If you record podcasts from the Spotify app, downloads run up to 10 times faster while preserving the best audio quality.
STEP 2 Customize the Output Settings of Downloaded Spotify Podcasts
When you finish choosing “Spotify” as the downloading audio source, you will see the “Music” interface like the picture below. Simply select the output format as “MP3” under the “Convert Settings” module in this interface. If needed, you can also adjust parameters like the bit rate and sample rate. Additional settings such as the output folder path and file naming can be customized in the full settings pop-up window, which is accessible by clicking the “More settings” button.
STEP 3 Find Spotify Podcasts to Download
Return to the Spotify app or the Spotify web player after choosing the output settings, then find the Spotify podcasts you want to download. Once you locate the podcast page, you will see a blue “Click to add” button in the lower right corner. Tap it to start parsing the Spotify podcast episodes.
As the podcasts are processed, all the downloadable items will be listed in a small pop-up window. Next, choose the episodes you want to download by ticking the checkbox next to each episode title. Compared with downloading podcasts in Spotify itself, Tidabie’s advantage is clear: Tidabie downloads Spotify podcasts in batches, while on Spotify you need to click the download icon to get episodes one by one.
STEP 4 Start Downloading Spotify Podcasts
Before downloading, you can add more podcasts by hitting the “Add More” icon, or adjust the output settings again by tapping the settings icon on this interface. If everything is set, click the “Convert” button to start downloading. Tidabie will then run at up to 10x speed to download your favorite Spotify podcasts in MP3 format to your local PC. All you need to do now is wait.
STEP 5 Check the Downloaded Spotify Podcasts on Your Local PC
The output folder containing the downloaded Spotify podcasts will pop up by default when the download is completed. You can check the downloaded podcasts in that folder, or jump to a specific podcast file by hitting the folder icon next to each episode under the “Converted” module.
What is the best Spotify music converter for Windows PC?
I enjoy listening to Spotify music offline on various devices, but I don’t have a Spotify Premium account. As a technical writer and web designer, I often deal with different software and tools, and I’m familiar with various conversion processes.
I’ve heard about several Spotify music converters, but I’m looking for one that stands out in terms of quality, ease of use, and reliability. Ideally, it should support saving Spotify tracks as MP3 files without compromising on sound quality. I’m also interested in any additional features that might enhance the overall experience.
Making private meeting in shared mailbox calendar
Hello,
In the calendar of a shared mailbox we can’t set a meeting to private. The lock button is greyed out. In the user’s own calendar the button is available. Is there a setting for shared mailboxes that makes it possible to set meetings to private?
Kind regards,
Arjan
Building HyDE powered RAG chatbots using Microsoft Azure AI Models & Dataloop
Customer service is undergoing an AI revolution, driven by the demand for smarter, more efficient solutions. HyDE-powered RAG chatbots offer a breakthrough technology that combines vast knowledge bases with real-time data retrieval and hypothetical document embeddings (HyDE) to deliver superior accuracy and context-specific responses. Yet, building and managing these complex systems remains a significant challenge due to the intricate integration of diverse AI components, real-time processing requirements, and the need for specialized expertise in AI and data engineering.
Simplifying GenAI solutions with Microsoft and Dataloop
The Microsoft-Dataloop partnership abstracts the deployment of powerful chatbot applications. By integrating Microsoft’s PHI-3-MINI foundation model with Dataloop’s data platform, we’ve made HyDE-powered RAG chatbots accessible to a wider developer community with minimal coding. Developers can leave the documentation behind and start utilizing these capabilities instantly, accelerating time to value.
This announcement follows our successful integration with Microsoft Azure AI Model as a Service and Azure AI Video Indexer, further enhancing our ability to deliver advanced AI solutions. These integrations enable developers to seamlessly incorporate state-of-the-art AI models into their workflows, significantly accelerating development cycles.
About Dataloop AI development platform
Dataloop is an enterprise-grade end-to-end AI development platform designed to streamline the creation and deployment of powerful GenAI applications. The platform offers a comprehensive suite of tools and services, enabling efficient AI model development and management.
Key features include:
Orchestration: Dataloop provides seamless pipeline management, access to a marketplace for AI models, and a serverless architecture to simplify deployment and scalability.
Data Management: The Dataloop platform supports extensive dataset exploration, allowing users to query, visualize, and curate data efficiently.
Human Knowledge: Dataloop facilitates knowledge-based ground truth creation through tools for annotation, review, and monitoring, ensuring high-quality data labeling.
MLOps: With reliable model management capabilities, Dataloop ensures efficient inference, training, and evaluation of AI models.
Dataloop is also available on Azure Marketplace.
About Azure AI Models as a Service
Azure AI Models as a Service (MaaS) offers developers and businesses access to a robust ecosystem of powerful AI models. This service includes a wide range of models, from pre-trained and custom models to foundation models, covering tasks such as natural language processing, computer vision, and more. The service is backed by Azure’s stringent data privacy and security commitments, ensuring that all data, including prompts and responses, remains private and secure.
Figure: HyDE-powered RAG Chatbot Workflow – This pipeline, created using the Dataloop platform, demonstrates the process of transforming user queries into hypothetical answers, generating embeddings, and retrieving relevant documents from a vector store. This internal Slack chatbot is optimizing information retrieval to ensure that users receive accurate and contextually relevant responses, enhancing the chatbot’s ability to search for answers in the documentation.
This is how we do it!
Powering Efficient AI Inference at Scale: Microsoft’s AI tools build upon a powerful foundation of inference engines like Azure Machine Learning and ONNX Runtime. This robust toolkit ensures smooth, high-performance AI inferencing at scale. These tools specifically fine-tune neural networks for exceptional speed and efficiency, making them ideal for demanding applications like large language models (LLMs). This translates to rapid inference and scalable AI deployment across various environments.
End-to-End AI Development with Drag-and-Drop Ease: Dataloop empowers users to build and manage advanced AI capabilities entirely within its intuitive no-code interface. Simply drag and drop models provided or developed by Microsoft through our marketplace to seamlessly integrate them into your workflows. Pre-built pipeline templates specifically designed for RAG chatbots further streamline development. This eliminates the need for additional tools, making Dataloop your one-stop shop for building next-generation RAG-based chatbots.
A Node-by-Node Look at a RAG-based Document Assistant Chatbot with Microsoft and Dataloop
This section takes you behind the scenes of our RAG-based document assistant chatbot creation, utilizing Microsoft’s AI tools and the Dataloop platform. This breakdown will help you understand each component’s role and how they work together to deliver efficient and accurate responses. Below is a detailed node-by-node explanation of the system.
Node 1: Slack (or Messaging App) – Prompt Entry Point
Description: This node acts as the interface between users and the chatbot system. It integrates with a messaging platform like Slack and receives user interactions (messages, queries, commands) and starts the pipeline.
Functionality: It captures and processes the user input to be forwarded to the predictive model.
Configuration:
Integration:
Specify the target messaging platform (e.g., Slack API token, login credentials for other messaging apps).
Define event types to handle (e.g., messages, direct mentions, specific commands).
Message Handling:
Define how to pre-process messages (e.g., removing emojis, formatting, language detection).
Configure how to identify user intent and extract relevant information from the message.
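As an illustrative sketch of the message pre-processing step (the function and the cleanup rules here are hypothetical, not part of any Slack SDK), the handler might normalize incoming text like this:

```python
import re

def preprocess_message(text: str) -> str:
    """Strip emoji shortcodes and Slack user mentions, then collapse whitespace."""
    text = re.sub(r":[a-z_+-]+:", "", text)    # emoji shortcodes like :smile:
    text = re.sub(r"<@[A-Z0-9]+>", "", text)   # Slack mentions like <@U123ABC>
    return re.sub(r"\s+", " ", text).strip()

print(preprocess_message("<@U123ABC> how do I :rocket: deploy the pipeline?"))
# → how do I deploy the pipeline?
```

The cleaned text is what gets forwarded to the predictive model in the next node.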
Node 2 – PHI-3-MINI – Predict Model
Description: This node utilizes a generative prediction model, PHI-3-MINI, optimized with Microsoft’s AI tools.
Functionality: The node takes input from the Slack node and generates hypothetical responses. Research in Zero-Shot Learning suggests that this approach, leveraging contextual understanding and broad knowledge, can often outperform traditional methods.
Configuration:
Model Selection: Choose any LLM optimized using Microsoft’s AI tools. In our chatbot, we leverage PHI-3-MINI, specifically optimized for efficient resource usage.
System Prompt Configuration: A system prompt guides the AI’s behavior by setting tone, style, and content rules, ensuring consistent, relevant, and appropriate responses. For our case, we configure the LLM to give a hypothetical and concise answer.
Parameters: Set parameters for the model (e.g., beam search size, temperature for sampling).
Node 3 – Embed Item
Description: This node is responsible for embedding items, transforming text or data into a format that can be easily used for further processing or retrieval.
Functionality: It generates vector embeddings from the text. These embeddings represent the text in a high-dimensional space, allowing for efficient similarity searches in the next node.
Configuration:
Embedding Model: Choose the model for generating vector embeddings from text (e.g., pre-trained Word2Vec, Sentence Transformers). You can also utilize Microsoft’s embedding tools. Each embedding model comes with its own dimensionality of the vectors.
Normalization: Specify the normalization technique for the embeddings (e.g., L2 normalization).
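For example, L2 normalization scales each embedding to unit length so that dot products between vectors become cosine similarities. A generic NumPy sketch, independent of any particular embedding model:

```python
import numpy as np

def l2_normalize(vectors: np.ndarray) -> np.ndarray:
    """Scale each row vector to unit L2 norm (with a floor to avoid division by zero)."""
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    return vectors / np.clip(norms, 1e-12, None)

embeddings = np.array([[3.0, 4.0], [1.0, 0.0]])
unit = l2_normalize(embeddings)
print(unit[0])  # → [0.6 0.8], a unit-length vector
```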
Node 4 – Retriever Prompt (Search)
Description: This node acts as a retrieval mechanism, responsible for fetching relevant information or context based on the embedded item.
Functionality: It uses the embeddings to search a database or knowledge base, retrieving information that is relevant to the query or input provided by the user. It could use various retrieval techniques, including vector searches, to find the best matching results.
Configuration:
Dataset: Specify your dataset, with all the existing chunks and embeddings.
Similarity Metric: Define the metric for measuring similarity between the query embedding and candidate items (e.g., cosine similarity, dot product).
Retrieval Strategy: Choose the retrieval strategy. In our case, we used our feature store based on SingleStore, a database optimized for fast searches. This allows for efficient vector-based search to quickly retrieve the most relevant information.
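The retrieval step can be sketched with a small in-memory NumPy stand-in for the vector store (a production setup would use the SingleStore-backed feature store described above; the names here are illustrative):

```python
import numpy as np

def top_k(query: np.ndarray, index: np.ndarray, k: int = 2) -> np.ndarray:
    """Return indices of the k rows of `index` most cosine-similar to `query`."""
    q = query / np.linalg.norm(query)
    rows = index / np.linalg.norm(index, axis=1, keepdims=True)
    scores = rows @ q                      # cosine similarity per stored chunk
    return np.argsort(scores)[::-1][:k]    # highest-scoring chunks first

docs = np.array([[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]])  # toy chunk embeddings
print(top_k(np.array([1.0, 0.1]), docs))  # → [0 1]
```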
Node 5 – PHI-3-MINI – (Refine)
Description: Similar to the earlier PHI-3-MINI node, this node also involves a predictive model, another instance of the PHI-3-MINI model optimized by Microsoft.
Functionality: Processes the retrieved information using the predictive model to generate a response or further refine the data, ensuring a contextually accurate output for the user.
Configuration: Model Selection: Specify another instance of the PHI-3-MINI model optimized with Microsoft’s AI tools.
Task Definition: Instruct the model to take all chunks of documentation and reply accurately to the user’s question.
System Prompt Configuration: Instruct the chatbot on how to respond. In our case, we configured it to respond kindly, act as a helpful documentation assistant, clearly state when it doesn’t know an answer, and avoid inventing information.
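Putting the five nodes together, the HyDE flow can be sketched as plain Python with placeholder functions standing in for the PHI-3-MINI calls, the embedding model, and the vector search (every name here is an illustrative stub, not a Dataloop or Azure API):

```python
def generate_hypothetical_answer(question: str) -> str:
    # Stub for the first PHI-3-MINI call (Node 2).
    return f"A plausible answer to: {question}"

def embed(text: str) -> list[float]:
    # Stub for the embedding model (Node 3).
    return [float(len(text)), float(text.count(" "))]

def retrieve(embedding: list[float]) -> list[str]:
    # Stub for the vector-store search (Node 4).
    return ["relevant documentation chunk"]

def refine(question: str, chunks: list[str]) -> str:
    # Stub for the second PHI-3-MINI call (Node 5).
    return f"Answer to '{question}' grounded in {len(chunks)} chunk(s)."

def hyde_pipeline(question: str) -> str:
    hypothetical = generate_hypothetical_answer(question)  # Node 2
    vector = embed(hypothetical)                           # Node 3
    chunks = retrieve(vector)                              # Node 4
    return refine(question, chunks)                        # Node 5

print(hyde_pipeline("How do I create a dataset?"))
```

The key HyDE idea is visible in the control flow: the retriever searches with the embedding of the hypothetical answer, not of the raw question.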
Accelerate AI Development with Dataloop’s Integration of Microsoft Foundation Models
Discover a vast ecosystem of pre-built solutions, models, and datasets tailored to your specific needs. Easily filter options by provider, media type, and compatibility to find the perfect fit. Build and customize AI workflows with easy-to-use pipeline tools and out-of-the-box end-to-end AI and GenAI workflows. We are incredibly excited to see what you can create with your new capabilities!
Microsoft Tech Community – Latest Blogs
GitHub Model Catalog – Getting Started
Welcome to GitHub Models! We’ve got everything fired up and ready for you to explore AI models hosted on Azure AI. As a student developer, you already have access to great GitHub resources like Codespaces and Copilot from http://education.github.com, and now you can get started developing with generative AI and language models through the Model Catalog.
For more information about the Models available on GitHub Models, check out the GitHub Model Marketplace
Each model has a dedicated playground and sample code available in a dedicated codespaces environment.
There are a few basic examples that are ready for you to run. You can find them in the samples directory within the codespaces environment.
If you want to jump straight to your favorite language, you can find the examples in the following Languages:
Python
JavaScript
cURL
The dedicated Codespaces Environment is an excellent way to get started running the samples and models.
Below are example code snippets for a few use cases. For additional information about Azure AI Inference SDK, see full documentation and samples.
Create a personal access token. You do not need to give the token any permissions. Note that the token will be sent to a Microsoft service.
To use the code snippets below, create an environment variable to set your token as the key for the client code.
If you’re using bash:
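For example (the placeholder below stands in for your actual token):

```shell
export GITHUB_TOKEN="<your-github-token-goes-here>"
```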
Install the Azure AI Inference SDK using pip (Requires: Python >=3.8):
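The install command (the package name on PyPI is azure-ai-inference):

```shell
pip install azure-ai-inference
```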
This sample demonstrates a basic call to the chat completion API. It is leveraging the GitHub AI model inference endpoint and your GitHub token. The call is synchronous.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

endpoint = "https://models.inference.ai.azure.com"
# Replace Model_Name
model_name = "Phi-3-small-8k-instruct"
token = os.environ["GITHUB_TOKEN"]

client = ChatCompletionsClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(token),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="What is the capital of France?"),
    ],
    model=model_name,
    temperature=1.,
    max_tokens=1000,
    top_p=1.
)

print(response.choices[0].message.content)
This sample demonstrates a multi-turn conversation with the chat completion API. When using the model for a chat application, you’ll need to manage the history of that conversation and send the latest messages to the model.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import AssistantMessage, SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

token = os.environ["GITHUB_TOKEN"]
endpoint = "https://models.inference.ai.azure.com"
# Replace Model_Name
model_name = "Phi-3-small-8k-instruct"

client = ChatCompletionsClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(token),
)

messages = [
    SystemMessage(content="You are a helpful assistant."),
    UserMessage(content="What is the capital of France?"),
    AssistantMessage(content="The capital of France is Paris."),
    UserMessage(content="What about Spain?"),
]

response = client.complete(messages=messages, model=model_name)

print(response.choices[0].message.content)
For a better user experience, you will want to stream the response of the model so that the first token shows up early and you avoid waiting for long responses.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

token = os.environ["GITHUB_TOKEN"]
endpoint = "https://models.inference.ai.azure.com"
# Replace Model_Name
model_name = "Phi-3-small-8k-instruct"

client = ChatCompletionsClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(token),
)

response = client.complete(
    stream=True,
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Give me 5 good reasons why I should exercise every day."),
    ],
    model=model_name,
)

for update in response:
    if update.choices:
        print(update.choices[0].delta.content or "", end="")

client.close()
Install Node.js.
Copy the following lines of text and save them as a file package.json inside your folder.
{
  "type": "module",
  "dependencies": {
    "@azure-rest/ai-inference": "latest",
    "@azure/core-auth": "latest",
    "@azure/core-sse": "latest"
  }
}
Note: @azure/core-sse is only needed when you stream the chat completions response.
Open a terminal window in this folder and run npm install.
For each of the code snippets below, copy the content into a file sample.js and run with node sample.js.
This sample demonstrates a basic call to the chat completion API. It is leveraging the GitHub AI model inference endpoint and your GitHub token. The call is synchronous.
import ModelClient from "@azure-rest/ai-inference";
import { AzureKeyCredential } from "@azure/core-auth";

const token = process.env["GITHUB_TOKEN"];
const endpoint = "https://models.inference.ai.azure.com";
// Update your modelname
const modelName = "Phi-3-small-8k-instruct";

export async function main() {
  const client = new ModelClient(endpoint, new AzureKeyCredential(token));

  const response = await client.path("/chat/completions").post({
    body: {
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "What is the capital of France?" }
      ],
      model: modelName,
      temperature: 1.,
      max_tokens: 1000,
      top_p: 1.
    }
  });

  if (response.status !== "200") {
    throw response.body.error;
  }
  console.log(response.body.choices[0].message.content);
}

main().catch((err) => {
  console.error("The sample encountered an error:", err);
});
This sample demonstrates a multi-turn conversation with the chat completion API. When using the model for a chat application, you’ll need to manage the history of that conversation and send the latest messages to the model.
import ModelClient from "@azure-rest/ai-inference";
import { AzureKeyCredential } from "@azure/core-auth";

const token = process.env["GITHUB_TOKEN"];
const endpoint = "https://models.inference.ai.azure.com";
// Update your modelname
const modelName = "Phi-3-small-8k-instruct";

export async function main() {
  const client = new ModelClient(endpoint, new AzureKeyCredential(token));

  const response = await client.path("/chat/completions").post({
    body: {
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "What is the capital of France?" },
        { role: "assistant", content: "The capital of France is Paris." },
        { role: "user", content: "What about Spain?" },
      ],
      model: modelName,
    }
  });

  if (response.status !== "200") {
    throw response.body.error;
  }
  for (const choice of response.body.choices) {
    console.log(choice.message.content);
  }
}

main().catch((err) => {
  console.error("The sample encountered an error:", err);
});
For a better user experience, you will want to stream the response of the model so that the first token shows up early and you avoid waiting for long responses.
import ModelClient from "@azure-rest/ai-inference";
import { AzureKeyCredential } from "@azure/core-auth";
import { createSseStream } from "@azure/core-sse";

const token = process.env["GITHUB_TOKEN"];
const endpoint = "https://models.inference.ai.azure.com";
// Update your modelname
const modelName = "Phi-3-small-8k-instruct";

export async function main() {
  const client = new ModelClient(endpoint, new AzureKeyCredential(token));

  const response = await client.path("/chat/completions").post({
    body: {
      messages: [
        { role: "system", content: "You are a helpful assistant." },
        { role: "user", content: "Give me 5 good reasons why I should exercise every day." },
      ],
      model: modelName,
      stream: true
    }
  }).asNodeStream();

  const stream = response.body;
  if (!stream) {
    throw new Error("The response stream is undefined");
  }

  if (response.status !== "200") {
    stream.destroy();
    throw new Error(`Failed to get chat completions, http operation failed with ${response.status} code`);
  }

  const sseStream = createSseStream(stream);

  for await (const event of sseStream) {
    if (event.data === "[DONE]") {
      return;
    }
    for (const choice of (JSON.parse(event.data)).choices) {
      process.stdout.write(choice.delta?.content ?? "");
    }
  }
}

main().catch((err) => {
  console.error("The sample encountered an error:", err);
});
Paste the following into a shell:
curl -X POST "https://models.inference.ai.azure.com/chat/completions" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $GITHUB_TOKEN" \
    -d '{
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ],
        "model": "Phi-3-small-8k-instruct"
    }'
Call the chat completion API and pass the chat history:
curl -X POST "https://models.inference.ai.azure.com/chat/completions" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $GITHUB_TOKEN" \
    -d '{
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "What is the capital of France?"
            },
            {
                "role": "assistant",
                "content": "The capital of France is Paris."
            },
            {
                "role": "user",
                "content": "What about Spain?"
            }
        ],
        "model": "Phi-3-small-8k-instruct"
    }'
This is an example of calling the endpoint and streaming the response.
curl -X POST "https://models.inference.ai.azure.com/chat/completions" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $GITHUB_TOKEN" \
    -d '{
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "Give me 5 good reasons why I should exercise every day."
            }
        ],
        "stream": true,
        "model": "Phi-3-small-8k-instruct"
    }'
The rate limits for the playground and free API usage are intended to help you experiment with models and prototype your AI application. For use beyond those limits, and to bring your application to scale, you must provision resources from an Azure account, and authenticate from there instead of your GitHub personal access token. You don’t need to change anything else in your code. Use this link to discover how to go beyond the free tier limits in Azure AI.
How can I update the base body transformation matrix during visualization using the ‘show’ function?
To visualize a rigid body tree model imported from a URDF file, the ‘show’ function can be used. However, it only supports setting the base body position (x, y, z) and yaw angle. I want to set the base body position and orientation arbitrarily.
An internal function in the Robotics System Toolbox has a comment about using 6-DOF (x, y, z plus roll, pitch, yaw) to obtain the whole base body transformation matrix, but this is not officially supported or activated.
How can I set an arbitrary base body transformation matrix for a rigid body tree? I have to update the base body transform in near real time, so I don’t want to add and remove bodies during the simulation.
MATLAB Answers — New Questions
How should I structure the neural net based on my given input and output training data
I am trying to design a feedforward network that trains on a 4×5 matrix (5 samples of 4 separate inputs into the neural network) with outputs represented by a 4x5x1000 matrix (5 samples of 4 outputs where each component of the 4×1 output vector has 1000 points). This neural net is used to determine an optimal trajectory for a given terminal condition from a set of the same initial conditions. The code for this project is below:
%% Neural Net Training Process
% Initial State
x1 = [0;0]; % Initial Positions
x2 = [1;1]; % Initial Velocities
xo = [x1;x2]; % 4×1 Initial State Vector
% Parsing Training Input Data
x_input = [xf1,xf2,xf4,xf5,xf6]; % 4×5 Terminal State Vector (each xf (4×1) represents a different terminal condition)
% Parsing Training Output Data
x_output = [];
for i=1:4
x_output(i,1,:) = x1(:,i);
x_output(i,2,:) = x2(:,i);
x_output(i,3,:) = x4(:,i);
x_output(i,4,:) = x5(:,i);
x_output(i,5,:) = x6(:,i);
end % 4x5x1000 Terminal State Matrix
% Parsing Validation Data
xf_valid = xf3;
x_valid = x3';
% Neural Net Architecture Initialization
netconfig = 40;
net = feedforwardnet(netconfig);
net.numInputs = 4;
% Training the Network
for j=1:5
curr_xin = x_input(:,j);
curr_xout = x_output(:,j,:);
net = train(net,curr_xin,curr_xout);
end
From here, I receive an error at line 89: Error using nntraining.setup>setupPerWorker (line 96)
Targets T is not two-dimensional. Any advice from here would be appreciated. Thanks.
% Training the Network
for j=1:5
curr_xin = x_input(:,j);
curr_xout = x_output(:,j,:);
net = train(net,curr_xin,curr_xout);
end
From here, I am receieve an error in line 89, where I get the following error: Error using nntraining.setup>setupPerWorker (line 96)
Targets T is not two-dimensional. Any advice from here would be appreciated. Thanks. neural network, feedforwardnet, control, matlab MATLAB Answers — New Questions
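The `train` function expects two-dimensional input and target matrices with one column per sample, so a 4×5×1000 target array triggers exactly this error. One possible fix (a sketch with stand-in data, not the asker's exact variables) is to flatten each 4×1000 trajectory into a single 4000-element column, train on all five samples in one call, and reshape predictions back afterwards. Training inside a `for` loop with one sample per call is also best avoided, since each call effectively restarts training on a single data point:

```matlab
% Sketch with stand-in data: map each 4x1 terminal state to a 4x1000 trajectory.
X  = rand(4, 5);                 % stand-in for the 4x5 terminal-state inputs
Y3 = rand(4, 5, 1000);           % stand-in for the 4x5x1000 trajectory targets

% Reorder to 4x1000x5, then flatten so each sample is one 4000x1 column.
Y2 = reshape(permute(Y3, [1 3 2]), 4*1000, 5);   % 4000x5, now two-dimensional

net = feedforwardnet(40);
net = train(net, X, Y2);         % one call on all 5 samples; targets are 2-D

% Recover the predicted 4x1000 trajectory for, say, the first sample.
y_traj = reshape(net(X(:,1)), 4, 1000);
```

Each column of `Y2` stacks the 1000 time points of all 4 outputs for one sample, which is why a single `reshape` recovers the trajectory.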
Simscape – Source component – Input
I created a hydraulic system with my own custom components (pipes, elbows, tees, orifices, …) in order to calculate the flow rate at the outlets.
The fluid temperature is set by a parameter on a "source component". This temperature is used by two lookup tables to determine the corresponding viscosity and density of the fluid.
Viscosity and density are domain parameters that the "source component" sets in order to provide them to the rest of the model.
Now I want to change the temperature at run time with the help of a "ramp block". Therefore I created an input on the "source component" and connected the "ramp block" through a "Simulink-PS Converter".
The issue is that, of course, I can't use an input for the lookup tables in the parameters section of the "source component".
I can move the lookup tables to the equations section, but this creates new issues with missing variables for viscosity and density.
Any idea how I can solve this problem?
This is my source component…
component(Propagation=source) oil_properties
% Oil properties
%
% Temperature range: -15°C – 110°C
%
%
% Fluid type:
%
%Viscosity | CLP | CLP-PG | CLP-HC | *according Rickmeier – Viscosity/Temperature diagram
%
%——————————————————
%
% 100 | 1 | – | – |
%
% 150 | 2 | 6 | 10 |
%
% 220 | 3 | 7 | 11 |
%
% 320 | 4 | 8 | 12 |
%
% 460 | 5 | 9 | 13 |
%
%——————————————————
parameters
fluid_type = 11;
temperature = {20,'1'};
end
parameters (Access=private)
% Temperature-viscosity diagram from Rickmeier
% CLP - mineral oil
temp_CLP_100 = {[-15.00 -10.00 -5.00 0.00 5.00 10.00 15.00 20.00 25.00 30.00 35.00 40.00 45.00 50.00 55.00 60.00 65.00 70.00 75.00 80.00 85.00 90.00 95.00 100.00 105.00 110.00], '1'};
dens_CLP_100 = {[907 904 902 899 896 893 890 887 884 881 878 876 873 870 867 864 861 858 855 852 850 847 844 841 838 835], 'kg/m^3'};
visk_CLP_100 = {[9300 5000 3000 1800 1100 750 500 350 240 170 130 100 70 60 46 38 32 27 22 19 16.5 14.5 12.5 11 9.6 8.6], 'mm^2/s'};
temp_CLP_150 = {[-15.00 -10.00 -5.00 0.00 5.00 10.00 15.00 20.00 25.00 30.00 35.00 40.00 45.00 50.00 55.00 60.00 65.00 70.00 75.00 80.00 85.00 90.00 95.00 100.00 105.00 110.00], '1'};
dens_CLP_150 = {[908 905 903 900 897 894 891 888 885 882 879 877 874 871 868 865 862 859 856 853 850 848 845 842 839 836], 'kg/m^3'};
visk_CLP_150 = {[18000 10000 5500 3400 2000 1200 800 550 380 280 200 150 110 85 65 54 44 36 30 25 22 18.5 16.5 14.5 12.5 11], 'mm^2/s'};
temp_CLP_220 = {[-15.00 -10.00 -5.00 0.00 5.00 10.00 15.00 20.00 25.00 30.00 35.00 40.00 45.00 50.00 55.00 60.00 65.00 70.00 75.00 80.00 85.00 90.00 95.00 100.00 105.00 110.00], '1'};
dens_CLP_220 = {[912 910 907 904 901 898 895 892 889 886 883 880 878 875 872 869 866 863 860 857 854 851 848 846 843 840], 'kg/m^3'};
visk_CLP_220 = {[34000 18000 10000 5500 3400 2000 1300 820 600 420 300 220 165 130 100 80 60 51 42 36 30 26 22 19 17 15], 'mm^2/s'};
temp_CLP_320 = {[-15.00 -10.00 -5.00 0.00 5.00 10.00 15.00 20.00 25.00 30.00 35.00 40.00 45.00 50.00 55.00 60.00 65.00 70.00 75.00 80.00 85.00 90.00 95.00 100.00 105.00 110.00], '1'};
dens_CLP_320 = {[920 917 914 911 908 905 902 899 896 893 890 887 884 881 879 876 873 870 867 864 861 858 855 852 849 846], 'kg/m^3'};
visk_CLP_320 = {[60000 30000 16000 9000 5500 3400 2200 1400 900 600 440 320 240 180 140 110 90 70 58 46 40 32 28 24 21 18.5], 'mm^2/s'};
temp_CLP_460 = {[-15.00 -10.00 -5.00 0.00 5.00 10.00 15.00 20.00 25.00 30.00 35.00 40.00 45.00 50.00 55.00 60.00 65.00 70.00 75.00 80.00 85.00 90.00 95.00 100.00 105.00 110.00], '1'};
dens_CLP_460 = {[920 917 914 911 908 905 902 899 896 893 890 887 884 881 879 876 873 870 867 864 861 858 855 852 849 846], 'kg/m^3'};
visk_CLP_460 = {[110000 50000 28000 16000 9000 5500 3400 2100 1400 900 650 460 340 260 200 150 120 95 75 60 50 44 36 31 27 23], 'mm^2/s'};
% CLP PG - synthetic oil based on polyglycols
temp_CLP_PG_150 = {[-15.00 -10.00 -5.00 0.00 5.00 10.00 15.00 20.00 25.00 30.00 35.00 40.00 45.00 50.00 55.00 60.00 65.00 70.00 75.00 80.00 85.00 90.00 95.00 100.00 105.00 110.00], '1'};
dens_CLP_PG_150 = {[1084 1080 1077 1073 1070 1066 1063 1060 1056 1053 1049 1046 1042 1039 1035 1032 1028 1025 1022 1018 1015 1011 1008 1004 1001 997], 'kg/m^3'};
visk_CLP_PG_150 = {[7500 3900 2420 1650 1200 850 610 440 340 260 210 150 130 105 90 71 61 52 44 38 32 29 25.5 22.5 20 18.5], 'mm^2/s'};
temp_CLP_PG_220 = {[-15.00 -10.00 -5.00 0.00 5.00 10.00 15.00 20.00 25.00 30.00 35.00 40.00 45.00 50.00 55.00 60.00 65.00 70.00 75.00 80.00 85.00 90.00 95.00 100.00 105.00 110.00], '1'};
dens_CLP_PG_220 = {[1084 1080 1077 1073 1070 1066 1063 1060 1056 1053 1049 1046 1042 1039 1035 1032 1028 1025 1022 1018 1015 1011 1008 1004 1001 997], 'kg/m^3'};
visk_CLP_PG_220 = {[6100 4100 2800 2000 1400 1020 750 550 440 340 260 220 170 140 115 100 95 70 60 50 44 40 34 30 27 24], 'mm^2/s'};
temp_CLP_PG_320 = {[-15.00 -10.00 -5.00 0.00 5.00 10.00 15.00 20.00 25.00 30.00 35.00 40.00 45.00 50.00 55.00 60.00 65.00 70.00 75.00 80.00 85.00 90.00 95.00 100.00 105.00 110.00], '1'};
dens_CLP_PG_320 = {[1091 1087 1084 1080 1077 1073 1070 1067 1063 1060 1056 1053 1049 1046 1042 1039 1035 1032 1028 1025 1021 1018 1014 1011 1007 1004], 'kg/m^3'};
visk_CLP_PG_320 = {[5600 4000 3000 2200 1600 1220 950 775 600 480 400 320 272 225 190 165 140 120 105 92 80 70 62 55 50 45], 'mm^2/s'};
temp_CLP_PG_460 = {[-15.00 -10.00 -5.00 0.00 5.00 10.00 15.00 20.00 25.00 30.00 35.00 40.00 45.00 50.00 55.00 60.00 65.00 70.00 75.00 80.00 85.00 90.00 95.00 100.00 105.00 110.00], '1'};
dens_CLP_PG_460 = {[1081 1077 1074 1070 1067 1063 1060 1057 1053 1050 1046 1043 1039 1036 1032 1029 1026 1022 1019 1015 1012 1008 1005 1001 998 995], 'kg/m^3'};
visk_CLP_PG_460 = {[7300 5400 3900 2900 2200 1700 1300 1050 810 650 550 460 360 310 260 222 190 165 140 120 108 95 85 75 67 60], 'mm^2/s'};
% CLP HC - synthetic oil based on polyalphaolefins
temp_CLP_HC_150 = {[-15.00 -10.00 -5.00 0.00 5.00 10.00 15.00 20.00 25.00 30.00 35.00 40.00 45.00 50.00 55.00 60.00 65.00 70.00 75.00 80.00 85.00 90.00 95.00 100.00 105.00 110.00], '1'};
dens_CLP_HC_150 = {[871 868 865 862 860 857 854 851 848 846 843 840 837 835 832 829 826 823 821 818 815 812 810 807 804 801], 'kg/m^3'};
visk_CLP_HC_150 = {[6900 4000 2450 1650 1120 800 560 420 310 230 180 150 110 90 71 60 50 42 35 31 27 23.3 20.3 18.2 16 14.5], 'mm^2/s'};
temp_CLP_HC_220 = {[-15.00 -10.00 -5.00 0.00 5.00 10.00 15.00 20.00 25.00 30.00 35.00 40.00 45.00 50.00 55.00 60.00 65.00 70.00 75.00 80.00 85.00 90.00 95.00 100.00 105.00 110.00], '1'};
dens_CLP_HC_220 = {[876 873 870 867 865 862 859 856 853 851 848 845 842 839 837 834 831 828 825 823 820 817 814 812 809 806], 'kg/m^3'};
visk_CLP_HC_220 = {[6900 4000 2600 1800 1300 950 680 510 380 300 230 190 150 120 100 81 70 60 50 44 38 32 29 26 23 21], 'mm^2/s'};
temp_CLP_HC_320 = {[-15.00 -10.00 -5.00 0.00 5.00 10.00 15.00 20.00 25.00 30.00 35.00 40.00 45.00 50.00 55.00 60.00 65.00 70.00 75.00 80.00 85.00 90.00 95.00 100.00 105.00 110.00], '1'};
dens_CLP_HC_320 = {[879 876 873 870 868 865 862 859 856 854 851 848 845 842 840 837 834 831 828 826 823 820 817 814 812 809], 'kg/m^3'};
visk_CLP_HC_320 = {[14500 9000 6000 4000 2700 1900 1350 960 720 540 420 320 260 205 165 137 115 95 80 65 59 50 43 38 33 29.5], 'mm^2/s'};
temp_CLP_HC_460 = {[-15.00 -10.00 -5.00 0.00 5.00 10.00 15.00 20.00 25.00 30.00 35.00 40.00 45.00 50.00 55.00 60.00 65.00 70.00 75.00 80.00 85.00 90.00 95.00 100.00 105.00 110.00], '1'};
dens_CLP_HC_460 = {[881 878 875 872 870 867 864 861 858 856 853 850 847 844 842 839 836 833 830 827 825 822 819 816 813 811], 'kg/m^3'};
visk_CLP_HC_460 = {[25000 14500 9500 6000 4000 2800 1900 1400 1000 720 550 420 340 260 210 170 140 115 95 80 68 59 51 44 38 34], 'mm^2/s'};
temp = [temp_CLP_100; temp_CLP_150; temp_CLP_220; temp_CLP_320; temp_CLP_460; temp_CLP_PG_150; temp_CLP_PG_220; temp_CLP_PG_320; temp_CLP_PG_460; temp_CLP_HC_150; temp_CLP_HC_220; temp_CLP_HC_320; temp_CLP_HC_460];
dens = [dens_CLP_100; dens_CLP_150; dens_CLP_220; dens_CLP_320; dens_CLP_460; dens_CLP_PG_150; dens_CLP_PG_220; dens_CLP_PG_320; dens_CLP_PG_460; dens_CLP_HC_150; dens_CLP_HC_220; dens_CLP_HC_320; dens_CLP_HC_460];
visk = [visk_CLP_100; visk_CLP_150; visk_CLP_220; visk_CLP_320; visk_CLP_460; visk_CLP_PG_150; visk_CLP_PG_220; visk_CLP_PG_320; visk_CLP_PG_460; visk_CLP_HC_150; visk_CLP_HC_220; visk_CLP_HC_320; visk_CLP_HC_460];
density = tablelookup(temp(fluid_type,:), dens(fluid_type,:), temperature, interpolation = smooth);
viscosity_kin = tablelookup(temp(fluid_type,:), visk(fluid_type,:), temperature, interpolation = smooth);
end
%inputs
% temperature = {1, '1'}; % :left
%end
nodes
G = NORD.Hydraulics.Domain.hydraulic(density=density,viscosity_kin=viscosity_kin); % :right
end
end
… and this is my custom domain:
domain hydraulic
% Hydraulic Domain
variables % Across
p = {value={1,'bar'},imin={0,'bar'}}; % Pressure
end
variables(Balancing = true) % Through
q = {0,'lpm'}; % Flow rate
end
parameters
viscosity_kin = {0,'mm^2/s'}; % Kinematic viscosity
density = {0,'kg/m^3'}; % Density of the oil
bulk = {0.8e9,'Pa'}; % Bulk modulus at atm. pressure and no gas
alpha = {0.005,'1'}; % Relative amount of trapped air
range_error = {2,'1'}; % Pressure below absolute zero
RD_18 = {0.015,'m'}; % Pipe diameter NW18
RD_10 = {0.008,'m'}; % Pipe diameter NW10
RD_12_5 = {0.0125,'m'}; % Pipe diameter for DMO
RD_06 = {0.006,'m'}; % Pipe diameter for DMO
BR_18 = {0.060,'m'}; % Bend radius of the NW18 pipe
BR_10 = {0.027,'m'}; % Bend radius of the NW10 pipe
Zeta_R_AURO = {1,'1'}; % Zeta value for outflow (AURO)
Zeta_U_90_18 = {0.110,'1'}; % Zeta value for 90° pipe bend
Zeta_U_90_10 = {0.117,'1'}; % Zeta value for 90° pipe bend
Zeta_U_WV = {1.07,'1'}; % Zeta value for WV
end
end
domain parameters, inputs, source component, simscape MATLAB Answers — New Questions
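Component and domain parameters in Simscape are evaluated at compile time, so they cannot follow a run-time signal such as a ramp. A commonly suggested direction, shown here as an untested sketch with shortened, illustrative two-point tables (not the original Rickmeier data), is to take the temperature as a physical-signal input and perform the lookup in the equations section, declaring density and viscosity as component variables rather than parameters:

```
% Untested Simscape-language sketch: run-time temperature via an input,
% with the table lookup moved to the equations section.
component oil_properties_rt
  inputs
    T = {20, '1'};                % temperature signal from a Simulink-PS Converter  :left
  end
  parameters (Access=private)
    % Shortened illustrative breakpoints, NOT the original data
    temp_tab = {[0 100],   '1'};
    dens_tab = {[900 840], 'kg/m^3'};
    visk_tab = {[1800 11], 'mm^2/s'};
  end
  variables (Access=private)
    density       = {900, 'kg/m^3'};   % computed at run time
    viscosity_kin = {100, 'mm^2/s'};
  end
  equations
    density       == tablelookup(temp_tab, dens_tab, T, interpolation = smooth);
    viscosity_kin == tablelookup(temp_tab, visk_tab, T, interpolation = smooth);
  end
end
```

Note that node parameters such as `density=density` in a `nodes` section also require compile-time values, so propagating run-time properties to the rest of the model generally means carrying them as domain variables instead, or propagating the temperature itself and repeating the lookup inside each component.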
How to simulate the IO interface model for HIL testing with Simulink?
I have already simulated the entire vehicle model, but I don't know how to simulate the IO interface model that can interact with hardware data. simulink, io MATLAB Answers — New Questions
how to find the bit allocation factor?
How does the bit allocation factor affect the quality of a video? How can this factor be used to assess video quality, and how can it be calculated if I have a video? bit allocation factor MATLAB Answers — New Questions
How to copy a built-in Data connector in Global region to China region?
A lot of Sentinel solutions/data connectors are not available in China. For example: Dynamics 365 connector. Is it possible to get the source code of a built-in data connector (like Dynamics 365 connector) and create custom data connector in China?
Read More
Business Central Post Deployment Offer
Hi all, 
I would like to know, in this new Business Central deployment offer: does the partner need to get all the licenses subscribed by the customer in the very first go, or is the ACR calculated based on the subscriptions in the first year? In other terms, is the ACR incentive eligible only when all the licenses are subscribed at the outset, or is it the sum total over the first year?
Read More
How to generate Custom Bitstream for Zedboard to deploy Neural Network model?
imds = imageDatastore('Result_fish_images(NA)', ...
'IncludeSubfolders',true, ...
'LabelSource','foldernames');
%%
[imdsTrain,imdsValidation,imdsTest] = splitEachLabel(imds,0.7,0.15,0.15,"randomized");
%%
numTrainImages = numel(imdsTrain.Labels);
idx = randperm(numTrainImages,16);
figure
for i = 1:16
subplot(4,4,i)
I = readimage(imdsTrain,idx(i));
imshow(I)
end
%%
classNames = categories(imdsTrain.Labels);
numClasses = numel(classNames)
%%
net = imagePretrainedNetwork("alexnet",NumClasses=numClasses);
net = setLearnRateFactor(net,"fc8/Weights",20);
net = setLearnRateFactor(net,"fc8/Bias",20);
%%
inputSize = net.Layers(1).InputSize
%%
pixelRange = [-30 30];
imageAugmenter = imageDataAugmenter( ...
'RandXReflection',true, ...
'RandXTranslation',pixelRange, ...
'RandYTranslation',pixelRange);
augimdsTrain = augmentedImageDatastore(inputSize(1:2),imdsTrain, ...
'DataAugmentation',imageAugmenter);
%%
augimdsValidation = augmentedImageDatastore(inputSize(1:2),imdsValidation);
%%
options = trainingOptions("sgdm", …
MiniBatchSize=10, …
MaxEpochs=6, …
Metrics="accuracy", …
InitialLearnRate=1e-4, …
Shuffle="every-epoch", …
ValidationData=augimdsValidation, …
ValidationFrequency=3, …
Verbose=false, …
Plots="training-progress");
%%
net = trainnet(augimdsTrain,net,"crossentropy",options);
%%
scores = minibatchpredict(net,augimdsValidation);
YPred = scores2label(scores,classNames);
%%
idx = randperm(numel(imdsValidation.Files),4);
figure
for i = 1:4
subplot(2,2,i)
I = readimage(imdsValidation,idx(i));
imshow(I)
label = YPred(idx(i));
title(string(label));
end
%%
YValidation = imdsValidation.Labels;
accuracy = mean(YPred == YValidation)
After the above, we performed quantization and saved the network in the quantizedNet variable. We flashed the memory card with a Linux image for the ZedBoard using the SoC Blockset support package. We tested the communication between our ZedBoard and laptop via the zynq() command and were able to retrieve its IP address. Now we want to deploy the trained model on the ZedBoard platform using:
hTarget = dlhdl.Target('Xilinx','Interface','Ethernet');
hW = dlhdl.Workflow('Network',quantizedNet,'Bitstream','zcu102_int8','Target',hTarget);
dn = hW.compile;
hW.deploy;
output = hW.predict(InputImg);
This should give us the prediction result by performing the operation on the FPGA and fetching back the result. But the prebuilt bitstreams such as zcu102 or zc706 are not available for the ZedBoard. How do we generate a custom bitstream targeting the ZedBoard?
zedboard, fpga, bitstream, neural network, alexnet MATLAB Answers — New Questions
replace sub-matrix values with zeros
Hello,
I have a large cell array C, which contains a number of matrices in the format C{i}{j}(k1,k2). The matrix size (the range of k1 and k2) differs for different i and j values. I would like to set all entries to zero except those for certain i and j values. For example, i = 1:7, and j varies from 1 to 128 for different i values. I want to keep the values at i = 1, j = 1 (i.e., C{1,1}{1,1}) and replace all other values with zero. Please help me write a loop that keeps the original size of each matrix but replaces its values with zero, except for the target ones.
Related information: I encountered this problem when I am using curvelet. I decomposed the image into different scales and wedges. Now I want to do an inverse curvelet transform of the original image without a particular scale and wedge. I would like to separate and visualize the variations at different scales and wedges. I wonder how I can make the matrix right.
thanks a lot in advance.
please help
matrix, conversion, curvelet, matrix manipulation MATLAB Answers — New Questions
Problem with estimating PDF (ksdensity)
Attached are two sets of data and I need to estimate the Probability density function (PDF) for both of them.
The attached variable detection has 32 elements and a unit of percentages (between 0 and 100 %), and the variable in_process has 96 elements and a unit of number of days (between 0 and 212 days).
I want to estimate the PDF of both variables. For that I am using ksdensity with the 'support' option, because I don't want the values on the x-axis to be negative or over 100%.
Therefore,
for the estimation of PDF of the detection I use the following code:
detection(detection==0)=0.0001; %data must be between the support boundaries
detection(detection==100)=99.9999;
pts=0:0.1:100;
[f,x]=ksdensity(detection,pts,'support',[0,100]);
plot(x,f);
and for the estimation of PDF of the in_process I use the same following code:
in_process(in_process==0)=1;
in_process(in_process==212)=211;
pts=0:0.1:212;
[f,x]=ksdensity(in_process,pts,'support',[0 212]);
plot(x,f);
My problem is that the first one looks pretty good (it has a similar shape to the histogram of detection and looks similar to the PDF produced without the support option), while the other one looks bad (it creates artificial bumps at the beginning and at the end of the interval).
I don't understand why this is happening. Why does the first one look good while the second one doesn't?
Is this even a good approach, and does it make sense to estimate the PDF of these variables?
Thank you for your help.
#ksdensity, #pdf MATLAB Answers — New Questions
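The bumps at the interval ends are a known artifact of kernel density estimation on a bounded support: ksdensity's 'support' option works through a transformation of the data, which can inflate the estimated density wherever samples sit close to a boundary. One common alternative is the reflection trick, sketched here in Python with SciPy's gaussian_kde (an assumption for illustration: this is not what ksdensity does internally): mirror the sample about both boundaries, fit one KDE to all three copies, and rescale.

```python
import numpy as np
from scipy.stats import gaussian_kde

def boundary_kde(data, lo, hi):
    """KDE on [lo, hi] via the reflection trick: mirror the sample about
    both boundaries, fit one KDE to all three copies, then rescale by 3
    so the density on [lo, hi] integrates to (approximately) one."""
    aug = np.concatenate([data, 2 * lo - data, 2 * hi - data])
    kde = gaussian_kde(aug)
    return lambda x: 3 * kde(x)

rng = np.random.default_rng(0)
days = rng.uniform(0, 212, size=96)   # stand-in for the in_process data
pdf = boundary_kde(days, 0, 212)
xs = np.linspace(0, 212, 500)
mass = float(np.sum(pdf(xs)) * (xs[1] - xs[0]))
print(round(mass, 2))                 # total probability mass, close to 1
```

Whether any smooth density is meaningful for 96 integer-valued day counts is a separate modeling question; with so few samples, a histogram may be the more honest summary.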
HOW TO PLOT ON THE SAME FIGURE PLOTS OF DIFFERENT SCRIPTS
Hi, I'm in trouble because I have two programs with the same variables and parameters. The aim of the study is to change a value and plot the results. The problem is that I want the results on the same plot, but I use the same variable names in the two different programs, so when I use a function to join the figures together, MATLAB resets the values obtained in the first program and runs only the second one.
Is there a method to avoid changing all the variable names in one of the two programs (because they have something like 500 lines)? transferred MATLAB Answers — New Questions
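A fix that works in any language is to stop sharing one global workspace: wrap each script in a function so its variables stay local, return only the curves, and then draw both on one shared set of axes (in MATLAB, that means turning each script into a function and using hold on). A Python sketch of the structure; the names program_a/program_b and the parameter values are made up for illustration:

```python
def program_a():
    """First 'script': its variables live only in this function's scope."""
    x = [i * 0.1 for i in range(50)]
    y = [v * 2 for v in x]          # parameter value #1
    return x, y

def program_b():
    """Second 'script': reuses the same variable names without clashing."""
    x = [i * 0.1 for i in range(50)]
    y = [v * 3 for v in x]          # parameter value #2
    return x, y

# Each run is isolated, so neither overwrites the other's results;
# both curves can then be drawn on one shared set of axes.
results = {"run 1": program_a(), "run 2": program_b()}
for label, (x, y) in results.items():
    print(label, y[-1])
```

Because the variables never share a scope, neither 500-line program needs renaming; only the two return statements and a short plotting driver are new.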
Selecting Tables to Sync from finance and operations | programmatically
Dear all, I have some important questions, please:
Is there any option to programmatically select tables to sync from Finance and Operations? Currently this is a manual and time-consuming process. We are open to any tool: PowerShell, Python, Power Automate, etc.
Is there any option to force the sync to start without waiting for the interval setting in the advanced configuration?
Choose finance and operations data in Azure Synapse Link for Dataverse – Power Apps | Microsoft Learn Read More
First order PID controller
Hi,
I am having a bit of bother with a PID controller I am creating on MATLAB. I have the below transfer function for my system, which i’ve to incorporate a PID controller into.
clc
clear all
% Implementing T.F G(s) = 85/(42s+1)
num = [85]; %Numerator of Transfer function (DC Gain k)
den = [42 1]; %Denominator of Transfer function (Time constant tau +1)
Gs = tf(num,den,'InputDelay',10) % transfer function with a 10 s input delay (lag time)
u = 1; %unit step input
% plotting the graph
figure(1)
[x,y]=step(u*Gs);
plot(y,x);
xlabel('time /s'); ylabel('Level');
I have tried using the below code for the PID controller, which doesn’t seem to have the same times. Also on all controllers i’ve done the step response never got to 1 without tuning before.
% Implementing T.F G(s) = 85/(42s+1)
num = [85]; %Numerator of Transfer function (DC Gain k)
den = [42 1]; %Denominator of Transfer function (Time constant tau +1)
Gs= tf(num,den); %TF function
H=[1]; %feedback
M=feedback(Gs,H);
step(M)
grid on
When I add the PID controller script I no longer get anything I expect. I set Kp to 2, which does quicken the response; however, I expected some overshoot. I have increased Kp by various higher values and it only produces quicker responses without overshoot. This is not how any of my other controllers have behaved.
%%
Kp = 2;
Ki = 0;
Kd = 0;
Gc=pid(Kp,Ki,Kd)
Mc=feedback(Gc*Gs,H)
step(Mc)
grid on
I know I am doing something wrong, but can't see what. Is there anyone that can look this over and assist? I am using MATLAB Online, as IT won't allow me to download additional content on my laptop. You may have also noticed that there is no delay added to the scripts; this is because it doesn't work if I add one.
pid controller MATLAB Answers — New Questions
Why isn’t the numerical value assigned by matlab verified using == function.
I have calculated the value of R using my code, which assigned 0.0094, but if I try to verify it using the == operator, it returns logical zero.
I had used format short, but evidently that isn't doing much.
What should I change here?
logical MATLAB Answers — New Questions
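The likely cause here is display versus storage: format short only changes how MATLAB prints a number, so a value shown as 0.0094 is usually not exactly 0.0094, and == compares the full stored value. The usual fix is a tolerance comparison (in MATLAB, abs(R - 0.0094) < tol). A Python sketch of the same pitfall; the value 1/106.4 is a made-up stand-in for R:

```python
import math

R = 1.0 / 106.4              # stored at full precision, displays as ~0.0094
print(f"{R:.4f}")            # prints 0.0094, like MATLAB's format short

print(R == 0.0094)           # False: the display rounding is not the stored value
print(math.isclose(R, 0.0094, abs_tol=5e-5))  # True: compare with a tolerance
```

Pick abs_tol to match the precision you actually care about; here 5e-5 is half a unit in the fourth decimal place, matching the four digits shown.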
Issue with Autoruns v14.11 – Offline System Registry Hives Not Unmounted
Using the Analyze Offline System option leaves registry hives mounted, risking system corruption.
Steps to Reproduce:
Open Autoruns v14.11.
Use File > Analyze Offline System.
Close Autoruns.
Observe that registry hives remain mounted after the process has terminated. (Regedit.exe > HKLM > autoruns.software / autoruns.system / autoruns.user)
Impact:
Can render the offline system unbootable.
Prevents you from using Analyze Offline System again, as the HKLM\autoruns.* mount points are already in use.
Workaround: Use v13.100, which works correctly.
Read More