Month: August 2024
Matlab HDL cosimulation for Lattice CPLD
I regularly develop VHDL code for controllers that is supposed to run on FPGAs/CPLDs, and I’m looking for a way to co-simulate this code in a closed-loop simulation environment that mimics the behavior of the controlled system. What I would like is to define a dynamic model in Matlab (e.g., as a state-space model or a transfer function) and to simulate this model together with the controller’s HDL implementation, using the model’s outputs as stimuli for the controller and the controller’s output as input for the dynamic model. Is there any way I can accomplish this with Matlab? If so, what do I have to look for?
What I found so far is apparently called HDL Verifier, and it seems to be pretty much what I want. However, I’m currently using Lattice MachXO2 CPLDs, and Lattice Diamond ships with "ModelSim Lattice FPGA Edition" as its simulator. According to this post
https://ch.mathworks.com/matlabcentral/answers/493504-which-modelsim-editions-can-i-use-for-cosimulation-with-hdl-verifier
HDL Verifier requires an FLI (foreign language interface), which the Lattice Edition of ModelSim does not provide. Is there any way I can use HDL Verifier, e.g., by using a different simulator? vhdl, hdl simulation, hdl verifier, cosimulation, fpga MATLAB Answers — New Questions
How to display a color bar in Matlab as shown below
Hello, experts.
I want to draw a color bar with discontinuous and regular intervals like the first picture above.
However, it would be best to draw a continuous color bar like the second picture. Is there a solution? colorbar, colormap, colormapeditor MATLAB Answers — New Questions
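For the color bar question above: since the pictures are not attached, here is a hedged Python/matplotlib sketch of the discrete-intervals idea (the MATLAB analogue is a colormap with a small number of rows plus matching Ticks/TickLabels on the colorbar). The interval edges and colors below are assumptions for illustration only:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
from matplotlib import pyplot as plt
from matplotlib.cm import ScalarMappable
from matplotlib.colors import BoundaryNorm, ListedColormap

# Hypothetical interval edges and colors; substitute the values from your figure.
levels = [0, 1, 2, 4, 8]
cmap = ListedColormap(["navy", "teal", "gold", "firebrick"])
norm = BoundaryNorm(levels, ncolors=cmap.N)

fig, ax = plt.subplots()
sm = ScalarMappable(norm=norm, cmap=cmap)
sm.set_array([])  # the mappable needs an (empty) array before it can feed a colorbar
# spacing="uniform" draws equal-height blocks even for irregular intervals;
# spacing="proportional" sizes each block by its interval width instead.
fig.colorbar(sm, ax=ax, spacing="uniform", ticks=levels)
fig.savefig("discrete_colorbar.png")
```

For the continuous variant, drop the BoundaryNorm and use a smooth colormap with an ordinary Normalize.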
Windows 11 Clock Disappeared
Hey everyone,
I recently had to bid farewell to my aging Toshiba laptop due to its excruciatingly slow boot time and overall sluggishness during operation – not to mention its perpetually stuck lid!
I’ve now upgraded to a snazzy Medion gaming laptop running Windows 11, which required a bit of adjustment coming from Windows 10. For the most part, the transition has been smooth. However, a few days back, I noticed that my clock vanished from the taskbar, possibly following one of the numerous updates Windows 11 has automatically installed within the ten days I’ve owned this laptop. Despite my efforts to rectify the issue by delving into the settings under “Time & Language” and “Date & Time,” I found the option to ‘Show time and date in the system tray’ disabled and unalterable due to a peculiar notification at the top of the page stating that “Some of these settings are managed by your organization.”
Considering I am the sole user on this device, I found this restriction rather exasperating – a common frustration with technology.
Numerous attempts to resolve the problem, including tweaking registry settings, resetting the clock app, and scanning system files, all ended in vain. Frustrated but undeterred, I decided to download an alternative clock app called “Eleven Clock” in hopes of filling the time-keeping void. Alas, my efforts led to a new issue – the battery icon disappeared. As I mulled over the dilemma, an idea struck me to move the clock app to the taskbar overflow section, only to realize there was no apparent option to do so within the system settings.
After several fruitless hours spent troubleshooting (or rather, not troubleshooting), I find myself reaching out to seek assistance. Below are screenshots highlighting the predicament, showcasing the message “Some of these settings are managed by your organization” along with the grayed-out “Show time and date” option, the taskbar sans clock but with the battery icon, and the inverse scenario with the clock displayed (via Eleven Clock) but missing the battery icon.
Your help with this conundrum would be greatly appreciated!
Read More
Problem with Windows File Explorer
Hello, technical enthusiasts! After updating to 23H2, I noticed a new feature in Windows File Explorer. When you click on a folder or file and begin to move it around, there is now a visual representation of the item being moved. I’m curious if there’s a way to deactivate or hide this image display during the moving process in File Explorer. Here’s a glimpse of what it entails:
Read More
“Task Bar with Oscillating Feature”
Windows 11 23H2 Update
Windows Feature Experience Pack version 1000.22677.1000.0
Latest Update: November 17, 2023
Issue with Taskbar Oscillation in Visio on Asus OLED Vivobook
Hello,
I have been experiencing a persistent issue with the taskbar in Visio on my Asus OLED Vivobook. The taskbar keeps oscillating between hiding and unhiding, causing a significant distraction during work.
I have attempted the following troubleshooting steps without success:
1. Enabled and disabled the Asus OLED Care function “Automatically hide Windows taskbar in desktop mode.”
2. Adjusted all taskbar settings in Settings to toggle functions on and off.
3. Ran a repair on Visio to address the problem.
Despite these efforts, the oscillation issue persists. Do you have any suggestions or additional solutions that could help resolve this frustrating problem?
Thank you for your assistance.
Best regards,
John
Read More
How to Manage without Cloud Storage on Windows
Recently upgraded to Windows 11 on my new computer and I’m not thrilled about being prompted to install and pay for Windows 365 in order to store my data in the cloud. I prefer to continue using the three hard drives I’ve always relied on, but I can’t figure out how to disable the cloud system. Any assistance would be greatly appreciated.
One of my main concerns is the slow internet speeds in my area, with only 6 Mbps upload and around 14 Mbps download. It’s incredibly time-consuming to save files under these conditions. Additionally, I don’t want to be pressured into paying for storage that I have already paid for elsewhere. I keep getting reminders to renew Windows 365, even though I haven’t purchased it yet, as I’m content with using Office 2013 for now.
Read More
Build your own AI Text-to-Image Generator in Visual Studio Code
Hello, I’m Hamna Khalil, a Beta Microsoft Learn Student Ambassador from Pakistan. Currently, I am in my third year of pursuing a bachelor’s degree in Software Engineering at Fatima Jinnah Women University.
Do you want to build your own AI Text-to-Image Generator in less than 15 minutes? Join me as I’ll walk you through the process of building one using Stable Diffusion within Visual Studio Code!
Prerequisites
Before you start, ensure you have the following:
Python 3.9 or higher.
Hugging Face Account.
Step 1: Set Up the Development Environment
In your project directory, create a file named requirements.txt and add the following dependencies to the file:
certifi==2022.9.14
charset-normalizer==2.1.1
colorama==0.4.5
customtkinter==4.6.1
darkdetect==0.7.1
diffusers==0.3.0
filelock==3.8.0
huggingface-hub==0.9.1
idna==3.4
importlib-metadata==4.12.0
numpy==1.23.3
packaging==21.3
Pillow==9.2.0
pyparsing==3.0.9
PyYAML==6.0
regex==2022.9.13
requests==2.28.1
tk==0.1.0
tokenizers==0.12.1
torch==1.12.1+cu113
torchaudio==0.12.1+cu113
torchvision==0.13.1+cu113
tqdm==4.64.1
transformers==4.22.1
typing_extensions==4.3.0
urllib3==1.26.12
zipp==3.8.1
To install the dependencies listed in the requirements.txt file, run the following command in your terminal:
pip install -r requirements.txt
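After the install completes, you can sanity-check that the pinned versions took effect. A small hedged helper (not part of the original tutorial; the function name is mine):

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(name):
    """Return the installed version string of a package, or None if absent."""
    try:
        return version(name)
    except PackageNotFoundError:
        return None

# After `pip install -r requirements.txt`, the reported versions should match
# the pins, e.g. installed_version("diffusers") should report 0.3.0.
print(installed_version("pip"))
```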
Step 2: Configure Authentication
In your project directory, create a file named authtoken.py and add the following code to the file:
auth_token = "ACCESS TOKEN FROM HUGGING FACE"
To obtain an access token from Hugging Face, follow these steps:
Log in to your Hugging Face account.
Go to your profile settings and select Access Tokens.
Click on Create new token.
Choose the token type as Read.
Enter a Token name and click Create token.
Copy the generated token and replace ACCESS TOKEN FROM HUGGING FACE in the authtoken.py file with your token.
Step 3: Develop the Application
In your project directory, create a file named application.py and add the following code to the file:
# Import the Tkinter library for GUI
import tkinter as tk
# Import the custom Tkinter library for enhanced widgets
import customtkinter as ctk
# Import PyTorch for handling tensors and model
import torch
# Import the Stable Diffusion Pipeline from diffusers library
from diffusers import StableDiffusionPipeline
# Import PIL for image handling
from PIL import Image, ImageTk
# Import the authentication token from a file
from authtoken import auth_token
# Initialize the main Tkinter application window
app = tk.Tk()
# Set the size of the window
app.geometry("532x632")
# Set the title of the window
app.title("Text-to-Image Generator")
# Set the appearance mode of customtkinter to dark
ctk.set_appearance_mode("dark")
# Create an entry widget for the prompt text input
prompt = ctk.CTkEntry(height=40, width=512, text_font=("Arial", 20), text_color="black", fg_color="white")
# Place the entry widget at coordinates (10, 10)
prompt.place(x=10, y=10)
# Create a label widget for displaying the generated image
lmain = ctk.CTkLabel(height=512, width=512)
# Place the label widget at coordinates (10, 110)
lmain.place(x=10, y=110)
# Define the model ID for Stable Diffusion
modelid = "CompVis/stable-diffusion-v1-4"
# Define the device to run the model on
device = "cpu"
# Load the Stable Diffusion model pipeline
pipe = StableDiffusionPipeline.from_pretrained(modelid, revision="fp16", torch_dtype=torch.float32, use_auth_token=auth_token)
# Move the pipeline to the specified device (CPU)
pipe.to(device)
# Define the function to generate the image from the prompt
def generate():
    # Disable gradient calculation for efficiency
    with torch.no_grad():
        # Generate the image with guidance scale
        image = pipe(prompt.get(), guidance_scale=8.5)["sample"][0]
    # Convert the image to a PhotoImage for Tkinter
    img = ImageTk.PhotoImage(image)
    # Keep a reference to the image to prevent garbage collection
    lmain.image = img
    # Update the label widget with the new image
    lmain.configure(image=img)

# Create a button widget to trigger the image generation
trigger = ctk.CTkButton(height=40, width=120, text_font=("Arial", 20), text_color="white", fg_color="black", command=generate)
# Set the text on the button to "Generate"
trigger.configure(text="Generate")
# Place the button at coordinates (206, 60)
trigger.place(x=206, y=60)
# Start the Tkinter main loop
app.mainloop()
To run the application, execute the following command in your terminal:
python application.py
This will launch the GUI where you can enter a text prompt and generate corresponding images by clicking the Generate button.
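One optional tweak, not part of the original tutorial: the requirements pin CUDA 11.3 wheels while application.py hard-codes device = "cpu". A common pattern is to pick the device at runtime instead:

```python
import torch

# Prefer the GPU when the CUDA build of PyTorch actually sees a device.
device = "cuda" if torch.cuda.is_available() else "cpu"
# Half precision is generally only worthwhile on GPU; stay in float32 on CPU.
dtype = torch.float16 if device == "cuda" else torch.float32
print(f"Running on {device} with {dtype}")
```

With this, `pipe.to(device)` and the `torch_dtype` argument adapt to the machine the app runs on.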
Congratulations! You have successfully built an AI Text-to-Image Generator using Stable Diffusion in Visual Studio Code. Feel free to explore and enhance the application further by adding new features and improving the user interface. Happy coding!
Resources
Microsoft Tech Community – Latest Blogs –Read More
Unlock the Power of GitHub Copilot Workspaces: A Beginner’s Guide
GitHub Copilot Workspaces builds on the foundation of Copilot, offering a collaborative environment where teams can leverage AI to enhance their development processes. Unlike the standalone Copilot, Workspaces integrates seamlessly with your development workflow, providing a shared space for code, documentation, and real-time collaboration.
Why Use Copilot Workspaces?
Enhanced Collaboration: Work together seamlessly with real-time code sharing and editing.
Improved Code Quality: Leverage AI for intelligent code suggestions and reviews.
Streamlined Workflows: Integrate with existing tools and processes for a cohesive development environment.
Setting Up Copilot Workspaces
Prerequisites
A GitHub account
GitHub Copilot subscription
Compatible IDE (e.g., Visual Studio Code)
Getting Started
Option 1: Open an issue in a GitHub repo and click the “Open in Workspace” button. This starts a new Copilot Workspace session, pre-seeded with the issue as the task, and lets you iterate on the spec/plan/implementation for it.
Option 2: Visit the Copilot Workspace dashboard and start a new session by clicking the “New Session” button. This lets you search for a repo and then define an ad-hoc task for it, effectively like a draft issue. If you select a template repo, you can instead define the requirements of a new repo that you create from it.
Iterate on a pull request by clicking the “Open in Workspace” button, defining the change you’d like to make (e.g. “Add checks for potential errors”) and then implementing them.
Notice the changes suggested by GitHub Copilot
Open a workspace session in a Codespace, by clicking the “Open in Codespace” button in the header bar or in the “Implementation” panel.
Note that your workspace edits will be synced to the Codespace, and any edits you make in the Codespace are synced back to the workspace. This allows you to use VS Code/Codespaces as a companion experience for making larger edits, debugging, etc.
For instance, replacing `cp -r` with `rsync -av` for more efficient directory copying; the changes are reflected in the workspace.
Revise the spec, plan, and code with natural language: in addition to making direct edits to the specification or plan, the same capability is available in the header for changed files, which lets you revise code based on a specific instruction (e.g. “Use $HOME instead of /home/$USER”) and click Revise.
Copilot then implements the requested file changes, which are also reflected in the open Codespace.
Once you’re satisfied with your changes, you can update the PR or select whichever of the available options suits your needs.
Other capabilities provided by GitHub Copilot Workspaces include a file explorer within the browser and an integrated terminal for compiling code, package management, and self-customization of the environment.
Additional Resources
Step-by-Step: Setting Up GitHub Student and GitHub Copilot as an Authenticated Student Developer.
Learn more about GitHub Copilot
Copilot Workspace User Manual
Microsoft Tech Community – Latest Blogs –Read More
How to train Unet semantic segmentation with only one single class/label?
Hello, I’m currently working on a task to do semantic segmentation on a USG image to locate the TMJ. I did my image labelling in the Image Labeler app and created only one class, so that the class region is 1 and the background is 0. I was about to train my model with unetLayers, but it says "The value of ‘numClasses’ is invalid. Expected numClasses to be a scalar with value > 1."
I’m aware that someone asked a similar question here with an answer, but I want to ask: how do I customize unetLayers specifically to accommodate a single class? I see that unetLayers also has softmax layers but I can’t find the pixel layers. Thank you in advance! deep learning MATLAB Answers — New Questions
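For the single-class question above, a common workaround is to label the background as a second class so that numClasses = 2. This loses nothing, because a two-class softmax over logits (x, 0) is mathematically the same as a sigmoid over the single foreground logit x. A quick NumPy check of that identity (illustrative only, not MATLAB code):

```python
import numpy as np

x = np.linspace(-6.0, 6.0, 25)          # arbitrary foreground logits
sigmoid = 1.0 / (1.0 + np.exp(-x))      # single-class "is foreground" probability

# Two-class softmax with the background logit fixed at 0:
logits = np.stack([x, np.zeros_like(x)], axis=-1)
e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # numerically stable softmax
softmax = e / e.sum(axis=-1, keepdims=True)

# The foreground column of the softmax reproduces the sigmoid exactly.
assert np.allclose(softmax[..., 0], sigmoid)
```

So training with two classes (foreground and background) gives the same per-pixel probabilities that a dedicated single-class sigmoid head would.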
Troubleshooting Build Error in MAC Apple Silicon Processor for Simulink Support Package for Arduino Hardware
Build for any Target Hardware selected fails for Mac Apple Silicon processors with the following error: "bad CPU Type in executable". arduino, simulink, matlab, mac MATLAB Answers — New Questions
FFT of 3D array in MATLAB
I am trying to understand how the FFT of different directions in MATLAB works to reproduce in C/C++ instead.
So far I have the following simple example in MATLAB:
clearvars; clc; close all;
%3D FFT test
Nx = 8;
Ny = 4;
Nz= 6;
Lx =16;
Ly = 6;
dx = Lx/Nx;
dy = Ly/Ny;
%-----------
xi_x = (2*pi)/Lx;
yi_y = (2*pi)/Ly;
xi = ((0:Nx-1)/Nx)*(2*pi);
yi = ((0:Ny-1)/Ny)*(2*pi);
x = xi/xi_x;
y = yi/yi_y;
zlow = 0; %a
zupp =6; %b
Lz = (zupp-zlow);
eta_zgl = 2/Lz;
[D,zgl] = cheb(Nz);
zgl = (1/2)*(((zupp-zlow)*zgl) + (zupp+zlow));
[X,Z,Y] = meshgrid(x,zgl,y); %this gives 3d grid with z-by-x-by-y size (i.e. ZXY)
%ICs
A = 2*pi / Lx;
B = 2*pi / Ly;
u = (Z-zlow) .* (Z-zupp) .* sin(A*X).* sin(B*Y);
uh1 =(fft(u,[],3));%ZXY
uh2 =(fft(u,[],1));%ZXY
uh3 =(fft(u,[],2));%ZXY
So, in C/C++ I have a 3D tensor with (Nz+1) rows, Nx columns, and Ny matrices, and taking the 1D FFT along each "row" of u returns the same results as the following in MATLAB:
uh3 =(fft(u,[],2));%ZXY
While taking the 1D FFT of u along each column of u in C/C++ returns the same result as the following in MATLAB:
uh2 =(fft(u,[],1));%ZXY
Then my question is: what does this 1D FFT represent, and how should I represent it in C/C++?
uh1 =(fft(u,[],3));%ZXY
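As a hedged illustration of what fft(u,[],3) does, here in Python/NumPy rather than C and with made-up sizes: the third MATLAB dimension corresponds to gathering, for each fixed (row, column) pair, the Ny values across the "pages" and running a length-Ny 1-D FFT on that vector. That is the access pattern to reproduce in C/C++:

```python
import numpy as np

Nz, Nx, Ny = 7, 8, 4                     # (Nz+1) rows, Nx columns, Ny pages
rng = np.random.default_rng(0)
u = rng.standard_normal((Nz, Nx, Ny))

# Equivalent of MATLAB's fft(u,[],3): transform along the third dimension.
uh1 = np.fft.fft(u, axis=2)

# The same thing written as explicit loops: for each (row, column) pair,
# collect the Ny entries across the pages and FFT that length-Ny vector.
uh1_loop = np.empty(u.shape, dtype=complex)
for i in range(Nz):
    for j in range(Nx):
        uh1_loop[i, j, :] = np.fft.fft(u[i, j, :])

assert np.allclose(uh1, uh1_loop)
```

In row-major C storage those Ny entries are not contiguous, so a C implementation either walks memory with a stride (e.g. via FFTW's stride parameters) or copies each length-Ny vector into a contiguous scratch buffer before transforming it.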
The cheb(N) function is:
function [ D, x ] = cheb ( N )
if ( N == 0 )
D = 0.0;
x = 1.0;
return
end
x = cos ( pi * ( 0 : N ) / N )';
c = [ 2.0; ones(N-1,1); 2.0 ] .* (-1.0).^(0:N)';
X = repmat ( x, 1, N + 1 );
dX = X - X';
% Set the off diagonal entries.
D = ( c * (1.0 ./ c)' ) ./ ( dX + ( eye ( N + 1 ) ) );
% Diagonal entries.
D = D - diag ( sum ( D' ) );
return
end
meshgrid, fft, numerical libraries, c/c++ MATLAB Answers — New Questions
Sharepoint Share History
Good morning,
An employee who is responsible for auctions has to share files over time. We need to know if it is possible to see a share history (persons and message) for a given file/folder.
It would be very helpful because we would have a record of what was shared and with whom.
Please let me know if anybody can help me. Also, I’m open to any recommendations. 🙂 Thank you!
Read More
Alternative for dbo.SYSREMOTE_TABLES
I could find this: the function SYSREMOTE_TABLES has changed or no longer exists after SQL Server 2005.
Can anyone suggest an alternative to use in SQL Server 2022?
Read More
ERR_UNSAFE_PORT 87
Hello,
(Version 127.0.2651.86 (Official build) (64-bit))
I would like to report a recent problem that occurred in msedge.
I get the message “ERR_UNSAFE_PORT” with http://192.168.5.180:87/deployABC/publish.htm
I tried starting with “C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe” --explicitly-allowed-ports=87 in a shortcut.
The problem is still present.
I would like to mark this URL as safe, but I can’t find where to configure that this site is trusted.
Thank you for your help.
Read More
“Double click cell border to scroll” should have its own settings
I often double click on a cell border by mistake and end up on the very top or bottom of my table.
Since I work with big tables all the time, it is inevitable that I sometimes double-click the border when I intended just to edit a cell, making this feature very inconvenient.
I know F2 also enables editing in a cell, but it means I have to use the arrow keys to get there; scrolling through thousands of rows with arrow keys isn’t very practical, and since I already have my hand on the mouse, it would be great if double-clicking a cell border didn’t have another function.
It is currently possible to disable it by disabling “Enable fill handle and cell drag-and-drop”, which I cannot work without anymore.
These two should be separate settings in my opinion. Does anyone else agree?
Read More
Microsoft Edupreneur: Riding Waves from Class to Beach
In a world where technology and education increasingly intersect, few stories capture the transformative potential of this relationship as vividly as that of Ejoe Tso. A dedicated Microsoft Azure and AI Platform MVP in Azure AI and Cloud Native, Ejoe has been recognized as a finalist for two prestigious awards. His journey from the classroom to the beach showcases the incredible impact of leveraging Microsoft technologies to drive educational and environmental change.
Empowering Students with AI and Cloud Computing
Ejoe’s journey began in the classroom, where he used Microsoft technology to teach students advanced skills in AI, cloud computing, and entrepreneurship. As a mentor, he guided students through hands-on projects that allowed them to apply these concepts in real-world settings. One such project was the creation of “SAFERIN,” an artificial intelligence nursing assistant designed for a nearby eldercare facility. This prototype leveraged Microsoft Azure resources to enhance the care and support provided to elderly residents, demonstrating the practical application of technology in healthcare.
Ejoe’s dedication to education was recently acknowledged when he was announced as a finalist for Entrepreneurship Educator of the Year in the 2024 Triple E Awards Asia Pacific. This recognition inspires him to continue mentoring students, encouraging them to explore and innovate using Microsoft resources. By empowering the next generation of tech enthusiasts, Ejoe is planting the seeds for future breakthroughs in various fields.
Feedback from students:
Tracy Ma, Graduate in Higher Diploma in Software Engineering, explained how she valued Ejoe’s mentoring throughout this project. “Ejoe, your mentorship showed me how technology can empower people. Thank you for guiding me to create innovations that help society. Your vision and dedication to education is inspirational. Keep empowering students – you are paving the way for future changemakers like me,” said Tracy.
Another student, Sage Lai, Graduate in Higher Diploma in Software Engineering, explains the impact of receiving hands-on AI skills. Lai says, “Ejoe, your journey from teacher to edupreneur amazes me. I’m honored you gave me hands-on AI and cloud computing skills. Your BeachBot AI innovation proves the immense potential of technology for good. You inspire me to keep pushing boundaries and driving change in the world.”
Finally, John Fok, Graduate in Higher Diploma in Software Engineering shares how he admired the leadership of Ejoe during the collaboration with BeachBot AI. “I’m thrilled by your impact as an educator and founder, Ejoe. You embody how passion and education can better society. Congrats on accolades for BeachBot AI – your work sparks waves of innovation. Thank you for being a role model and guiding the next generation of innovators like me,” commented Fok.
Silver Award & Innovation Award on
“YDC Dare to Change Entrepreneurship Competition 2024”
Innovating for Environmental Change: BeachBot AI
Beyond the classroom, Ejoe has also made significant strides in environmental conservation. His AI-driven beach cleanup company, BeachBot AI, has been recognized as a top contender for the SDG Initiative of the Year. This groundbreaking project utilizes sustainable energy and advanced AI technology from Azure Cognitive Services to clean beaches by detecting and removing waste.
With crucial support from Microsoft for Startups, including Azure credits, engineering mentors, and business counsel, Ejoe launched BeachBot AI with a vision to reduce pollution and operational costs on Hong Kong’s beaches. The project has already shown remarkable results, effectively minimizing waste and setting a new standard for environmental cleanup efforts.
Transforming Ideas into Impactful Solutions
Ejoe’s journey is a testament to the power of Microsoft technologies in driving meaningful societal change. By integrating AI and cloud computing into his educational and entrepreneurial ventures, Ejoe has created lasting and viable solutions that address pressing global challenges. His nominations for prestigious awards serve as a reminder of the potential for technology to empower individuals and communities to create significant impact.
A Vision for the Future
As an MVP educator and founder, Ejoe remains committed to mentoring the next cohort of innovators. He is eager to expand the environmental benefits of BeachBot AI and inspire more students to become agents of change with Microsoft. Through his work, Ejoe continues to ride the waves of innovation, demonstrating that the synergy between education, technology, and entrepreneurship can lead to a brighter and more sustainable future.
In conclusion, Ejoe’s edupreneurial journey with Microsoft is a shining example of how one person’s passion and dedication can create ripples of change that extend far beyond the classroom. As he continues to inspire and empower others, Ejoe’s story reminds us that with the right tools and support, we can all make a difference in the world.
Microsoft Tech Community – Latest Blogs –Read More
When to use AzCopy versus Azure PowerShell or Azure CLI
Overview
In this article you will learn the difference between API operations on Azure in the control plane and the data plane, how various tools such as AzCopy, Azure PowerShell, and Azure CLI leverage these APIs, and which tool fits best for your workload. All these tools are CLI-based and work across platforms, including Windows, Linux, and macOS.
Let’s start with a quick overview of the control plane and data plane, and how to perform operations such as creating a new storage account and uploading a new blob. After that, we’ll explore some of the tools available for interacting with your storage accounts and the data inside of them and the API surfaces those tools support.
API Operations
Azure API operations can be divided into two categories – control plane (also called the management plane) and data plane.
For in-depth details about Azure’s control plane and data plane, refer to the following link: Control plane and data plane operations – Azure Resource Manager.
Control plane
All requests for control plane operations are sent to Azure Resource Manager (ARM). Azure Resource Manager has a RESTful API surface with a URL that varies by the Azure environment. For public Azure regions the URL is: https://management.azure.com. You can find all the supported API calls for each Azure service in the Azure REST API reference documentation.
Azure Resource Manager sends the request to the resource provider, which completes the operation. In the case of Storage, it is called the Storage Resource Provider (SRP).
To create a new storage account, the client sends the corresponding HTTP PUT request to Azure Resource Manager.
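As a rough illustration, the control-plane request can be sketched as follows. This is a minimal sketch: the subscription ID, resource group, account name, and api-version below are placeholders, not values from the article, and a real request would also carry an ARM bearer token.

```python
# Sketch of the ARM URL and body for creating a storage account.
# All identifiers are placeholders.
subscription_id = "00000000-0000-0000-0000-000000000000"
resource_group = "myResourceGroup"
account_name = "mystorageaccount"
api_version = "2023-01-01"

# Control-plane requests go to the Azure Resource Manager endpoint,
# not to the storage account itself.
url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}"
    "/providers/Microsoft.Storage"
    f"/storageAccounts/{account_name}"
    f"?api-version={api_version}"
)

# The PUT body declares the SKU, kind, and location of the new account.
body = {
    "sku": {"name": "Standard_LRS"},
    "kind": "StorageV2",
    "location": "westeurope",
}

print(url)
```

ARM routes this PUT to the Storage Resource Provider, which performs the actual account creation.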
Data plane
Requests for data plane operations are sent to an endpoint that is specific to your storage account instance. You can find all supported API calls for each Azure service in the Azure REST API reference documentation, and for Storage at Azure Storage REST API Reference. Storage has a different data plane REST API for each service, including Blob, Data Lake Storage Gen2, Table, Queue, and File.
To upload a single blob in one operation, use the Put Blob operation:
PUT https://myaccount.blob.core.windows.net/mycontainer/myblob
The necessary request headers are listed at this link: Put Blob (REST API)
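To make the shape of the call concrete, here is a minimal sketch of the headers such a Put Blob request carries, assuming authorization is handled by a SAS token appended to the URL (so no explicit Authorization header is needed). The account, container, and blob names are placeholders.

```python
from datetime import datetime, timezone

# Placeholder destination and payload for a single-shot blob upload.
blob_url = "https://myaccount.blob.core.windows.net/mycontainer/myblob"
content = b"hello, blob"

headers = {
    # Identifies this upload as a block blob (vs. page or append blob).
    "x-ms-blob-type": "BlockBlob",
    # REST API version the request is written against.
    "x-ms-version": "2021-12-02",
    # Timestamp in the RFC 1123 format the service expects.
    "x-ms-date": datetime.now(timezone.utc).strftime(
        "%a, %d %b %Y %H:%M:%S GMT"
    ),
    # Size of the request body in bytes.
    "Content-Length": str(len(content)),
}
```

An HTTP client would then issue `PUT` on `blob_url` (with the SAS query string appended) using these headers and `content` as the body.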
Tools
These tools are all designed to interact with the Azure APIs. The AzCopy command-line utility offers high-performance, scriptable data transfer for storage data plane operations. Azure CLI and Azure PowerShell offer more user-friendly options for executing control plane operations across all Azure services. However, both can also be utilized for fundamental storage data plane operations.
| | AzCopy | Azure CLI | PowerShell |
| --- | --- | --- | --- |
| Control plane operations | No | Yes | Yes |
| Data plane operations | Yes | Yes | Yes |
| Single files | Yes | Yes | Yes |
| Millions of files | Yes (multithreaded) | Not recommended | Not recommended |
While Azure CLI and Azure PowerShell can be used to move multiple files, AzCopy is better suited for larger data sets, especially copy operations that extend into the millions of files.
AzCopy
This portable and very lightweight binary can be used to copy files to and from Azure Storage. It’s optimized for data plane operations at scale. For more on AzCopy, see Get started with AzCopy.
AzCopy offers a wide range of authentication methods familiar to Azure users, such as device login and Managed Identity, both system and user-assigned. Furthermore, Service Principal authentication is supported, along with the capability to repurpose existing authentication tokens from Azure CLI or Azure PowerShell.
Once you’ve authenticated, or if you’ve supplied a SAS token for the source and/or destination, you can start using the copy command. The following command copies data to or from Azure Storage; it applies to an individual file, a directory, or an entire container.
azcopy copy [source] [destination] [flags]
Refer to the documentation for detailed guidelines on uploading and downloading data, as well as using flags like include/exclude patterns, wildcards, and tagging data:
Upload files to Azure Blob storage by using AzCopy v10
Download blobs from Azure Blob Storage by using AzCopy v10
In addition, AzCopy integrates a job management subcommand to facilitate handling large-scale data transfers. The integrated sync command enables you to synchronize the source and destination based on the last modified timestamp. For more on how to synchronize data with AzCopy, see Synchronize with Azure Blob storage by using AzCopy v10.
AzCopy is not limited to uploading and downloading files to and from your local device. It also utilizes Azure storage server-to-server copy APIs, making it possible to transfer data between different storage accounts and import data directly from Amazon AWS S3 and Google Cloud Storage.
The diagrams below illustrate the process of migrating data between two Azure storage accounts by utilizing the server-to-server copy APIs.
A virtual machine client in Azure initiates the PutBlobFromURL server-to-server copy API call on the destination account using AzCopy.
The destination account understands the location of the original blob due to the details supplied in the “x-ms-copy-source” header.
The data is securely transferred from the source account to the destination account via the Microsoft backbone.
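The three steps above can be sketched as the request AzCopy issues: a PUT on the destination blob whose `x-ms-copy-source` header points at the source blob. This is an illustrative sketch only; the account names are placeholders and the truncated SAS query string stands in for a real token granting read access on the source.

```python
# Placeholder source blob, with a (truncated, placeholder) SAS token
# that lets the destination account read it.
source_blob = (
    "https://sourceaccount.blob.core.windows.net/data/report.csv"
    "?sv=...&sig=..."
)

# The PUT is issued against the *destination* blob URL.
destination_blob = (
    "https://destaccount.blob.core.windows.net/backup/report.csv"
)

headers = {
    "x-ms-version": "2021-12-02",
    # Tells the destination account where to pull the bytes from; the
    # transfer then happens server-to-server over the Microsoft
    # backbone rather than through the client.
    "x-ms-copy-source": source_blob,
}
```

Because the client only sends headers, not the data itself, the copy completes without the blob contents ever passing through the machine running AzCopy.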
Examples of how to perform server-to-server copies with AzCopy can be found at:
Copy blobs between Azure storage accounts with AzCopy v10
Copy data from Amazon S3 to Azure Storage by using AzCopy
Copy from Google Cloud Storage to Azure Storage with AzCopy
Azure CLI
The Azure Command-Line Interface (CLI) is a cross-platform command-line tool to connect to Azure and execute administrative commands on Azure resources. It allows the execution of commands through a terminal using interactive command-line prompts or a script. For more on Azure CLI, see Get started with Azure Command-Line Interface (CLI).
Azure CLI supports several common authentication methods. The simplest starting point is the Azure Cloud Shell, but you can also use interactive login, Service Principal, and Managed Identities.
The Azure CLI is versatile and allows you to carry out a wide range of control plane tasks, ranging from creating a storage account to the more complex activities such as establishing network rules or configuring encryption scopes.
You can create a new storage account with a single command. For example:
az storage account create -n [accountName] -g [resourceGroupName] -l [region] --sku [storageSKU]
Azure CLI is also capable of performing simple data plane activities, including uploading, downloading, or copying either an individual file or a whole directory. Nonetheless, for scenarios involving a large number of files, it is strongly advised to use AzCopy.
To transfer an individual blob to a storage account container, the command below may be utilized:
az storage blob upload -f /path/to/file -c [containerName] -n [blobName]
Find the complete list of commands at: az storage.
Azure PowerShell
Azure PowerShell is a set of cmdlets for managing Azure resources directly from PowerShell. Azure PowerShell is designed to make it easy to learn and get started with, but provides powerful features for automation. For more on Azure PowerShell, see Get started with Azure PowerShell.
Azure PowerShell, like the previously mentioned tools, supports various authentication methods as well – both interactive and non-interactive.
Like Azure CLI, Azure PowerShell handles control plane operations. The choice of which set of tools to use is entirely yours. Should you wish to incorporate Storage tasks into an existing PowerShell script, the Az.Storage module could be the preferable option. For those already operating within a Linux shell environment, the Azure CLI may feel immediately familiar and user-friendly.
You can create a new storage account quickly with Azure PowerShell. For example:
$StorageHT = @{
    ResourceGroupName = $ResourceGroup
    Name              = 'mystorageaccount'
    SkuName           = 'Standard_LRS'
    Location          = $Location
}
$StorageAccount = New-AzStorageAccount @StorageHT
$Context = $StorageAccount.Context
For a basic blob upload to a storage account container, you can use a command such as the following:
$Blob1HT = @{
    File             = 'D:\Images\Image001.jpg'
    Container        = $ContainerName
    Blob             = 'Image001.jpg'
    Context          = $Context
    StandardBlobTier = 'Hot'
}
Set-AzStorageBlobContent @Blob1HT
Should you need to download or copy blobs or directories, a comprehensive set of commands can be found at Az.Storage Module.
Conclusion
There are many options when it comes to interacting with Azure storage accounts from the command line. Your considerations for which tool you use will depend on the activity you are performing and the API surface that is required.
If you’re interacting with your resources on the control plane, Azure CLI and Azure PowerShell are powerful tools that allow you to create, manage, and delete resources. If you are interacting with data in your storage accounts and performing operations on the data plane, AzCopy is purpose built for uploading, downloading, and copying your data to and from Azure Storage in the most performant way.
Moving variables between episodes
To use MATLAB for RL, I have defined the action and observation space and the agent in a .m file, which also calls a reset function and a step function, also defined in .m files, not in Simulink. How can I move these variables while MATLAB is still running the function train(agent,env)? I want to normalize all discounted rewards across all episodes. MATLAB Answers — New Questions
Abnormal exit error while running model advisor check
I am facing the error below while running the Model Advisor check ‘Check Stateflow charts for strong data typing’:
Error: Abnormal exit: Invalid or deleted object
Attaching a screenshot for reference. Please help me solve this.
Auto Signalize support for Left Hand Side driving
The Signal tool has a very useful Auto Signalize feature allowing you to automatically add and setup traffic lights at a junction. However, only Right Hand Side driving is supported as you can see by the last two options: 4 Way Protected Left and 4 Way Permitted Left. When designing maps for Left Hand Side driving environments all the Signal Lights need to be manually edited to inverse their internal setup. This is extremely tedious and error prone.
Is there a way to automate this process? Are there any plans to add LHS support to the Auto Signalize feature?