Month: June 2024
Custom Log parsing issue
Hi all,
I have an issue with custom logs (NGINX Logs).
I created a new table in the Log Analytics workspace using DCR-based mode.
The raw log looks like this:
2024-06-09T10:52:13+02:00 | 149.154.229.84 | GET | https://dsk.xxxx.lu/test.html | HTTP/1.1 | 404 | 21 | - | Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36 | 0.053
I used the following JSON formats:
Format 1:
[
{
"TimeGenerated": "2024-06-04 08:31:56",
"DateTime": "2024-06-09T10:52:13+02:00",
"http_x_forwarded_for" : "149.154.229.84",
"request_method" : "GET",
"URL" : "https://https://dsk.xxxx.lu/test.html"
"HTTP_Version": "HTTP/1.1",
"status": "404",
"body_bytes_sent": "21",
"http_referer": "-",
"http_user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36",
"request_time": "0.053"
}
]
Format 2:
[
{
"TimeGenerated": "2024-06-04 08:31:56",
"RawData": "2024-06-09T10:52:13+02:00 | 149.154.229.84 | GET | https://dsk.xxxx.lu/test.html | HTTP/1.1 | 404 | 21 | - | Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36 | 0.053
"
}
]
If I use Format 1, I can't see the log payload when running a KQL query (see the "Log NOK" image).
If I use Format 2, I can see the log payload when running a KQL query (see the "Log OK" image).
Any ideas?
Regards,
HA
How can technological developments help address animal abuse?
Does anyone know how technological developments can help address animal abuse?
error "brace indexing is not supported for variables of this type" when trying to import an xlsx file with 2 columns into a Signal Editor block
I'm trying to import a signal to use it like a time-based Signal Builder block, but it doesn't work. I'm new to the Signal Editor block, and although I tried to follow the MATLAB tutorial, I can't even import the signal because this error appears.
The file has one column of time from 0 to 81 seconds and a second column of RPM that changes over time as the car goes around a track. (XLSX document)
The plan is to take the RPM and feed it into a lookup table that holds the car's RPM range and the values linked to each RPM, such as the heat generated by the combustion engine. As I said, I used Signal Builder before and it worked as intended, but now I'm a little confused: I don't know whether the data I'm importing is "wrong" for this purpose or whether the "new" block is really that difficult to understand.
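A minimal sketch of one possible import workflow, assuming the spreadsheet (named rpm_data.xlsx here as a placeholder) has time in the first column and RPM in the second: read it with readtable, wrap it in a timeseries, and save it to a MAT-file that the Signal Editor block can import.
% Sketch: convert a two-column spreadsheet into a timeseries that the
% Signal Editor block (via its Import dialog) or a From Workspace block
% can use. File and variable names below are placeholders.
T = readtable('rpm_data.xlsx');          % column 1: time [s], column 2: RPM
t   = T{:,1};                            % time vector, 0 to 81 s
rpm = T{:,2};                            % engine speed samples
rpmSignal = timeseries(rpm, t, 'Name', 'EngineRPM');
save('rpm_signal.mat', 'rpmSignal');     % import this MAT-file in Signal Editor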
Possible matlab bug with text() function?
I’m having issues with the text function. I am copying your example from the documentation, and I get the following error:
"Subscript indices must either be real positive integers or logicals."
Here is the example:
x = 0:pi/20:2*pi;
y = sin(x);
plot(x,y)
text(pi,0,'\leftarrow sin(\pi)')
I am using MATLAB R2013a. It is a very simple example, and I am not sure why I can't replicate the documentation example. Thanks in advance!
"Subscript indices must either be real positive integers or logicals."
Here is the example:
x = 0:pi/20:2*pi;
y = sin(x);
plot(x,y)
text(pi,0,’leftarrow sin(pi)’)
I am using Matlab 2013a. Very simple example, not sure why I can’t replicate the documentation example. Thank in advance! I’m having issues with the text function. I am copying your example from the documentation, and I get the following error:
"Subscript indices must either be real positive integers or logicals."
Here is the example:
x = 0:pi/20:2*pi;
y = sin(x);
plot(x,y)
text(pi,0,’leftarrow sin(pi)’)
I am using Matlab 2013a. Very simple example, not sure why I can’t replicate the documentation example. Thank in advance! text, not a bug MATLAB Answers — New Questions
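This error usually means that a variable named text exists in the workspace and shadows the built-in text function, so text(pi,0,...) is interpreted as indexing into that variable rather than as a function call. A sketch of how to check for and clear the shadowing variable (assuming that is the cause here):
which text         % reports "text is a variable" if the function is shadowed
clear text         % remove the variable so the graphics function is used again
x = 0:pi/20:2*pi;
y = sin(x);
plot(x,y)
text(pi, 0, '\leftarrow sin(\pi)')   % annotation placed at (pi, 0)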
how to save different-sized user groups in a cell array
Error using vertcat
Dimensions of arrays being concatenated are not consistent.
Error in DNUA (line 73)
feasibleGroups = [feasibleGroups; groups_k];
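vertcat fails here because the groups being concatenated have different sizes. Storing each group in its own cell of a cell array removes that constraint. A sketch, where numGroups and computeGroups are placeholders for the corresponding parts of the DNUA code:
feasibleGroups = {};                     % start with an empty cell array
for k = 1:numGroups                      % numGroups: placeholder loop bound
    groups_k = computeGroups(k);         % placeholder for the existing logic
    feasibleGroups{end+1} = groups_k;    % append; any group size is allowed
end
% Later, access the k-th group with brace indexing: thisGroup = feasibleGroups{k};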
Windows 11 update
I'm a user of the Windows 11 Dev Channel. The update "Windows 11 Insider Preview 10.0.26120.770 (ge_release) (repair version)" is available on my laptop, but the update fails every time. What should I do to update?
How to get map and projection info from jp2 image?
I have tried to extract the map and projection information using the attached MATLAB code, in the format given below:
map info = {UTM, 1, 1, 712580, 3216500, 20, 20, 43, North, WGS-84, Meters}
coordinate system string = {PROJCS["WGS_1984_UTM_Zone_43N",GEOGCS["GCS_WGS_1984",DATUM["D_WGS_1984",SPHEROID["WGS_1984",6378137.0,298.257223563]],PRIMEM["Greenwich",0.0],UNIT["Degree",0.0174532925199433]],PROJECTION["Transverse_Mercator"],PARAMETER["False_Easting",500000.0],PARAMETER["False_Northing",0.0],PARAMETER["Central_Meridian",75.0],PARAMETER["Scale_Factor",0.9996],PARAMETER["Latitude_Of_Origin",0.0],UNIT["Meter",1.0]]}
The structure of the information I need is given below:
Projection information
    R.ProjectedCRS
        Name
        ProjectionMethod
        LengthUnit
        ProjectionParameters
            ScaleFactorAtNaturalOrigin
            LatitudeofNaturalOrigin
            LongitudeOfNaturalOrigin
            FalseEasting
            FalseNorthing
        GeographicCRS
            Name
            Datum
            PrimeMeridian
            AngleUnit
            Spheroid
                Name
                SemimajorAxis
                InverseFlattening
I am attaching my MATLAB code and providing a link to the JP2 image. I request your assistance in getting the complete projection information from the JP2 image as described above. I would be highly obliged for your kind help.
https://drive.google.com/file/d/1OwPN2cNq-x6hI43snCMGPkYxCUkPZqDj/view?usp=sharing
Aksh
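A possible approach, assuming the Mapping Toolbox (R2020b or later) can read the georeferencing embedded in this JP2 file; the file name below is a placeholder:
% Sketch: read the raster and report the projected CRS details.
[A, R] = readgeoraster('image.jp2');   % R is a map raster reference object
crs = R.ProjectedCRS;                  % projcrs object for a projected file
disp(crs.Name)                         % e.g. "WGS 84 / UTM zone 43N"
disp(crs.ProjectionMethod)             % e.g. "Transverse Mercator"
disp(crs.LengthUnit)                   % e.g. "meter"
disp(crs.ProjectionParameters)         % false easting/northing, scale factor, ...
geo = crs.GeographicCRS;               % underlying geographic CRS
disp(geo.Name)
disp(geo.Spheroid)                     % semimajor axis and inverse flattening
disp(wktstring(crs))                   % full WKT, similar to the string above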
how to successfully extract the image in blind color image watermarking using FWHT and SVD
My extracted watermark image is not similar to the original watermark image: there is a gray background in the extracted watermark, and its colors are not the same as the original.
This is my code using the FWHT and SVD algorithm:
% Read the watermarked image
%watermarked = imread('C:Userswindow10DocumentsJAMILAHKELAS CS249FYPWatermarkingimageswatermarked_12.png');
% Read the original host image
host = imread('C:Userswindow10DocumentsJAMILAHKELAS CS249FYPWatermarkingimageshost 1.png');
wm = imread('C:Userswindow10DocumentsJAMILAHKELAS CS249FYPWatermarkingimagesw image 2.png');
%EMBEDDING
% Resize watermark to match the host image size if necessary
wm = imresize(wm, size(host(:,:,1)));
% Extract individual color channels of the host image
h_red = host(:,:,1);
h_green = host(:,:,2);
h_blue = host(:,:,3);
% Extract individual color channels of the watermark image
wm_red = wm(:,:,1);
wm_green = wm(:,:,2);
wm_blue = wm(:,:,3);
% Apply FWHT to host image
transformed_h_red = fwht2(h_red);
transformed_h_green = fwht2(h_green);
transformed_h_blue = fwht2(h_blue);
% Apply FWHT to watermark image
transformed_wm_red = fwht2(wm_red);
transformed_wm_green = fwht2(wm_green);
transformed_wm_blue = fwht2(wm_blue);
% Apply SVD to FWHT-transformed host images
[U_h_red, S_h_red, V_h_red] = svd(transformed_h_red);
[U_h_green, S_h_green, V_h_green] = svd(transformed_h_green);
[U_h_blue, S_h_blue, V_h_blue] = svd(transformed_h_blue);
% Apply SVD to FWHT-transformed watermark images
[~, S_wm_red] = svd(transformed_wm_red);
[~, S_wm_green] = svd(transformed_wm_green);
[~, S_wm_blue] = svd(transformed_wm_blue);
% Modify the singular values of the host image by the singular values of the watermark image
alpha = 0.01; % Example scaling factor, adjust as needed
S_new_red = S_h_red + (alpha * S_wm_red);
S_new_green = S_h_green + (alpha * S_wm_green);
S_new_blue = S_h_blue + (alpha * S_wm_blue);
% Apply the inverse SVD to get the watermarked image channels
watermarked_red = U_h_red * S_new_red * V_h_red';
watermarked_green = U_h_green * S_new_green * V_h_green';
watermarked_blue = U_h_blue * S_new_blue * V_h_blue';
% Apply inverse FWHT to each color channel of the watermarked image
watermarked_red = ifwht2(watermarked_red);
watermarked_green = ifwht2(watermarked_green);
watermarked_blue = ifwht2(watermarked_blue);
% Combine the RGB blocks to form the final watermarked color image
watermarked_color_image = cat(3, watermarked_red, watermarked_green, watermarked_blue);
% Scale the image data to the range [0, 1] and convert to uint8
watermarked = (255 * mat2gray(watermarked_color_image));
%watermarked = uint8(watermarked);
% Display the final watermarked color image in original size
%figure; % Open a new figure window
%imshow(watermarked, 'InitialMagnification', 'fit');
%title('Watermarked Image');
% Open a dialog box to get the file path
%[filename, pathname] = uiputfile({'*.png';'*.jpg';'*.bmp';'*.tiff'}, 'Save as');
%if isequal(filename,0) || isequal(pathname,0)
% disp('User pressed cancel')
%else
% Create the full file path
% full_filename = fullfile(pathname, filename);
% Save the watermarked color image to the chosen location
% imwrite(watermarked_color_image, full_filename);
% Display a message to indicate the image has been saved
% disp(['Watermarked image saved as ', full_filename]);
%end
%END EMBED
%%
%EXTRACTION PROCESS
% Extract individual color channels of the watermarked image
%watermarked = imread('C:UsersUserDownloadswatermarked_12.png');
% Read the original host image
%host = imread('C:UsersUserDownloadshost 1.png');
%wm = imread('C:UsersUserDownloadsw image 1.png');
%EMBEDDING
% Resize watermark to match the host image size if necessary
wm = imresize(wm, size(host(:,:,1)));
% % Extract individual color channels of the host image
% h_red = host(:,:,1);
% h_green = host(:,:,2);
% h_blue = host(:,:,3);
% Extract individual color channels of the watermark image
wm_red = wm(:,:,1);
wm_green = wm(:,:,2);
wm_blue = wm(:,:,3);
w_red = watermarked(:,:,1);
w_green = watermarked(:,:,2);
w_blue = watermarked(:,:,3);
% Extract individual color channels of the host image
h_red = host(:,:,1);
h_green = host(:,:,2);
h_blue = host(:,:,3);
% Apply FWHT to the watermarked image and the host image
transformed_w_red = fwht2(w_red);
transformed_w_green = fwht2(w_green);
transformed_w_blue = fwht2(w_blue);
transformed_wm_red = fwht2(wm_red);
transformed_wm_green = fwht2(wm_green);
transformed_wm_blue = fwht2(wm_blue);
transformed_h_red = fwht2(h_red);
transformed_h_green = fwht2(h_green);
transformed_h_blue = fwht2(h_blue);
% Apply SVD to the FWHT-transformed watermarked image
[U_w_red, S_w_red, V_w_red] = svd(transformed_w_red);
[U_w_green, S_w_green, V_w_green] = svd(transformed_w_green);
[U_w_blue, S_w_blue, V_w_blue] = svd(transformed_w_blue);
% Apply SVD to the FWHT-transformed watermarked image
[U_wm_red, S_wm_red, V_wm_red] = svd(transformed_wm_red);
[U_wm_green, S_wm_green, V_wm_green] = svd(transformed_wm_green);
[U_wm_blue, S_wm_blue, V_wm_blue] = svd(transformed_wm_blue);
% Apply SVD to the FWHT-transformed host image
[~, S_h_red, ~] = svd(transformed_h_red);
[~, S_h_green, ~] = svd(transformed_h_green);
[~, S_h_blue, ~] = svd(transformed_h_blue);
% Subtract the singular values of the host image from the singular values of the watermarked image
S_ext_red = (S_w_red - S_h_red)/alpha;
S_ext_green = (S_w_green - S_h_green)/alpha;
S_ext_blue = (S_w_blue - S_h_blue)/alpha;
% Apply the inverse SVD to acquire the extracted watermark image
extracted_red = U_wm_red * S_ext_red * V_wm_red';
extracted_green = U_wm_green * S_ext_green * V_wm_green';
extracted_blue = U_wm_blue * S_ext_blue * V_wm_blue';
% Apply the inverse FWHT
extracted_red = ifwht2(extracted_red);
extracted_green = ifwht2(extracted_green);
extracted_blue = ifwht2(extracted_blue);
% Combine the RGB blocks to form the final extracted watermark image
extracted_watermark = cat(3, extracted_red, extracted_green, extracted_blue);
%%
% figure;imshow(uint8(extracted_watermark));
extracted_watermark = uint8(255 * mat2gray( (extracted_watermark)));
% extracted_watermark = uint8(extracted_watermark);
figure;imshow(extracted_watermark);
% Resize the extracted watermark image to 128 x 128
extracted_watermark = imresize(extracted_watermark, [128 128]);
% Display the host image, watermark image, watermarked image, and extracted image side by side
figure; % Open a new figure window
subplot(1, 4, 1);
imshow(host, 'InitialMagnification', 'fit');
title('Host Image');
subplot(1, 4, 2);
imshow(wm, 'InitialMagnification', 'fit');
title('Watermark Image');
watermarked = uint8(watermarked);
subplot(1, 4, 3);
imshow(watermarked, 'InitialMagnification', 'fit');
title('Watermarked Image');
subplot(1, 4, 4);
imshow(extracted_watermark, 'InitialMagnification', 'fit');
title('Extracted Watermark Image');
% Function to apply 2D FWHT
function transformed_image = fwht2(image)
% Apply the 2D FWHT
transformed_image = fwht(fwht(double(image)).').';
end
% Function to apply 2D inverse FWHT
function image = ifwht2(transformed_image)
% Apply the 2D inverse FWHT
image = ifwht(ifwht(double(transformed_image)).').';
end
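One way to quantify how close the extracted watermark is to the original is to compare them with PSNR and 2-D correlation (Image Processing Toolbox functions). This is only a measurement sketch appended to the script above, not a fix for the embedding/extraction scheme itself:
% Sketch: compare the original and extracted watermarks numerically.
wmRef = imresize(wm, [128 128]);                 % match the extracted size
psnrValue = psnr(extracted_watermark, wmRef);    % higher is better
corrValue = corr2(rgb2gray(extracted_watermark), rgb2gray(wmRef));
fprintf('PSNR: %.2f dB, correlation: %.4f\n', psnrValue, corrValue);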
Is GTA Online working on 261XX.XXX?
Can anybody tell me whether GTA Online works on the latest Release Preview build? I stayed on Canary build "25393.1" (Windows Feature Experience Pack 1000.25350.1000.0) because it was the last Insider version that worked for GTA Online.
Relations between Jews and Arabs in the first half of 1945, based on information from the newspaper "Davar"
Relations between Jews and Arabs in the first half of 1945; the source of the information is the newspaper "Davar" (דבר).
ADX Web updates – May 2024
Introducing an Enhanced Connections Explorer
We are pleased to introduce a new look and feel to our Connections explorer, designed to help you manage your list of data sources more efficiently.
While maintaining the familiar functionality of the old experience, this updated interface features a modern design, improved performance and an enhanced way to manage and view your Favorites.
We encourage you to turn on the new Connections pane to experience its smoother and more intuitive user experience.
If you are managing a long list of connections in your ADX web UI, you’ll notice a performance improvement as soon as you turn the new experience on.
Moreover, the new Connections pane features:
1 – multiple actions to help you manage the tree connections, accessible via an intuitive menu
2 – easy access to Get data actions
3 – easy access to tables’ data profile
Please share your thoughts and feedback regarding this new enhancement at KustoWebExpFeedback@service.microsoft.com!
Easily Favorite and Find Your Important Dashboards
We are happy to announce that you can now add dashboards to your favorites list from two convenient locations: both from the catalog and, as a newly introduced feature, directly from the dashboard itself! This enhancement, driven by your feedback, makes it easier than ever to quickly mark and access your most-used dashboards.
The Azure Data Explorer Web UI team is looking forward to your feedback at KustoWebExpFeedback@service.microsoft.com.
You’re also welcome to add more ideas and vote for them here – https://aka.ms/adx.ideas
Read more:
Latest ADX Web updates – ADX Web updates – Jan 2024 – Microsoft Community Hub
what kind of plot is useful for comparing two matrices?
I have two 5*5 matrices. I want to use a plot to show that they are close element-wise.
What plot is suitable?
I think the plot should be transparent.
It is of course doable if we reshape the matrices into vectors and use a 2-D plot, but that is not intuitive.
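One option that matches the idea of a transparent plot is to overlay the two matrices as semi-transparent surfaces; a heatmap of the element-wise difference is another common choice. A small sketch with placeholder matrices A and B:
A = rand(5);                 % placeholders for the two 5-by-5 matrices
B = A + 0.02*randn(5);
% Option 1: overlay two semi-transparent surfaces.
figure
surf(A, 'FaceAlpha', 0.5, 'FaceColor', 'b')
hold on
surf(B, 'FaceAlpha', 0.5, 'FaceColor', 'r')
legend('A', 'B')
title('Overlaid transparent surfaces')
% Option 2: show the element-wise difference as a heatmap.
figure
imagesc(abs(A - B))
colorbar
title('|A - B| element-wise')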
Help with using File Explorer ‘my’ way when seeking photos
When using Firefox to upload photos, File Explorer opens but doesn’t display images in a way that makes them easy to find, usually because the view isn’t set up optimally.
What I Want: I’d like File Explorer to show the most recent items first whenever it opens.
Example: I often take snapshots to upload to Facebook or send in texts. I want the view in File Explorer to automatically display the most recent files first, like the snapshot I just took for this post. When I try to upload it, I struggle to find it quickly.
QUESTION — I know this sounds moronic, but how do I set that window to always show the most recent stuff first?
How to add a Kalman filter to the lidar odometry and mapping (LOAM) SLAM example from the MathWorks documentation?
This is the code that performs LOAM for a self-driving car moving around the parking lot. How can I add a Kalman filter to the code given below?
% Load reference path % 60 vehicles are parked
data = load("parkingLotReferenceData.mat");
% Set reference trajectory of the ego vehicle
refPosesX = data.refPosesX;
refPosesY = data.refPosesY;
refPosesT = data.refPosesT;
% Set poses of the parked vehicles
parkedPoses = data.parkedPoses;
% Display the reference trajectory and the parked vehicle locations
sceneName = "LargeParkingLot";
hScene = figure(Name="Large Parking Lot",NumberTitle="off");
helperShowSceneImage(sceneName);
hold on
plot(refPosesX(:,2),refPosesY(:,2),LineWidth=2,DisplayName="Reference Path");
scatter(parkedPoses(:,1),parkedPoses(:,2),[],"filled",DisplayName="Parked Vehicles");
xlim([-60 40])
ylim([10 60])
hScene.Position = [100 100 1000 500]; % Resize figure
title("Large Parking Lot")
legend
modelName = 'GenerateLidarDataOfParkingLot';
open_system(modelName)
snapnow
helperAddParkedVehicles(modelName,parkedPoses)
close(hScene)
if ~ispc
error("Unreal Engine Simulation is supported only on Microsoft" + char(174) + " Windows" + char(174) + ".");
end
% Run simulation
simOut = sim(modelName);
close_system(modelName,0)
ptCloudArr = helperGetPointClouds(simOut);
groundTruthPosesLidar = helperGetLidarGroundTruth(simOut);
ptCloud = ptCloudArr(1);
nextPtCloud = ptCloudArr(2);
gridStep = 0.5;
tform = pcregisterloam(ptCloud,nextPtCloud,gridStep);
disp(tform)
egoRadius = 3.5;
cylinderRadius = 50;
selectedIdx = findPointsInCylinder(ptCloud,[egoRadius cylinderRadius]);
ptCloud = select(ptCloud,selectedIdx,OutputSize="full");
selectedIdx = findPointsInCylinder(nextPtCloud,[egoRadius cylinderRadius]);
nextPtCloud = select(nextPtCloud,selectedIdx,OutputSize="full");
figure
hold on
title("Cylindrical Neighborhood")
pcshow(ptCloud)
view(2)
maxPlanarSurfacePoints = 8;
points = detectLOAMFeatures(ptCloud,MaxPlanarSurfacePoints=maxPlanarSurfacePoints);
nextPoints = detectLOAMFeatures(nextPtCloud,MaxPlanarSurfacePoints=maxPlanarSurfacePoints);
figure
hold on
title("LOAM Points")
show(points,MarkerSize=12)
[~,rmseValue] = pcregisterloam(points,nextPoints);
disp(['RMSE: ' num2str(rmseValue)])
points = downsampleLessPlanar(points,gridStep);
figure
hold on
title('LOAM Points After Downsampling the Less Planar Surface Points')
show(points,'MarkerSize',12)
absPose = groundTruthPosesLidar(1);
relPose = rigidtform3d;
vSetOdometry = pcviewset;
viewId = 1;
vSetOdometry = addView(vSetOdometry,viewId,absPose);
% Display the parking lot scene with the reference trajectory
hScene = figure(Name="Large Parking Lot",NumberTitle="off");
helperShowSceneImage(sceneName);
hold on
plot(refPosesX(:,2),refPosesY(:,2),LineWidth=2)
xlim([-60 40])
ylim([10 60])
hScene.Position = [100 100 1000 500];
numSkip = 5;
for k = (numSkip+1):numSkip:numel(ptCloudArr)
prevPoints = points;
viewId = viewId + 1;
ptCloud = ptCloudArr(k);
% Select a cylindrical neighborhood of interest.
selectedIdx = findPointsInCylinder(ptCloud,[egoRadius cylinderRadius]);
ptCloud = select(ptCloud,selectedIdx,OutputSize="full");
% Detect LOAM points and downsample the less planar surface points
points = detectLOAMFeatures(ptCloud,MaxPlanarSurfacePoints=maxPlanarSurfacePoints);
points = downsampleLessPlanar(points,gridStep);
% Register the points using the previous relative pose as an initial
% transformation
relPose = pcregisterloam(points,prevPoints,InitialTransform=relPose);
% Update the absolute pose and store it in the view set
absPose = rigidtform3d(absPose.A * relPose.A);
vSetOdometry = addView(vSetOdometry,viewId,absPose);
% Visualize the absolute pose in the parking lot scene
plot(absPose.Translation(1),absPose.Translation(2),Color="r",Marker=".",MarkerSize=8);
xlim([-60 40])
ylim([10 60])
title("Build a Map Using Lidar Odometry")
legend(["Ground Truth","Estimated Position"])
pause(0.001)
view(2)
end
absPose = groundTruthPosesLidar(1);
relPose = rigidtform3d;
vSetMapping = pcviewset;
ptCloud = ptCloudArr(1);
selectedIdx = findPointsInCylinder(ptCloud,[egoRadius cylinderRadius]);
ptCloud = select(ptCloud,selectedIdx,OutputSize="full");
points = detectLOAMFeatures(ptCloud,'MaxPlanarSurfacePoints',maxPlanarSurfacePoints);
points = downsampleLessPlanar(points,gridStep);
viewId = 1;
vSetMapping = addView(vSetMapping,viewId,absPose);
voxelSize = 0.1;
loamMap = pcmaploam(voxelSize);
addPoints(loamMap,points,absPose);
% Display the parking lot scene with the reference trajectory
hScene = figure(Name="Large Parking Lot",NumberTitle="off");
helperShowSceneImage(sceneName);
hold on
plot(refPosesX(:,2),refPosesY(:,2),LineWidth=2)
xlim([-60 40])
ylim([10 60])
hScene.Position = [100 100 1000 500];
for k = (numSkip+1):numSkip:numel(ptCloudArr)
prevPtCloud = ptCloud;
prevPoints = points;
viewId = viewId + 1;
ptCloud = ptCloudArr(k);
% Select a cylindrical neighborhood of interest.
selectedIdx = findPointsInCylinder(ptCloud,[egoRadius cylinderRadius]);
ptCloud = select(ptCloud,selectedIdx,OutputSize="full");
% Detect LOAM points and downsample the less planar surface points
points = detectLOAMFeatures(ptCloud,MaxPlanarSurfacePoints=maxPlanarSurfacePoints);
points = downsampleLessPlanar(points,gridStep);
% Register the points using the previous relative pose as an initial
% transformation
relPose = pcregisterloam(points,prevPoints,MatchingMethod="one-to-one",InitialTransform=relPose);
% Find the refined absolute pose that aligns the points to the map
absPose = findPose(loamMap,points,relPose);
% Store the refined absolute pose in the view set
vSetMapping = addView(vSetMapping,viewId,absPose);
% Add the new points to the map
addPoints(loamMap,points,absPose);
% Visualize the absolute pose in the parking lot scene
plot(absPose.Translation(1),absPose.Translation(2),Color="r",Marker=".",MarkerSize=8)
xlim([-60 40])
ylim([10 60])
title("Build a Map Using Lidar Mapping")
legend(["Ground Truth","Estimated Position"])
pause(0.001)
view(2)
end
% Get the ground truth trajectory
groundTruthTrajectory = vertcat(groundTruthPosesLidar.Translation);
% Get the estimated trajectories
odometryPositions = vertcat(vSetOdometry.Views.AbsolutePose.Translation);
mappingPositions = vertcat(vSetMapping.Views.AbsolutePose.Translation);
% Plot the trajectories
figure
plot3(groundTruthTrajectory(:,1),groundTruthTrajectory(:,2),groundTruthTrajectory(:,3),LineWidth=2,DisplayName="Ground Truth")
hold on
plot3(odometryPositions(:,1),odometryPositions(:,2),odometryPositions(:,3),LineWidth=2,DisplayName="Odometry")
plot3(mappingPositions(:,1),mappingPositions(:,2),mappingPositions(:,3),LineWidth=2,DisplayName="Odometry and Mapping")
view(2)
legend
title("Compare Trajectories")
% Select the poses to compare
selectedGroundTruth = groundTruthTrajectory(1:numSkip:end,:);
% Compute the RMSE
rmseOdometry = rmse(selectedGroundTruth,odometryPositions,"all");
rmseWithMapping = rmse(selectedGroundTruth,mappingPositions,"all");
disp(['RMSE of the trajectory estimated with Odometry: ' num2str(rmseOdometry)])
disp(['RMSE of the trajectory estimated with Odometry and Mapping: ' num2str(rmseWithMapping)])
function ptCloudArr = helperGetPointClouds(simOut)
% Extract signal
ptCloudData = simOut.ptCloudData.signals.values;
% Create a pointCloud array
ptCloudArr = pointCloud(ptCloudData(:,:,:,2)); % Ignore first frame
for n = 1:size(ptCloudData,4)
ptCloudArr(end+1) = pointCloud(ptCloudData(:,:,:,n)); %#ok<AGROW>
end
end
function gTruth = helperGetLidarGroundTruth(simOut)
numFrames = size(simOut.lidarLocation.time,1);
gTruth = repmat(rigidtform3d,numFrames-1,1);
for i = 1:numFrames
gTruth(i).Translation = squeeze(simOut.lidarLocation.signals.values(:,:,i));
% Ignore the roll and the pitch rotations since the ground is flat
yaw = double(simOut.lidarOrientation.signals.values(:,3,i));
gTruth(i).R = [cos(yaw) -sin(yaw) 0;
sin(yaw) cos(yaw) 0;
0 0 1];
end
end
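The example itself does not include a Kalman filter, so any addition is a design choice. One simple option is to post-process the pose estimates with a constant-velocity linear Kalman filter on the x-y position. The sketch below does that for odometryPositions from the script above; dt, sigmaProcess, and sigmaMeas are assumed tuning values, and a tighter integration (for example, fusing IMU or wheel odometry in the prediction step) would need sensor models that are not shown here.
% Sketch: smooth the x-y positions estimated by lidar odometry with a
% constant-velocity linear Kalman filter. State: [x; y; vx; vy].
dt = 0.5;                          % assumed time between processed frames [s]
sigmaProcess = 0.5;                % assumed process noise standard deviation
sigmaMeas = 0.2;                   % assumed measurement noise std dev [m]
F = [1 0 dt 0;
     0 1 0 dt;
     0 0 1  0;
     0 0 0  1];
H = [1 0 0 0;
     0 1 0 0];
Q = sigmaProcess^2 * eye(4);
Rm = sigmaMeas^2 * eye(2);
x = [odometryPositions(1,1); odometryPositions(1,2); 0; 0];
P = eye(4);
filtered = zeros(size(odometryPositions,1), 2);
filtered(1,:) = x(1:2).';
for k = 2:size(odometryPositions,1)
    % Predict with the constant-velocity model
    x = F*x;
    P = F*P*F.' + Q;
    % Update with the LOAM position estimate as the measurement
    z = odometryPositions(k,1:2).';
    K = P*H.' / (H*P*H.' + Rm);
    x = x + K*(z - H*x);
    P = (eye(4) - K*H)*P;
    filtered(k,:) = x(1:2).';
end
figure
plot(odometryPositions(:,1),odometryPositions(:,2),'.-',DisplayName="LOAM odometry")
hold on
plot(filtered(:,1),filtered(:,2),'.-',DisplayName="Kalman-filtered")
legend
title("Constant-velocity Kalman smoothing of the odometry positions")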
% Load reference path % 60 vehicles are parked
data = load("parkingLotReferenceData.mat");
% Set reference trajectory of the ego vehicle
refPosesX = data.refPosesX;
refPosesY = data.refPosesY;
refPosesT = data.refPosesT;
% Set poses of the parked vehicles
parkedPoses = data.parkedPoses;
% Display the reference trajectory and the parked vehicle locations
sceneName = "LargeParkingLot";
hScene = figure(Name="Large Parking Lot",NumberTitle="off");
helperShowSceneImage(sceneName);
hold on
plot(refPosesX(:,2),refPosesY(:,2),LineWidth=2,DisplayName="Reference Path");
scatter(parkedPoses(:,1),parkedPoses(:,2),[],"filled",DisplayName="Parked Vehicles");
xlim([-60 40])
ylim([10 60])
hScene.Position = [100 100 1000 500]; % Resize figure
title("Large Parking Lot")
legend
modelName = ‘GenerateLidarDataOfParkingLot’;
open_system(modelName)
snapnow
helperAddParkedVehicles(modelName,parkedPoses)
close(hScene)
if ~ispc
error("Unreal Engine Simulation is supported only on Microsoft" + char(174) + " Windows" + char(174) + ".");
end
% Run simulation
simOut = sim(modelName);
close_system(modelName,0)
ptCloudArr = helperGetPointClouds(simOut);
groundTruthPosesLidar = helperGetLidarGroundTruth(simOut);
ptCloud = ptCloudArr(1);
nextPtCloud = ptCloudArr(2);
gridStep = 0.5;
tform = pcregisterloam(ptCloud,nextPtCloud,gridStep);
disp(tform)
egoRadius = 3.5;
cylinderRadius = 50;
selectedIdx = findPointsInCylinder(ptCloud,[egoRadius cylinderRadius]);
ptCloud = select(ptCloud,selectedIdx,OutputSize="full");
selectedIdx = findPointsInCylinder(nextPtCloud,[egoRadius cylinderRadius]);
nextPtCloud = select(nextPtCloud,selectedIdx,OutputSize="full");
figure
hold on
title("Cylindrical Neighborhood")
pcshow(ptCloud)
view(2)
maxPlanarSurfacePoints = 8;
points = detectLOAMFeatures(ptCloud,MaxPlanarSurfacePoints=maxPlanarSurfacePoints);
nextPoints = detectLOAMFeatures(nextPtCloud,MaxPlanarSurfacePoints=maxPlanarSurfacePoints);
figure
hold on
title("LOAM Points")
show(points,MarkerSize=12)
[~,rmseValue] = pcregisterloam(points,nextPoints);
disp([‘RMSE: ‘ num2str(rmseValue)])
points = downsampleLessPlanar(points,gridStep);
figure
hold on
title(‘LOAM Points After Downsampling the Less Planar Surface Points’)
show(points,’MarkerSize’,12)
absPose = groundTruthPosesLidar(1);
relPose = rigidtform3d;
vSetOdometry = pcviewset;
viewId = 1;
vSetOdometry = addView(vSetOdometry,viewId,absPose);
% Display the parking lot scene with the reference trajectory
hScene = figure(Name="Large Parking Lot",NumberTitle="off");
helperShowSceneImage(sceneName);
hold on
plot(refPosesX(:,2),refPosesY(:,2),LineWidth=2)
xlim([-60 40])
ylim([10 60])
hScene.Position = [100 100 1000 500];
numSkip = 5;
for k = (numSkip+1):numSkip:numel(ptCloudArr)
prevPoints = points;
viewId = viewId + 1;
ptCloud = ptCloudArr(k);
% Select a cylindrical neighborhood of interest.
selectedIdx = findPointsInCylinder(ptCloud,[egoRadius cylinderRadius]);
ptCloud = select(ptCloud,selectedIdx,OutputSize="full");
% Detect LOAM points and downsample the less planar surface points
points = detectLOAMFeatures(ptCloud,MaxPlanarSurfacePoints=maxPlanarSurfacePoints);
points = downsampleLessPlanar(points,gridStep);
% Register the points using the previous relative pose as an initial
% transformation
relPose = pcregisterloam(points,prevPoints,InitialTransform=relPose);
% Update the absolute pose and store it in the view set
absPose = rigidtform3d(absPose.A * relPose.A);
vSetOdometry = addView(vSetOdometry,viewId,absPose);
% Visualize the absolute pose in the parking lot scene
plot(absPose.Translation(1),absPose.Translation(2),Color="r",Marker=".",MarkerSize=8);
xlim([-60 40])
ylim([10 60])
title("Build a Map Using Lidar Odometry")
legend(["Ground Truth","Estimated Position"])
pause(0.001)
view(2)
end
absPose = groundTruthPosesLidar(1);
relPose = rigidtform3d;
vSetMapping = pcviewset;
ptCloud = ptCloudArr(1);
selectedIdx = findPointsInCylinder(ptCloud,[egoRadius cylinderRadius]);
ptCloud = select(ptCloud,selectedIdx,OutputSize="full");
points = detectLOAMFeatures(ptCloud,’MaxPlanarSurfacePoints’,maxPlanarSurfacePoints);
points = downsampleLessPlanar(points,gridStep);
viewId = 1;
vSetMapping = addView(vSetMapping,viewId,absPose);
voxelSize = 0.1;
loamMap = pcmaploam(voxelSize);
addPoints(loamMap,points,absPose);
% Display the parking lot scene with the reference trajectory
hScene = figure(Name="Large Parking Lot",NumberTitle="off");
helperShowSceneImage(sceneName);
hold on
plot(refPosesX(:,2),refPosesY(:,2),LineWidth=2)
xlim([-60 40])
ylim([10 60])
hScene.Position = [100 100 1000 500];
for k = (numSkip+1):numSkip:numel(ptCloudArr)
prevPtCloud = ptCloud;
prevPoints = points;
viewId = viewId + 1;
ptCloud = ptCloudArr(k);
% Select a cylindrical neighborhood of interest.
selectedIdx = findPointsInCylinder(ptCloud,[egoRadius cylinderRadius]);
ptCloud = select(ptCloud,selectedIdx,OutputSize="full");
% Detect LOAM points and downsample the less planar surface points
points = detectLOAMFeatures(ptCloud,MaxPlanarSurfacePoints=maxPlanarSurfacePoints);
points = downsampleLessPlanar(points,gridStep);
% Register the points using the previous relative pose as an initial
% transformation
relPose = pcregisterloam(points,prevPoints,MatchingMethod="one-to-one",InitialTransform=relPose);
% Find the refined absolute pose that aligns the points to the map
absPose = findPose(loamMap,points,relPose);
% Store the refined absolute pose in the view set
vSetMapping = addView(vSetMapping,viewId,absPose);
% Add the new points to the map
addPoints(loamMap,points,absPose);
% Visualize the absolute pose in the parking lot scene
plot(absPose.Translation(1),absPose.Translation(2),Color="r",Marker=".",MarkerSize=8)
xlim([-60 40])
ylim([10 60])
title("Build a Map Using Lidar Mapping")
legend(["Ground Truth","Estimated Position"])
pause(0.001)
view(2)
end
% Get the ground truth trajectory
groundTruthTrajectory = vertcat(groundTruthPosesLidar.Translation);
% Get the estimated trajectories
odometryPositions = vertcat(vSetOdometry.Views.AbsolutePose.Translation);
mappingPositions = vertcat(vSetMapping.Views.AbsolutePose.Translation);
% Plot the trajectories
figure
plot3(groundTruthTrajectory(:,1),groundTruthTrajectory(:,2),groundTruthTrajectory(:,3),LineWidth=2,DisplayName="Ground Truth")
hold on
plot3(odometryPositions(:,1),odometryPositions(:,2),odometryPositions(:,3),LineWidth=2,DisplayName="Odometry")
plot3(mappingPositions(:,1),mappingPositions(:,2),mappingPositions(:,3),LineWidth=2,DisplayName="Odometry and Mapping")
view(2)
legend
title("Compare Trajectories")
% Select the poses to compare
selectedGroundTruth = groundTruthTrajectory(1:numSkip:end,:);
% Compute the RMSE
rmseOdometry = rmse(selectedGroundTruth,odometryPositions,"all");
rmseWithMapping = rmse(selectedGroundTruth,mappingPositions,"all");
disp([‘RMSE of the trajectory estimated with Odometry: ‘ num2str(rmseOdometry)])
disp([‘RMSE of the trajectory estimated with Odometry and Mapping: ‘ num2str(rmseWithMapping)])
function ptCloudArr = helperGetPointClouds(simOut)
% Extract signal
ptCloudData = simOut.ptCloudData.signals.values;
% Create a pointCloud array
ptCloudArr = pointCloud(ptCloudData(:,:,:,2)); % Ignore first frame
for n = 1:size(ptCloudData,4)
ptCloudArr(end+1) = pointCloud(ptCloudData(:,:,:,n)); %#ok<AGROW>
end
end
function gTruth = helperGetLidarGroundTruth(simOut)
numFrames = size(simOut.lidarLocation.time,1);
gTruth = repmat(rigidtform3d,numFrames-1,1);
for i = 1:numFrames
gTruth(i).Translation = squeeze(simOut.lidarLocation.signals.values(:,:,i));
% Ignore the roll and the pitch rotations since the ground is flat
yaw = double(simOut.lidarOrientation.signals.values(:,3,i));
gTruth(i).R = [cos(yaw) -sin(yaw) 0;
sin(yaw) cos(yaw) 0;
0 0 1];
end
end This is the code to perform LOAM for the self driving car moving around the parking lot, now how to add kalman filter in this code given below:
slam, loam, autonomous vehicle, kalman filter MATLAB Answers — New Questions
Implement Ridge Regression Equation for a Neural Network MATLAB
I am trying to replicate the following equation in MATLAB to find the optimal output weight matrix of a neural network from training using ridge regression.
Output Weight Matrix of a Neural Network after Training using Ridge Regression:
My attempt is below. Note that y_i is a T by 1 vector and y_i_target is also a T by 1 vector. Wout_i is an N by 1 vector, where N is the number of nodes in the neural network. I generate a Wout_i, y_i, and y_i_target for each i-th target training signal. For the sake of simplicity, I do not compute Wout_i, y_i, and y_i_target in my reproducible example.
Ny = 1000; % number of training signals
T = 100; % time length of each training signal
reg = 10^-4; % ridge regression coefficient
outer_sum = 0;
for i = 1:Ny
Wouts{i} = Wout_i; % collected cell matrix of each Wout_i for each i^th target training signal
inner_sum = sum(((y_i-y_i_target).^2)+reg*norm(Wout_i)^2);
outer_sum(i) = inner_sum;
end
outer_sum = outer_sum.*(1/Ny);
[minval, minidx] = min(outer_sum);
Wout = cell2mat(Wouts(minidx));
My final answer for Wout is N by 1, as it should be, but I am uncertain about my answer. I am particularly unsure whether or not I have done the double summation and the arg min with respect to Wout correctly. Is there any way to validate my answer?
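For reference, a minimal validation sketch, assuming the single-signal model y = S*Wout, where S is a (hypothetical) T-by-N matrix of node activations that is not shown in the question: the per-signal ridge cost is minimized in closed form by Wout = (S'*S + reg*eye(N)) \ (S'*y_target), so a loop-based result can be checked against that solution. All sizes and data below are illustrative.
T = 100; N = 20; reg = 1e-4;             % small illustrative sizes
S = randn(T,N);                          % hypothetical network state matrix
y_target = randn(T,1);                   % hypothetical target signal
Wout_closed = (S'*S + reg*eye(N)) \ (S'*y_target); % closed-form ridge solution
% The closed form should attain a lower regularized cost than nearby points
cost = @(w) sum((S*w - y_target).^2) + reg*norm(w)^2;
disp(['Cost at closed-form solution: ' num2str(cost(Wout_closed))])
disp(['Cost at a perturbed point:    ' num2str(cost(Wout_closed + 1e-3*randn(N,1)))])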
neural network, machine learning, matlab, for loop, sum, code generation, regression MATLAB Answers — New Questions
MATLAB System block 'SCRReferenceApplication/Simulation 3D Vehicle with Ground Following/Simulation 3D Vehicle' error occurred when invoking 'stepImpl' method of 'Simulation3D
When I use MATLAB R2021a and Unreal Engine 4.23 for simulation, in the Unreal Editor model, this error appears.
The details of the error are as follows:
MATLAB System block 'SCRReferenceApplication/Simulation 3D Vehicle with Ground Following/Simulation 3D Vehicle' error occurred when invoking 'stepImpl' method of 'Simulation3DVehicleTerrainFb'. The error was thrown from
'D:\Program Files\MATLAB\R2021a\toolbox\shared\sim3dblks\sim3dblks\Simulation3DVehicleTerrainFb.p' at line 0
'D:\Program Files\MATLAB\R2021a\toolbox\shared\sim3dblks\sim3dblks\Simulation3DVehicleTerrainFb.p' at line 0.
The 3D vehicle no longer detects the ground. Review scene limits or initial position.
unreal engine, simulink MATLAB Answers — New Questions
Sync 2 Git repos
I cloned a GitHub public repo a month ago directly into a new private Azure DevOps repo, using https://learn.microsoft.com/en-us/azure/devops/repos/git/import-git-repository?view=azure-devops, because we are implementing new software that is hosted in a GitHub public repo. The reason for also keeping the code in ADO is that we have a few parameter files we need to modify. After this original clone from GitHub to ADO, I cloned locally on my laptop and created a new DEV branch based on main, adding one commit that contains 4 YAML files: 2 new parameter files and 2 updated files. Then I pushed the DEV branch to ADO.
git clone https://ContosoOrg/Project/_git/RepoName
git switch main
git checkout -b DEV
# add 2 new yaml files and update 2 files
git add .
git commit -m "Added and Updated Config files"
git remote add github https://github.com/Software/Project.git
git push -u origin DEV
In the meantime, there have been 4 additional commits on main in GitHub. I need to update both the main and DEV branches in ADO based on the GitHub main branch changes, now and in the future for any further GitHub commits. I am fairly new to Git but have reviewed the documentation in depth. I would very much appreciate it if you could check the following steps and let me know if they make sense, or if you see an issue.
git fetch github
git switch main
git rebase github/main
git push -u origin main
git switch DEV
git rebase github/main
git push -u origin DEV
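A hedged variant of the same sync, not a definitive recommendation: because DEV has already been pushed to origin, rebasing it onto github/main rewrites its published history, so the final push would need --force-with-lease; merging github/main into DEV instead avoids rewriting history. Remote and branch names are taken from the question.
git fetch github
git switch main
git merge --ff-only github/main   # main has no local commits, so this fast-forwards
git push origin main
git switch DEV
git merge github/main             # or: git rebase github/main, then push with --force-with-lease
git push origin DEV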
Read More
How do I get rid of these weird lines in the surf plot?
Hello! I was coding something in MATLAB (R2023b) and my surf plots all seemed to look nice, but then they suddenly changed, even though I didn't change anything in the code. Even scripts that I didn't touch at all for days have this problem… I then updated to R2024a, but it didn't help at all… Here is one of the codes:
%SymPoisson.m
close all;
clear variables;
tic
% Inverse Multiquadric RBF and Laplacians
rbf = @(e,r) 1./sqrt(1+(e*r).^2); ep = 1;
Lrbf = @(e,r) e^2*((e*r).^2-2)./(1+(e*r).^2).^(5/2);
L2rbf = @(e,r) 3*e^4*(3*(e*r).^4-24*(e*r).^2+8)./(1+(e*r).^2).^(9/2);
% Datafunction and Laplacian
u = @(x) sin(pi*x(:,1)).*cos(pi*x(:,2)/2);
Lu = @(x) -1.25*pi^2*sin(pi*x(:,1)).*cos(pi*x(:,2)/2);
% Evaluation Points in Rectangle [a,b]x[c,d]
Neval = 100;
a=0;b=4;c=0;d=4;
[~,~,xeval]=GetRectGrid(a,b,c,d,Neval); % xeval is an (Neval^2)x2 matrix, with each row being a point in R^2
% Reshape for Plot
X = reshape(xeval(:,1),Neval,Neval);
Y = reshape(xeval(:,2),Neval,Neval);
% Initialize Arrays with Errors and Conditionnumbers
m=2;M=50; % Bounds for Iteration
rms_fehler = zeros(size(m:M));
max_fehler = zeros(size(m:M));
avg_fehler = zeros(size(m:M));
cond2 = zeros(size(m:M));
anzahlen = zeros(size(m:M));
% Evaluate Datafunction on Rectangle
ueval = u(xeval);
k=1; %Counter
for N = m:M
disp(['Number of collocation points: ', num2str(N^2)])
% Get Collocationpoints on Boundary and Interior
[xint,xbdy,x]=GetRectGrid(a,b,c,d,N);
NI = size(xint,1);
NB = size(xbdy,1);
% Evaluation-Matrix
DM_int = DistMatr(xeval,xint);
LEM = Lrbf(ep,DM_int);
DM_bdy = DistMatr(xeval,xbdy);
BEM = rbf(ep,DM_bdy);
EM = [LEM BEM];
% Collocation-Matrix
DM_II = DistMatr(xint,xint);
LLCM = L2rbf(ep,DM_II);
DM_IB = DistMatr(xint,xbdy);
LBCM = Lrbf(ep,DM_IB);
BLCM = LBCM';
DM_BB = DistMatr(xbdy,xbdy);
BBCM = rbf(ep,DM_BB);
CM = [LLCM LBCM; BLCM BBCM];
% Evaluate Datafunction on Collocationpoints
ux = zeros(N^2,1);
ux(1:NI) = Lu(xint);
% Boundary-Conditions
indx = find(xbdy(:,2)==0 | xbdy(:,2)==4);
ux(NI+indx) = sin(pi*xbdy(indx,1));
% Evaluate the Approx. function
warning('off','MATLAB:nearlySingularMatrix')
approx = EM * (CM\ux);
warning('on','MATLAB:nearlySingularMatrix')
% Collect errors and conditionnumbers
rms_fehler(k) = norm(approx-ueval)/sqrt(Neval);
max_fehler(k) = max(abs(approx-ueval));
avg_fehler(k) = mean(abs(approx-ueval));
cond2(k) = cond(CM);
anzahlen(k) = N^2;
k=k+1;
% Plot of the last Approx. function and Datafunction and Error-Function
if N == M
% These Plots are WEIRD
figure(N)
approx2 = reshape(approx,Neval,Neval);
surf(X,Y,approx2,'FaceAlpha',0.5,'EdgeColor','interp');
colormap cool; camlight; lighting gouraud; material dull;
title(['Approximation to the Solution of the Poisson-Equation for N=',num2str(N^2)])
figure(1);
ueval2 = reshape(ueval,Neval,Neval);
surf(X,Y,ueval2,'FaceAlpha',0.5,'EdgeColor','interp')
colormap cool; camlight; lighting gouraud; material dull;
title('Solution of the Poisson-Equation')
figure(N+1);
fehler = abs(approx2-ueval2);
surf(X,Y,fehler,'FaceAlpha',0.5,'EdgeColor','interp');
colormap cool; camlight; lighting gouraud; material dull;
title(['Errorfunction for N=',num2str(N^2)])
end
end
% Plot Error-Decay
figure(2)
semilogy(anzahlen,rms_fehler','-r')
hold on;
semilogy(anzahlen,max_fehler','-b')
hold on;
semilogy(anzahlen,avg_fehler','-g')
hold off;
title('Error-Decay')
legend('RMS-Error','Max-Error','Average Error')
xlabel('Number of Collocationpoints')
ylabel('Error')
% Plot Collocation-Points
figure(4);
scatter(xint(:,1),xint(:,2),'ob');
hold on;
scatter(xbdy(:,1),xbdy(:,2),'or');
hold off;
title(['Collocationpoints N=',num2str(M^2)])
legend('Interior-Points', 'Boundary-Points')
grid on;
% Plot Conditionnumbers
figure(5)
semilogy(anzahlen,cond2)
title('Cond2 of Collocation-Matrix')
xlabel('Number of Collocation-Points')
ylabel('Cond2')
toc
function [xint,xbdy,x] = GetRectGrid(a,b,c,d,N)
% Assertions
if(a>=b)
error('a must be strictly smaller than b!');
end
if(c>=d)
error('c must be strictly smaller than d!');
end
if(N<=1)
error('N must be strictly greater than 1!');
end
% Get the Grid
[X1,X2] = meshgrid(linspace(a,b,N),linspace(c,d,N));
x = [X1(:),X2(:)];
% Sort for inner points and boundary points
k = 1;l=1; % Counters
xbdy = zeros(4*N-4,2); % Boundary points go here
xint = zeros(N^2-(4*N-4),2); % and the interior points here
for j = 1:N^2
if x(j,1) == a || x(j,1) == b || x(j,2) == c || x(j,2) == d
xbdy(k,:) = x(j,:);
k = k+1;
else
xint(l,:) = x(j,:);
l = l+1;
end
end
x = [xint;xbdy];
end
function D = DistMatr(A,B)
A_width = width(A);
B_width = width(B);
if A_width~=B_width
error('The matrices are not compatible.');
end
% Evaluate DistanceMatrix
A2 = sum(A.^2,2);
B2 = sum(B.^2,2);
D = sqrt(max(bsxfun(@plus,A2,B2') - 2*A*B',0));
end
Now look at the Surf-Plots:
I really don't know why it does that… It looks like it connects points with lines that shouldn't be connected… What is strange is that the following code works:
%NewtonBasisPlot.m
tic
close all;
clear variables;
% Gaussian RBF
K = @(epsilon,x) exp(-(epsilon*x).^2); epsilon = 1;
n = 25;
N = 50;
% Get Evaluation-Grid
Omega = GetPoints('grid',N,0,1);
% Get Support-Points by P-Greedy Algorithm
X=GetPoints('P-Greedy',n,0,1,K,epsilon);
% Choose which Newtonbasisfunction to Plot
NewtonIndex = [7 25]; % e.g. N_7(x) and N_25(x)
% Kernel-Matrices
A_X = K(epsilon,DistMatr(X,X));
A_Omega = K(epsilon,DistMatr(Omega,X));
% Newton Basis via Cholesky
L = chol(A_X,'lower');
newton = A_Omega/L';
% Plot Newtonbasisfunctions
x=reshape(Omega(:,1),N,N);
y=reshape(Omega(:,2),N,N);
Nmat = cellfun(@(nvec) reshape(nvec,N,N), ...
num2cell(newton,1),'UniformOutput',0);
h_surf = zeros(length(NewtonIndex));
for k=1:length(NewtonIndex)
h_surf(k) = figure;
surf(x,y,Nmat{NewtonIndex(k)},'FaceAlpha',0.5,'EdgeColor','interp')
colormap cool; camlight; lighting gouraud; material dull;
hold on;
end
% Plot Support-Points
figure('Name','Points')
scatter(X(:,1),X(:,2),'xb','LineWidth',1);
grid on;
toc
function X = GetPoints(type,n,a,b,K,epsilon)
assert(a<b)
if (strcmp(type,'grid'))
[X1,X2] = meshgrid(linspace(a,b,n));
X = [X1(:),X2(:)];
% Via Powerfunction
elseif(strcmp(type,'P-Greedy'))
% Number of Points to choose from
N = 50;
assert(n<N*N)
Omega = GetPoints('grid',N,a,b);
% Start with "random" Point
X = Omega(floor((N*N)/2),:);
for j=1:n
% Kernel-Matrices
A_X = K(epsilon,DistMatr(X,X));
A_Omega = K(epsilon,DistMatr(Omega,X));
% Evaluate Powerfunction via P^2(x) = K(x,x)-T(x)*A_{K,X}*T(x)^T,
Kxx = K(epsilon,0)*ones(N*N,1); %K(x,x)=phi(0) for RBF-Kernel
warning('off','MATLAB:nearlySingularMatrix')
B = A_Omega / A_X; %T(x)*A_{K,X}
C = (B*A_Omega'); %T(x)*A_{K,X}*T(x)^T
warning('on','MATLAB:nearlySingularMatrix')
Psquare = Kxx - diag(C); %P^2(x)
% Add Point if its max of powerfunction
[~,index]=max(abs(Psquare));
X=[X;Omega(index(1),:)];
% Console
disp(['P-Greedy Iteration: ', num2str(j)])
end
X=X(1:end-1,:);
else
error('Unknown point-selection type!');
end
end
And it's giving me the usual and wanted surface plot of the functions:
Maybe someone of you guys has a clue what's going on here… Thank you very much! Kind Regards, Max.
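As a small standalone illustration (not part of the scripts above) of how surf joins vertices: entries of Z that are neighbours in the matrix get connected by edges, so the ordering of the points in X, Y, and Z must match the meshgrid layout. If the rows of the point list are reordered before the reshape, unrelated points end up connected, which produces stray lines like the ones described.
[Xd,Yd] = meshgrid(linspace(0,1,20));
Zd = sin(pi*Xd).*cos(pi*Yd/2);
figure; surf(Xd,Yd,Zd); title('Consistent point ordering')
p = randperm(numel(Zd));                 % scramble the point order
figure
surf(reshape(Xd(p),20,20),reshape(Yd(p),20,20),reshape(Zd(p),20,20))
title('Same points, reordered before reshape')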
surf, surface MATLAB Answers — New Questions
Word Editor
I am a writer and I am having some issues with Word Editor. Is there a way to turn it off? I have tried several things over the last year and really want it OFF. I prefer Grammarly. Even though I select for it not to track changes, that part of Word Editor continues to do so. I think if I could get the whole Editor feature turned off in the Word program, rather than in individual documents, the problems would stop.
Also, document AutoSave is driving me crazy. Can it be turned off? I work on a document, save it, but leave it minimized in my tray. When I come back to it, it has been auto-saved. Sometimes there are 2 or 3 auto-saves of the same document in my tray, and I have to compare them to determine which one to continue writing in.
I would appreciate any help. Thank you, Nancy
Read More