Category: News
Effectively troubleshoot latency in SQL Server Transactional replication: Part 2
In this part, we continue troubleshooting by examining the agent threads.
Step 4.1. Troubleshoot latency in Log Reader agent’s reader thread
First, gauge the level of reader-thread latency by running the query below on the publisher server.
sp_replcounters
GO
The output above shows the reader thread replicating on average 115 transactions per second, with more than 7.5 million transactions waiting to be replicated to the distribution database. On average, transactions wait 134,880 seconds to be replicated, which is high latency.
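To trend these numbers instead of eyeballing a single snapshot, the procedure's result set can be captured into a table. This is a minimal sketch, assuming the documented sp_replcounters result-set shape (database, replicated transactions, rate, latency, begin/next LSN); the temp-table column names are our own:

```sql
-- Capture sp_replcounters output so backlog and latency can be queried/trended
CREATE TABLE #replcounters (
    [database]              sysname,
    replicated_transactions bigint,
    replication_rate        float,      -- transactions per second
    replication_latency     float,      -- seconds
    replbeginlsn            binary(10),
    replnextlsn             binary(10)
);
INSERT INTO #replcounters
EXEC sp_replcounters;

SELECT [database], replicated_transactions, replication_rate, replication_latency
FROM #replcounters
ORDER BY replication_latency DESC;
```

Running this periodically (for example, from an Agent job) lets you see whether the backlog is growing or draining.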
Run the query below on the publisher server to find the session id of the Log Reader's reader thread:
SELECT
SessionId = s.session_id,
App = ISNULL(s.program_name, N'')
FROM sys.dm_exec_sessions s
WHERE s.program_name LIKE '%LogReader%'
Place the session id into the event session below and create it. Run the session for about 5 minutes:
CREATE EVENT SESSION [LogReaderMonitor] ON SERVER
ADD EVENT sqlos.wait_completed(
ACTION(package0.callstack)
WHERE ([sqlserver].[session_id]=(123))), -- Change session id here
ADD EVENT sqlos.wait_info_external(
ACTION(package0.callstack)
WHERE (([opcode]=('End')) AND ([sqlserver].[session_id]=(123)))), -- Change session id here
ADD EVENT sqlserver.rpc_completed(
ACTION(package0.callstack)
WHERE ([sqlserver].[session_id]=(123))) -- Change session id here
ADD TARGET package0.event_file(SET filename=N'C:\Temp\logreader_reader_track',max_file_size=(256),max_rollover_files=(5))
WITH (MAX_MEMORY=8192 KB,EVENT_RETENTION_MODE=ALLOW_MULTIPLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=ON,STARTUP_STATE=OFF)
GO
Investigate the event file. For example, below you can correlate activities that share the same GUID using their sequence ids. As you can see, time is initially spent on memory allocation, and then sp_replcmds finishes after 789 microseconds.
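Besides opening the file in SSMS, the collected .xel files can be read with T-SQL. A hedged sketch, assuming the event_file target path defined in the session above:

```sql
-- Read the .xel files and shred the event XML to compare wait types and durations
;WITH raw AS (
    SELECT CAST(event_data AS XML) AS x
    FROM sys.fn_xe_file_target_read_file(N'C:\Temp\logreader_reader_track*.xel', NULL, NULL, NULL)
)
SELECT
    event_name = x.value('(event/@name)[1]', 'nvarchar(60)'),
    event_time = x.value('(event/@timestamp)[1]', 'datetime2'),
    wait_type  = x.value('(event/data[@name="wait_type"]/text)[1]', 'nvarchar(60)'),
    duration   = x.value('(event/data[@name="duration"]/value)[1]', 'bigint')
FROM raw
ORDER BY event_time;
```

This makes it easy to filter for high-duration waits or group by wait_type.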
Note: the duration of wait_info_external is in milliseconds, while rpc_completed is in microseconds.
If wait time is high compared to CPU time, check the wait type and troubleshoot accordingly. In the example above we see the MEMORY_ALLOCATION_EXT wait_type, but its duration is 0, so we are not actually waiting.
If CPU time is higher, the reader thread is running but latency is observed because of high load. High load can have several causes:
Large batches of replicated transactions: large batches of transactions are the main cause of latency in reader-thread performance. Check the number of commands and transactions in the agent statistics from the verbose logs obtained in Step 3.1.b. If the number of commands is significantly higher than the number of transactions, it is likely that large transactions are being replicated. For example:
If reader latency is caused by a large number of pending commands, waiting for the Log Reader to catch up may be the best short-term solution. Long-term options include replicating large batches during off-peak hours.
Large number of non-replicated transactions: a transaction log with a high percentage of non-replicated transactions will cause latency, as the Log Reader must scan over transactions that are to be ignored. You can check whether this problem exists by looking at the Log Reader agent history from Step 3.1.a. For example, in the log reader history below, more than 5 million rows are being scanned but only 142 rows have been marked for replication.
In this case, ensure the transaction log is truncated regularly and try to perform maintenance activities offline.
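As a quick sanity check on truncation, sys.databases reports what is currently holding the log. A hedged example; 'PublisherDB' is a hypothetical database name:

```sql
-- See what is preventing log truncation on the publisher database
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'PublisherDB';
```

A value of REPLICATION here means the log cannot truncate until the Log Reader has processed the pending transactions.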
High number of VLFs: a large number of Virtual Log Files (VLFs) can contribute to long read times. To get the number of VLFs, execute the following command. Counts of 100K+ may be contributing to Log Reader reader-thread performance problems.
SELECT COUNT(DISTINCT vlf_sequence_number) FROM sys.dm_db_log_info( PublisherDBID )
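For illustration, the same count can be written with DB_ID() so you do not have to look up the database id first. A hedged variant; 'PublisherDB' is a hypothetical name:

```sql
-- VLF count for the publisher database, resolved by name
SELECT COUNT(DISTINCT vlf_sequence_number) AS VLFCount
FROM sys.dm_db_log_info(DB_ID(N'PublisherDB'));
```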
Step 4.2. Troubleshoot latency in Log Reader agent’s writer thread
Using the log reader's history log (refer to Step 3.1.a), you can get the last transaction sequence number, the delivery rate, and latency information. If you do not observe latency in the reader thread (Step 4.1), the latency is mainly caused by the writer thread:
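The same history is queryable directly in the distribution database. A hedged sketch against the MSlogreader_history table:

```sql
-- Most recent Log Reader history rows: last xact_seqno, delivery rate, latency
USE distribution;
SELECT TOP (10) [time], xact_seqno, delivered_transactions,
       delivered_commands, delivery_rate, delivery_latency
FROM dbo.MSlogreader_history WITH (NOLOCK)
ORDER BY [time] DESC;
```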
You can use the commands below to check which transaction (xact_seqno) we are currently at:
— Get publisher db id
USE distribution
GO
SELECT * FROM dbo.MSpublisher_databases
— Get commands we are at
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
GO
BEGIN TRAN
USE distribution
GO
EXEC sp_browsereplcmds
@xact_seqno_start = 'xact_seqno',
@xact_seqno_end = 'xact_seqno',
@publisher_database_id = PUBLISHERDB_ID
COMMIT TRAN
GO
Run the query below on the publisher server to find the session id of the Log Reader's writer thread:
SELECT
SessionId = s.session_id,
App = ISNULL(s.program_name, N'')
FROM sys.dm_exec_sessions s
WHERE s.program_name LIKE '%LogReader%'
Check whether any blocking is happening for this session:
sp_who2
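As an alternative to scanning sp_who2 output by eye, a hedged DMV query surfaces only the blocked sessions:

```sql
-- List sessions that are currently blocked, with their blocker and wait details
SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, r.command
FROM sys.dm_exec_requests r
WHERE r.blocking_session_id <> 0;
```

If the writer-thread session id appears here, investigate the blocking session before going further.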
Then run the query below on the distributor server, changing the session id to the log reader session id:
CREATE EVENT SESSION [logreader_writer_track] ON SERVER
ADD EVENT sqlos.wait_completed(
ACTION(package0.callstack)
WHERE ([sqlserver].[session_id]=(64))), -- Change session id to log reader writer session id
ADD EVENT sqlos.wait_info_external(
ACTION(package0.callstack)
WHERE (([opcode]=('End')) AND ([sqlserver].[session_id]=(64)))), -- Change session id to log reader writer session id
ADD EVENT sqlserver.sp_statement_completed(
ACTION(package0.event_sequence,sqlserver.plan_handle,sqlserver.session_id,sqlserver.transaction_id)
WHERE ([sqlserver].[session_id]=(64))) -- Change session id to log reader writer session id
ADD TARGET package0.event_file(SET filename=N'C:\Temp\logreader_writer_track',max_file_size=(256),max_rollover_files=(5))
WITH (MAX_MEMORY=8192 KB,EVENT_RETENTION_MODE=ALLOW_MULTIPLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=ON,STARTUP_STATE=OFF)
Investigate the collected event logs. For example, below you can correlate activities that share the same GUID using their sequence ids. Each of these activities is the writer thread's attempt to write replication commands to the distribution database. As you can see, very little time (nearly 0) is spent initially on MEMORY_ALLOCATION_EXT, and then the select statement finishes after 39 microseconds.
Compare wait time (duration - cpu_time) with cpu_time. If wait time is high compared to CPU time, check the wait_type and troubleshoot accordingly. For example, above we see the MEMORY_ALLOCATION_EXT wait_type. If CPU time is high, investigate the execution plan of the time-consuming query using the corresponding plan_handle from the event logs above:
SELECT * FROM sys.dm_exec_query_plan(PLAN_HANDLE)
Step 4.3. Troubleshoot latency in Distribution agent’s reader thread
To find the session id for the Distribution agent, you need to know whether it is a push or pull subscription. For a push subscription, run the command below on the distributor server; for a pull subscription, run it on the subscriber server.
SELECT
SessionId = s.session_id,
App = ISNULL(s.program_name, N'')
FROM sys.dm_exec_sessions s LEFT OUTER JOIN sys.dm_exec_connections c ON (s.session_id = c.session_id)
WHERE (select text from sys.dm_exec_sql_text(c.most_recent_sql_handle)) LIKE '%sp_MSget_repl_command%'
Check whether any blocking is happening for this session:
sp_who2
Then run the query below on the distributor server, changing the session id to the distribution agent session id:
CREATE EVENT SESSION [distributor_reader_track] ON SERVER
ADD EVENT sqlos.wait_completed(
ACTION(package0.callstack)
WHERE ([sqlserver].[session_id]=(64))), -- Change session id to dist agent session id
ADD EVENT sqlos.wait_info_external(
ACTION(package0.callstack)
WHERE (([opcode]=('End')) AND ([sqlserver].[session_id]=(64)))), -- Change session id to dist agent session id
ADD EVENT sqlserver.sp_statement_completed(
ACTION(package0.event_sequence,sqlserver.plan_handle,sqlserver.session_id,sqlserver.transaction_id)
WHERE ([sqlserver].[session_id]=(64))) -- Change session id to dist agent session id
ADD TARGET package0.event_file(SET filename=N'C:\Temp\distributor_reader_track',max_file_size=(256),max_rollover_files=(5))
WITH (MAX_MEMORY=8192 KB,EVENT_RETENTION_MODE=ALLOW_MULTIPLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=ON,STARTUP_STATE=OFF)
Investigate the collected event logs. For example, below you can correlate activities that share the same GUID using their sequence ids. Each of these activities is the reader thread's attempt to read replication commands. As you can see, very little time (nearly 0) is spent initially on MEMORY_ALLOCATION_EXT, and then the select statement finishes after 29 microseconds.
Compare wait time (duration - cpu_time) with cpu_time. If wait time is high compared to CPU time, check the wait_type and troubleshoot accordingly. For example, above we see the MEMORY_ALLOCATION_EXT wait_type. If CPU time is high, investigate the execution plan of the time-consuming query using the corresponding plan_handle from the event logs above:
SELECT * FROM sys.dm_exec_query_plan(PLAN_HANDLE)
High CPU time often means high load, which can be caused by large batches of replicated transactions. You can compare the number of commands and transactions using the query below.
SELECT count(c.xact_seqno) as CommandCount, count(DISTINCT t.xact_seqno) as TransactionCount
FROM MSrepl_commands c with (nolock)
LEFT JOIN msrepl_transactions t with (nolock)
on t.publisher_database_id = c.publisher_database_id and t.xact_seqno = c.xact_seqno
WHERE c.publisher_database_id = 1 -- Change to target database id here
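A related, hedged check: sp_replmonitorsubscriptionpendingcmds reports the pending command count and an estimated time to apply them for a single subscription. All parameter values below are hypothetical placeholders:

```sql
-- Pending command backlog and estimated catch-up time for one subscription
USE distribution;
EXEC sp_replmonitorsubscriptionpendingcmds
    @publisher         = N'PublisherServer',
    @publisher_db      = N'PublisherDB',
    @publication       = N'MyPublication',
    @subscriber        = N'SubscriberServer',
    @subscriber_db     = N'SubscriberDB',
    @subscription_type = 0;  -- 0 = push, 1 = pull
```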
For statistics covering the past days, you can use the query below:
USE distribution
SELECT t.publisher_database_id, t.xact_seqno,
max(t.entry_time) as EntryTime,
count(c.xact_seqno) as CommandCount,
count(DISTINCT t.xact_seqno) as TransactionCount
into #results
FROM MSrepl_commands c with (nolock)
LEFT JOIN msrepl_transactions t with (nolock)
on t.publisher_database_id = c.publisher_database_id
and t.xact_seqno = c.xact_seqno
GROUP BY t.publisher_database_id, t.xact_seqno
SELECT publisher_database_id
,datepart(year, EntryTime) as Year
,datepart(month, EntryTime) as Month
,datepart(day, EntryTime) as Day
,datepart(hh, EntryTime) as Hour
,sum(CommandCount) as CommandCountPerTimeUnit
,sum(TransactionCount) as TransactionCountPerTimeUnit
FROM #results
GROUP BY publisher_database_id
,datepart(year, EntryTime)
,datepart(month, EntryTime)
,datepart(day, EntryTime)
,datepart(hh, EntryTime)
ORDER BY publisher_database_id, Month, Day, Hour
As you see, I am executing one command per transaction making TransactionCount nearly equal to CommandCount.
Step 4.4. Troubleshoot latency in Distribution agent’s writer thread
Find the session id and app name for the Distribution agent by inserting your publication name into the WHERE clause below:
SELECT
SessionId = s.session_id,
App = ISNULL(s.program_name, N'')
FROM sys.dm_exec_sessions s
WHERE s.program_name LIKE '%publish%'
GO
Check whether there is blocking for the above session id(s):
sp_who2
Create an event session by inserting the app name:
CREATE EVENT SESSION [distributor_writer_track] ON SERVER
ADD EVENT sqlos.wait_completed(
ACTION(package0.callstack,sqlserver.session_id,sqlserver.sql_text)
WHERE ([sqlserver].[client_app_name]=N'SQLVM4-TRANSACR_AdventureWorksLT_test_table_pub' AND [package0].[greater_than_uint64]([duration],(0)))),
ADD EVENT sqlos.wait_info_external(
ACTION(package0.callstack,sqlserver.session_id,sqlserver.sql_text)
WHERE ([sqlserver].[client_app_name]=N'SQLVM4-TRANSACR_AdventureWorksLT_test_table_pub' AND [package0].[greater_than_uint64]([duration],(0)))),
ADD EVENT sqlserver.sp_statement_completed(
ACTION(package0.event_sequence,sqlserver.plan_handle,sqlserver.session_id,sqlserver.transaction_id)
WHERE ([sqlserver].[client_app_name]=N'SQLVM4-TRANSACR_AdventureWorksLT_test_table_pub'))
ADD TARGET package0.event_file(SET filename=N'C:\Temp\distributor_writer_track',max_file_size=(5),max_rollover_files=(5))
WITH (MAX_MEMORY=8192 KB,EVENT_RETENTION_MODE=ALLOW_MULTIPLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=ON,STARTUP_STATE=OFF)
GO
Investigate the collected event logs. For example, below you can correlate activities sharing the same GUID and sequence ids at the statement level. Can you find a high duration at any of the statements?
Compare wait time (duration - cpu_time) with cpu_time. If wait time is high compared to CPU time, check the wait_type and troubleshoot accordingly. For example, above we see the NETWORK_IO wait_type. If CPU time is high, investigate the execution plan of the time-consuming query using the corresponding plan_handle from the event logs above:
SELECT * FROM sys.dm_exec_query_plan(PLAN_HANDLE)
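To confirm where end-to-end latency sits after any fix, tracer tokens measure publisher-to-distributor and distributor-to-subscriber latency per publication. A hedged sketch; 'PublisherDB' and 'MyPublication' are hypothetical names:

```sql
-- Post a tracer token at the publisher to measure end-to-end latency
USE PublisherDB;
EXEC sys.sp_posttracertoken @publication = N'MyPublication';

-- Some minutes later, list posted tokens to get their ids:
EXEC sys.sp_helptracertokens @publication = N'MyPublication';

-- Then retrieve the measured latencies for a specific token id:
-- EXEC sys.sp_helptracertokenhistory @publication = N'MyPublication', @tracer_id = <id>;
```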
Supervisor: Collin Benkler, Sr EE for SQL Server in Microsoft
Microsoft Tech Community – Latest Blogs
Relative/ Absolute path in Baseline Text
Is it possible to use a relative path instead of an absolute path when adding baseline criteria? baseline, relative path MATLAB Answers — New Questions
parpool memory allocation per worker
Hello, I’m learning to submit batch jobs on SLURM. Realistically, I can request at most
#SBATCH -n 32
#SBATCH --mem-per-cpu=4G
In other words, I can request 32 cores and 128G of memory.
Now, I want to run a global optimization function (MultiStart) in parallel. Currently, I set the number of workers in parpool to be 32 (equal to the number of cores), but I constantly run into out of memory.
I’m curious if setting the number of workers in parpool to be, say, 16 can resolve this issue. If I’m not mistaken, if I set the number of workers in parpool to be 32, each worker has at most 4G of memory to use, whereas if I set the number of workers in parpool to be 16, each worker has at most 8G of memory to use.
I’d be grateful if you can correct me, or confirm what I wrote. Obviously, I can just try, but the problem is it takes a long time to get out of the queue, and the optimization itself takes days, so I want to make sure what I try makes sense before submitting.
Next, assuming what I wrote makes sense, what happens if I set the number of workers in parpool to be, say, 20, so 32/20 = 1.6 is not an integer.
Thank you for your guidance.
In the NewMaze function, get the text of the selected branching mode and use it as the second input to the amaze function.
In the NewMaze function, get the text of the selected branching mode and use it as the second input to the amaze function. For this question, I am writing the correct code in code view, but it still shows "Does the NewMaze callback use the value from the branching mode button group?" What should I do now?
unable to add state in model
How to resolve the below error?
Getting error on adding state: missing enumeration for state
Use COUNTIFS and INDEX/MATCH to pull data from another table
Hi.
I have two tables, one with information (Attendance Sheet) and one where I need to pull information into (Info sheet). I have attached screenshots of both tables.
I would like to count how many athletes are in each Council LGA for each Term (does not need to be specified by each week), based off the value of LGA in the Info Sheet. I would then like to count how many athletes from each council area attended in each week and each term. I have tried many countifs and index/match functions to no avail. Any help would be appreciated. Thank you.
Unable create a new plan in Microsoft planner
My staff are unable to create a new plan in Microsoft Planner, with the error message:
Plans and Microsoft 365 Group Establishment have been retired
you are not a member of an existing group. To establish a new Microsoft 365 group. Contact the group’s global or system administrator.
Does anyone know how to solve this? Please help.
O365 not updated from MECM
We have one XML for deploying O365.
When I install it on a W10/11 machine, it downloads updates from MECM.
But old installations on servers started downloading updates from the internet 1-2 months ago.
I found some strange things in the Microsoft 365 Apps admin center (office.com).
We used this 3-4 years ago to migrate all devices to the Monthly Enterprise Channel.
Then I excluded all devices. Now I see the servers are again managed by this …..
When I install O365 on a new server, everything works fine.
How do I tell existing servers to take updates from MECM?
I tried a full uninstall and reinstall; nothing works.
It is not a problem with the MECM deployment; on new servers where O365 never existed, it works fine.
How do I get out of these terrible Microsoft 365 Apps admin center (office.com) settings?
can someone explain this error?
Unrecognized method, property, or field ‘CurrentFileIndex’ for class ‘matlab.io.datastore.CombinedDatastore’.
Error in snake (line 41)
if allImages.CurrentFileIndex <= height(aplostisiImages.Files)
Same error values are copied for different input parameters when using MATLAB Experiment Manager
I am trying to find gains for a PID controller for a powertrain using a genetic algorithm. To find the optimal number of generations and population size for the genetic algorithm, I run almost 1600 experiments in which the following parameters change. The input of the model is a WLTP drive cycle (1800 s long). I want to see if I can train the genetic algorithm on only 30 s, find the gains, and then use those gains to run the whole cycle and calculate the error.
But when I run the experiment, I get the same error value all 3 times for the same population and generation combination:
However, when I check the Simulink model, the gains are different for each iteration, which means the errors are somehow not updated in the table. I have tried different changes in the code but nothing works. Here is the code, if someone could suggest some improvements:
function [mean_abs_error] = Experiment2Function1(params)
tend = params.time;
% Measure the current time before running the simulation
start_simulation_time = tic;
no_var = 2;
lb = [params.lbP params.lbI];
ub = [params.ubP params.ubI];
% GA options
ga_opt = optimoptions('ga','Display','off','Generations',params.generations,'PopulationSize',params.population,'PlotFcns',@gaplotbestf);
obj_fn = @(k) optimization_PID(k);
% GA command
[k, best] = ga((obj_fn),no_var,[],[],[],[],lb,ub,[],ga_opt)
% Measure the simulation time
simulation_time = toc(start_simulation_time);
%%
% Calculate error
tend = 1800;
sim("Model1.slx")
driveCycleTime = DriveCycle(:,1);
driveCycleSpeed = DriveCycle(:,2);
index1800s = driveCycleTime <= tend;
driveCycle1800s = [driveCycleTime(index1800s), driveCycleSpeed(index1800s)];
% Extract the simulated result for the first 1800 seconds
simulatedTime = tout(tout <= tend);
simulatedSpeed = v_act_lim(tout <= tend);
% Interpolate the simulated result to match the drive cycle time points
simulatedSpeedInterp = interp1(simulatedTime, simulatedSpeed, driveCycle1800s(:, 1), 'linear');
% Calculate and plot the error
error = (driveCycle1800s(:, 2) - simulatedSpeedInterp)./ driveCycle1800s(:,2)*100;
abs_error = abs(error);
% Exclude infinite values
validIndices = isfinite(abs_error);
validAbsError = abs_error(validIndices);
mean_abs_error = mean(validAbsError)
end
The objective function is as follows:
function cost = optimization_PID(k)
assignin("base", "k", k);
sim("Model1.slx");
itae_values = ITAE.Data;
cost = sum(itae_values);
end
s_function 2dof
Hello!!!!
l try to use s_function in simulink and I get this error:
Error in ‘BRAS_2DOF/S-Function1’ while executing MATLAB S-function ‘Dynamique2DOF’, flag = 0 (initialize), at start of simulation.
Caused by:
Subscript indices must either be real positive integers or logicals.
this is my code in joint piece
Champion Management Platform won’t apply digital badge
We added the champion management platform app to the tenant, and now have a champions team. We’ve got the leaders list working etc, but when we send a user to claim their digital badge, it asks them to ‘accept’ then the next prompt says (see pic below).
The badges are included in the Digital Badge Assets library. Only one owner was able to add the badge when we first started and from then on no-one else can. So we’ve had one success.
No-one knows how to fix this here. We’ve tried adding owners/members, removing them etc.
Has someone got some advice or do you know where I can get support for this?
Thank you 🙂
Effectively troubleshoot latency in SQL Server Transactional replication: Part 1
High level transactional replication architecture
The initial stage of transactional replication is initializing the subscriber. Although this can be done via backup, the typical approach is generating a snapshot with the Snapshot Agent and storing it in the snapshot folder, which defaults to C:\Program Files\Microsoft SQL Server\<INST>\MSSQL\ReplData and is configurable. Then, the Distribution Agent transfers the snapshot to the subscriber.
Afterwards, incremental changes in the published database are tracked and replicated to the subscriber database. This replication process happens in three phases:
Transactions are marked “for replication” in the transaction log.
The Log Reader Agent reader thread scans through the transaction log using sp_replcmds and looks for transactions that are marked “for replication.” These transactions are then saved to the distribution database by the Log Reader agent writer thread using sp_MSadd_replcmds.
The Distribution Agent reader thread scans through the distribution database using sp_MSget_repl_commands. Then, using the distribution writer thread, the agent connects to the subscriber and applies those changes using sp_MSupd_*, sp_MSins_*, and sp_MSdel_* (where the “*” denotes the schema and object name of the published article).
Figure 1. Transactional Replication architecture showing the location of each thread and agent for the remote distributor and pull subscription case
Troubleshooting steps
The following graph shows the process we use to troubleshoot. We troubleshoot by dividing the process into two parts.
Step 1. Get information about “Big Picture”
Before you dive into solving any issue, you need to fully understand the type of environment you have, as there might have been changes you are unaware of. An easy way to do that is to run the Script Replication Topology script from the sqlserver-parikh/SQLServer repository on GitHub, which gives output like below.
Step 2. Get tracer tokens
After confirming the environment, insert tracer tokens and identify where we are stuck. Tokens can be inserted via Replication Monitor:
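Tokens can also be inserted with T-SQL instead of Replication Monitor. Below is a minimal sketch using sys.sp_posttracertoken, run at the Publisher in the published database (N'MyPublication' is a placeholder for your publication name):

```sql
-- Run at the Publisher, in the published database.
-- N'MyPublication' is a placeholder; replace it with your publication name.
DECLARE @tokenId int;
EXEC sys.sp_posttracertoken
     @publication     = N'MyPublication',
     @tracer_token_id = @tokenId OUTPUT;
SELECT @tokenId AS tracer_token_id;  -- ID of the posted tracer token
```

The returned ID can later be matched against MStracer_tokens in the distribution database.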
For historical tracer token results, you can run the query below in the distributor and compare with current results; the last row is the most recent result:
USE distribution
GO
SELECT p.publication_id, p.publication, h.agent_id,
       DATEDIFF(s, t.publisher_commit, t.distributor_commit) AS 'Time To Dist (sec)',
       DATEDIFF(s, t.distributor_commit, h.subscriber_commit) AS 'Time To Sub (sec)'
FROM MStracer_tokens t
JOIN MStracer_history h
  ON t.tracer_id = h.parent_tracer_id
JOIN MSpublications p
  ON p.publication_id = t.publication_id
NOTE –
“distribution” is the default name for the distribution database. Be sure to change this if you configured a different name. Additionally, this history is cleaned up whenever replication upgrade scripts run, the distributor configuration changes, sp_MShistory_cleanup runs (depending on the retention duration specified), or sp_MSdelete_tracer_history is executed (again, depending on the parameters used).
If you observe latency or “Pending” status in “Publisher to Distributor”, the issue is with Log Reader agent (refer to Step 3.1). If the latency is seen in “Distributor to Subscriber” as the screenshot above, the issue is with Distribution agent (refer to Step 3.2).
Step 3.1. Troubleshoot latency in Log Reader agent
Check the agent history table for any errors, paying particular attention to the comments and error_text columns:
USE distribution
GO
SELECT a.name AS agent_name,
       CASE h.runstatus
            WHEN 1 THEN 'Start'
            WHEN 2 THEN 'Succeed'
            WHEN 3 THEN 'In progress'
            WHEN 4 THEN 'Idle'
            WHEN 5 THEN 'Retry'
            WHEN 6 THEN 'Fail'
       END AS Status
      ,h.start_time
      ,h.[time]                 -- The time the message is logged.
      ,h.duration               -- The duration, in seconds, of the message session.
      ,h.comments
      ,h.xact_seqno             -- The last processed transaction sequence number.
      ,h.delivery_time          -- The time the first transaction is delivered.
      ,h.delivered_transactions -- The total number of transactions delivered in the session.
      ,h.delivered_commands     -- The total number of commands delivered in the session.
      ,h.average_commands       -- The average number of commands delivered in the session.
      ,h.delivery_rate          -- The average number of delivered commands per second.
      ,h.delivery_latency       -- The latency, in milliseconds, between the command entering the published database and being entered into the distribution database.
      ,h.error_id               -- The ID of the error in the MSrepl_errors system table.
      ,e.error_text             -- Error text.
FROM [distribution].[dbo].[MSlogreader_history] h
JOIN MSlogreader_agents a
  ON a.id = h.agent_id
LEFT JOIN MSrepl_errors e
  ON e.id = h.error_id
ORDER BY h.time DESC
Furthermore, 5-min interval performance statistics have been added to the history table.
If stats state=1, both the reader and writer threads of the Log Reader agent are performing as expected. If state=2, the writer thread is taking a long time to write changes to the distribution database; in this case, you should investigate the writer thread (Step 4.2). State=3 means the reader thread is taking a long time scanning the replicated changes from the transaction log, and that thread should be investigated (Step 4.1). For example, below, the writer thread is causing latency, as the reader thread waited 300 seconds for it to free the queue buffer for new replicated data.
Ref: Statistics for Log Reader and Distribution agents – SQL Server | Microsoft Learn
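These statistics are written as <stats ...> XML entries in the comments column of the history table. A quick way to pull just the statistic rows, assuming the default distribution database name:

```sql
USE distribution
GO
-- Agent performance statistics are logged as <stats ...> XML in the comments column.
SELECT TOP (20) h.[time], h.comments
FROM MSlogreader_history h
WHERE h.comments LIKE '%<stats%'
ORDER BY h.[time] DESC;
```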
Sometimes the history table is not enough to resolve latency issues. In this case, you should enable verbose logging for detailed logs (-Output C:\Temp\OUTPUTFILE.txt -OutputVerboseLevel 3). https://learn.microsoft.com/en-US/sql/relational-databases/replication/troubleshoot-tran-repl-errors?view=sql-server-ver16#enable-verbose-logging-on-any-agent
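The output parameters are appended to the Log Reader agent job step command. A sketch of what the job step might look like after the change (server and database names are placeholders; only the -Output and -OutputVerboseLevel parameters are the addition):

```
-Publisher [MyPublisher] -PublisherDB [MyPublishedDB] -Distributor [MyDistributor]
-DistributorSecurityMode 1 -Continuous
-Output C:\Temp\OUTPUTFILE.txt -OutputVerboseLevel 3
```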
You can investigate the verbose detailed logs for any errors. Particularly pay attention to “Status” logs.
Furthermore, verbose logging provides 5-min interval Log Reader agent statistics as below. Check the Fetch time (reader thread performance) and Write time (writer thread performance) for any latency.
If you cannot find any error logs but you detect latencies in either the reader or writer thread, go to Step 4 and check the corresponding thread. In the example above, we detected high latency in Fetch time compared to Write time, so the issue is probably with the reader thread (refer to Step 4.1 in Part 2).
Step 3.2. Troubleshoot latency in Distribution agent
Check the Distribution agent history for any errors, paying particular attention to the comments and error_text columns:
USE distribution
GO
SELECT a.name AS agent_name,
       CASE h.runstatus
            WHEN 1 THEN 'Start'
            WHEN 2 THEN 'Succeed'
            WHEN 3 THEN 'In progress'
            WHEN 4 THEN 'Idle'
            WHEN 5 THEN 'Retry'
            WHEN 6 THEN 'Fail'
       END AS Status
      ,h.start_time
      ,h.[time]                    -- The time the message is logged.
      ,h.duration                  -- The duration, in seconds, of the message session.
      ,h.comments
      ,h.xact_seqno                -- The last processed transaction sequence number.
      ,h.current_delivery_rate     -- The average number of commands delivered per second since the last history entry.
      ,h.current_delivery_latency  -- The latency, in milliseconds, between the command entering the distribution database and being applied to the Subscriber, since the last history entry.
      ,h.delivered_transactions    -- The total number of transactions delivered in the session.
      ,h.delivered_commands        -- The total number of commands delivered in the session.
      ,h.average_commands          -- The average number of commands delivered in the session.
      ,h.delivery_rate             -- The average number of delivered commands per second.
      ,h.delivery_latency          -- The latency, in milliseconds, between the command entering the distribution database and being applied to the Subscriber.
      ,h.total_delivered_commands  -- The total number of commands delivered since the subscription was created.
      ,h.error_id                  -- The ID of the error in the MSrepl_errors system table.
      ,e.error_text                -- Error text.
FROM MSdistribution_history h
JOIN MSdistribution_agents a
  ON a.id = h.agent_id
LEFT JOIN MSrepl_errors e
  ON e.id = h.error_id
ORDER BY h.time DESC
Furthermore, 5-min interval performance statistics have been added to the history table.
If stats state=1, both the reader and writer threads of the Distribution agent are performing as expected. If state=2, the writer thread is taking a long time to apply changes to the subscriber; in this case, you should investigate the writer thread. State=3 means the reader thread is taking a long time retrieving the replicated changes from the distribution database, and that thread should be investigated (Step 4.3). For example, below, the writer thread is causing latency, as the reader thread waited 300 seconds for it to free the queue buffer for new replicated data.
Ref: Statistics for Log Reader and Distribution agents – SQL Server | Microsoft Learn
Sometimes the history table is not enough to resolve latency issues. In this case, you should enable verbose logging for detailed logs (-Output C:\Temp\OUTPUTFILE.txt -OutputVerboseLevel 3). https://learn.microsoft.com/en-US/sql/relational-databases/replication/troubleshoot-tran-repl-errors?view=sql-server-ver16#enable-verbose-logging-on-any-agent
You can investigate the verbose detailed logs for any errors. Pay particular attention to “Status” logs.
Furthermore, verbose logging provides distribution agent statistics as below. Check the Fetch time (reader thread performance) and Write time (writer thread performance) for any latency.
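Before drilling into a specific thread, it can also help to quantify how much work is still queued for the subscriber. Below is a sketch using sp_replmonitorsubscriptionpendingcmds, run in the distribution database (all names are placeholders):

```sql
USE distribution
GO
-- Returns the pending command count and an estimated time to apply them.
EXEC sp_replmonitorsubscriptionpendingcmds
     @publisher         = N'MyPublisher',
     @publisher_db      = N'MyPublishedDB',
     @publication       = N'MyPublication',
     @subscriber        = N'MySubscriber',
     @subscriber_db     = N'MySubscriberDB',
     @subscription_type = 0;  -- 0 = push, 1 = pull
```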
If you cannot find any error logs but you detect latencies in either the reader or writer thread, go to Step 4 and check the corresponding thread. In the example above, we detected high latency in the reader thread compared to the writer thread, so the issue is with the reader thread (Step 4.3). Let us continue with the next steps in Part 2!
Supervisor: Colling Benkler, Sr. EE for SQL Server in Microsoft.
Microsoft Tech Community – Latest Blogs – Read More
How do I implement typedefs of unions + structure variables in Simulink?
How can I implement a union + structure variable in Simulink so that when I generate code I will get a variable structure with shared access?
I want to duplicate the behaviour of the C-code below so I can use typedef struct in several similar data sets.
%Code using C
typedef struct st_module_data{
struct st_FaultInfo{
union{
uint8 AllFaults;
struct {
uint8 OVP :1;
uint8 OVW :1;
uint8 UVP :1;
uint8 UVW :1;
uint8 OCP :1;
uint8 OCW :1;
uint8 OTP :1;
uint8 OTW :1;
}
}
}Fault;
struct st_MeasuredData{
uint8 voltage = 0;
uint8 current = 0;
uint8 temperature = 0;
}Measured;
};
st_module_data s_module_01;
s_module_01.Fault.AllFaults = 0; % Clears all flags
s_module_01.Fault.OVP = 1; % Sets only the OVP flag
s_module_01.Measured.voltage = ui8_ADC_5V; % Sets the measured voltage to 5V
I was able to recreate the structure and "typedef"-ish callbacks in matlab by using bus editor:
But I am unable to get the union thing working and can only access the lowest hierarchy of the structure – e.g.:
s_module_01.Fault.AllFaults = 0; % How do I implement this?
s_module_01.Fault = 0; % Does not work…
s_module_01.Fault.OVP = 1; % OK – works in Matlab Functions, Simulink and Stateflow
s_module_01.Measured.voltage = ui8_ADC_5V; % OK – works in Matlab Functions, Simulink and Stateflow
simulink, stateflow, simulink bus editor, structures, union MATLAB Answers — New Questions
Can I export all the trained model from the classification learner app, using a single command?
Dear experts,
I am trying to find out which model performs on a specific dataset. There are 34 models available in the classification learner app. Do we have any option to export all the 34 trained model using a single command? Presently, I’m doing it one at a time.
Thank you in advance.
matlab classification learner app MATLAB Answers — New Questions
Decrease Existing UIGridLayout RowHeight (or ColumnWidth)
I’m trying to decrease an existing uigridlayout’s number of rows and columns. Below I created the figure and grid.
fig = uifigure;                       % make uifigure
g = uigridlayout(fig);                % put uigridlayout in fig
g.RowHeight = {'1x' '1x' '1x' '1x'};  % assign 4 rows to the uigridlayout
g.ColumnWidth = {'1x' '1x'};          % assign 2 columns to the uigridlayout
So, it already exists with specified RowHeight and ColumnWidth dimensions. I want to change the number of rows and columns in this existing uigridlayout (g) instead of deleting it and making a new one with the desired number of rows and columns (e.g. a grid with 3 rows and 3 columns).
I am able to add rows or columns:
g.ColumnWidth = {'1x' '1x' '1x'}  % assign a greater number of columns to g.ColumnWidth
output:
g =
  GridLayout with properties:
      RowHeight: {'1x' '1x' '1x' '1x'}
    ColumnWidth: {'1x' '1x' '1x'}
But I can't seem to be able to remove rows or columns:
g.RowHeight = {'1x' '1x' '1x'}  % assign a lower number of rows to g.RowHeight
output:
g =
  GridLayout with properties:
      RowHeight: {'1x' '1x' '1x' '1x'}
    ColumnWidth: {'1x' '1x' '1x'}
How do I decrease the number of rows or columns in this existing uigridlayout?
uifigure, uigridlayout, rows and columns, editing properties MATLAB Answers — New Questions
Configure Hybrid Modern Authentication in Exchange on-premises
I have an error with Hybrid Modern Authentication for OWA and ECP. After I log in to OWA/ECP from https://login.microsoftonline.com/, when I access ECP/OWA, it shows an error:
Hope to receive reply soon. Thanks a lot
Read More
What powershell scripts are you using
Need some more ideas and inspiration.
What powershell script are you using on your own computer or server?
Read More
Is it possible save a YouTube video on Mac or a Windows 11?
I mainly use Mac computers, and occasionally use Windows 11. Recently, I was looking for a way to easily save YouTube videos on Mac and Win 11 systems. I want to download some educational videos or personal interest-related content so that I can watch them when I don’t have an Internet connection.
There is a lot of information on the Internet, and I am a little confused about which download tool is safe and effective. If you have any reliable YouTube video downloader or Mac and Windows recommendations , or any good downloading tips, please share them. Thank you very much for your help!
Read More
How to continue drawing bar charts on the basis of drawing maps
How can I continue drawing bar charts on top of drawn maps using MATLAB software? Just like this one.
mapping toolbox MATLAB Answers — New Questions