Month: June 2024
HLK touch test error: Invalid scan time
Hello,
I passed the touch device test using the 23H2 HLK tool.
However, when the 24H2 HLK tool was released, I retested the same device and confirmed that it now fails.
The error message is "[HID] Invalid scan time (Drifted from wall clock) Max drift: 169 Actual: 202".
I would like to know in detail why this error occurs and what Max drift means.
I look forward to your help. Thank you.
Keyboard shortcut to collapse formula bar not working
About a week ago the regular keyboard shortcut that I use to collapse and expand the formula bar in Excel (Ctrl+Shift+U) stopped working, and instead increased the worksheet zoom by 5%.
Would anyone know why this has changed, or what the updated keyboard shortcut for collapsing/expanding the formula bar would be?
Seeking guidance on phrasing my questions so I can use the help files
I have spent hours using the help files for a project I am working on. However, since I am so new to Excel, I don’t know how to phrase my questions so I can use the help files. I am trying to create a template I can use as a source for my mail merge documents. In addition to the standard name, address, etc., I want the ability to customize entries by making selections from drop-downs. I also want to use some of the info I select in a different source. I have been able to figure out how to add drop-downs as well as replicate entries so I can use them elsewhere… but I am trying to take it one step further and have no idea how to ask the questions! I don’t mind doing the work and figuring out “how” by using the help files, but I don’t know the right term/function/feature names to find the process in help. I am including a dummy version of what I want to do and the questions I need help phrasing so I can find the answers. ANY GUIDANCE IS EXTREMELY APPRECIATED! I just need to figure out how to attach my sample Excel doc so this makes sense. :
Document Intelligence and Index Creation Using Azure ML with Parallel Processing (Part 1)
Besides the Azure portal, you can also run document intelligence and index creation in Azure ML studio. The entire index-creation process includes several steps: crack_and_chunk, generate_embeddings, update_index, and register_index. In Azure ML studio you can create or reuse components for each of these steps and stitch them together into a pipeline.
Section 1. What is it?
Usually, an ML pipeline component does its job serially: for example, it cracks and chunks each input file (e.g., each PDF) one by one. With a couple of thousand files, crack_and_chunk alone can take several hours, generate_embeddings several more, and the entire index-creation job a dozen hours in total. With hundreds of thousands or millions of files, the process could take weeks.
Parallel processing capability is extremely important to speed up the index creation process, where the two most time-consuming components are crack_and_chunk and generate_embeddings.
The figure below shows the two components that apply parallel processing to index creation: crack_and_chunk_with_doc_intel_parallel and generate_embeddings_parallel.
Section 2. How is the parallelism achieved?
Take the crack_and_chunk_with_doc_intel_parallel component as an example. The ML job runs on a compute cluster with multiple nodes and multiple processors per node. All files in the input folder are distributed into mini_batches, so each processor handles some of the mini_batches and all processors execute the crack_and_chunk job in parallel. Compared with a serial pipeline, this parallel processing significantly improves the processing speed.
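The distribution idea can be sketched in plain Python. This is a simplified illustration of the mini-batch scheme, not the actual Azure ML scheduler; the file names, batch size, and worker count below are made up:

```python
from multiprocessing import Pool

def make_mini_batches(files, batch_size):
    """Split the input file list into mini-batches."""
    return [files[i:i + batch_size] for i in range(0, len(files), batch_size)]

def crack_and_chunk(mini_batch):
    """Stand-in for per-file processing; here we just report what was handled."""
    return [f"chunked:{name}" for name in mini_batch]

if __name__ == "__main__":
    files = [f"doc_{i}.pdf" for i in range(10)]   # hypothetical input folder
    batches = make_mini_batches(files, batch_size=3)
    with Pool(processes=4) as pool:               # workers play the role of processors
        results = pool.map(crack_and_chunk, batches)
```

In the real pipeline, Azure ML performs this partitioning and scheduling for you; the component author only supplies the per-mini-batch processing logic.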
The experiment below creates an index over about 120 PDF files and compares the time spent on each step of index creation. Parallel processing improved the speed considerably, and running on a GPU cluster was even faster than on a CPU cluster. Note that parallel processing incurs overhead at the beginning of the job for scheduling tasks to each processor; for a small number of input files the time saving over serial processing may not be significant, but for a large number of files it will be.
How is the parallelism implemented in Azure ML? Please see this article:
How to use parallel job in pipeline – Azure Machine Learning | Microsoft Learn
The entry script defines several functions: init(), run(), and shutdown(). The shutdown() function is optional.
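In an Azure ML parallel job, these functions live in the entry script: init() runs once per worker process, and run(mini_batch) is called for each mini-batch. A minimal skeleton follows; the chunking itself is stubbed out, and a real implementation would call the Document Intelligence service where the comment indicates:

```python
def init():
    """Called once per worker process; load models or service clients here."""
    global client
    client = object()  # placeholder for e.g. a Document Intelligence client

def run(mini_batch):
    """Called once per mini-batch; should return one result per input item."""
    results = []
    for file_path in mini_batch:
        # A real implementation would crack and chunk the file via the client here.
        results.append({"file": file_path, "status": "chunked"})
    return results
```

Azure ML collects the returned lists from all workers and uses them to track progress and success counts across the job.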
Section 3. Code example
Please see the code in the azure-example GitHub repo as an example:
This repo creates the parallel-run component crack_and_chunk_doc_intel_component_parallel and stitches it together with other Azure built-in components into an ML pipeline; the file crack_and_chunk_with_doc_intel/crack_and_chunk_parallel.py implements the parallelism logic. Several ways of providing .pdf inputs are covered in the repo's .ipynb files.
There are some especially important features supported in this implementation:
Error handling. During crack_and_chunk, errors may occur while processing certain files; without error handling, the whole job would halt. With this solution you can decide how many errors to tolerate before halting the job, or even ignore all errors, so that crack_and_chunk can continue on the remaining input files.
Timeout. You can set a timeout value that gives large input files enough time to be processed (crack_and_chunk) and their responses to be received.
Retry. If crack_and_chunk fails, you can set the number of retries.
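The semantics of these three settings can be illustrated with a generic Python helper. The function and parameter names here are hypothetical; in Azure ML the real knobs are parameters of the parallel job itself, not user code:

```python
def process_with_tolerance(items, handler, max_retries=2, error_threshold=3):
    """Process items, retrying each failure, and halt only after too many
    items have failed permanently (mirroring retry + error-threshold settings)."""
    results, errors = [], 0
    for item in items:
        for attempt in range(max_retries + 1):
            try:
                results.append(handler(item))
                break
            except Exception:
                if attempt == max_retries:
                    errors += 1  # give up on this item
                    if errors > error_threshold:
                        raise RuntimeError("too many failed items; halting job")
    return results, errors
```

With error_threshold set high enough, a few bad input files are simply skipped and the rest of the batch completes, which is the behavior described above.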
Be sure to check out this article for guidance on setting optimal parameters for parallel processing:
ParallelRunStep Performance Tuning Guide · Azure ML-Ops (Accelerator) (microsoft.github.io)
Section 4. Benefits of using Azure ML
Although there are other ways of creating AI search indexes, there are benefits of creating indexes in Azure ML.
The jobs get a fully managed environment in Azure ML, including monitoring.
There are opportunities to use abundant compute resources in Azure ML platform, including VMs, CPUs, GPUs, etc.
Security and authentication features are provided, such as system identity and managed identity.
There are a variety of logs related to ML job execution, which help debugging and provide statistics for job analysis.
See the picture below; this log shows how much time is spent on each mini_batch.
There are other logs for performance, errors, user logs, system logs, etc.
Azure ML provides version control for output chunks, embeddings, and indexes. This provides flexibility for users to select desired version of these entities when building applications.
The picture below shows that you can specify the index version when you ask questions.
Azure ML can connect the index to promptflow natively.
For this parallel processing feature, a header is added to the API call to indicate crack_and_chunk_parallel processing.
Some other capabilities can be built on top of this parallel processing ML pipeline:
Scheduling. Once you are satisfied with a ML job, you can set a recurrent schedule to run it.
Publish it as a pipeline endpoint, then submit it to set up pipeline jobs easily.
Use the index in a promptflow.
Section 5. Future enhancements
Some future enhancements are under consideration, for example re-indexing: detecting changes in the input files and updating the index with only those changes. We will experiment with that and publish Part 2 of the solution in the future.
Acknowledgement:
Thanks to the reviewers for providing feedback, joining the discussion, reviewing the code, and sharing their experience with Azure ML parallel processing:
Alex Zeltov, Vincent Houdebine, Randy Thurman, Lu Zhang, Shu Peng, Jingyi Zhu, Long Chen, Alain Li, Yi Zhou.
How can I use ginput in app designer?
I would like to select a range of points from one particular UIAxes in my app.
I have tried to use ginput ([x,y] = ginput), but the application opens an additional empty figure for selecting the points. How can I point ginput at a specific UIAxes?
Kind Regards
How to define an array based on Simulink simulation time?
Hello everyone, I’m attempting to define an array in a MATLAB Function block based on the simulation time in Simulink. In my code I use ‘time_index’ to record the number of simulation steps, and I have set the time step to 0.01 s. To achieve this I used ‘persistent’ variables, but encountered an error. How can I implement my objective?
function [y,y1] = fcn(u)
persistent inputs; % persistent variable used to store the input values
persistent time_index;
if isempty(inputs) % initialize if the persistent variable is empty
    inputs = u; % save the current input value to the persistent variable
    time_index = 1;
else
    inputs = [inputs, u]; % append the current input value to the persistent variable
    time_index = time_index + 1;
end
% compute the mean
mean_val = mean(inputs);
% compute the squared differences
diff_squared = (inputs - mean_val).^2;
% compute the variance
y = sum(diff_squared) / length(inputs);
all_inputs(time_index) = u;
y1 = all_inputs;
This is the error I encountered.
Geographical binning with HISTA not creating squares
Hello everybody!
My data contains latitude, longitude, time, and multiple other variables. I need to bin this data for my needs.
>> head(w_files{1})
time lon lat pci_lte sinr_lte cinr_lte rsrp_lte prb_dl_lte prb_ul_lte bler_dl_lte bler_ul_lte mod_dl_0_lte mod_dl_1_lte mod_ul_lte earfnc_dl_lte earfnc_ul_lte tp_mac_dl_lte tp_mac_ul_lte tp_pdsch_lte tp_pusch_lte tp_pdcp_dl_lte tp_pdcp_ul_lte tm tx_rx bw_dl_lte bw_ul_lte band ca_dl ca_ul ca_comb
_______________________ ______ ______ _______ ________ ________ ________ __________ __________ ___________ ___________ ____________ ____________ __________ _____________ _____________ _____________ _____________ ____________ ____________ ______________ ______________ __________ _____________ _________ _________ ___________ __________ __________ _________________________
2020-12-08 11:48:13.000 14.563 50.035 NaN NaN 0 NaN NaN NaN NaN NaN {0×0 char} {0×0 char} {0×0 char} NaN NaN NaN NaN NaN NaN NaN NaN {0×0 char} {0×0 char } NaN NaN {0×0 char } {0×0 char} {0×0 char} {0×0 char }
2020-12-08 11:48:14.000 14.563 50.035 374 NaN 0 -61 41.49 0.74 9 5 {‘256QAM’} {0×0 char} {0×0 char} 6200 24200 238.43 0.002 238.43 0.327 0 0 {‘TM4’ } {‘MIMO(4×2)’} 10 10 {‘Band 20’} {‘3CA’ } {‘NonCA’ } {‘Band 20+Band 7+Band 3’}
2020-12-08 11:48:15.000 14.563 50.035 374 11 0 -61 43.3 0.7 7 2 {‘256QAM’} {0×0 char} {0×0 char} 6200 24200 256.08 0.002 256.08 0.314 0 0 {‘TM4’ } {‘MIMO(4×2)’} 10 10 {‘Band 20’} {‘3CA’ } {‘NonCA’ } {‘Band 20+Band 7+Band 3’}
2020-12-08 11:48:16.000 14.563 50.035 374 10 0 -61 44.31 0.69 6 2 {‘256QAM’} {0×0 char} {0×0 char} 6200 24200 257.2 0.001 257.2 0.324 0 0 {‘TM4’ } {‘MIMO(4×2)’} 10 10 {‘Band 20’} {‘3CA’ } {‘NonCA’ } {‘Band 20+Band 7+Band 3’}
2020-12-08 11:48:17.000 14.563 50.035 374 10 0 -61 43.57 0.81 6 0 {‘256QAM’} {0×0 char} {0×0 char} 6200 24200 252.94 0.001 252.94 0.357 0 0 {‘TM4’ } {‘MIMO(4×2)’} 10 10 {‘Band 20’} {‘3CA’ } {‘NonCA’ } {‘Band 20+Band 7+Band 3’}
2020-12-08 11:48:18.000 14.563 50.035 374 9 0 -63 44.93 0.79 7 0 {’64QAM’ } {0×0 char} {0×0 char} 6200 24200 226.24 0.002 226.24 0.35 0 0 {‘TM4’ } {‘MIMO(4×2)’} 10 10 {‘Band 20’} {‘3CA’ } {‘NonCA’ } {‘Band 20+Band 7+Band 3’}
2020-12-08 11:48:19.000 14.563 50.035 374 7 0 -62 44.45 0.8 11 0 {’16QAM’ } {0×0 char} {0×0 char} 6200 24200 191.17 0.002 191.17 0.359 0 0 {‘TM4’ } {‘MIMO(4×2)’} 10 10 {‘Band 20’} {‘3CA’ } {‘NonCA’ } {‘Band 20+Band 7+Band 3’}
2020-12-08 11:48:20.000 14.563 50.035 374 2 0 -66 44.18 0.82 12 2 {’16QAM’ } {0×0 char} {0×0 char} 6200 24200 177.6 0.002 177.6 0.372 0 0 {‘TM4’ } {‘MIMO(4×2)’} 10 10 {‘Band 20’} {‘3CA’ } {‘NonCA’ } {‘Band 20+Band 7+Band 3’}
I am using HISTA to create a 20×20 meter grid and apply it to the data sets.
[latbin, lonbin] = hista(w_files{ii}.lat,w_files{ii}.lon,0.0004);% bin at 20x20m,hista binning + computing binsID
[w_files{ii}.latEq, w_files{ii}.lonEq] = grn2eqa(w_files{ii}.lat,w_files{ii}.lon);% Convert coordinates to equidistant cartesian coordinates
[latbinEq, lonbinEq] = grn2eqa(latbin, lonbin);% Convert coordinates to equidistant cartesian coordinates
dist = pdist2([w_files{ii}.lonEq,w_files{ii}.latEq],[lonbinEq, latbinEq]);% Compute distance between each coordinate and each bin-center
[~, w_files{ii}.bin20] = min(dist,[],2);% Add bin ID numbers to table
When I plot the figure with plot_google_map I am getting rectangles, not squares. Right now I am not sure whether I am just displaying it wrong or I have made a mistake in the binning. I have added one sample file for you to try. (You will need the plot_google_map function to get the map image, but the issue is clearly visible without it as well.)
figure()
hold on
scatter(lonbin, latbin, 200, 1:numel(lonbin), 'Marker','*', 'LineWidth',2) % bin centers
scatter(w_files{ii}.lon, w_files{ii}.lat, 50, w_files{ii}.bin20, 'filled')
cmap = colorcube(255);
colormap(cmap(1:end-10,:))
xlabel('longitude'); ylabel('latitude')
latbinUnq = unique(latbin);
lonbinUnq = unique(lonbin);
set(gca, 'xtick', lonbinUnq(2:end)-diff(lonbinUnq)/2, 'ytick', latbinUnq(2:end)-diff(latbinUnq)/2)
grid on
plot_google_map('Scale',2, 'resize',2, 'ShowLabels',0) % plot_google_map settings
What do you think? If you think you know the answer, please be very specific because I am a beginner and I might be dumb.
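One plausible explanation: the bins are defined in degrees, and at roughly 50° N a degree of longitude covers only about cos(50°) ≈ 0.64 times the ground distance of a degree of latitude, so cells that are square in degrees are rectangles in meters (and plain axes render degrees 1:1 on screen, distorting them further). A quick check using approximate spherical-earth figures:

```python
import math

def deg_extent_meters(bin_deg, lat_deg):
    """Approximate ground size (north-south, east-west) in meters of a
    bin_deg x bin_deg cell centered at latitude lat_deg."""
    meters_per_deg_lat = 111_320.0  # roughly constant over the globe
    meters_per_deg_lon = 111_320.0 * math.cos(math.radians(lat_deg))
    return bin_deg * meters_per_deg_lat, bin_deg * meters_per_deg_lon

ns, ew = deg_extent_meters(0.0004, 50.0)
# ns is about 44.5 m north-south but ew only about 28.6 m east-west:
# an equal-degree cell at this latitude is a rectangle, not a square
```

If the goal is only to make degree cells look square on screen, a display-side fix in MATLAB would be along the lines of daspect([1 cosd(50) 1]) with longitude on the x-axis; making the cells square in meters would instead require adjusting the bin sizes by the same cos(latitude) factor.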
I want to modify the code to plot the Lagrange polynomial interpolation with Chebyshev points. Map the n+1 Chebyshev interpolation points from [-1,1] to [2,3]
clear
n = 3; % the order of the polynomial
a = 2.0; % left end of the interval
b = 3.0; % right end of the interval
h = (b - a)/n; % interpolation grid size
t = a:h:b; % interpolation points
f = 1./t; % f(x) = 1./x, evaluated at the interpolation points
%%%% pn(x) = sum f(t_i)l_i(x)
hh = 0.01; % grid to plot both f and p
x = a:hh:b;
fexact = 1./x; % exact function f at x
l = zeros(n+1, length(x)); %%%% l(1,:): l_0(x), ..., l(n+1,:): l_n(x)
nn = ones(n+1, length(x));
d = ones(n+1, length(x));
for i = 1:n+1
    for j = 1:length(x)
        nn(i,j) = 1;
        d(i,j) = 1;
        for k = 1:n+1
            if i ~= k
                nn(i,j) = nn(i,j) * (x(j) - t(k));
                d(i,j) = d(i,j) * (t(i) - t(k));
            end
        end
        l(i,j) = nn(i,j)/d(i,j);
    end
end
fapp = zeros(length(x),1);
for j = 1:length(x)
    for i = 1:n+1
        fapp(j) = fapp(j) + f(i)*l(i,j);
    end
end
En = 0;
Ed = 0;
for i = 1:length(x)
    Ed = Ed + fexact(i)^2;
    En = En + (fexact(i) - fapp(i))^2;
end
Ed = sqrt(Ed);
En = sqrt(En);
E = En/Ed;
display(E)
plot(x,fexact,'b*-')
hold on
plot(x,fapp,'ro-')
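For reference, the requested change amounts to replacing the uniform grid t = a:h:b with the n+1 Chebyshev points mapped from [-1,1] to [a,b] = [2,3] via t_k = (a+b)/2 + ((b-a)/2)·cos((2k+1)π/(2(n+1))); everything downstream of t stays the same. A small Python/NumPy sketch of the mapping (an illustration of the formula, not a drop-in replacement for the MATLAB line):

```python
import numpy as np

def chebyshev_points(n, a, b):
    """Return the n+1 Chebyshev interpolation points mapped from [-1, 1] to [a, b]."""
    k = np.arange(n + 1)
    # Chebyshev points on [-1, 1] (roots of the Chebyshev polynomial T_{n+1})
    x = np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))
    # Affine map from [-1, 1] to [a, b]
    return (a + b) / 2 + (b - a) / 2 * x

t = chebyshev_points(3, 2.0, 3.0)  # 4 points in [2, 3], replacing t = a:h:b
```

The points come out in decreasing order; sort them if the rest of the script assumes an increasing grid.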
Feature Request: Sync Microsoft Planner Tasks (without due date) to Microsoft To Do
One feature that would help us leverage To Do as a one-stop shop would be syncing in Microsoft Planner tasks that exist without a due date. Is there a current workaround other than adding a fake due date for these items?
Create a master sheet that updates subsequent sheets (like creating or deleting columns)
I would like a master sheet with all the names that will add or remove columns on the other sheets when names are added or deleted. The arrow represents the end result of what the other sheet would look like after the master sheet removed “Randy”.
Copilot for users external to the organization
Hello!
I have Copilot for M365 and I create Teams meetings with users external to my organization (clients); however, Copilot cannot be used in those sessions. How could I make use of it there? Is there some permission I need to grant?
I am told that the transcript is not enabled for these users, but it is enabled for users in my organization.
Regards.
Partner Spotlight: Improving Colleague Experiences with Copilot and M365
As part of the Microsoft #BuildFor2030 Initiative, which aligns with the United Nations Sustainable Development Goals, we are committed to showcasing solutions that drive meaningful societal impact and spotlighting our partners’ growth stories on the marketplace. Throughout the series, we will be telling the unique stories of partners who are leading the way with AI in app development, who are building using multiple Microsoft products, and who are publishing transactable applications on the marketplace. In this article, Microsoft’s Andrea Katsivelis sat down with Nexer Digital’s Hilary Stephenson to learn more about their story and partner journey.
About Hilary: Hilary Stephenson is the Founder and Managing Director at Nexer Digital (founded in 2007 under the name Sigma). With a background in content design, and having started out in technical documentation and early web publishing, Hilary has been involved in user-centered design throughout her career. She is passionate about accessibility and inclusion and helping clients to embrace them.
About Andrea: Andrea Katsivelis, a global GTM director at Microsoft, specializes in AI, cloud, industry, and accessible solutions. Andrea leads integration strategies and fosters collaboration for corporate acquisitions and partner co-sell, accelerating market impact and revenue growth. Her commitment to marketing and communications excellence, accessibility, and DEI, along with her results-driven approach, embody Microsoft’s vision for inclusive innovation. Andrea is DEI Workplace certified and mentors in women’s leadership programs.
____________________________________________________________________________________________________________________________
[AK]: Tell us about Nexer and your mission. What inspired the founding?
[HS]: I have a background in user-centered content design and accessibility. When asked to set up a new consulting company in the UK for our parent company, Nexer AB, in 2007, I was keen to shape our hiring, services and sector focus around digital inclusion and social impact. I’m happy to say that we have grown to 100+ people across the UK, offering user research, product and service design to help make products and services more accessible and inclusive. We do this primarily in the health, government, charity and education sectors, meaning we are lucky to work on projects affecting a huge audience on behalf of our clients.
[AK]: Can you tell us a bit about the offer(s) you have available on the marketplace?
[HS]: We offer a range of services, from role-based training to accessibility audits, including an informative awareness session on Accessibility and Inclusion designed to raise awareness across teams and help organisations establish where they are on their accessibility journey, and where they’d like to be. This townhall-style session is perfect for businesses of all shapes and sizes and includes senior stakeholder participation to establish buy-in, which is a crucial step in promoting accessibility as a core value.
Our Nexer Digital accessibility team, some of whom have personal lived experience, leads the session. They share valuable insights on inclusive design and the challenges faced by users with disabilities when interacting with non-inclusive products, including those built with Microsoft products. The session also explores how Microsoft 365 and Teams can empower businesses to better support employees, customers, and citizens for more accessible and equitable digital experiences.
[AK]: How is Nexer helping customers make the most of Microsoft Teams and M365 Copilot, from an accessibility and disability inclusion perspective?
[HS]: We’ve been exploring how Copilot and M365 can make the workplace more accessible for users with disabilities. Through sharing their lived experience, our “Access at Nexer” Employee Resource group has been investigating the barriers that exist for individuals with different access needs, and how we can leverage built-in features including captioning and screen readers to address them.
We’ve also been interested in features like Copilot’s ability to summarise information from Teams and Outlook, generate transcripts, and convert data into accessible formats, all of which can reduce reliance on manual notetaking, reduce the cognitive load of such tasks, and allow for easier collaboration. This has particular benefits for neurodivergent colleagues.
This translates directly into the kind of support we can offer our clients too. By helping them prepare for Copilot, testing with real users, and supporting them to mature their approach to accessibility through training and awareness-raising, we’re ensuring they’re optimised and ready to make the most of all the benefits that Copilot and the M365 suite can offer.
[AK]: Nexer has been a part of driving the Accessibility agenda forward, leveraging the Microsoft Accessibility Horizons framework. We’re excited to feature your work as part of Horizons 1- Adopt: Enhance colleague experiences. What has been your experience engaging customers on the topic?
[HS]: We have worked with Microsoft to map our approach to accessibility onto the Horizon levels. For us, this means categorising our awareness-raising, audits and training under the headings of Engage, Equip and Embed, taking clients from building knowledge, skills and capacity through to active advocacy and communities of practice. Microsoft have been hugely supportive in helping us develop this model. Clients are now opening their minds to the concept of colleague experience, where we are sharing guidance, use cases and experiments from our work with M365 and Copilot. We feel positive that we can use this framework to bring inclusion to the workplace and enhanced usability to corporate tools, as well as help shape policy around access to work, procurement and support for employees.
[AK]: How does your work align and support the UN SDGs? Can you share how work with customers has created business value and supported positive inclusion outcomes?
[HS]: Our work promotes the prioritisation of accessibility in the workplace, aligning with several UN Sustainable Development Goals (SDGs). By making corporate tools and digital workplaces more inclusive through the accessibility features found in Microsoft 365, we help our clients foster more equitable work environments (SDG 8: Decent Work and Economic Growth). Our approach also contributes to SDG 16: Peace, Justice and Strong Institutions by promoting a more inclusive society, and we aim to create and promote fairer, accessible workplaces, both physical and digital, where everyone can participate and feel welcome.
Our work with Bupa, a major health insurance company with 45 million customers, really demonstrates how accessibility efforts can create both business value and positive inclusion outcomes (SDG 8: Decent Work and Economic Growth).
Through accessibility audits, inclusive usability testing, and training programs, we helped Bupa identify and address accessibility barriers across their digital platforms, including mobile and web. This programme of work led to a more inclusive user experience for their diverse customer base.
This project also fostered a cultural shift within Bupa. Through role-based training, we empowered staff from across the organisation, including the C-suite, with the knowledge and tools to prioritise accessibility, creating a more inclusive work environment and aligning with SDG 8’s focus on promoting decent work for all. This commitment from Bupa’s leadership also secured ongoing resources for continued progress within the organisation.
[AK]: How do you suggest other Microsoft partners and all organizations start or grow their accessibility journey?
[HS]: Organizations can often be nervous about sharing their progress with accessibility, for fear of being told what they haven’t yet fixed. Litigation and a lack of understanding of where to start can be a real blocker to organisations talking about the subject. It is a journey though, so at Nexer, we always encourage our customers to share every step taken. Even better if they can do this in the context of a tools audit, product roadmap or accessibility statement, where they acknowledge what they’ve achieved and are transparent about the work still to be done. Making a commitment is vital, and the Horizon model works perfectly in this context, as it’s about raising awareness, building confidence and creating mature communities of practice. The more people who share their progress, the greater the encouragement for others to follow.
[AK]: What are you most proud of in your journey building/leading Nexer? What’s next?
[HS]: We’ve built a real sense of community around accessibility over the last 20 years, which extends far beyond our own people and our immediate client work. This includes the relationship we have with Microsoft but also the partners we share in common, such as Purify Technology or Anywhere365. We help them understand the practical applicability of accessibility in their own work, from making meetings and Teams rooms more inclusive to creating contact centre scripts that seek to engage rather than alienate. It’s collaboration over competition. We speak at conferences, host meet-ups, work with freelancers and give accessibility a stage at our Camp Digital conference each year, and this network powers us forward. The next step will be to harness the true potential Copilot has for organizational inclusion, from access to work and on-boarding through to making corporate platforms usable and supportive.
_______________________________________________________________________________________________________
Join the marketplace community today! Just click “join” on the upper right corner of our marketplace community page. You can also subscribe to the community to stay updated on the latest stories of how these inspiring leaders carved their career paths, what lessons they learned along the way, and more.
Resources:
Join ISV Success
Join the marketplace community
Join the Microsoft #BuildFor2030 Initiative, a call-to-action for Microsoft partners to drive changemaking, innovation, and collective impact, to help advance the United Nations Sustainable Development Goals. Hear our partners’ perspective on participating.
Attest as a social impact or diverse business in Partner Center and be discovered on the marketplace.
Watch how Nexer drives inclusive colleague experiences forward with Accessibility Horizons 1
Microsoft Tech Community – Latest Blogs
Updating the Microsoft Family Safety app – Microsoft Support
Learn how to update the Microsoft Family Safety app by locating your app version.
Simulink Requirement Editor: Indentation of Requirements upon importing them from Excel
I want to import Requirements from an Excel file in the Requirements Editor of Simulink (2018a). I would like to be able to "group", or unfold/collapse, requirements according to their topic, as shown in the example "Migrating Requirements Management Interface Data to Simulink® Requirements™" (https://ch.mathworks.com/help/slrequirements/examples/_mw_f304f626-0b76-4fbf-a313-1ab233014a0b.html) when requirements are imported from a Word file (where apparently Bookmarks are recognized for that).
Is there a way to import requirements from Excel so that they appear in a collapsible structure in the Requirements Editor? I have tried the regex approach, where I specified a regex for the unique Req IDs (the IDs are recognized correctly, i.e. they turn red in the newly opened Excel file that appears when pressing "Preview", and after the last item in a line there’s a green item added, which I assume is a line-termination item), but the mechanics of when a new collapsible item is created are unclear to me.
Are named Excel cells the way to go? If yes, what’s the strategy?
Thanks for your help!
Stefan
requirements editor, simulink, import excel requirements, 2018a, regex MATLAB Answers — New Questions
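As a general illustration (not Simulink-specific, and using hypothetical requirement lines), a collapsible hierarchy is typically derived from structure in the requirement IDs themselves; with a dotted numbering scheme, a regex can extract each ID and its nesting depth like this:

```python
import re

# Hypothetical requirement lines; the dotted numbering in each ID
# is what defines the parent/child (collapsible) structure.
lines = [
    "REQ-1 Top-level requirement",
    "REQ-1.1 Child requirement",
    "REQ-1.2 Another child",
    "REQ-2 Second top-level requirement",
]

# Match an ID like REQ-1 or REQ-1.2, then the requirement text.
id_pattern = re.compile(r"^(REQ-\d+(?:\.\d+)*)\s+(.*)$")

for line in lines:
    match = id_pattern.match(line)
    req_id, text = match.groups()
    depth = req_id.count(".")  # nesting level = number of dots in the ID
    print("  " * depth + f"{req_id}: {text}")
```

This is only a sketch of the general idea; whether the Simulink importer infers hierarchy this way from a regex-matched ID is exactly the question being asked.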
Can I get user email addresses from license server checkout/checkin log?
Hello,
I would like to find a way to get the email address of users of the MATLAB client software whenever they checkout/checkin a software license. Looking at the current software license log for FlexLM, I only see the system name registered for checkout/checkin events. Can you please tell me if there is a way to also include the user’s email address in the log?
Thank you,
-brian
license, checkout, user email
What are the payment options for Training Classes?
What are the payment options for Training Classes?
training, payment
Automatic Age Calculation in Word Document
Hi everyone,
I am writing a Business Plan and I’d like to insert a person’s age automatically, as I don’t know when this document will be needed. I just want to make the document “smarter” by automating some things, so I don’t have to worry about updating it in x years when it is needed.
After a couple of hours of research and endless attempts, I have reached the end of my patience. The last thing I tried was working with fields and inserting formulas I found online.
Here is what I have written in a field (inserted by pressing CTRL+F9): { =INT( { DATE \@ "yyyy" } + { DATE \@ "M" }/12 + { DATE \@ "d" }/365.25 - 1989 - 8/12 - 25/365.25 ) }
Here you can see that I want to calculate the age today of this person (DOB: 25 August 1989).
Any help here is greatly appreciated. Not sure if it is better to work with fields here or macros. Happy for every solution which is “easy” to implement 🙂
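For reference, the arithmetic that field attempts (a decimal-year difference, truncated to a whole number) can be sketched in Python, using the DOB of 25 August 1989 from the question:

```python
from datetime import date


def age_in_years(dob, today=None):
    """Age in whole years, mirroring the INT(decimal-year difference)
    approach used in the Word field above."""
    if today is None:
        today = date.today()
    # Decimal-year approximation: year + month/12 + day/365.25
    decimal_today = today.year + today.month / 12 + today.day / 365.25
    decimal_dob = dob.year + dob.month / 12 + dob.day / 365.25
    return int(decimal_today - decimal_dob)


# Evaluated on a fixed date for reproducibility
print(age_in_years(date(1989, 8, 25), date(2024, 6, 1)))  # → 34
```

Note this decimal-year approximation can be off by a day or so around the birthday itself, which is why the field-code version is usually considered “good enough” rather than exact.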
XIAD Train the Trainer Events for Partners
The Microsoft In a Day (XIAD) events program is thrilled to announce upcoming XIAD Train the Trainer events for partners:
Automation in a Day (AuIAD) – Friday June 14, 2024 – 9am-5pm Central European Standard Time (UTC +2)
Power Pages in a Day (PPIAD) – Friday June 21, 2024 – 9am-5pm Central European Standard Time (UTC +2)
Register for an upcoming session at: https://aka.ms/xiadTTT
This is a great opportunity for partners interested in delivering these events to learn the content, event delivery tips and best practices from an experienced partner. For more information about the XIAD program, please visit https://aka.ms/XIADPartnerOpportunity
Still waiting for my order to be processed
Dear sirs,
I am still waiting for my order to be processed. I am frankly quite disappointed that it has not been processed yet. I decided to join you after some investigation. Please help expedite my order and send me the details.
Best regards
Pálmi Ragnar Pétursson
thingspeak