Category: News
Why are the final values for velocity and acceleration from bsplinepolytraj() always equal to zero?
When creating splines using bsplinepolytraj(), the last values of the x and y components of velocity and acceleration are always zero. Here's an example from the documentation:
% Interpolate with B-Spline
% Create waypoints to interpolate with a B-Spline.
wpts1 = [0 1 2.1 8 4 3];
wpts2 = [0 1 1.3 .8 .3 .3];
wpts = [wpts1; wpts2];
L = length(wpts) - 1;
% Form matrices used to compute interior points of control polygon
r = zeros(L+1, size(wpts,1));
A = eye(L+1);
for i= 1:(L-1)
A(i+1,(i):(i+2)) = [1 4 1];
r(i+1,:) = 6*wpts(:,i+1)';
end
% Override end points and choose r0 and rL.
A(2,1:3) = [3/2 7/2 1];
A(L,(L-1):(L+1)) = [1 7/2 3/2];
r(1,:) = (wpts(:,1) + (wpts(:,2) - wpts(:,1))/2)';
r(end,:) = (wpts(:,end-1) + (wpts(:,end) - wpts(:,end-1))/2)';
dInterior = (A\r)';
% Construct a complete control polygon and use bsplinepolytraj to compute a polynomial with the new control points
cpts = [wpts(:,1) dInterior wpts(:,end)];
t = 0:0.01:1;
[q, dq, ddq, ~] = bsplinepolytraj(cpts, [0 1], t);
The values displayed by
disp(dq(:,end))
and
disp(ddq(:,end))
are both zero. I feel like this is wrong. Why are these values zero, and how can I get a non-zero answer?
bsplinepolytraj, spline, curve fitting, robotics, trajectory MATLAB Answers — New Questions
Show Roll up Date in Delivery Plan based on Sprint Planning
Hello,
I am currently managing a T&T with the JIRA Instance from the client. In JIRA you can create a delivery plan (JIRA Plans) that automatically updates the duration of the parent Work items (Initiative / Epic) based on the Stories which are assigned to the sprint.
This means the team “just” assigns their stories to the current and future sprints and you will automatically get a complete Delivery Plan for the project.
My question:
Is it possible in “ADO Delivery Plans” to have the high-level work items (Initiative, Epic, Feature) automatically span multiple sprints based on the child stories and the sprints they are assigned to, without manually setting start and end dates for the work items (Initiative, Epic, Feature)?
See Example from JIRA below – the dashed lines show that the date for the initiative and the Epics are based on the stories and their sprint assignment.
Thank you very much for your input!
NOTE: Maybe there is an extension that we can buy to make this happen.
Read More
Q.B has encountered a problem sending your usage data after new update?
I’m facing an issue with Q.B displaying the message “Q.B has encountered a problem sending your usage data.” What does this mean, and how can I fix it?
Read More
Auto-scroll an Excel workbook
Hi Guys,
Looking for some assistance. I am hooking up a mini PC to a Samsung Smart TV to display an Excel spreadsheet, basically highlighting what projects we have on and some info in a table.
The issue is it won't all fit on one page. Is there any way to have the workbook auto-scroll to the bottom of the table and then start again from the top?
I'm presuming this can be done through VBA, but I haven't used it much, so if someone has a guide or a method of doing this so it looks professional in the office, please let me know. Just in case this matters too: the document will be stored on a SharePoint site and opened live. I'm not sure if the VBA carries over, but users will be updating the table from their own machines, and hopefully the copy on the TV will update automatically.
Thanks
Read More
remove password from ms excel file in ms office professional 2021
Hi Friends,
I want to totally remove the password from my excel file. Please let me know how to go about it. Thanks
Read More
Schedule report chart editing
Hello Guys,
I am trying to edit the chart to hide the resource legends that have no value or are not used. The bar chart itself is very small, but the unused legends are taking up most of the space in the chart. How can I fix it?
Read More
migrate files from OneDrive
hi all,
I'm looking for the best way to migrate over thousands of files from OneDrive to SharePoint. Could you please help?
Read More
Error when trying to sign the contract Microsoft AI Cloud Partner Program
We are unable to sign the contract due to an error. Please take a look at the screenshot. Can I ask for help in resolving this?
Read More
Azure Monitor Alerts: Log Search Alerts with Dynamic Thresholds
Azure Monitor now introduces Dynamic Thresholds for Log Search Rules as well, revolutionizing how you set up and monitor log search alerts. Say goodbye to manual threshold tuning and hello to intelligent, adaptable monitoring.
Here’s why dynamic thresholds are a game-changer:
Automatic Calibration: Dynamic thresholds calculate the right alert levels for you. They adjust as your system evolves, ensuring timely alerts without false positives.
Smart Learning: Dynamic thresholds analyze historical data, learning patterns and trends. They adapt to your application's unique behavior, whether it's daily spikes or weekly lulls.
Alerting At Scale: Create a single rule for any multi-dimensional alert. Dynamic thresholds define a different alert threshold band for every dimension combination.
Effortless Setup: Just enable dynamic thresholds; no specific knowledge of the data is needed to set up alert thresholds.
Dynamic thresholds empower you to stay proactive, minimize downtime, and keep your systems running smoothly.
Use Cases
Here are some use cases for dynamic thresholds:
Use Case: Monitoring CPU Behavior in Virtual Machines
Background: Users can now calculate guest VM metrics using the Perf table in Log Analytics, enabling the creation of a single alert rule for all your VMs across different regions using dimensions. Previously, customers could only set up dynamic threshold metric alerts for host CPU usage.
Goal Statement: The primary goal of this use case is to monitor the CPU behavior within virtual machines (VMs) and detect irregular patterns that may indicate performance issues.
Scenario definitions:
Problem Identification:
The team wants to ensure optimal performance and identify any CPU-related issues promptly.
Use Case Description:
The CPU utilization data is being collected from each VM.
The system uses the model to analyze the CPU behavior over time, looking for deviations from the expected pattern.
Deviations may include sudden spikes, prolonged high usage, or unexpected drops in CPU utilization.
Trigger:
Azure Monitor triggers a log search alert once CPU usage is higher than the regular pattern, meaning the value falls outside the upper boundary.
Benefits:
Early detection of CPU-related problems helps prevent performance degradation.
Proactive monitoring ensures efficient resource utilization.
Improved system stability and responsiveness.
In the Perf table there is an option to monitor other counter values instead of CPU. Examples can be found here.
ARM template example:
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "scheduledqueryrules_PerfDemoRule_name": {
      "defaultValue": "PerfDemoRule",
      "type": "String"
    },
    "workspaces_PerfDemoWorkspace_externalid": {
      "defaultValue": "/subscriptions/XXXX-XXXX-XXXX-XXXX/resourceGroups/XXXX/providers/Microsoft.OperationalInsights/workspaces/PerfDemoWorkspace",
      "type": "String"
    }
  },
  "variables": {},
  "resources": [
    {
      "type": "microsoft.insights/scheduledqueryrules",
      "apiVersion": "2024-01-01-preview",
      "name": "[parameters('scheduledqueryrules_PerfDemoRule_name')]",
      "location": "eastus2",
      "properties": {
        "displayName": "[parameters('scheduledqueryrules_PerfDemoRule_name')]",
        "severity": 3,
        "enabled": true,
        "evaluationFrequency": "PT5M",
        "scopes": [
          "[parameters('workspaces_PerfDemoWorkspace_externalid')]"
        ],
        "targetResourceTypes": [
          "Microsoft.Compute/virtualMachines"
        ],
        "windowSize": "PT5M",
        "criteria": {
          "allOf": [
            {
              "query": "Perf | where CounterName == \"Available MBytes\" and InstanceName == \"_Total\" | project TimeGenerated, CounterValue, Computer, _ResourceId\n",
              "timeAggregation": "Average",
              "metricMeasureColumn": "CounterValue",
              "dimensions": [],
              "resourceIdColumn": "_ResourceId",
              "operator": "GreaterThan",
              "alertSensitivity": "High",
              "criterionType": "DynamicThresholdCriterion",
              "failingPeriods": {
                "numberOfEvaluationPeriods": 1,
                "minFailingPeriodsToAlert": 1
              }
            }
          ]
        },
        "autoMitigate": false
      }
    }
  ]
}
Use Case: Monitoring Network Behavior in Application Insights Virtual Machines
Goal Statement: The primary goal of this use case is to monitor the network write behavior within virtual machines (VMs) and detect irregular patterns that may indicate performance issues or anomalies.
Scenario Definitions:
Problem Identification:
The team aims to ensure optimal performance and promptly identify any network write-related issues within their VMs.
Use Case Description:
The system periodically collects network write data from each VM and applies dynamic threshold models.
The models analyze the network write behavior over time, specifically looking for deviations from the expected pattern.
Deviations may include sudden spikes, prolonged high usage, or unexpected drops in network write activity.
Trigger:
Azure Monitor triggers a log search alert when network write behavior exceeds the regular pattern, indicating that the value is beyond the upper boundary.
Benefits:
Early detection of network write-related problems helps prevent performance degradation.
Proactive monitoring ensures efficient resource utilization.
Improved system stability and responsiveness.
ARM template example:
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "scheduledqueryrules_LogSearch1ActionGroup_name": {
      "defaultValue": "LogSearch1ActionGroup",
      "type": "String"
    },
    "components_ACME_Portal_externalid": {
      "defaultValue": "/subscriptions/XXXX-XXXX-XXXX-XXXX/resourceGroups/XXXX-XXXX/microsoft.insights/components/ACME-Portal",
      "type": "String"
    }
  },
  "variables": {},
  "resources": [
    {
      "type": "microsoft.insights/scheduledqueryrules",
      "apiVersion": "2024-01-01-preview",
      "name": "[parameters('scheduledqueryrules_LogSearch1ActionGroup_name')]",
      "location": "eastus",
      "properties": {
        "displayName": "[parameters('scheduledqueryrules_LogSearch1ActionGroup_name')]",
        "severity": 3,
        "enabled": true,
        "evaluationFrequency": "PT5M",
        "scopes": [
          "[parameters('components_ACME_Portal_externalid')]"
        ],
        "targetResourceTypes": [
          "microsoft.insights/components"
        ],
        "windowSize": "PT30M",
        "criteria": {
          "allOf": [
            {
              "query": "InsightsMetrics | where Origin == \"vm.azm.ms\" | where Namespace == \"Network\" and Name == \"WriteBytesPerSecond\" | extend NetworkInterface = tostring(todynamic(Tags)[\"vm.azm.ms/networkDeviceId\"]) | summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, NetworkInterface",
              "timeAggregation": "Average",
              "metricMeasureColumn": "AggregatedValue",
              "dimensions": [
                {
                  "name": "Computer",
                  "operator": "Include",
                  "values": "[[parameters('computersToInclude')]"
                },
                {
                  "name": "NetworkInterface",
                  "operator": "Include",
                  "values": "[[parameters('networkInterfacesToInclude')]"
                }
              ],
              "operator": "GreaterThan",
              "alertSensitivity": "High",
              "criterionType": "DynamicThresholdCriterion",
              "resourceIdColumn": "_ResourceId",
              "failingPeriods": {
                "numberOfEvaluationPeriods": 1,
                "minFailingPeriodsToAlert": 1
              }
            }
          ]
        },
        "autoMitigate": false
      }
    }
  ]
}
How to create a Dynamic Threshold ARM template
You can easily change a log search rule template with a static threshold into a dynamic one by making the following changes (a minimal before/after sketch follows this list):
In the "allOf" condition:
Add "criterionType": "DynamicThresholdCriterion".
Add "alertSensitivity".
Remove the "threshold" parameter.
Update the api-version in the template to "2024-01-01-preview".
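For illustration only, here is a minimal sketch of a single criterion inside "allOf" before and after the change. The query placeholder, the threshold value of 80, and the other property values are examples, not values taken from the templates above; the rest of the rule (scopes, windowSize, evaluationFrequency) stays the same.
Static threshold criterion (before):
{
  "query": "<your KQL query>",
  "timeAggregation": "Average",
  "metricMeasureColumn": "CounterValue",
  "operator": "GreaterThan",
  "threshold": 80,
  "failingPeriods": {
    "numberOfEvaluationPeriods": 1,
    "minFailingPeriodsToAlert": 1
  }
}
Dynamic threshold criterion (after):
{
  "query": "<your KQL query>",
  "timeAggregation": "Average",
  "metricMeasureColumn": "CounterValue",
  "operator": "GreaterThan",
  "alertSensitivity": "High",
  "criterionType": "DynamicThresholdCriterion",
  "failingPeriods": {
    "numberOfEvaluationPeriods": 1,
    "minFailingPeriodsToAlert": 1
  }
}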
Summary
In the world of monitoring and alerting, precision matters. Enter Dynamic Thresholds—a game-changer for Log Search Rules. Here’s why they’re essential:
Anomaly Detection:
Dynamic thresholds rely on advanced algorithms to calculate expected performance ranges based on historical data.
They identify anomalies—sudden spikes, drops, or irregular patterns—that warrant attention.
Efficiency Boost:
No more manual threshold tuning. Dynamic thresholds adapt automatically.
Scale alerts across hundreds of dimension-combination series with a single rule.
Stay Ahead:
Early detection prevents performance degradation.
Proactively manage resource utilization for improved stability and responsiveness.
Dynamic thresholds empower you to be proactive, responsive, and precise.
Microsoft Tech Community – Latest Blogs – Read More
How do I get run time or system time of my Speedgoat target computer in R2020b?
I'm upgrading from R2019a to R2020b and I can't find analogous blocks to the "Elapsed Time" and "Time Stamp Delta" blocks to get run time or clock time from my Simulink Real-Time (SLRT) target computer.
MATLAB Answers — New Questions
Training a neural network for different operating points
Hello,
I want to train a neural network to predict the temperature of an electrical machine at different operating points.
I have input data in the form of:
a 4×1 cell array, each cell containing 101×3 elements
So the first cell contains the data for the first operating point, the second for the second, and so on.
And target data:
a 4×1 cell array, each cell containing 101×1 elements
where the first cell contains data for the first operating point, the second for the second, and so on.
My question is: which input layer should I use so that the data is treated correctly?
matlab, neural network, deep learning MATLAB Answers — New Questions
How to fix Polyspace CodeProver Orange warnings due to + operator
Hello,
I am getting a Polyspace Code Prover orange Overflow warning due to the + operator in the attached code.
How can I fix these issues, given that we are sure the expression will not produce a result that exceeds the range of int32?
codeprover, orange, overflow, +operator MATLAB Answers — New Questions
Problems with displaying country image in SharePoint column
Hi,
I have a SharePoint choice column for selecting a country based on the ISO code (US, BE, NL, ES, …).
I used JSON column formatting to display the country flag:
{
  "$schema": "https://developer.microsoft.com/json-schemas/sp/v2/column-formatting.schema.json",
  "elmType": "div",
  "children": [
    {
      "elmType": "img",
      "attributes": {
        "src": "=('https://flagcdn.com/w20/' + toLowerCase(@currentField) + '.png')"
      }
    },
    {
      "elmType": "span",
      "txtContent": "@currentField"
    }
  ]
}
Problem: when I edit the column, the flags are shown. When displayed in view mode, the flags are not shown, only the two-digit code! When I look at the source code, something very strange: there are two "src" parameters in the img tag: <img src="" data-untrusted-src="https://flagcdn.com/w20/es.png">. Any help will be appreciated.
Read More
Service Title changing to last attendee booked on
Hi,
We’re currently experiencing some strange behaviour in our scheduled services, whereby the title of the service is automatically updating to the last attendee’s name that booked onto the service.
After further conversations with our users who have independent booking calendars, the same behaviour is being mirrored in each calendar. It’s worth confirming that there have been no changes/configurations made our side, so we’re somewhat unsure if MS has rolled out any product changes that could have impacted or created this error.
Can you advise if this is a known issue and what action can be taken to overcome it?
Read More
Delete Outlook shared inbox emails older than 45 days
Hello All,
I have an Outlook shared inbox which receives 5,000 emails every day. I want to set up a Power Automate flow to delete emails that are older than 45 days.
I hope someone out there would be able to help.
Thanks in advance.
Read More
Copilot for Microsoft 365: Datacenter Investments, Customer Spend, and Black Hat Exploits
Market analysts question if companies like Microsoft will ever generate a return on their AI investment. That hasn’t stopped Microsoft spending $19 billion in FY24 Q4, so they must be hopeful. Meanwhile, at Black Hat USA 2024, a presentation exploring some vulnerabilities in Copilot should make all Microsoft 365 tenants with Copilot consider how to secure their organization better.
https://practical365.com/copilot-for-microsoft-365-black-hat-2024/
Read More
Help: Highlighting if incorrect value pair in two columns is entered
Hello,
I am very new to Power Automate and am having some issues getting my head around a problem with conditional formatting, or some other way of highlighting a certain condition, in my SharePoint list. Maybe someone can help me with the following scenario or give me some pointers in the right direction.
I have one SharePoint list which is used to track various activities of processes that run on different products.
Column A is of the ‘text’ type and contains a process ID (e.g., ID-111). The column can contain multiple entries of the same process ID.
Column B is very similar. It is also of the ‘text’ type and contains the product ID (e.g., P-999). As above, the column can contain multiple entries of the same product ID.
Columns C-… contain various other information, but are irrelevant to the question.
What I want is to alert the user via conditional formatting, other forms of highlighting or a dedicated ‘alert’ column if an incorrect ‘process ID <-> product ID’ pair was inserted:
Each process ID is only allowed to have a single product ID associated with it. It is OK to have multiple row entries with the same process ID, but it always has to have the same product ID.
It is also OK to have another process ID associated with the same product ID, but then again the rule above has to apply.
Example:
Process ID // Product ID // Column C // ALERT
ID-111 // P-999 // … // OK
ID-111 // P-999 // … // OK
ID-222 // P-999 // … // OK
ID-222 // P-999 // … // OK
ID-111 // P-888 // … // ERROR
I am quite flexible how the error can be highlighted. It can highlight all rows of the respective process ID or only the newest with the mismatch. Also it does not matter if the product ID field or the process ID field are highlighted or if a dedicated alert column is used.
Previously I have used XLookUp in Excel in combination with help columns to do the trick. But I am somewhat lost at the moment in SharePoint. Some help would be much appreciated 🙂
Please let me know if my explanation was too confusing or if there are any open questions :)
Best,
Max
Read More
Table Design Menu ‘greyed out’
One of my Excel Tables is misbehaving.
The rows have stopped auto extending and I cannot access the Table Design Menu; it is greyed out!
WHAT DO I DO, PLEASE?
Many thanks!
Read More
Call to inv() function seems to have (undesired) impact on Thread pool or maxNumCompThreads()
I tried to parallelize parts of my code via parpool("Threads"). I also use maxNumCompThreads to limit the maximum CPU utilization. I use a parfor loop, which works as expected, meaning that the defined number of cores corresponds (more or less) to the total CPU utilization shown in the Windows Task Manager.
However, if a call to the inv() function appears somewhere in the code before the thread pool is started, then the CPU utilization of the thread pool is unexpectedly higher, although the number of cores and maxNumCompThreads are not changed. This happens reproducibly until MATLAB is restarted (and inv() is not called).
To obtain the unexpected behavior the input to inv() must exceed a certain size: with inv(rand(10)) nothing happens, but with inv(rand(1000)) the CPU utilization of the following parfor loop is unexpectedly high.
A simple script to reproduce the described behavior (in MATLAB R2023b):
maxNumCompThreads(12);
nCores = 12;
%% random parallel code
fprintf("Before inv function call:n");
pp = parpool("Threads", nCores);
for j = 1:3
tic;
parfor (i = 1:100)
A = rand(1000) / rand(1000);
end
toc
pause(2);
end
delete(pp);
%% matrix inverse
Minv = inv(rand(5000));
pause(5);
%% same random parallel code as before --> CPU utilization goes up to 100%
fprintf("nnAfter inv function call:n");
pp = parpool("Threads", nCores);
for j = 1:3
tic;
parfor (i = 1:100)
A = rand(1000) / rand(1000);
end
toc
pause(2);
end
delete(pp);
On a 56-core machine, the first parallel block runs with < 20% CPU utilization, while the second block has ~50%.
I get the following output:
Before inv function call:
Starting parallel pool (parpool) using the ‘Threads’ profile …
Connected to parallel pool with 12 workers.
Elapsed time is 5.852217 seconds.
Elapsed time is 2.475874 seconds.
Elapsed time is 2.447292 seconds.
Parallel pool using the ‘Threads’ profile is shutting down.
After inv function call:
Starting parallel pool (parpool) using the ‘Threads’ profile …
Connected to parallel pool with 12 workers.
Elapsed time is 23.414892 seconds.
Elapsed time is 24.350276 seconds.
Elapsed time is 23.297744 seconds.
Parallel pool using the ‘Threads’ profile is shutting down.
The increased core utilization for thread pools persists until MATLAB is closed and restarted. With parpool("Processes") I did not observe this behavior.
Am I missing anything here?
maxnumcompthreads, parpool, threads, inv MATLAB Answers — New Questions