Month: August 2024
Dynamic membership rule to include Description attribute
Hello Everyone,
I want to create an Entra dynamic group that looks at both the Department and the Description attributes. I have synced the “description (group)” and “description (user)” directory extensions via our AAD Connect, but the rule builder still won’t pick them up.
After syncing the attributes I go into Entra -> Groups -> Create New -> select Dynamic. While building the rule I click the Get Custom Extension Properties button and enter the App ID of my Tenant Schema app. After this I can see the “extension_<AppID>_description” property to select from.
However, when I put in a matching value, the rule validator does not pick up on it.
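For context, the kind of rule I am trying to validate looks roughly like this (the App ID, department, and value are placeholders):
(user.department -eq "Finance") and (user.extension_<AppID>_description -eq "Contractor")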
Microsoft Entra Connect Sync: Directory extensions – Microsoft Entra ID | Microsoft Learn
Is there anything I am missing?
Ultimately I would like to simply create an EXO dynamic group that can filter on these, but I cannot find a way.
Thanks for any input.
Read More
I will advise my client to sue Microsoft
I handle digital management for a business, a hip orthopedics clinic, meaning 90% of the patients are elderly with limited or no mobility. Bing Places created a profile for the business using data from Google Maps, but the clinic moved and the old address remained there. We only discovered this profile when several patients started going to the wrong location. We then saw that when using the Bing search engine instead of Google, the old address appeared. We created an account and claimed ownership of the business, but to complete the process, the account must be verified. We went through the process via phone and email and, although the notice says it was completed, it soon shows up as unverified, preventing any data updates. No other verification method is offered. We have sent countless user comments pointing out that the address is wrong, and we have sent several emails to the address listed in support, but we have never received a response. I sent a message on X and they provided a number that also does not work. This is my last attempt; after this, I see no other path, and we will have to take the matter to court. It is absurd that a big tech company is unable to resolve something so simple.
Read More
Learn how to customize and optimize Copilot for Security with the custom Data Security plugin
This is a step-by-step guided walkthrough of how to use the custom Copilot for Security pack for Microsoft Data Security and how it can empower your organization to understand cyber security risks in a context that allows it to achieve more, by focusing on the information and organizational context that reflects the real impact and value of investments and incidents in cyber security. We are working to add this to our native toolset as well, and we will post an update once it is ready.
Prerequisites
License requirements for Microsoft Purview Information Protection depend on the scenarios and features you use. To understand your licensing requirements and options for Microsoft Purview Information Protection, see the Information Protection sections of Microsoft 365 guidance for security & compliance and the related PDF download for feature-level licensing requirements. You also need to be licensed for Microsoft Copilot for Security; more information here.
Consider setting up Azure AI Search to ingest policy documents, so that they can be part of the process.
Step-by-step guided walkthrough
In this guide we will provide high-level steps to get started using the new tooling. We will start by adding the custom plugin.
Go to securitycopilot.microsoft.com
Download the DataSecurityAnalyst.yml file from here.
Select the plugins icon down in the left corner.
Under Custom upload, select upload plugin.
Select the Copilot for Security plugin and upload the DataSecurityAnalyst.yml file.
Click Add
Under Custom you will now see the plug-in.
The custom package contains the following prompts:
Under DLP, you will find these if you type /DLP.
Under Sensitive, you will find these if you type sensitive.
Let us get started using this together with the Copilot for Security capabilities
Anomalies detection sample.
Access to sensitive information by compromised accounts.
Document accessed by possible compromised accounts.
CVE or proximity to ISP/IPTags.
Tune Exchange DLP policies sample.
Purview unlabelled operations.
Applications accessing sensitive content.
Hosts that are internet accessible accessing sensitive content
Exchange incident sample prompt book.
SharePoint sample prompt book.
Anomalies detection sample
The DLP anomaly check looks at data from the past 30 days and inspects it on a 30-minute interval for possible anomalies, using a timeseries decomposition model.
The sensitive-content anomaly check uses a slightly different model due to the amount of data; it is based on the diffpatterns function, which compares weeks 3 and 4 with weeks 1 and 2.
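For orientation, an advanced hunting (KQL) query of this shape might look like the following; the table, filter, and bin size here are illustrative assumptions, not the packaged plugin code:
// Sketch: 30-day event series at 30-minute bins, scored for anomalies.
CloudAppEvents
| where Timestamp > ago(30d)
| where ActionType has "Dlp"  // assumed filter for DLP-related events
| make-series EventCount = count() default = 0 on Timestamp step 30m
| extend (Anomalies, Score, Baseline) = series_decompose_anomalies(EventCount)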
Access to sensitive information by compromised accounts.
This example checks reported alerts against users and the sensitive information they have accessed.
Who has accessed a Sensitive e-mail and from where?
Organizations can input a message subject or message ID to identify who has opened a message. Note this only works for internal recipients.
You can also ask the plugin to list any emails classified as Sensitive being accessed from a specific network or affected by a specific CVE.
Document accessed by possible compromised accounts.
You can use the plugin to check if compromised accounts have been accessing a specific document.
CVE or proximity to ISP/IPTags
This sample shows how to check how much sensitive information is exposed to a given CVE, as an example. You can pivot this based on ISP as well.
Tune Exchange DLP policies sample.
If you want to tune your Exchange, Teams, SharePoint, Endpoint or OCR rules and policies you can ask Copilot for Security for suggestions.
Purview unlabelled operations
How many of the operations in your different departments are unlabelled? Are any departments standing out?
In this context you can also use Copilot for Security to deliver recommendations and highlight the benefits that sensitivity labels bring.
Applications accessing sensitive content.
What applications have been used to access sensitive content? The plugin supports asking for the applications used to access sensitive content. This can be a fairly long list; you can add filters in the code to exclude common applications.
You can also zoom into what type of content a specific application is accessing.
What type of network connectivity has been made from this application?
Or what if you get concerned about the process that has been used and want to validate the SHA256?
Hosts that are internet accessible accessing sensitive content
Another threat vector could be devices that are accessible from the Internet while processing sensitive content. Check for processing of secrets and other sensitive information.
Promptbooks are a valuable resource for accomplishing specific security-related tasks. Consider them as a way to practically implement your standard operating procedure (SOP) for certain incidents. By following the SOP, you can identify the various dimensions in an incident in a standardized way and summarize the outcome. For more information on prompt books please see this documentation.
Exchange incident sample prompt book
Note: The above detail is currently only available using Sentinel, we are working on Defender integration.
Posts part of this series
Cyber Security in a context that allows your organization to achieve more
https://techcommunity.microsoft.com/t5/security-compliance-and-identity/cyber-security-in-a-context-that-allows-your-organization-to/ba-p/4120041
Guided walkthrough of the Microsoft Purview extended report experience https://techcommunity.microsoft.com/t5/security-compliance-and-identity/guided-walkthrough-of-the-microsoft-purview-extended-report/ba-p/4121083
How to build the Microsoft Purview extended report experience https://techcommunity.microsoft.com/t5/security-compliance-and-identity/how-to-build-the-microsoft-purview-extended-report-experience/ba-p/4122028
Microsoft Tech Community – Latest Blogs –Read More
Accelerate Cloud Potential for Your SAP Workloads on Azure with these Learning Paths
In today’s rapidly evolving digital landscape, businesses need to stay competitive by leveraging the latest tools and services. Together, SAP and Microsoft are not just providing these tools but also creating ecosystems that foster innovation and transformation. This collaboration enables businesses to unlock new potential for their SAP workloads on Azure.
Explore Azure for SAP Workloads
Streamline your SAP operations and maximize ROI with our comprehensive Azure training. Empower your team to seamlessly migrate, manage, and optimize SAP workloads on Azure, leveraging its robust infrastructure and specialized tools. This training will enhance your SAP performance, drive efficiency, and unlock innovation within your existing environment.
Highlight: New RISE SAP Learn Module
We are excited to introduce the new RISE SAP learn module, “Explore Azure networking for SAP RISE.” This module shows you how to use your Azure networks to connect to your SAP RISE architecture running in SAP’s Azure subscription. After completing this module, you will be able to differentiate the responsibilities of the SAP RISE team, Azure support, and the customer. You will also learn how to connect to SAP RISE with Azure virtual private network (VPN) peering, VNet-to-VNet, and with an on-premises network.
Learn from the pros with live, interactive Virtual Training Days
Virtual Training Days are instructor-led classes designed to equip individuals and teams with in-demand skills related to cloud migration, AI, and other cutting-edge technologies. We offer Virtual Training Days to help you migrate SAP to Azure, optimizing your performance, reliability, and scalability while reducing costs. In this session, Migrate and Modernize SAP on the Microsoft Cloud, you’ll find out how to secure and monitor SAP workloads on Azure. Come explore how this move enhances productivity, fosters secure collaboration, and gives you AI-powered insights for greater efficiency. Register for our next session here.
To help you and your team better take advantage of these benefits, we’ve created an array of learning materials and interactive events—from self-guided courses to Virtual Training Days, certifications to conferences—that build your cloud expertise. Our Microsoft Learn Learning Paths are curated collections of free, online modules and resources designed to help you build specific skills or gain knowledge in a particular technology or subject area.
By leveraging these resources, learning paths and the new RISE SAP learn module, you can ensure that your team is well-equipped to handle the complexities of SAP workloads on Azure. Whether you are looking to migrate, manage, or optimize your SAP environment, these resources will provide you with the knowledge and skills needed to succeed.
Join us on this journey to unlock new potential for your SAP workloads on Azure. Start exploring our learning resources today and take the next step towards transforming your business.
Microsoft Tech Community – Latest Blogs –Read More
Two step ahead autoregressive prediction
Is it possible to use the ar function in MATLAB to train models such as:
y(t+2)=a(1)u(t-1)+a(2)u(t-2)+…+a(p)u(t-p)
rather than:
y(t+1)=a(1)u(t-1)+a(2)u(t-2)+…+a(p)u(t-p)
I want to avoid predicting y(t+2) using y(t+1).
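To illustrate, this is the kind of direct two-step-ahead fit I have in mind (a sketch, assuming u and y are column vectors of equal length and p is the chosen order):
p = 4;                      % example model order
T = numel(u);
t = (p+1:T-2)';             % times where all regressors and the target exist
X = zeros(numel(t), p);
for k = 1:p
    X(:, k) = u(t - k);     % regressor columns u(t-1), ..., u(t-p)
end
a = X \ y(t + 2);           % direct least-squares estimate, no y(t+1) involved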
Many thanks
matlab, autoregression MATLAB Answers — New Questions
Problem running spm12 using R2019a on Centos 7
When I issue the spm command, it displays the startup window. After clicking on a button, I get:
(MATLAB:374189): GLib-GObject-WARNING **: 09:26:25.382: cannot register existing type ‘GtkObject’
(MATLAB:374189): GLib-GObject-CRITICAL **: 09:26:25.382: g_type_register_static: assertion ‘parent_type > 0’ failed
(MATLAB:374189): GLib-GObject-CRITICAL **: 09:26:25.382: g_type_add_interface_static: assertion ‘G_TYPE_IS_INSTANTIATABLE (instance_type)’ failed
(MATLAB:374189): GLib-GObject-WARNING **: 09:26:25.382: cannot register existing type ‘GtkBuildable’
(MATLAB:374189): GLib-GObject-CRITICAL **: 09:26:25.382: g_type_interface_add_prerequisite: assertion ‘G_TYPE_IS_INTERFACE (interface_type)’ failed
(MATLAB:374189): GLib-CRITICAL **: 09:26:25.382: g_once_init_leave: assertion ‘result != 0’ failed
(MATLAB:374189): GLib-GObject-CRITICAL **: 09:26:25.382: g_type_add_interface_static: assertion ‘G_TYPE_IS_INSTANTIATABLE (instance_type)’ failed
(MATLAB:374189): GLib-GObject-CRITICAL **: 09:26:25.383: g_type_register_static: assertion ‘parent_type > 0’ failed
and then it freezes.
OS: Linux 3.10.0-957.el7.x86_64
Compiler: gcc 6.3.0
Glib: 2.42.2
Java: 1.8.0_211
Any ideas on what needs to be fixed?
spm 12, r2019a, centos7 MATLAB Answers — New Questions
Unable to perform assignment because value of type ‘optim.problemdef.OptimizationExpression’ is not convertible to ‘double’.
Hello!
I’m working on an optimization problem where I need to manage the availability of multiple cores for task execution. My current approach involves finding the earliest available core and updating its availability after assigning a task. However, I’m encountering an issue when trying to update the hapsCoreAvailability array with the new start time and execution time.
the error is
Unable to perform assignment because value of type ‘optim.problemdef.OptimizationExpression’ is not convertible to
‘double’.
Error in Solve_Linear_Problem (line 355)
hapsCoreAvailability(coreIdx) = T_start + execTime;
Caused by:
Error using double
Conversion to double from optim.problemdef.OptimizationExpression is not possible.
here is the code
% Initialize the queue time matrix
Queue_time = optimexpr(N, numNodes, num_vehicles);
% Define variables
T_start = optimvar('T_start', 1, 'Type', 'continuous', 'LowerBound', 0);
% Constraints
Queue_constraints1 = [];
Queue_constraints2 = [];
% Track availability of each HAPS core
hapsCoreAvailability = zeros(1, hapsCapacity);
nodeQueues = cell(numNodes, 1); % Queue times for each node
% Process tasks by generation time
for taskIdx = 1:num_vehicles
    for subtaskIdx = 1:N
        for nodeIdx = 1:numNodes
            % Extract generation time, execution time, and uplink time
            taskGenTime = generation_times_matrix(subtaskIdx, nodeIdx, taskIdx);
            execTime = Execution_time(subtaskIdx, nodeIdx, taskIdx);
            uplinkTime = uplink_time(subtaskIdx, nodeIdx, taskIdx);
            % Calculate task arrival time considering uplink time
            taskArrivalTime = taskGenTime + uplinkTime;
            if nodeIdx == 4 % If assigned to HAPS
                % Find the earliest available core
                % [earliestCoreTime, coreIdx] = min(hapsCoreAvailability);
                coreIdx = -1;
                earliestCoreTime = inf;
                % Iterate through each core's availability time
                for i = 1:length(hapsCoreAvailability)
                    % Check if the current core's availability is earlier than the current earliest time
                    if hapsCoreAvailability(i) < earliestCoreTime
                        % Update earliest core time and index
                        earliestCoreTime = hapsCoreAvailability(i);
                        coreIdx = i;
                    end
                end
                % Start time is the maximum of arrival time and core availability
                % T_start should be greater than or equal to both T_arrival and T_core
                Queue_constraints1 = [Queue_constraints1, T_start >= taskArrivalTime];
                Queue_constraints1 = [Queue_constraints1, T_start >= earliestCoreTime];
                Queue_time(subtaskIdx, nodeIdx, taskIdx) = T_start - taskArrivalTime;
                % Update the core availability time
                hapsCoreAvailability(coreIdx) = T_start + execTime; %% here is the error
            else % For UAVs
                % Queue tasks based on previous completion
                if isempty(nodeQueues{nodeIdx})
                    startTime = taskArrivalTime;
                else
                    T_queue = nodeQueues{nodeIdx}(end);
                    % T_start should be greater than or equal to both T_arrival and T_queue
                    Queue_constraints2 = [Queue_constraints2, T_start >= taskArrivalTime];
                    Queue_constraints2 = [Queue_constraints2, T_start >= T_queue];
                end
                % Calculate the queue time for the current subtask
                Queue_time(subtaskIdx, nodeIdx, taskIdx) = T_start - taskArrivalTime;
                % Update departure time for the current subtask
                departureTime = T_start + execTime;
                nodeQueues{nodeIdx} = [nodeQueues{nodeIdx}, departureTime];
            end
        end
    end
end
% Use distinct field names so the first constraint set is not overwritten
prob.Constraints.queue_time_constraints1 = Queue_constraints1;
prob.Constraints.queue_time_constraints2 = Queue_constraints2;
How can I correctly update the hapsCoreAvailability array with the new start time and execution time in the context of my optimization problem? Is there an alternative way to manage and update core availability when dealing with OptimizationExpression objects?
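One direction I have been considering (a sketch, untested) is to keep the availability bookkeeping symbolic so the assignment becomes legal:
% Sketch (untested): declare the availability array as expressions, not doubles.
hapsCoreAvailability = optimexpr(1, hapsCapacity);   % all-zero expressions initially
% ...
hapsCoreAvailability(coreIdx) = T_start + execTime;  % valid for an optimexpr array
% Note: the numeric search "if hapsCoreAvailability(i) < earliestCoreTime" would
% also have to change, since expressions cannot be compared while building the
% model; one option is binary core-assignment variables with big-M constraints.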
Any advice or suggestions would be greatly appreciated!
Thank you!
optimization, multiple cores, parallel processing MATLAB Answers — New Questions
trainNetwork reports too many input arguments in 2024a
Transfer learning code, based on the help example, that runs in R2023b fails in R2024a:
Error using trainNetwork (line 191)
Too many input arguments.
What has changed in the R2024a version? I see that trainnet is now recommended and I can do that going forward, but I would expect old code still to run.
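For reference, this is the kind of trainnet call I expect to migrate to (a sketch, assuming an image datastore imdsTrain, a layer array layers, and a classification task):
options = trainingOptions("adam", Plots="training-progress"); % example options
net = trainnet(imdsTrain, layers, "crossentropy", options);   % replaces trainNetwork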
trainnetwork MATLAB Answers — New Questions
Azure Disaster recovery question !
Hello,
Our customer requested that we provide a solution for Azure DR, and I am not sure how to accomplish this.
Anyone, please help.
Scenario:
Primary Site
West Europe – Qatar Central
Secondary site
West Europe to North Europe
and all are IaaS VMs
Read More
Excel Formatting Help
Hello, all. I have a table that looks like this (below) with three columns merged and centered. I want to format the entire thing as a table and to include the merged cells. I also want the table to have the model number/description merged into those three as one table column. Is there any way to do this?
Read More
Critical Cloud Assets: Identifying and Protecting the Crown Jewels of your Cloud
Cloud computing has revolutionized the way businesses operate, with many organizations shifting their business-critical services and workloads to the cloud. This transition, and the massive growth of cloud environments, has led to a surge in security issues in need of addressing. Consequently, the need for contextual and differentiated security strategies is becoming a necessity. Organizations need solutions that allow them to detect, prioritize, and address security issues, based on their business-criticality and overall importance to the organization. Identifying an organization’s business-critical assets serves as the foundation to these solutions.
Microsoft is pleased to announce the release of a new set of critical cloud asset classifications in the critical asset management and protection experience, as part of the Microsoft Security Exposure Management solution and Cloud Security Posture Management (CSPM) in Microsoft Defender for Cloud (MDC). This capability enables organizations to identify additional business-critical assets in the cloud, allowing security administrators and security operations center (SOC) teams to efficiently, accurately, and proactively prioritize and address security issues affecting critical assets within their cloud environments.
Learn more about how to get started with Critical Asset Management and Protection in Exposure Management and Microsoft Defender for Cloud: Critical Asset Protection with Microsoft Security Exposure Management, Critical assets protection (Preview) – Microsoft Defender for Cloud
Criticality classification methodology
Over the past few months, we, at Microsoft, have conducted extensive research with several key objectives:
Understand and identify the factors that signify a cloud asset’s importance relative to others.
Analyze how the structure and design of a cloud environment can aid in detecting its most critical assets.
Accurately and comprehensively identify a broad spectrum of critical assets, including cloud identities and resources.
As a result, we are announcing the release of a new set of pre-defined classifications for critical cloud assets, encompassing a wide range of asset types, from cloud resources, to identities with privileged permissions on cloud resources. With this release, the total number of business-critical classifications has expanded to 49 for cloud identities and 8 for cloud resources, further empowering users to focus on what matters most in their cloud environments.
In the following sections, we will briefly discuss some of these new classifications, both for cloud-based identities and cloud-based resources, their integration into our products, their objectives, and unique features.
Identities
In cloud environments, it is essential to distinguish between the various role-based access control (RBAC) services, such as Microsoft Entra ID and Azure RBAC. Each service has unique permissions and scopes, necessitating a tailored approach to business-criticality classification.
We will go through examples of new business-critical rules classifying identities with assigned roles both in Microsoft Entra and Azure RBAC:
Microsoft Entra
The Microsoft Entra service is an identity and access management solution in which administrators or non-administrators can be assigned a wide range of built-in or custom roles to allow management of Microsoft Entra resources.
Examples of new business-criticality rules classifying identities assigned with a specific Microsoft Entra built-in role:
Classification: “Exchange Administrator”
Default Criticality Level: “High”
This rule applies to identities assigned with the Microsoft Entra Exchange Administrator built-in role.
Identities assigned this role have strong capabilities and control over the Exchange product, with access to sensitive information through the Exchange Admin Center, and more.
Classification: “Conditional Access Administrator”
Default Criticality Level: “High”
This rule applies to identities assigned with the Microsoft Entra Conditional Access Administrator built-in role.
Identities assigned this role are deemed to be of high importance, as it grants the ability to manage Microsoft Entra Conditional Access settings.
Azure RBAC
Azure role-based access control (Azure RBAC) is a system that provides fine-grained access management of Azure resources that helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to. The way you control access to resources using Azure RBAC is to assign Azure roles.
Example of a new criticality rule classifying identities assigned with specific Azure RBAC roles:
Classification: “Identities with Privileged Azure Role”
Default Criticality Level: “High”
This rule applies to identities assigned with an Azure privileged built-in or custom role.
Asset criticality classification within the Azure RBAC system necessitates consideration of different parameters, such as the role assigned to the identity, the scope in which the role takes effect, and the contextual business-criticality that lies within this scope.
Thus, this rule classifies identities which have a privileged action-permission assigned over an Azure subscription scope, in which a critical asset resides, thereby utilizing contextual and differential security measures. This provides the customer with a cutting-edge criticality classification technique for both Azure built-in roles, and custom roles, in which the classification accurately adapts to dynamic changes inside the customer environment, ensuring a more accurate reflection of criticality.
List of pre-defined criticality classifications for identities in Microsoft Security Exposure Management
Cloud resources
A cloud environment is a complex network of interconnected and isolated assets, allowing a remarkable amount of environment structure possibilities, asset configurations, and resource-identity interconnections. This flexibility provides users with significant value, particularly when designing environments around business-critical assets and configuring them to meet specific requirements.
We will present three examples of the new predefined criticality classifications as part of our release, that will illustrate innovative approaches to identifying business-critical assets.
Azure Virtual Machines
Examples of new criticality rules classifying Azure Virtual Machines:
Classification: “Azure Virtual Machine with High Availability and Performance”
Default Criticality Level: “Low”
Compute resources are the cornerstone of cloud environments, supporting production services, business-critical workloads, and more. These assets are created with a desired purpose, and upon creation, the user is presented with several types of configuration options, allowing the asset to meet its specific requirements and performance thresholds.
As a result, an Azure Virtual Machine configured with an availability set indicates that the machine is designed to withstand faults and outages, while a machine equipped with premium Azure storage indicates that it should withstand heavy workloads requiring low latency and high performance. Machines equipped with both are often deemed business-critical.
Classification: “Azure Virtual Machine with a Critical User Signed In”
Default Criticality Level: “High”
Resource-user interconnections within a cloud environment enable the creation of efficient, well-maintained, and least privilege-based systems. These connections can be established to facilitate interaction between resources, enabling single sign-on (SSO) for associated identities and workstations, and more.
When a user with a high or very high criticality level has an active session in the resource, the resource can perform tasks within the user’s scoped permissions. However, if an attacker compromises the machine, they could assume the identity of the signed-in user and execute malicious operations.
Azure Key Vault
Example of a new criticality rule classifying Azure Key Vaults:
Classification: “Azure Key Vaults with Many Connected Identities”
Default Criticality Level: “High”
Through the complex environments of cloud computing, where different kinds of assets interact and perform different tasks, lies authentication and authorization, supported by the invaluable currency of secrets. Therefore, studying the structure of the environment and how the key management solutions inside it are built is essential to detect business-critical assets.
Azure Key Vault is an indispensable solution when it comes to key, secrets, and certificate management. It is widely used by both business-critical and non-critical processes inside environments, where it plays an integral role in the smoothness and robustness of these processes.
An Azure Key Vault whose role is critical within a business-critical workload, such as a production service, may be used by a high number of different identities compared to other key vaults in the organization; in case of disruption or compromise, this could have adverse effects on the integrity of the service.
List of pre-defined criticality classifications for cloud resources in Exposure Management
Protecting the crown jewels of your cloud environment
Critical asset protection, identification, and management lie at the heart of the Exposure Management and Defender Cloud Security Posture Management (CSPM) products, enriching the experience by letting customers create their own custom business-criticality classifications and use Microsoft’s predefined ones.
Protecting your cloud crown jewels is of utmost importance, so staying on top of best practices is crucial. Some of our best-practice recommendations:
Thoroughly enabling protections in business-critical cloud environments.
Detecting, monitoring, and auditing critical assets inside the environments, by utilizing both pre-defined and custom classifications.
Prioritizing and executing the remediation and mitigation of active attack paths, security issues, and security incidents relating to existing critical assets.
Following the principle of least privilege by removing unnecessary permissions from overprivileged identities; such identities can be identified inside the critical asset management experience in Microsoft Security Exposure Management.
Conclusion
In the rapidly growing and evolving world of cloud computing, the increasing volume of security issues underscores the need for contextual and differentiated security solutions that allow customers to effectively identify, prioritize, and address security issues; the capability to identify an organization’s critical assets is therefore of utmost importance.
Not all assets are created equal. An asset of importance could take the form of a highly privileged user, an Azure Key Vault facilitating authentication for many identities, or a virtual machine created with high availability and performance requirements for production services.
Protecting customers’ most valuable assets is one of Microsoft’s top priorities. We are pleased to announce a new set of business-critical cloud asset classifications, as part of Microsoft Defender for Cloud and Microsoft Security Exposure Management solutions.
Learn more
Microsoft Security Exposure Management
Start with Exposure Management Documentation, Product website, blogs
Critical Asset Management documentation
Critical Asset Protection and how to get started in Microsoft Security Exposure Management blog post
List of Microsoft’s predefined criticality classifications: Link
Microsoft Security Exposure Management what’s new page
Microsoft Defender for Cloud
Microsoft Defender for Cloud (MDC) plans
Microsoft’s Cloud Security Posture Management (CSPM) documentation
Critical Asset Protection in Microsoft Defender for Cloud (MDC) documentation
Microsoft Tech Community – Latest Blogs –Read More
Scaling New Heights: Azure Red Hat OpenShift Now Supports 250 Nodes
Azure Red Hat OpenShift (ARO) is a fully managed Red Hat OpenShift service on Azure. We are excited to announce two significant enhancements to ARO’s capabilities:
The ability to configure multiple IP addresses per cluster load balancer is now generally available.
ARO clusters can now scale up to 250 worker nodes.
Previously, ARO clusters were limited to 62 worker nodes due to having only one IP (Internet Protocol) address associated with the cluster’s load balancer. By enabling multiple IP addresses for the load balancer, we have removed this bottleneck, offering organizations greater flexibility in expanding their deployments.
These enhancements significantly improve the scalability and adaptability of ARO public clusters, empowering organizations to scale their infrastructure more effectively. Our goal is to support even larger clusters, providing robust solutions for enterprises with extensive computational requirements. In this blog post, we will delve into the specifics of deploying large ARO clusters, explore a real-world use case, and provide essential information to help you get started with this powerful new capability.
Deploying Large-Scale ARO Clusters
For clusters with over 101 nodes, we recommend using the following control plane nodes (or similar, newer generation instance types):
Standard_D32s_v3
Standard_D32s_v4
Standard_D32s_v5
Here is a sample Azure CLI (command-line interface) command to deploy a cluster with Standard_D32s_v5 as the control plane nodes:
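The exact command may vary; a sketch with placeholder resource names (the VNet and subnets are assumed to already exist):
az aro create \
  --resource-group <resource-group> \
  --name <cluster-name> \
  --vnet <vnet-name> \
  --master-subnet <master-subnet> \
  --worker-subnet <worker-subnet> \
  --master-vm-size Standard_D32s_v5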
Deploying Infrastructure Nodes
For clusters with over 101 nodes, infrastructure nodes are required to separate cluster workloads (such as Prometheus) to minimize contention with other workloads. We recommend deploying three (3) infrastructure nodes per cluster for redundancy and scalability needs.
Recommended instance types for infrastructure nodes:
Standard_E16as_v5
Standard_E16s_v5
For detailed instructions on configuring infrastructure nodes, see Deploy infrastructure nodes in an Azure Red Hat OpenShift (ARO) cluster.
For detailed guidance on deploying large Azure Red Hat OpenShift cluster, see Deploy a large Azure Red Hat OpenShift cluster – Azure Red Hat OpenShift | Microsoft Learn
Adding IP Addresses to the Cluster
A maximum of 20 IP addresses can be added to a load balancer. One (1) IP address is needed per 65 nodes, so a cluster with 250 nodes requires a minimum of four (4) IP addresses.
To add IP addresses to the load balancer using Azure CLI, run the following command:
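A sketch, assuming the preview CLI's --load-balancer-managed-outbound-ip-count parameter (the flag name is an assumption and may differ by CLI version; names in angle brackets are placeholders):
az aro update \
  --name <cluster-name> \
  --resource-group <resource-group> \
  --load-balancer-managed-outbound-ip-count 4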
Alternatively, you can add IP addresses through a REST (Representational State Transfer) API (Application Programming Interface) call:
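A sketch using az rest; the api-version and property path are assumptions based on the ARO resource schema:
az rest --method patch \
  --url "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.RedHatOpenShift/openShiftClusters/<cluster-name>?api-version=2023-11-22" \
  --body '{"properties": {"networkProfile": {"loadBalancerProfile": {"managedOutboundIps": {"count": 4}}}}}'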
Caution: Before deleting a large cluster, scale the cluster down to 120 nodes or below.
A preview of the CLI is available to use this feature until the official CLI release is made available. For guidance on how to download and install the wheel extension file for this preview CLI, please refer to the documentation.
The power of 250 nodes
Traditionally, ARO public clusters were created with a public load balancer featuring a single public IP address for outbound connectivity. While this configuration worked well for many scenarios, it limited the maximum node count to 62. Now, with the ability to assign multiple additional public IP addresses to the load balancer, you can scale your cluster to the maximum supported number of nodes, unlocking new possibilities for your applications.
Key Features
Scale up to 20 IP addresses per cluster load balancer
Automatically adjusted outbound rules and frontend IP configurations
Increased maximum node count to 250 per cluster
Enhanced overall cluster scalability and performance
Use Case: High-Traffic E-Commerce Platform
Consider an e-commerce company, MegaShop, experiencing rapid growth. They have been running their platform on an ARO cluster but are approaching the 62-node limit. With the holiday season approaching, they need to scale up significantly to handle the expected traffic surge.
By implementing multiple IP addresses on their ARO cluster load balancer, MegaShop can:
Scale beyond the previous 62-node limit
Ensure smooth operations during peak traffic periods
Maintain high availability and performance for their customers
MegaShop’s DevOps team can easily update their existing cluster to use, for example, 10 IP addresses:
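A sketch of that update (cluster and resource group names are hypothetical, and the flag name follows the preview CLI assumption above):
az aro update \
  --name megashop-cluster \
  --resource-group megashop-rg \
  --load-balancer-managed-outbound-ip-count 10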
This simple change allows MegaShop to confidently scale their infrastructure to meet holiday demand without worrying about outbound connectivity bottlenecks.
Conclusion
The general availability of multiple IP address configuration for the ARO cluster load balancer empowers organizations to build and scale robust, enterprise-grade Kubernetes environments on Azure with greater flexibility than ever before.
Whether you are running a high-traffic e-commerce site, a data-intensive analytics platform, or any other scalable application, this new capability ensures that your ARO infrastructure can grow alongside your business needs.
Getting Started
New customers can get started by:
Setting up an Azure subscription
Installing the Azure CLI
Creating a new ARO cluster with the desired number of IP addresses
For more detailed information, best practices, and troubleshooting guides, visit the official Azure Red Hat OpenShift documentation and the Red Hat OpenShift documentation.
Embrace the power of scalability and take your ARO deployments to new heights with multiple IP addresses for your cluster load balancers!
Additional Resources
Getting Started with ARO
Red Hat OpenShift Kubernetes
OpenShift vs Kubernetes: What’s the Difference?
eBook: Getting started with Azure Red Hat OpenShift
Azure Red Hat OpenShift Workshop
Azure Red Hat OpenShift Learning Path
Azure Red Hat OpenShift Learning Hub
The TEI (Total Economic Impact) of Azure Red Hat OpenShift
Microsoft Tech Community – Latest Blogs –Read More
Cybersecurity in a context that allows your organization to achieve more
You don’t need us to tell you about the current cyber security threat landscape; if you are reading this blog post, you already know. You are also aware that the absence of evidence for a breach is not the same as not being breached, and that your cyber security posture is constantly being assessed by adversaries. This is not becoming easier with the boom of AI and related services, which is leading to a boom in data processing combined with new capabilities for threat actors. Or… could it?
We are excited to provide you with a series of posts that will help you use the new technology to your advantage. This series will help small to large organizations to achieve more with the Microsoft Cloud Ecosystem Security.
Whether you are a business leader or a technologist, this will spark ideas that help you achieve more. These abilities are fully customizable, and we are also adding new out-of-the-box features that can be used to replace these custom features. We will post updates as those become available.
The basis of this approach
How do you identify new security projects? How do you assess which security project you should fund? Are you uncertain if the program you funded has had the desired outcome? What cost is associated with a failed control? What is the positive financial impact of effective controls?
We think the answer to these questions is: By focusing on what the adversaries are after and the consequences of controls being bypassed. Much may change but the target is your crown jewels (across the dimensions of confidentiality, integrity and availability).
The benefit of this focus is that it is well aligned with the focus of the entire organization. Investments to be made can be clearly articulated in terms and values that are understood across the organization. From a technology perspective, it switches the focus to the adversaries’ goals (and how to prevent), which avoids a too-introspective view and approach to security. It also helps you to focus on the consequences of such a breach, the awareness of the consequences will guide you to implement the right type of mitigation based on the impact. Do not let technology get in the way of your decision-making. Allow a freer form of communication across the organization using the value the technology enables.
What are attackers after? Let’s ask Copilot for Security
Please go here to learn more about Copilot for security.
Are you able to tell how far away threat actors have been from this type of data in your system? Wouldn’t it be nice if every time you have an incident you could validate proximity to sensitive information? Before we go deep into this let’s zoom out.
Is there a way to visualize the impact that cyber security has in a business context?
Yes, if your organization is using Microsoft 365 Purview configured to capture file access and you have enabled the Microsoft Defender for Cloud Apps integration with Advanced Hunting (more in the technical document). This example provides an overview of the data that you can use. Organizational context such as department, data context such as the data types being accessed, and the types of cyber security incidents, including incident details, can be viewed at a high level or in detail. Pair this with your technology investments and you can show the gains from attacks prevented, as well as a view of incidents that penetrated further. With the contextual data you can associate a monetary cost with compromises as well as with effective protection.
What about non-Microsoft systems? To see the types of cross-platform systems that can be visualized, please see Connect apps to get visibility and control – Microsoft Defender for Cloud Apps | Microsoft Learn. We have not built visualizations for all these products, but if you follow the existing patterns, you can do so for your key applications.
We have added the ability to use Microsoft Defender for Endpoint data to output connections to sensitive systems from compromised devices. You can also use Copilot for Security as part of this work, bring in other contextual data you have in documents and in other forms and let Copilot for Security make the connections.
Do not limit this to reporting
Start tagging your incidents with the organizational context in mind. When communicating Cyber Security incidents to stakeholders use contextual data not technical details. Reporting on near misses and actual incidents should bring the actual financial impact and a steer for new investments.
For example, if you have a phishing incident, don’t just report the affected user and the type of phish. Instead, tag the incident with the class of sensitive information that may have been disclosed if the user was compromised. Even if the attack was successfully prevented.
Phishing is one of the most common attacks, so be realistic (anticipating your reaction); this type of data will support your investments. It also provides an important data point: what if this control is bypassed? What types of controls do I have between the attacker and the crown jewels? Which departments are targets? Is this a specific threat actor?
Time for another sample from Copilot for Security
Incidents like Anonymous IP are not especially alarming for most organizations; they may be used as supporting data.
But when looking at this same innocuous incident from Copilot for Security, we can note that this incident would benefit from the right type of tagging. The fact that an account key has been found in the open is concern enough. This tagging can be suggested directly by Copilot for Security, or, for the highest value, connect Copilot for Security with your security policy and tagging taxonomy.
Regularly use Copilot for Security to map out potential ways the attacker may have gone deeper using MITRE ATT&CK as an example. With that in mind what is the proximity to other sensitive content and systems? Use the Exposure management tools like Microsoft Secure Score to find areas you can improve. Armed with this knowledge you may find additional controls that should be set in place to limit the impact of one of the controls failing. Backing the investment decisions with data that matters to your business.
When you validate CVEs or software vendors for possible supply chain attacks, check the impact they may have on your sensitive content. This can validate your next actions, and you may even find types of attackers you weren’t aware of.
But don’t stop here: use Microsoft Defender for Cloud Apps to define networks and ISPs (see this for more information). This will allow you to capture this type of detail based on vulnerabilities or threat actors you know are coming from a specific network segment, together with the amount of sensitive information being processed at that location, and extend this business context to investments needed in that space.
Are there other areas where this can be used?
What if you need to move one department to another location or are divesting parts of your organization? What type of data is being processed by that department or location?
You can use Copilot for Security.
Or you can use the view from Power BI to start the conversation and filter on the types that are key to your operations.
Conclusion
Placing what is most valuable at the center will help you prepare for new and future threats. As your data landscape changes, you will be able to monitor it and spot weaknesses early, before they lead to increased risk. In a way, you can see this as training where you build your muscles around your data. Instead of meeting cyber incidents as a problem, you meet them as an opportunity to grow.
What’s next
Please see the new blog posts below and start building your own adaptation of this approach. This is the starting point, and you will see us make many advancements to allow you to grow further.
Copilot for Security Data Security Analyst plugin: https://techcommunity.microsoft.com/t5/security-compliance-and-identity/learn-how-to-customize-and-optimize-copilot-for-security-with/ba-p/4120147
Guided walkthrough of the Microsoft Purview extended report: https://techcommunity.microsoft.com/t5/security-compliance-and-identity/guided-walkthrough-of-the-microsoft-purview-extended-report/ba-p/4121083
Microsoft Tech Community – Latest Blogs
I want to start a YouTube channel where I develop flight control algorithms. Which MATLAB license do I need?
As per the title: what license do I need to use MATLAB for YouTube videos? What happens if I make money (in the long term) from those videos?
MATLAB Answers — New Questions
How to change the direction of this code from falling right-to-left to falling left-to-right
Post Content
MATLAB Answers — New Questions
array mask not being reset in application
In the attached app, there are two sliders for "set range low" and "set range high" which are used to change the scale of the colorbar. The goal of doing this is to highlight artifacts in the image.
Anything outside the range of the colorbar is highlighted with the imdilate function in the mask, which is then displayed as a replacement for the original image.
When we click the buttons for "set range low" and "set range high", it should change the values of the image array so that anything outside the high and low limits is set to the actual high or low limit itself.
This seems to work; the problem is that the mask does not seem to reset. So after you click the button to "set the range low" in the image, if you go back to the slider and go outside of the original range, there are still values that show up.
I'm not sure what is going on; it seems to work OK in MATLAB but not in App Designer. I've been stuck on this for an embarrassing amount of time and could use a hand. Thank you.
% Update the colorbar limits from the sliders and highlight (dilated)
% out-of-range pixels in place of the original image.
maxvalue = app.Sliderhigh.Value;
minvalue = app.Sliderlow.Value;
app.highEditField.Value = num2str(maxvalue);
app.lowEditField.Value = num2str(minvalue);
c = colorbar(app.UIAxes);
app.UIAxes.CLim = [minvalue maxvalue];
% The mask is recomputed from app.a on every call, so if app.a is not
% updated after clamping, previously flagged pixels stay in the mask.
maska = app.a > maxvalue | app.a < minvalue;
maskc = imdilate(maska, strel('disk', 25, 0));
imagesc(maxvalue*maskc, 'Parent', app.UIAxes)
MATLAB Answers — New Questions
Mail from my organisation goes to Junk Email
Hello,
One user receives emails from inside the organisation marked as junk email. Not all of them, and not every time.
I have no idea what it depends on. Nevertheless, it should not happen inside my organisation.
I cannot mark the whole domain (e.g. *@companyname.com) as a safe sender, because I get an error saying that is not possible to do inside the organisation. Even if I mark only one email address as safe, sometimes it still goes to the Junk folder.
In Exchange I also created an email rule with SCL set to “0”. I read somewhere that in Exchange 2016 it is possible to set “-1”, but in MS365 it seems that is not possible.
I have compared the email headers of a “normal” email and a “junk” one, and they look fine.
The SPF also looks OK.
MAILMERGE
Hello,
Looking for guidance on how to include correct spacing for a “,” between the following names. I inserted a “,” for owners 2-6 by right-clicking on the owner field, selecting Edit Field, checking the “Text to be inserted before:” option, and placing a comma in the box. I am looking to have the document read as Jane Doe, John Doe, Jane Smith, John Polk, Jennifer Smith, Peter Smith.
Prevent Edge from automatically signing in to SharePoint
Hi,
I’m having an issue with devices automatically signing in to SharePoint in the Edge browser.
I am looking for a way to prevent this: there are confidential files on SharePoint, so it should always ask for a password. The devices are enrolled in Intune. I have tried disabling implicit sign-in, but that does not do the trick. Can anybody point me in the right direction?
Thanks in advance.
We cannot deploy the CSU in a client’s UAT environment
Hello everyone!
We cannot deploy the CSU (Commerce Scale Unit) in a client’s UAT environment.
How can we find out what’s missing? According to the licenses, they have 20 Dynamics 365 Finance licenses + 5 Dynamics 365 Commerce Attach licenses to the Qualifying Dynamics 365 Base Offer. Do you know what requirements are necessary for deploying the CSU in UAT? We are not even in PROD yet. In other words, we need to determine if this limitation is due to licensing issues or another reason. The client has 10 branches (Retail B&M) with about 5 registers in each. With the licenses they have, should they be able to deploy the CSU in UAT? If not, what do we need to acquire?