Month: October 2024
CAN'T OPEN MY OUTLOOK
Hi, this is the error message I'm getting every time I try to open my Outlook: "There's a problem with Outlook (new). Reinstall the application from its original install location or contact your administrator."
Pulling data using drop downs
Hello,
This may not be possible, but can you input 'Consultant' and 'Month' in the sheet below,
and have it pull the data automatically from the spreadsheet below, using columns A, B and G?
So each month I could go into the top sheet, change the consultant and month, and it would give me that month's overview?
Thanks so much
It takes too long for the app to appear in "Built for your org"
Hi
I have been having an issue for weeks now: every time I re-upload my custom app to the Microsoft Teams store under "your org's app catalog", it takes almost a day to become visible.
This has been happening for the last couple of weeks to two months; before that it worked fine.
The app is visible in the Teams admin app list and it is approved, so why does it now take so long to become visible?
It is really hurting our development.
I have tried several things, such as changing settings in the Teams admin center, signing out completely, and clearing the cache, but nothing helped.
Can someone help please?
Kind regards
Lazar
Sync Up Episode 13: In Focus – Designing for Copilot
The latest episode of Sync Up is now available for your viewing and listening pleasure! This month, Arvind Mishra and I sat down to talk with veteran Microsoft designer, Ben Truelove, about the principles and work that went into bringing the magic of Copilot into the OneDrive experience! Ben walks us through what went wrong before it went right and how Copilot is changing the way that our users interact with OneDrive.
Show: https://aka.ms/SyncUp | Apple Podcasts: https://aka.ms/SyncUp/Apple | Spotify: https://aka.ms/SyncUp/Spotify | RSS: https://aka.ms/SyncUp/RSS
Liked this episode? Let us know in the comments below! Thank you all for listening and we can’t wait to share the future of OneDrive with you next week! Register now at aka.ms/OneDriveEvent2024!
Microsoft Tech Community – Latest Blogs
CPU Management in IIS Application Pools: A Deep Dive into Advanced Settings for Optimal Performance
Application pools provide isolation between different web applications in Internet Information Services (IIS) by allowing you to manage resources, recycling, and performance per application. Controlling CPU usage is one of the most critical factors in the performance and management of an application pool. IIS includes a dedicated CPU section under Application Pool -> Advanced Settings; it provides several settings that let you optimize CPU utilization so that no single application dominates server resources. This article dives into the CPU settings for an application pool, explaining each option and its configuration.
Overview
In the application pool advanced settings, the CPU section consists of several important configurations to control and monitor CPU usage. They are as follows:
Limit (percent)
Limit Action
Limit Interval (minutes)
Processor Affinity Enabled
Processor Affinity Mask
Processor Affinity Mask (64-bit option)
#1 Limit (percent)
The Limit (percent) setting specifies the maximum percentage of CPU that a particular application pool can consume. The percentage is based on the total CPU capacity available on the server, where 100 represents the total CPU power. Setting this property to 0 disables limiting the worker processes to a percentage of CPU time. If the configured limit is exceeded, an event is written to the event log and an optional set of actions can be triggered, as determined by the Limit Action property. For multi-core processors, the limit applies to the total CPU time across all cores. For instance, on a 16-core machine, setting the limit to 25% ensures that the application pool cannot use more than the equivalent of 4 cores.
#2 Limit Action
This setting specifies the action IIS takes when the application pool exceeds the configured CPU limit. You can choose any action from the list below.
NoAction: IIS monitors CPU usage but takes no corrective action if the application pool exceeds the CPU limit; only a log entry is generated in the event log.
KillW3WP: IIS terminates the worker process (w3wp.exe) associated with the application pool when CPU usage exceeds the limit. The application pool is shut down for the duration of the reset interval, and a log entry is generated in the event log.
Throttle: IIS will attempt to slow down the application by delaying its access to CPU resources when it exceeds the limit. This action helps ensure that other application pools or processes have sufficient CPU resources.
ThrottleUnderLoad: This option allows the application to exceed the CPU limit when the server is under low load but throttles it when the server experiences higher demand.
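As a sketch, the settings above map onto the <cpu> element of an application pool in applicationHost.config. The pool name and values here are illustrative; note that the limit attribute is expressed in 1/1000ths of a percent, so a 25% limit is written as 25000:

```xml
<applicationPools>
  <!-- Illustrative pool: cap CPU at 25%, throttle on breach,
       and measure usage over 5-minute intervals -->
  <add name="ContosoPool">
    <cpu limit="25000"
         action="Throttle"
         resetInterval="00:05:00" />
  </add>
</applicationPools>
```

The same values can also be set per pool from IIS Manager under Advanced Settings, which is what the sections in this article describe.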
#3 Limit Interval (minutes)
The Limit Interval determines the time period (in minutes) over which IIS measures CPU usage. When the number of minutes since the last process accounting reset reaches the value defined by this property, IIS will reset the CPU timers for both logging and limit intervals. Setting this property to 0 disables CPU monitoring.
#4 Processor Affinity Enabled
When enabled, this setting forces the worker processes serving this application pool to run on specific CPUs, as defined by the Processor Affinity Mask value.
#5 Processor Affinity Mask
This setting allows you to restrict the application pool to run on specific processors (or CPU cores). The affinity mask is a bitmask that specifies which processors the application pool should run on. You can use any available CPU Affinity Mask calculator online, or you can calculate it manually. If you’d like a detailed explanation on how to calculate the CPU Affinity Mask, feel free to leave a comment, and I’ll write a follow-up article on this topic.
#6 Processor Affinity Mask (64-bit option)
It specifies the high-order hexadecimal mask for a 64-bit machine. This setting provides the same functionality as the Processor Affinity Mask but extends support for systems with more than 32 processors.
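To make the bitmask concrete, here is a small Python sketch (illustrative only, not tied to any IIS API) that builds an affinity mask from a list of core indices and splits it into the low 32 bits (Processor Affinity Mask) and the high 32 bits (the 64-bit option) for machines with more than 32 processors:

```python
def affinity_mask(cores):
    """Build a processor-affinity bitmask: bit n set means core n is allowed."""
    mask = 0
    for core in cores:
        mask |= 1 << core
    return mask

def split_for_iis(mask):
    """Split a 64-bit mask into the low 32 bits (Processor Affinity Mask)
    and the high 32 bits (Processor Affinity Mask (64-bit option))."""
    return mask & 0xFFFFFFFF, mask >> 32

# Pin a pool to cores 0-3:
mask = affinity_mask(range(4))
print(hex(mask))            # 0xf

# Cores above 31 land in the 64-bit option field:
low, high = split_for_iis(affinity_mask([0, 1, 34, 35]))
print(hex(low), hex(high))  # 0x3 0xc
```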
Key Points
Generally, it's safer to use Throttle or ThrottleUnderLoad to maintain system stability without abruptly terminating applications. KillW3WP is a more aggressive approach and should be used with caution, as terminating the worker process can lead to application downtime and a poor user experience. Also, the Processor Affinity Mask is rarely used unless you are dealing with specific performance-tuning scenarios or licensing restrictions on multi-core processors; modern CPU scheduling mechanisms in the OS typically handle processor assignment more efficiently.
Conclusion
Managing CPU settings within IIS application pools is crucial for maintaining performance, stability, and efficient resource utilization. By configuring CPU limits and actions properly, you can prevent any one application from over-consuming server resources, ensuring fair resource distribution across the other applications running on the same server. Properly managed CPU settings contribute to overall server health, making your IIS environment more resilient and adaptable to growth.
Azure NV V710 v5: Empowering Real-Time AI/ML Inferencing and Advanced Visualization in the Cloud
As industries increasingly rely on high-performance computing and AI for real-time inferencing, remote work, and advanced visualization, Azure’s Virtual Machine (VM) portfolio continues to evolve to meet these demands. Today, we’re excited to introduce the Azure NV V710 v5, the latest VM tailored for small-to-medium AI/ML inferencing workloads, Virtual Desktop Infrastructure (VDI), visualization, and cloud gaming workloads.
Powered by AMD’s latest Radeon™ PRO V710 GPUs and high-frequency 4th Generation AMD EPYC™ (formerly “Genoa”) CPUs, it delivers high compute performance and flexible GPU partitioning to address a wide range of industry needs.
Why Choose Azure NV V710 v5?
The NV V710 v5 brings a new level of flexibility and performance to the cloud, specifically designed for small-to-medium real-time AI/ML inferencing workloads and graphics-intensive applications.
Key Features of NV V710 v5
Real-Time Inferencing (RTI) and AI Inferencing:
The NV V710 v5 is optimized for small-to-medium AI model inferencing and real-time machine learning processing, offering the computational power and speed necessary for industries that rely on immediate data processing. With support for vLLM, users can perform AI/ML inferencing more efficiently, providing near-instant results for workloads such as edge AI applications and intelligent decision-making systems, all at a lower total cost of SKU ownership.
GPU Partitioning for Flexibility:
A standout feature of the NV V710 v5 is its GPU partitioning capability, allowing customers to allocate fractions of the GPU according to their workload requirements. This flexibility is ideal for multi-tenant environments, enabling organizations to support a variety of inferencing and graphical workloads efficiently without needing a full GPU for each application.
High-Performance AMD EPYC CPUs:
Equipped with AMD 4th Gen EPYC CPUs that boast a 3.9 GHz base frequency and 4.3 GHz max frequency, the NV V710 v5 is optimized for demanding compute tasks requiring both high CPU and GPU performance. This makes it suitable for complex simulations, graphics rendering, and real-time inferencing.
Massive GPU Memory:
With 28 GB of GDDR6 GPU memory, the NV V710 v5 can handle large-sized model inferencing, high-resolution rendering, and intricate visual content. The high memory capacity ensures smooth processing and loading of substantial datasets in real time.
Azure Integration and High-Speed Networking:
Integrated with Azure Accelerated Networking, the NV V710 v5 provides up to 80 Gbps bandwidth, ensuring high performance and low latency for AI inferencing, VDI applications, and cloud gaming workloads. This high-speed networking capability facilitates seamless data transfer, supporting intensive graphical and inferencing operations.
Real-World Applications
One of the key applications of the NV V710 v5 is in the automotive industry, where AI-based sensor simulation and inferencing play a vital role in developing intelligent edge devices for autonomous vehicles. Platforms like the Automated Driving Perception Hub (ADPH) offer automotive customers a virtual environment to evaluate a range of automotive sensors, such as cameras, lidars, and radars.
Accurate Inferencing: The NV V710 v5 supports batch-processed inferencing, providing a trusted environment for evaluating AI model accuracy in various simulations.
Cross-Platform Support: Its compatibility with ROCm/HIP enables cross-platform inferencing, which is crucial for intelligent edge devices.
Broader Applications: Beyond the automotive industry, the NV V710 v5 can support a variety of edge AI devices, such as security cameras, industrial equipment, and drones.
NV V710 v5 Technical Specifications
vCPUs: Configurations from 4 to 28 vCPUs (3.95 GHz base, 4.3 GHz max)
Memory: 16 GB to 160 GB
GPU: AMD Radeon PRO V710 GPU with 28 GB GDDR6 memory, partitioned from 1/6 to full GPU, supporting the latest ROCm releases for vLLM to enhance real-time AI inferencing
Storage: Up to 1 TB temporary disk
Networking: Up to 80 Gbps Azure Accelerated Networking
For more detailed technical information, visit our Azure documentation here.
AI Inferencing Opportunities with NV V710 v5
The NV V710 v5 provides a versatile platform for real-time AI/ML inferencing and visualization tasks. With support for vLLM, it enables enterprises to execute complex AI models in real time efficiently, making it an essential asset for industries focused on AI-driven insights. By leveraging GPU partitioning, companies can optimize their resources across various workloads, ensuring a cost-effective approach to cloud-based inferencing and graphics rendering.
Additional Use Cases
VDI and Remote Workstations: For enterprises deploying virtual desktops, the NV V710 v5 provides high-performance computing resources that can be dynamically adjusted based on user requirements. This flexibility is valuable for media production, design, and financial services, where high-end graphics capabilities are crucial.
Cloud Gaming: The NV V710 v5 is built to handle cloud gaming with low-latency performance, offering gamers a seamless, high-quality experience comparable to traditional gaming consoles. Its robust architecture supports real-time rendering, delivering a premium gaming experience in the cloud.
Conclusion: The Future of AI Inferencing and Graphics Workloads with Azure NV V710 v5
The Azure NV V710 v5 VM is set to transform the landscape of AI inferencing, real-time visualization, and cloud gaming. By combining high-performance AMD Genoa CPUs, 28 GB of GPU memory, ROCm 6 support, and vLLM, it provides an all-in-one solution for a wide range of applications.
The NV V710 v5 opens up new opportunities for businesses to run real-time AI/ML model inferencing in the cloud, scale graphical workloads efficiently, and deliver high-quality user experiences. With its advanced partitioning and high-speed networking capabilities, it’s tailored to meet the demands of modern, graphics-intensive, and AI-driven industries.
Ready to experience the power of the NV V710 v5? Sign up for the public preview here.
We are removing Feed on Microsoft 365 (Office)
As part of our ongoing efforts to streamline and enhance user experiences, we will be retiring the Feed feature (shown in figure 1 below) on the Microsoft 365 app. This change will affect web endpoints (www.microsoft365.com, www.office.com) and the Windows app (“Microsoft 365 (Office)”).
We are committed to ensuring that your existing workflows remain unaffected. Launched in 2022, Feed was designed to help users explore the latest content and team activities. Over time, we have integrated all the essential features of Feed into a more accessible surface within the Microsoft 365 app: the “Recommended” files on the Home tab (shown in figure 2 below).
Deprecation timeline: Feed will no longer be accessible from the Microsoft 365 app starting November 1, 2024.
Introducing Microsoft Purview Data Security pay-as-you-go pricing for your non-Microsoft 365 data
Microsoft Purview is an extensive set of solutions that can help organizations secure and govern their data, wherever it lives. The unification of data security and governance capabilities in Microsoft Purview reflects our belief that our customers need a simpler approach to data.
Microsoft Purview Data Security helps customers dynamically secure their data across its lifecycle by combining data context with user context.
The data security capabilities, including Microsoft Purview Information Protection and Insider Risk Management are already loved and leveraged by customers around the world for their Microsoft 365 data, and we announced back in November 2023 that we were extending those capabilities to non-Microsoft 365 data sources like cloud storage apps (Box, Dropbox, Google Drive), cloud services (AWS, Azure), and Microsoft Fabric (Power BI). This month at the Fabric Conference, we announced additional capabilities for Microsoft Fabric customers, enhancing the Microsoft Purview Data Security capabilities already available for Fabric released in March 2024.
As we continue to invest in securing your non-Microsoft 365 data, we are excited to announce that our data security capabilities will transition from a free to a paid public preview. This new pricing model will take effect starting November 1, 2024.
Pricing Explained
There are two pricing components to support your data security needs: 1) Information Protection with an asset-based meter and 2) Insider Risk Management with a processing unit meter. Together these are used as complementary levers to run your practice and manage your costs.
Microsoft Purview Information Protection is billed based on the number of assets protected such as documents, emails, or other data files. Assets are identified and classified based on their sensitivity and the level of protection they require. This classification can be done manually by users or automatically using data classification tools within Microsoft Purview. Policies are then applied to these assets to control how they are handled, shared, and protected. These policies can enforce actions like encryption, access restrictions, and detecting violations. Billing is calculated based on the number of assets that are protected under these policies.
Feature: Microsoft Purview Information Protection
SKU: Standard
Price: $0.0165 per asset per day (~$0.50 per asset per month)
Microsoft Purview Insider Risk Management is billed based on the data security processing unit (DSPU). Insider Risk Management processes activities corresponding to the indicators selected in the policies to generate insights, alerts, and cases. Billing is calculated based on the number of processing units required for the indicators selected in policies. A DSPU is defined as the compute required to process 10,000 user activity logs.
Feature: Microsoft Purview Insider Risk Management
SKU: Standard
Price: $25 per data security processing unit
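To illustrate how the two meters combine, here is a rough Python sketch of a monthly cost estimate. The asset count, log volume, and the assumption that DSPUs are tallied over a monthly window are all hypothetical; actual billing aggregation may differ:

```python
import math

ASSET_RATE_PER_DAY = 0.0165  # Information Protection: $ per protected asset per day
DSPU_RATE = 25.0             # Insider Risk Management: $ per data security processing unit
LOGS_PER_DSPU = 10_000       # one DSPU = compute to process 10,000 user activity logs

def estimate_monthly_cost(protected_assets, activity_logs, days=30):
    """Return (information_protection_cost, insider_risk_cost) for one month."""
    info_protection = protected_assets * ASSET_RATE_PER_DAY * days
    dspus = math.ceil(activity_logs / LOGS_PER_DSPU)
    insider_risk = dspus * DSPU_RATE
    return info_protection, insider_risk

# Hypothetical tenant: 1,000 protected assets, 250,000 activity logs per month
ip, irm = estimate_monthly_cost(protected_assets=1_000, activity_logs=250_000)
print(f"Information Protection:  ${ip:,.2f}")   # 1,000 assets x $0.0165 x 30 days
print(f"Insider Risk Management: ${irm:,.2f}")  # 25 DSPUs x $25
```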
Pricing Model Timeline
This new pricing model takes effect on November 1, 2024, for all customers using the Data Security solution, and charges will appear on their Azure invoice starting December 1, 2024. On November 1, 2024, the Microsoft Purview pricing page will be updated to reflect the details in this blog; until then, please reference the pricing details here and learn more about pay-as-you-go on Microsoft Learn.
Note: Microsoft reserves the right to change the pricing, business model, or service (including but not limited to branding, features, functionality, and availability) at any time in its discretion prior to GA without prior notice.
Try the new solution today!
Log on to the Microsoft Purview portal and give the Data Security capabilities a try. If you want to learn more, please access the following resources:
• Learn more on Microsoft Purview Data Security on Microsoft Learn
• Learn more about Microsoft Purview pricing pay-as-you-go details on Microsoft Learn
• Learn more about these capabilities in the Mechanics Video
• Visit the Microsoft Purview Data Security website
How can I change my role and department on MathWorks?
When I was creating my profile I chose the wrong role description and department and would like to know how to change those two things immediately. MATLAB Answers — New Questions
Retraining YAMNet for audio classification returns channel mismatch error in “deep.internal.train.Trainer/train”
I am retraining YAMNet for a binary classification task, operating on spectrograms of audio signals. My training audio has two classes, positive and negative. Audio is preprocessed & features extracted using yamnetPreprocess(). When training the network, trainnet() produces the following error:
Error using deep.internal.train.Trainer/train (line 74)
Number of channels in predictions (2) must match the number of
channels in the targets (3).
Error in deep.internal.train.ParallelTrainer>iTrainWithSplitCommunicator (line 227)
remoteNetwork = train(remoteTrainer, remoteNetwork, workerMbq);
Error in deep.internal.train.ParallelTrainer/computeTraining (line 127)
spmd
Error in deep.internal.train.Trainer/train (line 59)
net = computeTraining(trainer, net, mbq);
Error in deep.internal.train.trainnet (line 54)
net = train(trainer, net, mbq);
Error in trainnet (line 42)
[net,info] = deep.internal.train.trainnet(mbq, net, loss, options, …
Error in train_DenseNet_detector_from_semi_synthetic_dataset (line 192)
[trained_network, train_info] = trainnet(trainFeatures, trainLabels', net, "crossentropy", options);
My understanding of this error is that it indicates a mismatch between the number of classes the network expects, and the number of classes in the dataset. I do not see how this can be possible, considering the number of classes in the network is explicitly set by the number of classes in the datastore:
classNames = unique(ads.Labels);
numClasses = numel(classNames);
net = audioPretrainedNetwork("yamnet", NumClasses=numClasses);
My script is based on this MATLAB tutorial: audioPretrainedNetwork and there are no functional differences in the way I’m building datastores or preprocessing the data. The training options and the call to trainnet() are configured as follows:
options = trainingOptions('adam', ...
    InitialLearnRate = initial_learn_rate, ...
    MaxEpochs = max_epochs, ...
    MiniBatchSize = mini_batch_size, ...
    Shuffle = "every-epoch", ...
    Plots = "training-progress", ...
    Metrics = "accuracy", ...
    Verbose = 1, ...
    ValidationData = {single(validationFeatures), validationLabels'}, ...
    ValidationFrequency = validationFrequency, ...
    ExecutionEnvironment = "parallel-auto");
[trained_network, train_info] = trainnet(trainFeatures, trainLabels', net, "crossentropy", options);
Relevant variable dimensions are as follows:
>> unique(ads.Labels)
ans =
2×1 categorical array
negative
positiveNoisy
>> size(trainLabels)
ans =
1 16240
>> size(trainFeatures)
ans =
96 64 1 16240
>> size(validationLabels)
ans =
1 6960
>> size(validationFeatures)
ans =
96 64 1 6960
The only real differences between my script and the MATLAB tutorial are that I'm using parallel execution in the training solver, and the datastore OutputEnvironment is set to "gpu". If I set ExecutionEnvironment = "auto" instead of "parallel-auto" and set ads.OutputEnvironment = 'cpu', the error stack is shorter, but the problem is the same:
Error using trainnet (line 46)
Number of channels in predictions (2) must match the number of channels in
the targets (3).
Error in train_DenseNet_detector_from_semi_synthetic_dataset (line 189)
[trained_network, train_info] = trainnet(trainFeatures, trainLabels’, net, "crossentropy", options);
Please could someone give me some advice? The root cause of this is buried in the deep learning toolbox, and it’s a little beyond me right now.
Thanks,
Ben
Blueprint Survey Opportunity for Administering Information Security in Microsoft 365
Greetings!
Microsoft is updating a certification for Administering Information Security in Microsoft 365, and we need your input through our exam blueprinting survey.
The blueprint determines how many questions each skill in the exam will be assigned. Please complete the online survey by October 15th, 2024. Please also feel free to forward the survey to any colleagues you consider subject matter experts for this certification. You may send this to people external to Microsoft as well.
If you have any questions, feel free to contact John Sowles at josowles@microsoft.com or Rohan Mahadevan at rmahadevan@microsoft.com.
Administering Information Security in Microsoft 365 blueprint survey link:
https://microsoftlearning.co1.qualtrics.com/jfe/form/SV_03tzmgnS3oMiDeS
SharePoint library: format custom button in a column only visible for a group
Hi
I want to create a library with a column that displays a button and that button should only be visible for a specific group of people. But the members of the group can change.
So can I use the visible property with a security group or a dynamic group instead of checking the specific user email?
VBA loop to copy a hidden sheet n times and name the copies based on rows
Hello All,
Back here again with something that seems like it should be simple but I cannot get my head around it.
I have a workbook with a table (unfortunately not a real Excel table, and it uses merged cells to comply with the template’s formatting). The first column of the table will eventually contain various “Sleeve ID” values. Depending on the day, that column might hold 1 sleeve or 30, and the naming might not be sequential (e.g. SLV-001, SLV-003, SLV-004, SLV-008). Once this first table has been populated – at least the Sleeve ID column – I would like a macro that unhides a hidden template sheet, copies it once for each row with a filled-in Sleeve ID, and names each copy after the corresponding Sleeve ID.
SUMMARY: when macro is run I would like to:
-Unhide the hidden template sheet (sheet1 in my attached report)
-Create a copy of “sheet1” for each SLEEVE ID filled in on the main page in the table
-Name each created copy as each of the sleeve ID names written in the first table.
Currently I can create blank worksheets based on the number of rows used on the first page and name them correctly, but I cannot figure out how to copy my template sheet instead of creating blank sheets.
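A minimal VBA sketch of what I’m after (hypothetical layout: it assumes the Sleeve IDs start in cell A5 of the first worksheet and the hidden template is named "Sheet1" — those would need adjusting to the real workbook):

```vba
Sub CopyTemplatePerSleeve()
    ' Sketch: copy the hidden template once per Sleeve ID row,
    ' naming each copy after the Sleeve ID.
    Dim wsMain As Worksheet, wsTpl As Worksheet
    Dim r As Long, id As String

    Set wsMain = ThisWorkbook.Worksheets(1)
    Set wsTpl = ThisWorkbook.Worksheets("Sheet1")
    wsTpl.Visible = xlSheetVisible                 ' unhide the template

    r = 5                                          ' first Sleeve ID row (assumed)
    Do While Len(Trim$(wsMain.Cells(r, 1).MergeArea.Cells(1, 1).Value)) > 0
        id = Trim$(wsMain.Cells(r, 1).MergeArea.Cells(1, 1).Value)
        wsTpl.Copy After:=ThisWorkbook.Sheets(ThisWorkbook.Sheets.Count)
        ActiveSheet.Name = id                      ' name the copy after the ID
        r = r + wsMain.Cells(r, 1).MergeArea.Rows.Count  ' step past merged rows
    Loop

    wsTpl.Visible = xlSheetHidden                  ' re-hide the template
End Sub
```

Reading the value via MergeArea.Cells(1, 1) handles the merged cells, since a merged range stores its value in the top-left cell only.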
Please see attached
context menu in Word 2016
I have a problem with the context menu in Word 2016. When I select text and right-click, the context menu opens, but only the first, separate item, “Search the menus” (Picture 1), is visible, with a large empty space below it. To find the other items I have to scroll within the pop-up window (Figure 2); only after scrolling all the way down do the remaining items appear (Figure 3). Interestingly, this doesn’t happen when I right-click an image – the menu only renders incorrectly when I right-click highlighted text. It’s quite frustrating. Everything worked fine until recently, when this changed on its own. I would appreciate advice on how to correct this. Thank you.
Highlighting cells
I need help with a formula for conditional formatting.
Columns V and W contain dates. These cells are conditionally formatted to highlight when a date is past due or approaching expiry. I would like each cell in column X to highlight red if the corresponding cell in column V or W is highlighted, WITHOUT using VBA.
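Since conditional formatting formulas can only test cell values, not another cell’s fill colour, the usual non-VBA approach is to repeat the same date tests in the rule for column X. A sketch, assuming “past due or coming up” means any date before 30 days from today (the thresholds would need to match whatever the existing V/W rules use), applied to X2 and filled down:

```
=OR(AND($V2<>"", $V2<TODAY()+30),
    AND($W2<>"", $W2<TODAY()+30))
```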
Thanks
Count If Function
I have a spreadsheet in which all the cells contain formulas that pull data from another spreadsheet. Until data is entered on the master, the corresponding cell on the current spreadsheet appears empty (apart from the formula). I want a total at the bottom of the column that tells me how many “jobs” are on the spreadsheet. If I use =COUNTIF(A1:A300,"*"), it returns 300 because every cell contains a formula. I only want to count the ones that actually returned data.
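For context, COUNTIF’s "*" wildcard also matches the zero-length text ("") that the linked formulas return, which is why every cell is counted. Two common alternatives – the first counts only text results with at least one character, the second counts any non-empty result, text or numeric (adjust the range to suit):

```
=COUNTIF(A1:A300,"?*")
=SUMPRODUCT(--(A1:A300<>""))
```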
Mail-Enabled Contact migration
We’re migrating users and their mailboxes between forests with ADMT.
Now we need to migrate all the mail-enabled contacts from the source forest to the target, but ADMT can’t do it.
Is there a way to migrate the contacts between the forests?
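One route would be an export/import with the Exchange Management Shell. A minimal sketch (the attribute list is deliberately trimmed – real migrations usually also carry proxyAddresses and legacyExchangeDN – and the CSV path is hypothetical):

```powershell
# Export the mail-enabled contacts from the source forest...
Get-MailContact -ResultSize Unlimited |
    Select-Object Name, Alias, ExternalEmailAddress |
    Export-Csv -Path .\contacts.csv -NoTypeInformation

# ...then recreate them in the target forest.
Import-Csv -Path .\contacts.csv | ForEach-Object {
    New-MailContact -Name $_.Name -Alias $_.Alias `
        -ExternalEmailAddress $_.ExternalEmailAddress
}
```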
thanks
Issue with Azure Data Factory Pipeline Connection to SQL Server
Hi everyone,
I’m encountering an issue with my Azure Data Factory pipeline that uses a copy activity to transfer data from MariaDB. The connection is established using a self-hosted integration runtime, and everything seems to be configured correctly. The test connection is valid, and I can preview the data without any issues.
However, when I run the pipeline, I receive the following error:
```
ErrorCode=SqlFailedToConnect,’Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Cannot connect to SQL Database. Please contact SQL server team for further support.
Check the linked service configuration is correct, and make sure the SQL Database firewall allows the integration runtime to access.
‘Type=System.Data.SqlClient.SqlException,Message=A network-related or instance-specific error occurred while establishing a connection to SQL Server.
The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections.
(provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server),’
```
The pipeline used to work fine before, and I haven’t made any changes to the linked service configuration. I believe the IP address used by the self-hosted IR has been added to the firewall rules of the SQL server, but I’m not an admin, so I can’t verify that fully.
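Since error 40 usually means the machine simply can’t reach the server on the expected port, one quick check I plan to run from the self-hosted IR host (hypothetical host name; use whichever port your linked service targets – 1433 for SQL Server, 3306 for MariaDB):

```powershell
# Run on the self-hosted integration runtime machine.
Test-NetConnection -ComputerName your-sql-server.example.com -Port 1433
```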
Has anyone encountered a similar issue or have suggestions on how to troubleshoot further?
Thanks in advance!
A Quick Introduction to Microsoft’s Support for Mission Critical Offering
Customers that have Unified Support from Microsoft can supplement that agreement with Enhanced Support options. One of these options is the Support for Mission Critical (SfMC) offering. Listed below are answers to the top five questions customers typically ask about SfMC.
What is the Support for Mission Critical offering?
The Support for Mission Critical offering is designed to provide comprehensive and proactive support for customers with complex and high-stakes solutions. This program aims to enhance the overall health, resiliency, and performance of mission-critical systems by offering a programmatic end-to-end approach. It includes a framework of assessments, guidance, and recommendations to continuously improve and optimize outcomes. The SfMC offering focuses on anticipating and preventing problems through corrective actions and remediation, ensuring business continuity by restoring operations quickly and safeguarding against system issue recurrence.
What workloads are the SfMC offering available for?
The Support for Mission Critical offering is available for Intelligent Cloud, Modern Work and Business Apps workloads. Mission Critical workloads in the Intelligent Cloud are specific, named workloads hosted either entirely in Azure or under a hybrid configuration with the customer’s on premises estate. Modern Work Mission Critical workloads support the Exchange Online, SharePoint Online, Teams, OneDrive for Business, and/or Endpoint Manager instances of a single customer tenant, plus supporting services and features. SfMC coverage for Business Apps workloads entails the suite of applications sharing the same metadata environment. SfMC Presales Solution Architects meet with the customer and work with the account team to ensure that the initial scoping of the engagement meets the needs of the customer’s business, addresses the challenges of their workload, and focuses on the desired outcomes.
How does the SfMC offering work?
The Support for Mission Critical delivery team consists of designated customer leads that drive the initial assessments to understand the customer workload at a deeper level and identify potential avenues to improve the customer’s overall experience. The leads meet with the customer on a regular cadence to review the status of active support incidents, discuss new issues, challenges, and opportunities the customer is facing, provide trusted advisor guidance, and review proposed changes and improvements to the specified workload. The leads collaborate with subject matter experts from the SfMC team on an as-needed basis to ensure they are engaging the correct resources required to support and assist with the customer’s workload.
What are the outcomes from a typical SfMC engagement?
One of the desired outcomes of an SfMC engagement is for customers to experience a decrease in the number and/or severity of reactive support incidents over time. This is accomplished by evaluating the design and operation of the workload in comparison to known best practices and recommendations, analyzing the causes and contributing factors for past reactive incidents to suggest mitigations and long-term resolutions, and providing additional resiliency guidance based on real-world experiences.
In addition, the SfMC team will work with the customer to improve the performance, availability, security, observability and cost of the workload. This can be achieved by providing training and guidance; meeting with customers at a regular cadence to discuss issues, concerns, goals, and plans; reviewing planned changes to the solution to identify potential known issues; and maintaining familiarity with the current state and configuration of the customer’s workload.
Where can I learn more about Support for Mission Critical?
Customers with an existing support agreement with Microsoft can follow up with their Customer Success Account Manager or Account Executive to learn more about this and other Enhanced Support offerings from Microsoft.
Microsoft Tech Community – Latest Blogs –Read More
Join Microsoft at Devoxx Belgium 2024!
Get ready for the 21st edition of the Devoxx Belgium conference, a 5-day technology conference taking place October 7–11 in Antwerp. With 3,000+ attendees and 200+ speakers from around the world, it’s an exciting opportunity to dive deep into the latest in Java, AI and Cloud technologies. Microsoft is excited to be part of this event, offering a range of sessions, hands-on labs, panels and networking opportunities to help you enhance your developer skills.
Supercharge Your Java Development with Azure and AI
At Microsoft, we’re dedicated to making your Java development experience easier, more efficient and more intelligent with Azure’s comprehensive offerings in AI, Apps and developer tools.
We offer a wide range of services such as Azure Container Apps, Azure App Service, Azure Kubernetes Service, and Azure Red Hat OpenShift to simplify the cloud deployment of Java applications, providing everything you need from development, maintenance, monitoring to scaling.
For developers, we also provide a complete toolset, including GitHub Copilot for intelligent code suggestions and Azure OpenAI for advanced AI integration. You can easily leverage tools like Microsoft Build of OpenJDK, Visual Studio Code for Java, and Azure plugins for Eclipse, IntelliJ, Maven and Gradle, making your Java development smoother. What’s more, our partnerships with Red Hat, Oracle, IBM, and other leading market partners ensure a strong end-to-end support for your cloud modernization journey.
Join us at Devoxx Belgium 2024
At Devoxx Belgium 2024, we’re showcasing Microsoft Azure’s commitment to the developer community. We offer a great lineup of sessions that showcase how Azure can help you build and modernize Java applications faster, how to integrate AI into your projects, and how to optimize and scale your cloud-native development.
Stop by the Microsoft booth for demos, expert Q&As, and interactive showcases of our developer tools. Plus, don’t miss the chance to grab some cool swag while networking with fellow developers.
Here’s a preview so that you can reserve time in your schedule:
Whether you’re looking to sharpen your developer skills, explore cloud and AI solutions, or dive into the latest developer tools, we’ve got something exciting for you.
We can’t wait to see you there!
Additional Resources:
Visit https://aka.ms/java-hub
Follow us on Twitter – @JavaAtMicrosoft
Subscribe to our new YouTube channel – Microsoft for Java Developers