Month: September 2024
I want to model a datacenter load from time-series power consumption data and then connect it to a three-phase AC distribution system.
The problem I am facing is that when I import the data in MATLAB, model it as a load using a controlled current source, and then add an inverter, my swing bus (from the IEEE 13-bus test system) shows a DC bias/offset instead of balanced voltages. I am trying to analyze the transient behaviour due to a sudden high current demand. Please guide me on why I am getting this problem. load modelling, simulink, ieee 13 bus MATLAB Answers — New Questions
What is the setting in Production Server for auto deployment and how do I set this up?
I could not find a place in the Settings section of the Production Server Dashboard to enable or disable auto deployment. If I need to add the options through the Advanced section, what should I put there? Do I need to restart the server/instance after I add the auto deployment configuration? production server, configuration, auto deployment MATLAB Answers — New Questions
sql server reporting read only replica
I have SQL Server Enterprise Edition 2016.
I need a secondary Always On read-only replica for reporting purposes ONLY,
and I do not want to change the server name for the application connection (I guess no listener).
How can I achieve this?
Read More
Excel Help and Training Videos are not playing
I’ve tried to play the videos, but they don’t seem to play in Safari; I also checked Chrome and hit the same issue. I’ve attached a screenshot below. Any ideas why this does not work?
Read More
Tech Talks presents: Copilot first development in the Power Apps Studio
Join us on Thursday, September 26th at 8am PT as Simon Mathews, Principal Program Manager, presents Copilot first development in the Power Apps Studio, specifically the Plan Designer.
Call to Action:
Click on the link to save the calendar invite: https://aka.ms/TechTalksInvite
View past recordings (sign in required): https://aka.ms/TechTalksRecording
Copying formulas with gaps in reference cells?
Sorry if this question has already been asked before, I’m not very good at Excel so I’m honestly not even sure how to describe my question 😞
Basically, I have some samples that I’ve taken multiple measurements from, and I want to create a separate table that uses all of the measurements from one sample in one formula, like this:
If I try typing the formula once and dragging it down, it only shifts the formula down by one row (instead of by three like I need it to):
Even if I type the correct formula several times and highlight all of that to drag it down so it knows the pattern (the way you can make it count by 5s if you type 5, 10, 15 before you drag it if that makes sense?), it still doesn’t shift the formula correctly:
Is there any way to easily copy or otherwise input the formula so that it shifts by the correct number of rows for each sample? My actual data set is way too big to be typing each formula by hand 😞
Thanks!! And please let me know if you need more info I’m not sure I’m explaining the problem very well
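A common fix for this (sketched here with assumed cell addresses, so adjust them to your sheet) is an OFFSET-based formula such as =AVERAGE(OFFSET($B$2,(ROW()-ROW($E$2))*3,0,3,1)), which advances three source rows for every one output row instead of one. The same stride arithmetic, as a minimal Python sketch:

```python
# Group a flat column of measurements into samples of 3 and average each
# group -- mirroring what a stride-3 OFFSET formula computes in Excel.
measurements = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
STRIDE = 3  # measurements per sample

sample_means = [
    sum(measurements[i:i + STRIDE]) / STRIDE
    for i in range(0, len(measurements), STRIDE)
]
print(sample_means)  # [2.0, 5.0, 8.0]
```

The key idea in both versions: the output row index is multiplied by the number of measurements per sample to find the start of the next group.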
Read More
Help with Excel formula
Hi
I wanted to know if someone could help me with my excel spreadsheet.
I have an annual spreadsheet linked to a master sheet in one workbook.
Each sheet and the master sheet have a summary block at the top.
The summary block reflects 7 categories,
My formula below works correctly when I filter for 2 out of the 7 categories. I’m unsure why my other 5 categories won’t calculate even though I insert the correct block and line references. Can someone please help me?
=IF(SUBTOTAL(103, G11:G788) = COUNTA(G11:G788), "", SUMPRODUCT((SUBTOTAL(103, OFFSET(G11, ROW(G11:G788)-ROW(G11), 0))=1) * (G11:G788 * T11:T788 + F11:F788 * U11:U788)))
Read More
“Microsoft Loop 101 + AMA” 🎙 – The Intrazone podcast
Time to get Loop’y. This episode is your Microsoft Loop 101 audio course. It’s a mix of foundational insights followed by a full 80 minutes of Q&A from the recent AMA with four Loop product makers: Patrick Gan, Manon Knoertzer, Derek Liddell, and Dan Costenaro.
You’ll hear them take and answer a ton of audience questions during a recent ask Microsoft anything (AMA). They covered a lot across governance, permissions inheritance, AI, integrations with Planner & Teams, and how pages are stored. And we included information about the new Copilot Pages, powered by Microsoft Loop, plus a slew of upcoming events and webinars.
On with the show!
The Intrazone, episode 114:
Subscribe to The Intrazone podcast + show links and more below.
BONUS – The full Loop AMA video | The Loop AMA (Ask Microsoft Anything) provided an opportunity for everyone to ask open questions and provide feedback live with Loop experts. This AMA aired Thursday, Sept. 12th, 2024, 10am-11:30am PDT. Watch now:
Links to important on-demand recordings and articles mentioned in this episode:
Hosts, guests, and related links and information
Patrick Gan | LinkedIn [guest]
Manon Knoetzer | LinkedIn [guest]
Derek Liddell | LinkedIn [guest]
Dan Costenaro | LinkedIn [guest]
Mark Kashman |@mkashman [co-host]
Chris McNulty |@cmcnulty2000 [co-host]
SharePoint | Facebook | @SharePoint | SharePoint community blog | Feedback
Loop | @MicrosoftLoop | LinkedIn
Try Microsoft Loop: https://aka.ms/LoopApp
Loop adoption center
Get started with Loop (‘how to’ content on support.microsoft.com)
Loop governance capabilities (Microsoft Learn)
Full 5-Part Loop Learning playlist on the “Microsoft Community Learning” YouTube channel: (now on demand)
Related videos, common admin articles and sites
All Loop blog posts on the Microsoft 365 tech community
Ron Pessner’s blog “Microsoft Loop | Live, Five-Part Learning Series” (has the full, playable versions of each part in the series + info about presenters and what’s covered in each)
“Microsoft Loop 101” article by Mark Kashman on Substack.
Microsoft Docs – The home for Microsoft documentation for end users, developers, and IT professionals.
Microsoft Tech Community Home
Stay on top of Office 365 changes
Listen to other Microsoft podcasts
Upcoming Events
September 26, 9am “Microsoft Loop IT governance and management controls“
October 2nd at 9am PDT “The development spectrum of Microsoft 365 and Power Platform” with SMEs April Dunnam, Vesa Juvonen, and Chris Kent.
Oct 8th at 10am PDT | Microsoft OneDrive: AI Innovations for a New Era of Work and Home
Oct. 14-18 | ESPC – SharePoint week (online)
Oct. 16th at 8am PDT “Create and share Copilot agents in SharePoint in a few clicks” with CJ Tan and Karuana Gatimu; our next “Meet the Makers” episode.
CollabDays New England | October 18-19, 2024 – Burlington, Massachusetts, USA
TechCon365 – Dallas | Nov. 11-15, 2024 | Dallas, TX, USA
Microsoft Ignite (+ more info) | Nov 18-22, 2024, “Save the date,” Chicago, IL
ESPC | European SharePoint Conference | Dec 2-5, 2024 in Stockholm, Sweden
+ always review and share the CommunityDays.org website to find your next event.
Subscribe today!
Thanks for listening! If you like what you hear, we’d love for you to Subscribe, Rate and Review on iTunes or wherever you get your podcasts.
Be sure to visit our show page to hear all episodes, access the show notes, and get bonus content. And stay connected to the SharePoint community blog and where we’ll share more information per episode, guest insights, and take any questions or suggestions from our listeners and SharePoint users via email at TheIntrazone@microsoft.com.
Get The Intrazone anywhere and everywhere
Listen to other Microsoft podcasts at aka.ms/microsoft/podcasts.
Cheers and enjoy the Loop-de-Loop, Mark
Microsoft Tech Community – Latest Blogs –Read More
Meta’s new Llama 3.2 SLMs and image reasoning models now available on Azure AI Model Catalog
In collaboration with Meta, Microsoft is excited to announce that Meta’s new Llama 3.2 models are now available on the Azure AI Model Catalog. Starting today, the Llama 3.2 11B Vision Instruct and Llama 3.2 90B Vision Instruct models – Llama’s first ever multimodal models – are ready to be deployed via managed compute in the Model Catalog.
Coming Soon: Inferencing through Models-as-a-Service serverless APIs is coming soon.
Additionally, Llama 3.2 1B, 3B, 1B Instruct, and 3B Instruct are Meta’s first ever SLMs, built for local, on-device inferencing on mobile and edge devices, enabling safe, low-cost agentic applications like multilingual summarization and RAG. We’re delighted to be one of Meta’s launch partners for this release and to empower developers with the latest Llama models, with the 3.2 release fit for purpose for edge, mobile, and image reasoning use cases. This release brings together the capabilities of Azure’s secure and scalable cloud infrastructure and Azure AI Studio’s tools – like Azure AI Content Safety, Azure AI Search, and prompt flow – with Meta’s cutting-edge AI models to offer a powerful, customizable, and secure AI experience.
Introducing Llama 3.2: A New Era of Vision and Lightweight AI Models
Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. To support image recognition tasks, the Llama 3.2-Vision models use separately trained image reasoning adaptor weights that are packaged with the core LLM weights via cross-attention and called as tool use when an input image is presented to the model.
These models are designed for a variety of use cases, including image reasoning, multilingual summarization, and personalized on-device agents. With these models, developers can create AI applications that prioritize user privacy, reduce reliance on the cloud, and deliver faster, more efficient processing. All models support long context lengths (up to 128k) and are optimized for inference with grouped query attention (GQA).
Starting today, developers can access the following models via managed compute inferencing:
Llama 3.2 1B
Llama 3.2 3B
Llama 3.2-1B-Instruct
Llama 3.2-3B-Instruct
Llama Guard 3 1B
Llama 3.2 11B Vision Instruct
Llama 3.2 90B Vision Instruct
Llama Guard 3 11B Vision
Fine-tuning is available for Llama 3.2 1B Instruct and 3B Instruct and coming soon for the rest of the collection.
Coming soon to Models-as-a-Service, Llama 3.2 11B Vision Instruct and Llama 3.2 90B Vision Instruct will be available via serverless API deployment.
Key Features and Benefits of Llama 3.2
Multimodal Capabilities for image reasoning applications: Llama 3.2’s Vision models (11B and 90B) are the first Llama models to support multimodal tasks, integrating image encoder representations into the language model. Developers can create applications that analyze visual data and generate accurate insights, helping bridge the gap between vision and language in AI models.
Lightweight Models for Mobile and Edge Applications: Llama 3.2’s 1B and 3B text-only models are ideal for edge applications, offering local and on-device inferencing, ensuring that sensitive information never leaves the device, significantly reducing the risk of data breaches or unauthorized access. These models enable fast, real-time responses for on-device agents, making them ideal for tasks like summarizing messages, retrieving information, and providing multilingual support, all while maintaining user privacy.
System-Level Safety and Customization: Llama 3.2 introduces Llama Guard 3, a safety layer built into the models to ensure responsible innovation. This safeguard helps developers maintain compliance and trust while building AI solutions. Additionally, developers have full control and customization over the models, with direct access to model weights and architecture.
Llama Stack for Seamless Development: Llama 3.2 is built on top of the Llama Stack, a standardized interface that simplifies the development of AI applications. This stack integrates with PyTorch and includes tools for fine-tuning, synthetic data generation, and agentic application development. The Llama Stack API allows developers to manage Llama models with ease, providing a streamlined experience from evaluation to deployment: meta-llama/llama-stack: Model components of the Llama Stack APIs (github.com)
What Sets Llama 3.2 Apart
According to Meta, Llama 3.2 stands out for its combination of flexibility, privacy, and performance:
Deep Customization: Developers can tailor models to their specific needs, with full control over weights and architecture.
Infrastructure Control: With the flexibility to deploy in any environment—whether on-prem, cloud, or virtual—Llama 3.2 offers unmatched versatility.
Ironclad Security: Processing data locally maintains sovereignty over sensitive information, ensuring that privacy is a top priority.
Complete Transparency: Llama 3.2 provides full visibility into model behavior, supporting regulatory compliance and trust building.
Why Llama 3.2 on Azure?
Developers using Meta Llama 3 models can work seamlessly with tools in Azure AI Studio, such as Azure AI Content Safety, Azure AI Search, and prompt flow to enhance ethical and effective AI practices. Here are some main advantages that highlight the smooth integration and strong support system provided by Llama 3.2 with Azure, Azure AI and Models as a Service:
Enhanced Security and Compliance: Azure places a strong emphasis on data privacy and security, adopting Microsoft’s comprehensive security protocols to protect customer data. With Llama 3.2 on Azure AI Studio, enterprises can operate confidently, knowing their data remains within the secure bounds of the Azure cloud, thereby enhancing privacy and operational efficiency.
Content Safety Integration: Customers can integrate Meta Llama 3 models with content safety features available through Azure AI Content Safety, enabling additional responsible AI practices. This integration facilitates the development of safer AI applications, ensuring content generated or processed is monitored for compliance and ethical standards.
Simplified Assessment of LLM flows: Azure AI’s prompt flow allows evaluation flows, which help developers measure how well the outputs of LLMs match the given standards and goals by computing metrics. This feature is useful for workflows created with Llama 3.2; it enables a comprehensive assessment using metrics such as groundedness, which gauges the pertinence and accuracy of the model’s responses based on the input sources when using a retrieval augmented generation (RAG) pattern.
Client integration: You can use the API and key with various clients. Use the provided API in Large Language Model (LLM) tools such as prompt flow, OpenAI, LangChain, LiteLLM, CLI with curl and Python web requests. Deeper integrations and further capabilities coming soon.
Simplified Deployment and Inference: By deploying Meta models through MaaS with pay-as-you-go inference APIs, developers can take advantage of the power of Llama 3 without managing underlying infrastructure in their Azure environment.
These features demonstrate Azure’s commitment to offering an environment where organizations can harness the full potential of AI technologies like Llama 3.2 efficiently and responsibly, driving innovation while maintaining high standards of security and compliance.
Getting Started with Meta Llama 3.2 on MaaS
To get started with Azure AI Studio and deploy your first model, follow these clear steps:
Familiarize Yourself: If you’re new to Azure AI Studio, start by reviewing this documentation to understand the basics and set up your first project.
Access the Model Catalog: Open the model catalog in AI Studio.
Find the Model: Use the filter to select the Meta collection or click the “View models” button on the MaaS announcement card.
Select the Model: Open the Llama-3.2 text model from the list.
Deploy the Model: Click on ‘Deploy’ and choose the managed compute option.
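Once deployed, calls to the endpoint follow the familiar chat-completions request shape. As a minimal sketch (the endpoint URL, route, and key below are placeholders, not real values — copy the actual ones from your deployment’s details page in Azure AI Studio):

```python
import json

# Placeholder values -- replace with the endpoint URL and key shown on your
# deployment's details page (these are assumptions, not real addresses).
ENDPOINT = "https://<your-deployment>.inference.ml.azure.com/chat/completions"
API_KEY = "<your-api-key>"

def build_chat_request(prompt, max_tokens=256):
    """Build headers and a JSON body in the common chat-completions shape."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    }
    body = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    })
    return headers, body

headers, body = build_chat_request("Summarize this paragraph in one sentence.")
# Send with e.g. requests.post(ENDPOINT, headers=headers, data=body)
```

This only constructs the request; swap in your real endpoint and key before posting.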
FAQ
What does it cost to use Llama 3.2 models on Azure?
For managed compute deployments, you’ll be billed based on the minimum GPU SKU used for deployment, provided you have sufficient GPU quota.
For models via MaaS, you’ll be billed based on prompt and completion tokens. Pricing will be available soon and can be seen in Azure AI Studio (on the Marketplace Offer details tab when deploying the model) and in Azure Marketplace.
Do I need GPU capacity in my Azure subscription to use Llama 3.2 models?
Yes, for models available via managed compute deployment, you will need GPU capacity by model.
When you deploy the model, you’ll see the VM that is automatically selected for deployment.
For the 11B Vision Instruct and 90B Vision Instruct available via serverless API (coming soon), no GPU capacity is required.
When Llama 3.2 11B Vision Instruct and 90B Vision Instruct models are listed on the Azure Marketplace, can I purchase and use these models directly from Azure Marketplace?
Azure Marketplace enables the purchase and billing of Llama 3.2, but the purchase experience can only be accessed through the model catalog.
Attempting to purchase Llama 3.2 models from the Marketplace will redirect you to Azure AI Studio.
Given that Llama 3.2 11B Vision Instruct and 90B Vision Instruct will be billed through the Azure Marketplace, would it retire my Azure consumption commitment (aka MACC) when these models are available via MaaS?
Yes, both the Llama 3.2 11B Vision Instruct and 90B Vision Instruct models will be “Azure benefit eligible” Marketplace offers, indicating MACC eligibility. Learn more about MACC here: https://learn.microsoft.com/en-us/marketplace/azure-consumption-commitment-benefit
Is my inference data shared with Meta?
No, Microsoft does not share the content of any inference request or response data with Meta.
Are there rate limits for the Meta models on Azure?
Meta models come with a limit of 200k tokens per minute and 1k requests per minute. Reach out to Azure customer support if this doesn’t suffice.
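A quick back-of-the-envelope check of whether a steady workload fits under those limits (a sketch using the stated defaults; actual throttling behavior may differ):

```python
# Stated default limits: 200,000 tokens/minute and 1,000 requests/minute.
TOKENS_PER_MIN = 200_000
REQUESTS_PER_MIN = 1_000

def fits_rate_limits(requests_per_min, avg_tokens_per_request):
    """Return True if a steady workload stays under both limits."""
    total_tokens = requests_per_min * avg_tokens_per_request
    return requests_per_min <= REQUESTS_PER_MIN and total_tokens <= TOKENS_PER_MIN

print(fits_rate_limits(500, 300))  # 150,000 tokens/min -> True
print(fits_rate_limits(500, 500))  # 250,000 tokens/min -> False
```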
Can I use MaaS models in any Azure subscription types?
Customers can use MaaS models in all Azure subscription types with a valid payment method, except for the CSP (Cloud Solution Provider) program. Free or trial Azure subscriptions are not supported.
Can I fine-tune Llama 3.2 models?
You can fine-tune the Llama 3.2 1B Instruct and 3B Instruct models. Fine-tuning for the rest of the collection is coming soon.
Microsoft Tech Community – Latest Blogs –Read More
Examples missing from requirements toolbox installation
I am trying to open the "Link test case reqs Example" with the command openExample(‘slrequirements/LinkTestCaseReqsExample’), but the only examples I have installed are "ImportAndEditRequiermentsWordExample" & "VerifyAShortestPathAlgorithmExample". There should be way more than 2 according to the documentation. requirements toolbox, examples MATLAB Answers — New Questions
Accessing Pixhawk 6x parameters when building firmware for SRXL2 connection
I am attempting to get the Pixhawk 6x to receive data from a new Spektrum receiver that utilizes SRXL2. The process required for getting the SRXL2 protocol to work involves setting certain parameters to specific values, namely:
Set SERIAL4_PROTOCOL = 23
Set SERIAL4_OPTIONS = 4
Set RSSI_TYPE = 3
However, when looking at the parameters available in QGroundControl or Mission Planner, these parameters are not present. Additionally, there seems to be some kind of text file called uart.txt that the serial parameters attempt to read from, but it detects nothing when loading. Is there any way of adding these parameters to the Pixhawk during the firmware build process? Or is there a specific means of configuring the TELEM2 port for a true UART connection? This is the only way I can get the SRXL2 to connect to the Pixhawk when it is running a Simulink model (fmu_v6x_fixedwing). simulink, pixhawk, uav toolbox, pixhawk6x, uart, rcin MATLAB Answers — New Questions
Publishing multiple graphs without using a subplot function
How do you publish multiple graphs independently without putting all of them in a subplot? graph, subplot, publish MATLAB Answers — New Questions
Numerical method of the “fimplicit” function
Hello, does anyone know what kind of numerical methods are used in the "fimplicit" function? I cannot find any information about this. Thank you! fimplicit, numerical method MATLAB Answers — New Questions
How I organize and insert NIR dataset for Classification Learner.
Hello,
I currently research the field of near-infrared spectroscopy (NIR), mainly the analysis of spectra of polymeric materials. I would like to build a system for classifying these spectra by machine learning. The literature points to the Classification Learner app and its models for training on the spectra. However, I’m having trouble importing the spectra sheet (containing wavelength and absorbance values). Being new to MATLAB, I don’t know how to organize these spreadsheets with different values, because the wavelength is constant across the different samples, but the absorbance varies between them. My idea is to classify the spectra of the different materials. How could I organize the spreadsheet and develop the training?
If anyone can help me with a tutorial or quick step by step, I would be extremely grateful.
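One common layout for this kind of data (sketched here in Python with made-up sample names and absorbance values, purely for illustration) is one row per sample with one column per wavelength plus a class-label column, i.e., observations in rows and predictors in columns, which is the tabular shape classifier tools generally expect:

```python
# Arrange NIR spectra for classification: one row per sample, one column per
# wavelength, plus a label column (the wavelength grid is shared by all
# samples, so it becomes the column headers rather than a data column).
wavelengths = [900, 950, 1000]  # nm, common to every sample (illustrative)
spectra = {
    "polyethylene_1":  [0.12, 0.30, 0.25],
    "polypropylene_1": [0.08, 0.22, 0.40],
}
labels = {"polyethylene_1": "PE", "polypropylene_1": "PP"}

# Build the flat table: header row, then one row per sample.
header = [f"abs_{w}nm" for w in wavelengths] + ["material"]
rows = [spectra[s] + [labels[s]] for s in spectra]
print(header)
print(rows)
```

Exported as a spreadsheet in this shape, each row is one observation and the label column is the response to classify.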
Thanks. nir, classification learner MATLAB Answers — New Questions
Intune Custom Compliance Policy – Struggling
Any assistance or guidance on this is greatly appreciated.
For over a week I struggled with a custom compliance policy that will do the following.
– Search for a specific installed software and version and produce the following results:
– Application is not installed – Compliance Status set to “Not Applicable”
– Application is installed but is not the desired version. – Compliance status set to “Not Compliant”
– Application is installed, meets the version requirements – Compliance status set to “Compliant”
– Multiple versions of application exist, one of which meet the requirements. Compliance status set to “Not Compliant”
If I run the discovery script on a local device and output the findings, it is 100% successful every time. However, when applying the policy in Intune, not everything works correctly.
Here are both the JSON file and the discovery script.
——-JSON——
{
  "Rules": [
    {
      "SettingName": "ComplianceStatus",
      "Operator": "IsEquals",
      "DataType": "String",
      "Operand": "Compliant",
      "MoreInfoUrl": "https://example.com/compliance-info",
      "RemediationStrings": [
        {
          "Language": "en_US",
          "Title": "Software Compliance Check",
          "Description": "The required software version is installed and compliant."
        }
      ]
    },
    {
      "SettingName": "ComplianceStatus",
      "Operator": "IsEquals",
      "DataType": "String",
      "Operand": "NonCompliant",
      "MoreInfoUrl": "https://example.com/compliance-info",
      "RemediationStrings": [
        {
          "Language": "en_US",
          "Title": "Software Compliance Check",
          "Description": "The required software version is not installed or is outdated. Please install or update to the required version."
        }
      ]
    },
    {
      "SettingName": "ComplianceStatus",
      "Operator": "IsEquals",
      "DataType": "String",
      "Operand": "NotApplicable",
      "MoreInfoUrl": "https://example.com/compliance-info",
      "RemediationStrings": [
        {
          "Language": "en_US",
          "Title": "Software Compliance Check",
          "Description": "The software is not applicable for this device."
        }
      ]
    }
  ]
}
——- Discovery Script ——-
$softwareName = "Autodesk Single Sign On Component"
$requiredVersion = [version]"13.7.7.1807"

# Get the installed software information
$installedSoftware = Get-CimInstance -ClassName Win32_Product | Where-Object { $_.Name -eq $softwareName }

# Initialize the result hash
$result = @{
    SoftwareInstalled = $false
    SoftwareVersion = "0.0.0.0"
    ComplianceStatus = "NotApplicable"
}

# Process each instance if any are found
if ($installedSoftware) {
    $result.SoftwareInstalled = $true
    $isCompliant = $false
    $multipleCopies = ($installedSoftware.Count -gt 1)
    foreach ($software in $installedSoftware) {
        $installedVersion = [version]$software.Version
        $result.SoftwareVersion = $installedVersion.ToString()
        if ($installedVersion -ge $requiredVersion) {
            $isCompliant = $true
        }
    }
    # Determine overall compliance status
    if ($multipleCopies) {
        $result.ComplianceStatus = "NonCompliant"
    } else {
        $result.ComplianceStatus = $isCompliant ? "Compliant" : "NonCompliant"
    }
}

# Return the result as JSON
$result | ConvertTo-Json -Compress
Read More
Outlook blanking out email data when switching account to send from.
In my own experience (and according to ChatGPT), the new Outlook blanking out past email content and the forward date when switching accounts is a known problem in the new version of Outlook. Microsoft has not bothered to fix this bug. This is a problem if you want to forward or respond to an email from a personal account rather than the account on which the email was received. Please join me in sending a message to Microsoft to fix this feature, which exists in classic Outlook; Microsoft was notified when the new Outlook was first introduced and still refuses to do anything. Do Apple, Android, or Linux clients have this problem? It may be time to abandon Microsoft.
What’s New for Planner in Teams in September 2024
This September brings bug fixes, accessibility improvements, and labels in Grid view to the Planner app for Microsoft Teams. We’ve been listening to your feedback and hope these latest features will make managing your tasks even easier. Also, check out our recent AMA to learn more about what’s coming next for the new Planner!
Labels column now in Grid view
Organize and categorize your tasks more efficiently using the labels column in your basic plans’ Grid view. We’ve commonly heard this feature request from customers and hope this feature makes it easier for you to track progress and manage your projects. Try it out by navigating to Grid view for any basic plan and selecting “Add label” for any task in the Labels column. You can edit the name of the label by selecting the label on the task, and filter by label to get a more granular view of your plan.
Improvements to plan accessibility
We are committed to making Planner accessible to everyone. This update includes several improvements as we work towards that vision:
Corrected instruction for the label menu in the task details page: We’ve made it easier to understand and use the label menu, ensuring that everyone can effectively categorize their tasks.
Edit key dates associated with a task using a keyboard: You can now edit start and due dates using your keyboard, making it more convenient for users who prefer keyboard navigation.
Narrator now correctly reads selected values for task’s priority and progress fields
Bug fixes to My Day
We’ve also addressed some bugs to improve your experience with the My Day page in the Planner app. With this latest set of updates, you can expect To Do tasks due today and newly created tasks to show up correctly on My Day.
Share your feedback
Your feedback helps inform our feature updates and we look forward to hearing from you as you try out Planner’s new and existing capabilities! To share your feedback about the new Planner app in Teams, including bugs you’ve observed in our product, you can navigate to the question mark (?) icon in the Planner app and select ‘Feedback’. We also encourage you to share any features you would like to see in the app by adding them to our Planner Feedback Portal.
Learn more about the new Planner
Check out our recent AMA about the new Planner web
Watch the new Planner demos for inspiration on how to get the most out of the new Planner app in Microsoft Tea…
Try out the new Copilot in Planner (preview) today in the new Microsoft Planner in Teams
Check out the new Planner adoption website and explore new resources such as Day in the Life Guides to use Planner for task management, collaborative work management and project management.
Microsoft Tech Community – Latest Blogs –Read More
Project Selection: Portfolio Analysis using Copilot in Excel with Python
One of my favourite features in Project Online, and Project Server, has been the portfolio features that allow you to match your plan portfolio against your business drivers to ensure you are gaining the best strategic value. I created a Power App to do something similar a while back, and when Copilot in Excel with Python was announced, I thought it would be a good time to see if I could recreate what Project does, but in Excel. This is probably a feature that we won’t be rebuilding for new Planner, but it may be a perfect scenario for Copilot to fill the gap, and maybe some agents can help out? Copilot in Excel with Python is available now for Insiders (Excel Advanced Analysis), and I must say I was pretty impressed how straightforward it was to talk Copilot through creating some Python for me to do the analysis – and give me exactly the same choice of projects from my portfolio. I’m not using any insider knowledge to do this, but using public documentation on the maths behind the process from Portfolio Analysis with Microsoft Project Server 2010 (a white paper written for Microsoft by Andrew Lavinsky, now with EY), and if you want to dig deeper, a search on Saaty and the Analytic Hierarchy Process (AHP – the name given to this particular optimization technique) will find more papers going back to the late 1970s. I was keen to check results against Project Online, so I used a similar approach, but this does generally show how you might ask Copilot to come up with different optimization algorithms if you have specific scenarios in mind. I’ll walk through more or less the steps I followed, along with the issues I saw along the way and how I steered Copilot to put things right. At this point I am only looking at the cost side of things. Resourcing is a topic for another day…
Initial Steps – Setting up a Portfolio in Project Online
To give me something to validate against, I configured some drivers and projects in Project Online and set up some driver prioritizations and then mapped my projects to them. I set a budget and forced in a couple of projects. I used the articles at https://learn.microsoft.com/en-us/projectonline/portfolio-analysis-overview to configure some data, if you want to follow along. I used the same names for drivers and plans, although I could not bring myself to have a plan called Lync 2012 Enterprise Deployment – so that was renamed as Teams Enterprise Deployment.
My target is to have my driver priorities match these:
My plan priorities to look something like this:
And my final selection to match up with this group of plans:
Into Excel – Prioritizing Business Drivers!
Starting with the Business Drivers sheet, I soon learned that it was better to keep things tidy in Excel – so I just labelled my drivers A-E, as it made the matrix easier to see, rather than having long text descriptions. From memory I knew this needed to be a symmetrical matrix, with the leading diagonal all ‘1’s as each driver matches itself; the other numbers represented ‘is as important’, ‘is more important’, ‘is much more important’ and finally ‘is extremely more important than’ – with the same for ‘less important’ levels too. I got the numbers wrong first time through (I used 1, 3, 5 and 7) and my percentages were a little bit off, but I still got the same project selection. For accuracy I’ll use the ones from the white paper – 1, 3, 6 and 9, and the reciprocals for the ‘less than’ options. You only need to fill out the top half of the matrix (to the upper right of the ‘1’s) and have the lower half as =1/(the matching cell). So my matrix ended up looking like this to match the driver comparisons I had chosen in Project Online:
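If you want to replicate the matrix structure outside Excel, the reciprocal layout is easy to sketch in Python. The judgement values below are illustrative only, not my exact spreadsheet:

```python
import numpy as np

# Hypothetical upper-triangle judgements for drivers A-E, using the white
# paper's scale: 1, 3, 6, 9 for "as / more / much more / extremely more
# important", with reciprocals for the "less important" direction.
upper = {
    (0, 1): 6, (0, 2): 9, (0, 3): 6, (0, 4): 3,
    (1, 2): 3, (1, 3): 1/3, (1, 4): 1/6,
    (2, 3): 1/6, (2, 4): 1/9,
    (3, 4): 1/3,
}

n = 5
matrix = np.ones((n, n))          # leading diagonal is all 1s
for (i, j), v in upper.items():
    matrix[i, j] = v              # upper-right: the judgement itself
    matrix[j, i] = 1 / v          # lower-left: the reciprocal, as in the sheet

# Every pair (i, j) x (j, i) should multiply back to 1.
print(np.allclose(matrix * matrix.T, np.ones((n, n))))
```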
I then fired up Copilot and gave the following prompt:
Can you write some Python code to set a ranking for the drivers A, B, C, D and E on the business drivers sheet based on the pair-wise comparison using the Analytic Hierarchy Process. Can you give a consistency ratio for the analysis as well as displaying the priorities calculated for the drivers.
If I hadn’t known about AHP then I think Copilot might still have got to this by starting with basics like asking how to prioritize items against each other – but a little knowledge accelerated this part. You’ll see later how I needed to encourage Copilot to do better…
The answer from Copilot was that it needed to engage overdrive to solve this – or more accurately, to start advanced analysis:
I clicked Start advanced analysis, which created a new sheet; I could see a few cells that had PY in them, so I knew it was starting to do things with Python. After creating a DataFrame (the construct that pandas, the Python library, uses for handling numeric arrays) and copying in my data, which it identified correctly, it came up with some Python to do what I asked – and soon had a result: not 42, but 0.031127. Project shows the consistency ratio as a percentage (around 97% for this analysis), so I did ask Copilot if this was an OK value for a consistency ratio, and it suggested anything under 0.1 was good. It didn’t initially show me the individual priorities, so I had to ask Copilot Can you show the priorities for each driver? The Analysis1 sheet that Copilot was working in then looked like this:
I’ll add to the foot of this post some of the responses and code generated along the way. The beauty of having the Python code here is that you could take it and use it elsewhere. Want to create an Azure Function in Python that would take a call from a Power App and return the priorities? This would be a good starting point – just take it over to VS Code. Or just use Copilot in GitHub. So many possibilities.
So how are we doing? Those numbers look good. Trust me: A was the top driver, E second, and then D, B, C. And the percentages differ only in the 3rd decimal place. I call that pretty close.
Applying the priorities to plans
In Project Online to get the priorities for the plans, a similar approach is taken as for prioritizing the drivers, saying for each one what impact it will have for the driver concerned:
Behind the scenes, as per the white paper, these levels of None to Extreme can be represented by the numbers 0, 1, 3, 6 and 9. In my Projects sheet I show this same data, for consumption by Copilot and Python:
Then I asked Copilot:
Can you then apply these driver priorities to the Projects on the Projects sheet, in rows which are aligned to the drivers A to E referenced by column using the Ranking of values 0,1,3,6 and 9. 9 is a higher alignment. The aim is to prioritise the projects that have the most alignment with the most important drivers.
It certainly gave a result here, but I needed to see it normalized to check if the percentages matched Project – this was the Analysis1 sheet at this point, before the values were normalized, where you can also see some of the Python in cell A51 that was doing the work:
A slight break here for some gotchas I hit along the way. I did see issues a couple of times where Copilot would think there were only 5 plans, as it assumed a similar shape for the DataFrame as in the driver analysis. It got things right this time, maybe as I’d changed focus to the Projects sheet. Anyway, before, I’d just give it a nudge and tell it what the project DataFrame should be, and all was well. Another gotcha is that I am writing the blog as I go along – and Copilot seemed to lose interest due to the long gap between prompts… It did pick up again without having to start over.
And the normalized result – looking good with just some differences in the 2nd decimal from the result in Project Online:
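The weighting step itself is simple enough to check by hand. As a sketch – the impact words, driver priorities and numbers below are invented for illustration, not the workbook’s actual data:

```python
# Map the white paper's impact labels to their numeric values.
impact_scale = {"None": 0, "Low": 1, "Moderate": 3, "Strong": 6, "Extreme": 9}

# Assumed driver priorities (A-E) from an AHP run, and one project's
# impact ratings against each driver, in the same order.
driver_priorities = [0.42, 0.10, 0.05, 0.13, 0.30]
project_impacts = ["Extreme", "Low", "None", "Moderate", "Strong"]

# The project's raw score: sum of (impact value x driver priority).
score = sum(impact_scale[label] * p
            for label, p in zip(project_impacts, driver_priorities))
print(round(score, 2))
```

Normalizing these raw scores across all projects (dividing each by the total) gives the percentage priorities that can be compared against Project Online.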
Budgets, and Mandatory Projects
The next step was to get Copilot to look at my Budget sheet. I did try to have the driver rankings and budgets all in the same sheet, but it seemed to be less trouble to keep things clean and have these separate. My next prompt for Copilot:
Looking at the Budgets sheet, it would cost over $2.5M to execute all projects. If I have just $1.5M which projects should I execute based on the priorities calculated?
And Copilot’s response? First, what was on the Analysis1 sheet (I added the total), and then what was returned in the chat area – I haven’t really been telling that side of the story, but it has kept me informed of what it was doing and the resultant DataFrames, as well as showing the Python being used (see the foot of this blog post):
Based on the priorities calculated and the $1.5M budget constraint, the following projects should be executed:
Shared GPS devices for transportation
Helmet with Integrated Sunvisor
Voice Recognition Product
Acquisition Target Analysis
E-campaign to Bloggers
Catalog Publishing
Removable protective lining
Print Advertising Campaign System
This is all well and good, but it has a couple of problems. Firstly, there are a couple of projects that I really must do, even though they don’t rank highly against these business drivers, because they keep the company running (the ones showing Force-in on the Project Online screenshot above). And after I’ve solved that issue – these projects aren’t delivering the best strategic value I could possibly obtain. They are just the highest-ranked plans, chosen until the money runs out. The helmet project is very expensive, and even if it does rank very high, could the money be better spent to give higher strategic value?
Let’s sort out the projects I need to do first:
If I absolutely must execute the Operations Management plan, and the Internal Application Customization plan, which projects would also fit in the $1.5M budget?
Copilot responded:
To include the Operations Management and Internal Application Customization plans within the $1.5M budget, I will first subtract their costs from the total budget. Then, I will select the highest priority projects that fit within the remaining budget. I will write Python code to perform this selection.
Including the Operations Management and Internal Application Customization plans, the following projects fit within the $1.5M budget:
Shared GPS devices for transportation
Helmet with Integrated Sunvisor
Voice Recognition Product
E-campaign to Bloggers
Operations Management
Internal Application Customization
Cool, but how to get a better optimization? Maybe Copilot has some ideas?
The method used here just adds the top priority plan until the budget is reached. Is there any other algorithm that could be used to optimize the strategic value by trying different combinations of plans together?
At this point – and something similar had popped up at other times – I’d see Copilot get an error, then try something else instead. Cool! Before, I’d seen it reference the error, work out why it got the error, then try another approach.
This time, however, it did appear to be stuck. Looking at the code, it seems it was using Normalized_Score when it didn’t have that column in the DataFrame it was considering. Of course, I could have debugged the Python myself (eventually…).
Re-thinking the prompts from the beginning
Instead, I reviewed the responses leading up to this to see where Copilot was headed:
A more sophisticated approach to optimize the strategic value while considering different combinations of plans is to use a knapsack algorithm. The knapsack problem is a classic optimization problem that can help in selecting the best combination of projects to maximize the total value within a given budget. I will write Python code to implement the knapsack algorithm to find the optimal combination of projects.
Maybe I could give it a bit of a push in the right direction. I cleaned up my sheets so that no extraneous information existed (I also had a ‘benefits’ value on my Budget sheet which I wasn’t using) and crafted a set of prompts to start from the beginning, not bothering with some of the normalizing steps, which I’d only needed to validate results against my known percentages. I also put the final selection in a single prompt, giving it the budget, the clue that it should use knapsack, and also telling it my mandatory plans. These were my 3 final prompts:
Can you write some Python code to set a ranking for the drivers A, B, C, D and E on the business drivers sheet based on the pair-wise comparison using the Analytic Hierarchy Process. Can you give a consistency ratio for the analysis and display the priorities calculated for each driver.
Can you then apply these driver priorities to the Projects on the Projects sheet, in rows which are aligned to the drivers A to E referenced by column using the Ranking of values 0,1,3,6 and 9. 9 is a higher alignment. The aim is to prioritise the projects that have the most alignment with the most important drivers.
Looking at the Budgets sheet, can you use the knapsack approach to choose the optimal set of projects to maximize strategic value, based on a strict budget of $1.5M which should not be exceeded and also ensuring that the mandatory projects, Operations Management plan and Internal Application Customization plan are included. Do not include fractions of projects, all or nothing.
The ‘partial project’ piece was added after seeing my budget apparently exceeded; on closer inspection of the results and DataFrame, it had selected 0.8 of a project to consume the final $120K. Pretty sure that for most projects you don’t get 80% of the strategic impact for doing 80% of the plan… But it had been easy to see what it was doing, so debugging wasn’t a problem, and modifying the prompt usually got things back on track. I also had the best experience when the sheet that contained the data needed for each step was active when the prompt was entered; otherwise the DataFrames could sometimes be the wrong shape. I’d see it only considering 5 projects if the sheet with 5 drivers was still active.
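For reference, the all-or-nothing selection with mandatory plans can be sketched as a standard 0/1 knapsack. This is my own illustrative version rather than the exact code Copilot produced (that is in the appendix); the project names and numbers are invented, and costs are scaled to $10K units to keep the DP table small:

```python
# A minimal 0/1 knapsack sketch with mandatory items. Names, costs and
# strategic values are illustrative only.
def select_projects(projects, budget, mandatory):
    """projects: list of (name, cost, value); returns chosen names, sorted."""
    # Commit the mandatory projects first and reduce the remaining budget.
    chosen = [p for p in projects if p[0] in mandatory]
    remaining = budget - sum(cost for _, cost, _ in chosen)
    optional = [p for p in projects if p[0] not in mandatory]

    # Standard 0/1 knapsack over the optional projects.
    n = len(optional)
    dp = [[0.0] * (remaining + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        _, cost, value = optional[i - 1]
        for w in range(remaining + 1):
            dp[i][w] = dp[i - 1][w]
            if w >= cost:
                dp[i][w] = max(dp[i][w], dp[i - 1][w - cost] + value)

    # Walk the table back to recover the selected set.
    w = remaining
    for i in range(n, 0, -1):
        if dp[i][w] != dp[i - 1][w]:
            chosen.append(optional[i - 1])
            w -= optional[i - 1][1]
    return sorted(name for name, _, _ in chosen)

# Budget of 100 ($1.0M in $10K units), with one forced-in plan.
projects = [("Ops Mgmt", 20, 1.0), ("Helmet", 60, 9.0),
            ("GPS", 30, 8.0), ("Catalog", 40, 4.0)]
print(select_projects(projects, 100, {"Ops Mgmt"}))
```

Note how the high-value but expensive “Helmet” project can lose out to a combination of cheaper plans with a higher total value – exactly the improvement over the greedy top-down selection discussed above.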
Final Result!
A perfect match to the $1.38M that Project Online had selected – see the screenshot right at the start. I haven’t shown it here, but I did quiz Copilot on the measure of strategic value compared to the maximum I could have achieved – and got a result around 70%, which also matched that shown by Project. Even with the prompt saying complete projects only, I did sometimes still see the budget met by adding 80% of the next highest priority plan.
Final Thoughts
Comparing to the results from Project Online allowed me to easily verify that I was getting the results I was expecting, giving me confidence I could use the same prompts against a different data set and still make a good selection. To take out more of the uncertainty, rather than using Copilot each time, it would make sense to use my ‘trusted and verified’ Python code – although it might need a little work to allow it to work with variable numbers of drivers. Tip – the value of the variable RI (Random Index) is different for different sizes of matrix.
What are the different values of RI, Random Index, used when carrying out AHP analysis on different sized matrices?
The Random Index (RI) values used in the Analytic Hierarchy Process (AHP) analysis for different sized matrices are as follows:
For a 1×1 matrix: RI = 0.00
For a 2×2 matrix: RI = 0.00
For a 3×3 matrix: RI = 0.58
For a 4×4 matrix: RI = 0.90
For a 5×5 matrix: RI = 1.12
For a 6×6 matrix: RI = 1.24
For a 7×7 matrix: RI = 1.32
For an 8×8 matrix: RI = 1.41
For a 9×9 matrix: RI = 1.45
For a 10×10 matrix: RI = 1.49
These values are used to determine the consistency of the pairwise comparison matrix in AHP.
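Those values slot straight into a consistency check – a quick sketch of the lookup:

```python
# Saaty's Random Index (RI) values, as listed above, keyed by matrix size.
RI = {1: 0.00, 2: 0.00, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(ci, n):
    """CR = CI / RI(n); a CR under 0.10 is conventionally acceptable."""
    if RI[n] == 0:
        return 0.0  # 1x1 and 2x2 matrices are always consistent
    return ci / RI[n]

# Example with an assumed CI for a 5x5 matrix (not my workbook's value).
print(consistency_ratio(0.0349, 5) < 0.1)
```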
Thanks Copilot!
Appendix
The following contains the responses to my final 3 prompts – and the Python generated.
Responses to Prompt 1
To rank the drivers using the Analytic Hierarchy Process (AHP), the pairwise comparison matrix will be used to calculate the priority vector and the consistency ratio. The steps involved are:
Normalize the pairwise comparison matrix.
Calculate the priority vector (eigenvector).
Compute the consistency index (CI) and consistency ratio (CR). I will write Python code to perform these steps and display the priorities and consistency ratio.
"=PY(""# Calculate priorities and consistency ratio using AHP
from numpy.linalg import eig
# Extract the pairwise comparison matrix
pairwise_matrix = Business_Drivers_A1_F6_df.iloc[:, 1:].values
# Normalize the pairwise comparison matrix
column_sums = pairwise_matrix.sum(axis=0)
normalized_matrix = pairwise_matrix / column_sums
# Calculate the priority vector (eigenvector)
priority_vector = normalized_matrix.mean(axis=1)
# Calculate the consistency index (CI)
eigenvalues, _ = eig(pairwise_matrix)
max_eigenvalue = max(eigenvalues.real)
CI = (max_eigenvalue - len(priority_vector)) / (len(priority_vector) - 1)
# Calculate the consistency ratio (CR)
RI = 1.12  # Random Index for a 5x5 matrix
CR = CI / RI
# Create a DataFrame to display the priorities
priorities_df = pd.DataFrame({
    'Driver': Business_Drivers_A1_F6_df['Drivers'],
    'Priority': priority_vector
})
priorities_df, CR"",1)"
The priorities for each driver have been calculated using the Analytic Hierarchy Process (AHP). The consistency ratio (CR) for the analysis is approximately 0.0312, indicating a consistent pairwise comparison. The priorities for each driver are displayed in the DataFrame on the sheet.
Responses to Prompt 2
To prioritize the projects based on the driver priorities, the following steps will be taken:
Multiply each project’s alignment score by the corresponding driver priority.
Sum the weighted scores for each project to get a total score.
Rank the projects based on their total scores. I will write Python code to perform these steps and display the prioritized projects.
"=PY(""# Rank projects based on driver priorities
# Extract the priorities from the previous calculation
priorities = priorities_df['Priority'].values
# Multiply each project's alignment score by the corresponding driver priority
weighted_scores = Projects_A1_F14_df.iloc[:, 1:].values * priorities
# Sum the weighted scores for each project to get a total score
Projects_A1_F14_df['Total_Score'] = weighted_scores.sum(axis=1)
# Rank the projects based on their total scores
Projects_A1_F14_df['Rank'] = Projects_A1_F14_df['Total_Score'].rank(ascending=False)
# Sort the projects by rank
ranked_projects_df = Projects_A1_F14_df.sort_values(by='Rank')
ranked_projects_df"",1)"
The projects have been ranked based on their alignment with the driver priorities. The ranked projects are displayed in the DataFrame on the sheet, with the highest priority project being ‘Shared GPS devices for transportation’.
Responses to Prompt 3
To solve this problem using the knapsack approach, the following steps will be taken:
Extract the budget and strategic value for each project from the Budgets sheet.
Ensure that the mandatory projects are included in the selection.
Use the knapsack algorithm to select the optimal set of projects that maximizes the total strategic value without exceeding the budget. I will write Python code to perform these steps and display the selected projects.
"=PY(""# Select optimal set of projects using knapsack approach
# Extract the budget and strategic value for each project
budgets = Budgets_A1_B15_df['Budget'].values
strategic_values = Projects_A1_F14_df['Total_Score'].values
# Ensure that the mandatory projects are included
mandatory_projects = ['Operations Management', 'Internal Application Customization']
mandatory_indices = [Budgets_A1_B15_df[Budgets_A1_B15_df['Project'] == project].index[0] for project in mandatory_projects]
# Define the knapsack problem
num_projects = len(budgets)
# Initialize the DP table
dp = [[0 for _ in range(1500001)] for _ in range(num_projects + 1)]
# Fill the DP table
for i in range(1, num_projects + 1):
    for w in range(1500001):
        if i - 1 in mandatory_indices:
            dp[i][w] = dp[i - 1][w - budgets[i - 1]] + strategic_values[i - 1] if w >= budgets[i - 1] else dp[i - 1][w]
        else:
            if w >= budgets[i - 1]:
                dp[i][w] = max(dp[i - 1][w], dp[i - 1][w - budgets[i - 1]] + strategic_values[i - 1])
            else:
                dp[i][w] = dp[i - 1][w]
# Find the selected projects
selected_projects = []
w = 1500000
for i in range(num_projects, 0, -1):
    if dp[i][w] != dp[i - 1][w]:
        selected_projects.append(i - 1)
        w -= budgets[i - 1]
# Ensure mandatory projects are included
for idx in mandatory_indices:
    if idx not in selected_projects:
        selected_projects.append(idx)
# Create a DataFrame to display the selected projects
selected_projects_df = Budgets_A1_B15_df.iloc[selected_projects]
selected_projects_df"",1)"
The optimal set of projects has been selected using the knapsack approach, ensuring that the mandatory projects ‘Operations Management’ and ‘Internal Application Customization’ are included. The selected projects are displayed in the DataFrame on the sheet.