Email: helpdesk@telkomuniversity.ac.id

This portal is for internal use only.

Application Package Repository Telkom University

Month: May 2025

MATLAB Answers is provisionally back?
Matlab News

MATLAB Answers is provisionally back?

PuTI / 2025-05-21

It is currently working for me? meta MATLAB Answers — New Questions

Saveobj and Loadobj for arrays of objects
Matlab News

Saveobj and Loadobj for arrays of objects

PuTI / 2025-05-21

I am trying to customize the save() and load() process for a classdef. Currently, I am using old-style saveobj() and loadobj() methods, as I am still trying to get familiar with the newer approach.
Unlike most class methods, calling saveobj and loadobj on an array of objects,
saveobj(objArray)
loadobj(objArray)
does not result in the entirety of objArray being passed to the user-provided code. Instead, there is some background MATLAB process that invokes them one element at a time, equivalent to:
for i = 1:numel(objArray)
    saveobj(objArray(i))
    loadobj(objArray(i))
end
However, my saveobj() and loadobj() need to know things about the entire array being saved, and calling them one element at a time hides this information. Is there any way to overcome this problem? As I said, I am still getting acquainted with the newer custom serialization and deserialization tools. Is there any chance that could hold a solution? saveobj, loadobj, serialization, deserialization, save, load, oop, classdef MATLAB Answers — New Questions
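Since MATLAB invokes saveobj and loadobj once per element, one commonly suggested workaround is to wrap the array in a scalar container object, so the entire array passes through a single saveobj/loadobj call. A minimal sketch (the class name ArrayBundle is hypothetical, not from the question):

```matlab
% Hedged sketch: wrap the object array in a scalar container so that
% saveobj/loadobj see the whole array at once rather than one element
% at a time. Save with: save('data.mat', 'bundle') where
% bundle = ArrayBundle(objArray).
classdef ArrayBundle
    properties
        Items   % the full object array, visible in its entirety here
    end
    methods
        function obj = ArrayBundle(items)
            obj.Items = items;
        end
        function s = saveobj(obj)
            % obj is scalar, but obj.Items holds the whole array,
            % so array-wide information is available at save time
            s.Items = obj.Items;
        end
    end
    methods (Static)
        function obj = loadobj(s)
            % reconstruct the container (and hence the whole array)
            obj = ArrayBundle(s.Items);
        end
    end
end
```

The cost of this approach is that callers save and load the bundle rather than the raw array, so it suits workflows where the save/load path is under your control.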


Quest Tool Migrates Protected Email and Files Between Tenants
News

Quest Tool Migrates Protected Email and Files Between Tenants

Tony Redmond / 2025-05-21

Solves the Problem of Migrating Data Protected by Sensitivity Labels

I’ve worked as an advisor with Quest for several years, but I had no indication that they would launch a product to migrate content protected by sensitivity labels from one Microsoft 365 tenant to another. That capability is now available in Quest On Demand Migration.

The tenant migration issue has existed since Microsoft introduced Azure Information Protection labels (now sensitivity labels) in 2016. The problem doesn’t arise with labels that simply mark content as being of a certain nature. It comes into play when sensitivity labels apply rights-management based encryption where usage rights define the level of access granted to individual users for protected files or messages.

The popularity of sensitivity labels has increased over time as more tenants come to understand the value of protecting their most sensitive content using the labeling features built into the Office apps. It’s true that labeling only extends to Office documents and PDFs, but that set covers most files created within Microsoft 365 tenants.

The advent of Microsoft 365 Copilot and its ability to find and use files stored in SharePoint Online and OneDrive for Business means that sensitivity labels are even more important. By themselves, sensitivity labels won’t stop apps like BizChat finding sensitive documents, but they can stop Copilot reusing content from those documents in its responses. The DLP policy for Microsoft 365 Copilot imposes a better block by stopping Copilot finding documents assigned specific sensitivity labels.

The growth in protected content creates a problem for tenant-to-tenant migration projects. Many products are available to move Exchange mailboxes and SharePoint files between tenants. However, migration products usually assume that the data they move is unprotected and that users will be able to access the content once it reaches the target tenant. That assumption doesn’t hold true when sensitivity labels protect email and files. The challenge is to move protected items from the source tenant in such a way that protection is maintained and respected by the target tenant.

Methods to Remove Sensitivity Labels from Files

Until now, the guidance for source tenants is to remove protection from content before migration to the target tenant. There are a couple of ways of doing this, starting off by assigning an account super-user privilege to allow them to remove sensitivity labels from files. Finding and processing protected files is an intensely manual process that’s prone to error. It will take a long time to prepare, move, and check any reasonable collection of labelled files, like the 5,188 items with the Public label as reported by the Purview Data Explorer (Figure 1).

Figure 1: Purview Data Explorer lists items with sensitivity labels

The SharePoint Online PowerShell module includes the Unlock-SPOSensitivityLabelEncryptedFile cmdlet. Administrators can use the cmdlet to remove protection from files in SharePoint sites and OneDrive for Business accounts. It is possible to script the removal of labels from files, but the automation journey breaks down when the files reach the target tenant and need to be relabeled.

SharePoint also supports the assignSensitivityLabel Graph API, which can remove or assign labels on files. However, assignSensitivityLabel is a metered API: each call costs $0.00185 (USD), billed through an Azure subscription. That doesn’t seem like a big fee until tens of thousands of documents must be processed to remove labels in the source tenant and reapply them in the target tenant; relabeling 50,000 documents at both ends, for instance, means 100,000 calls, or $185.

No Solution for Protected Exchange Messages

Note that Exchange Online is missing from the discussion. That’s because all the methods described so far don’t handle email. I don’t know how clients like Outlook and OWA apply sensitivity labels to messages (it’s likely done using APIs from the Microsoft Information Protection SDK), but no cmdlets or Graph APIs are available to remove labels from messages or apply sensitivity labels in bulk to a set of messages migrated in mailboxes moved from one tenant to another.

Migrating Protected Content Between Tenants

All of which means that Quest’s claim to migrate protected content from Exchange Online, SharePoint Online, and OneDrive for Business is very interesting. It’s the first ISV migration offering that I know of which offers such a capability.

Reading the announcement and the accompanying Quest Knowledge Base article gives some insight into how the On Demand product handles protected items. A discovery process (like running the Get-Label cmdlet) finds the set of sensitivity labels in the source tenant. The labels from the source tenant are mapped to labels in the target in some form of table. Normal migration processing moves the data, and some form of post-migration task then updates the labels from the source tenant to matching labels for the target. Quest doesn’t describe what magic is used to make sure that protected content works when it reaches the target tenant, but the knowledge base article mentions the Microsoft Information Protection SDK, so it’s likely that On Demand uses MIP SDK API calls to read and update sensitivity labels for the migrated items.

User-Defined Permissions and Keys

Although creating the capability to move protected content between tenants is a great step forward for migration projects, there are always edge cases to consider. Sensitivity labels with user-defined permissions are an example. These labels are challenging because the permissions vary from item to item. SharePoint Online only recently gained support for sensitivity labels with user-defined permissions, and it’s interesting that Quest claims support for user-defined permissions out of the box.

Quest doesn’t mention sensitivity labels with double-key encryption (DKE), nor do they explain if On Demand supports migration of sensitivity labels with encryption based on customer keys rather than Microsoft-managed keys (sometimes called bring-your-own-key or BYOK). There’s a bunch of complexity involved in moving key management between tenants and it would be surprising if Quest supported BYOK. Thankfully, most customers use Microsoft-managed keys with sensitivity labels because it simplifies operations.

Let the Competition Begin

Overall, it’s great that an ISV has taken on and solved the challenge of moving protected content between tenants. The nature of competition is that once a migration vendor introduces a new capability, their competitors respond. We might see even more interesting developments in this space over the coming months.


Microsoft Build 2025: The age of AI agents and building the open agentic web
Microsoft News

Microsoft Build 2025: The age of AI agents and building the open agentic web

PuTI / 2025-05-20

TL;DR? Hear the news as an AI-generated audio overview made using Microsoft 365 Copilot. You can read the transcript here.



https://msblogs.thesourcemediaassets.com/2025/05/Build2025_OMB_AI-generated_AudioOverview_Final.mp3

We’ve entered the era of AI agents. Thanks to groundbreaking advancements in reasoning and memory, AI models are now more capable and efficient, and we’re seeing how AI systems can help us all solve problems in new ways.

For example, 15 million developers are already using GitHub Copilot, and features like agent mode and code review are streamlining the way they code, check, deploy and troubleshoot.

Hundreds of thousands of customers are using Microsoft 365 Copilot to help research, brainstorm and develop solutions, and more than 230,000 organizations — including 90% of the Fortune 500 — have already used Copilot Studio to build AI agents and automations.

Companies like Fujitsu and NTT DATA are using Azure AI Foundry to build and manage AI apps and agents that help prioritize sales leads, speed proposal creation and surface client insights. Stanford Health Care is using Microsoft’s healthcare agent orchestrator to build and test AI agents that can help alleviate the administrative burden and speed up the workflow for tumor board preparation.

Developers are at the center of it all. For 50 years Microsoft has been empowering developers with tools and platforms to turn their ideas into reality, accelerating innovation at every stage. From AI-driven automation to seamless cloud integration and more, it’s exciting to see how developers are fueling the next generation of digital transformation.

So, what’s next?

We envision a world in which agents operate across individual, organizational, team and end-to-end business contexts. This emerging vision of the internet is an open agentic web, where AI agents make decisions and perform tasks on behalf of users or organizations.

At Microsoft Build we’re showing the steps we’re taking to make this vision a reality through our platforms, products and infrastructure. We’re putting new models and coding agents in the hands of developers, introducing enterprise-grade agents, making our platforms like Azure AI Foundry, GitHub and Windows the best places to build, embracing open protocols and accelerating scientific discovery with AI, all so that developers and organizations can go invent the next big thing.

Here’s a glimpse at just a few of the announcements today:

Reimagining the software development lifecycle with AI

AI is fundamentally shifting how code is written, deployed and maintained. Developers are using AI to stay in the flow of their environment longer and to shift their focus to more strategic tasks. And as the software development lifecycle is being transformed, we’re providing new features across platforms including GitHub, Azure AI Foundry and Windows that enable developers to work faster, think bigger and build at scale.

  • GitHub Copilot coding agent and new updates to GitHub Models: GitHub Copilot is evolving from an in-editor assistant to an agentic AI partner with a first-of-its-kind asynchronous coding agent integrated into the GitHub platform. We’re adding prompt management, lightweight evaluations and enterprise controls to GitHub Models so teams can experiment with best-in-class models, without leaving GitHub. Microsoft is also open-sourcing GitHub Copilot Chat in VS Code. The AI-powered capabilities from GitHub Copilot extensions will now be part of the same open-source repository that drives the world’s most popular development tool. As the home of over 150 million developers, this reinforces our commitment to open, collaborative, AI-powered software development. Learn more about GitHub Copilot updates.
  • Introducing Windows AI Foundry: For developers, Windows remains one of the most open and widely used platforms available, with scale, flexibility and growing opportunity. Windows AI Foundry offers a unified and reliable platform supporting the AI developer lifecycle across training and inference. With simple model APIs for vision and language tasks, developers can manage and run open source LLMs via Foundry Local or bring a proprietary model to convert, fine-tune and deploy across client and cloud. Windows AI Foundry is available to get started today. To learn more visit our Windows Developer Blog.
  • Azure AI Foundry Models and new tools for model evaluation: Azure AI Foundry is a unified platform for developers to design, customize and manage AI applications and agents. With Azure AI Foundry Models, we’re bringing Grok 3 and Grok 3 mini models from xAI to our ecosystem, hosted and billed directly by Microsoft. Developers can now choose from more than 1,900 partner-hosted and Microsoft-hosted AI models, while managing secure data integration, model customization and enterprise-grade governance. We’re also introducing new tools like the Model Leaderboard, which ranks the top-performing AI models across different categories and tasks, and the Model Router, designed to select an optimal model for a specific query or task in real-time. Read more about Azure AI Foundry Models.

Making AI agents more capable and secure

AI agents are not only changing how developers build, but how individuals, teams and companies get work done. At Build, we’re unveiling new pre-built agents, custom agent building blocks, multi-agent capabilities and new models to help developers and organizations build and deploy agents securely to help increase productivity in meaningful ways.

  • With the general availability of Azure AI Foundry Agent Service, Microsoft is bringing new capabilities to empower professional developers to orchestrate multiple specialized agents to handle complex tasks, including bringing Semantic Kernel and AutoGen into a single, developer-focused SDK and Agent-to-Agent (A2A) and Model Context Protocol (MCP) support. To help developers build trust and confidence in their AI agents, we’re announcing new features in Azure AI Foundry Observability for built-in observability into metrics for performance, quality, cost and safety, all incorporated alongside detailed tracing in a streamlined dashboard. Learn more about how to deploy enterprise-grade AI agents in Azure AI Foundry Service.
  • Discover, protect and govern in Azure AI Foundry: With Microsoft Entra Agent ID, now in preview, agents that developers create in Microsoft Copilot Studio or Azure AI Foundry are automatically assigned unique identities in an Entra directory, helping enterprises securely manage agents right from the start and avoid “agent sprawl” that could lead to blind spots. Apps and agents built with Foundry further benefit from Purview data security and compliance controls. Foundry also offers enhanced governance tools to set risk parameters, run automated evaluations and receive detailed reports. Learn more about Microsoft Entra Agent ID and Azure AI Foundry integrations with Microsoft Purview Compliance Manager.
  • Introducing Microsoft 365 Copilot Tuning and multi-agent orchestration: With Copilot Tuning, customers can use their own company data, workflows and processes to train models and create agents in a simple, low-code way. These agents perform highly accurate, domain-specific tasks securely from within the Microsoft 365 service boundary. For example, a law firm can create an agent that generates documents aligned with its organization’s expertise and style. Additionally, new multi-agent orchestration in Copilot Studio connects multiple agents, allowing them to combine skills and tackle broader, more complex tasks. Check out the Microsoft 365 blog to learn how to access these new tools as well as the Microsoft 365 Copilot Wave 2 spring release, which has moved to general availability and begins rolling out today.

Supporting the open agentic web

To realize the future of AI agents, we’re advancing open standards and shared infrastructure to provide unique capabilities for customers.

  • Supporting Model Context Protocol (MCP): Microsoft is delivering broad first-party support for Model Context Protocol (MCP) across its agent platform and frameworks, spanning GitHub, Copilot Studio, Dynamics 365, Azure AI Foundry, Semantic Kernel and Windows 11. In addition, Microsoft and GitHub have joined the MCP Steering Committee to help advance secure, at-scale adoption of the open protocol and announced two new contributions to the MCP ecosystem, an updated authorization specification, which enables people to use their existing trusted sign-in methods to give agents and LLM-powered apps access to data and services such as personal storage drives or subscription services, and the design of an MCP server registry service, which allows anyone to implement public or private, up-to-date, centralized repositories for MCP server entries. Check out the GitHub repository. As we expand our MCP capabilities, our top priority is to ensure we’re building upon a secure foundation. To learn more about this approach see: Securing the Model Context Protocol: Building a Safe Agentic Future on Windows.
  • A new open project called NLWeb: Microsoft is introducing NLWeb, which we believe can play a similar role to HTML for the agentic web. NLWeb makes it easy for websites to provide a conversational interface for their users with the model of their choice and their own data, allowing users to interact directly with web content in a rich, semantic manner. Every NLWeb endpoint is also an MCP server, so websites can make their content easily discoverable and accessible to AI agents if they choose. Learn more here.

Accelerating scientific discovery with AI

Science may be one of the most important applications of AI, helping to tackle humanity’s most pressing challenges, from drug discovery to sustainability. At Build we’re introducing Microsoft Discovery, an extensible platform built to empower researchers to transform the entire discovery process with agentic AI, helping research and development departments across various industries accelerate the time to market for new products and accelerate and expand the end-to-end discovery process for all scientists. Learn more here.

This is only a small selection of the many exciting features and updates we will be announcing at Build. We’re looking forward to connecting with those who have registered to join us virtually and in-person, for keynote sessions, live code deep dives, hack sessions and more — much of which will be available on demand.

Plus, you can get more on all these announcements by exploring the Book of News, the official compendium of all today’s news.

The post Microsoft Build 2025: The age of AI agents and building the open agentic web appeared first on The Official Microsoft Blog.


test one two three four
Matlab News

test one two three four

PuTI / 2025-05-20

this is a test to see whether posting questions is back working meta, test MATLAB Answers — New Questions

Why Copilot Access to “Restricted” Passwords Isn’t as Big an Issue as Uploading Files to ChatGPT
News

Why Copilot Access to “Restricted” Passwords Isn’t as Big an Issue as Uploading Files to ChatGPT

Tony Redmond / 2025-05-20

Unless You Consider Excel Passwords to be Real Passwords

I see that some web sites have picked up the penetration test story about using Microsoft 365 Copilot to extract sensitive information from SharePoint. The May 14 Forbes.com story is an example. The headline of “New Warning — Microsoft Copilot AI Can Access Restricted Passwords” is highly misleading.

Microsoft 365 Copilot and penetration tests.

Unfortunately, tech journalists and others can rush to comment without thinking an issue through, and that’s what I fear has happened in many of the remarks I see in places like LinkedIn discussions. People assume that a much greater problem exists when if they would only think things through, they’d see the holes in the case being presented.

Understanding the Assumptions made by the Penetration Test

As I pointed out in a May 12 article, the penetration test was interesting (and did demonstrate just how weak Excel passwords are). However, the story depends on three major assumptions:

  • Compromise: The attacker has control of an Entra ID account with a Microsoft 365 Copilot license. In other words, the target tenant is compromised. In terms of closing off holes for attackers to exploit, preventing access is the biggest problem in the scenario. All user accounts should be protected with strong multifactor authentication like the Microsoft authenticator app, passkeys, or FIDO-2 keys. SMS is not sufficient, and basic authentication (just passwords) is just madness.
  • Poor tenant management: Once inside a tenant and using a compromised account, Microsoft 365 Copilot will do what the attacker asks it to do, including finding sensitive information like a file containing passwords. However, Copilot cannot find information that is unavailable to the signed-in user. If the tenant’s SharePoint Online deployment is badly managed without well-planned and well-managed access controls, then Copilot will happily find anything that the user’s access allows it to uncover. This is not a problem for Copilot: it is a failure of tenant management that builds on the first failure to protect user accounts appropriately.
  • Failure to deploy available tools: Even in the best-managed SharePoint Online deployment, users can make mistakes when configuring access. Users can also follow poor practice, such as storing important files in OneDrive for Business rather than SharePoint Online. But tenants with Microsoft 365 Copilot licenses can mitigate user error with tools available to them such as Restricted Content Discovery (RCD) and the DLP policy for Microsoft 365 Copilot. The latter requires the tenant to deploy sensitivity labels too, but that’s part of the effort required to protect confidential and sensitive information.

I’m sure any attacker would love to find an easily-compromised tenant where they can gain control over accounts that have access to both badly managed SharePoint Online sites that hold sensitive information and Microsoft 365 Copilot to help the attackers find that information. Badly-managed and easily-compromised Microsoft 365 tenants do exist, but it is my earnest hope that companies who invest in Microsoft 365 Copilot have the common sense to manage their tenants properly.

Uploading SharePoint and OneDrive Files to ChatGPT

Personally speaking, I’m much more concerned about users uploading sensitive or confidential information to OpenAI for ChatGPT to process. The latest advice from OpenAI explains how the process works for their Deep Research product. Users might like this feature because they can have their documents processed by AI. However, tenant administrators and anyone concerned with security or compliance might have a different perspective.

I covered the topic of uploading SharePoint and OneDrive files to ChatGPT on March 26 and explained that the process depends on an enterprise Entra ID app (with app id e0476654-c1d5-430b-ab80-70cbd947616a) to gain access to user files. Deep Research is different and its connector for SharePoint and OneDrive is in preview, but the basic principle is the same: a Graph-based app uploads files for ChatGPT to process. If that app is blocked (see my article to find out how) or denied access to the Graph permission needed to access files, the upload process doesn’t work.

Set Your Priorities

I suggest that it’s more important to block uploading of files from a tenant to a third-party AI service where you don’t know how the files are managed or retained. It certainly seems like a more pressing need than worrying about the potential of an attacker using Microsoft 365 Copilot to run riot over SharePoint, even if a penetration test company says that this can happen (purely as a public service, and not at all to publicize their company).

At least, that’s assuming user accounts are protected with strong multifactor authentication…



Is it possible to make this tiny loop faster?
Matlab News

Is it possible to make this tiny loop faster?

PuTI / 2025-05-19

It seems that it might be possible to make this loop faster. Does anyone have any thoughts? I have to call a loop like this millions of times in a larger workflow, and it is getting to be the slowest part of the code. I appreciate any insights!
a = 2; % constant
b = 3; % constant
n = 10; % number of elements of outputs c and s
% n could be up to 100

% preallocate and set the initial conditions
c = zeros(n,1);
s = c;
c(1) = 3; % set the initial condition for c
s(1) = 1; % set the initial condition for s

% run the loop
for i = 2:n
c(i) = a*c(i-1) -b*s(i-1);
s(i) = b*c(i-1) + a*s(i-1);
end

vectorization, for loop, speed MATLAB Answers — New Questions
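One way to avoid the loop entirely, since the update is a fixed linear recurrence: writing z = c + 1i*s, each step multiplies z by (a + 1i*b), so the whole sequence is just a set of complex powers. A sketch using the same constants and initial conditions as above (results match the loop up to floating-point round-off):

```matlab
a = 2; b = 3; n = 10;                    % same constants as the loop version
% c(k) + 1i*s(k) = (a + 1i*b)^(k-1) * (c(1) + 1i*s(1)), with c(1)=3, s(1)=1
z = (3 + 1i) * (a + 1i*b).^(0:n-1).';   % n-by-1 vector of complex powers
c = real(z);                             % c = [3; 3; ...]
s = imag(z);                             % s = [1; 11; ...]
```

Because the result is a single vectorized expression, wrapping it in a small function keeps per-call overhead low when the computation runs millions of times in the larger workflow.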


Solar Wind Battery Hybrid Integration
Matlab News

Solar Wind Battery Hybrid Integration

PuTI / 2025-05-19

I have created a solar system by connecting a PV array to a boost converter operating at the maximum power point using the P&O MPPT technique. I have also created a wind system by connecting a wind turbine to a PMSG, which feeds an uncontrolled rectifier and a boost converter, and I applied P&O MPPT to that boost converter as well. Each system works at MPPT separately. Now I want to integrate both systems onto a common bus that supplies the grid. When I try to integrate them, either the solar or the wind side stops working properly: at low loads the wind system does not operate at MPPT, and at high loads the solar system does not. solar, wind, mppt, hybrid model, dc bus MATLAB Answers — New Questions


Why does it say “invalid email or password” when I reinstall the R2023b product
Matlab News

Why does it say “invalid email or password” when I reinstall the R2023b product

PuTI / 2025-05-19

I have a problem reinstalling MATLAB R2023b: the installer reports "invalid email or password" when I try to install R2025a or any other product. Note that my email and password are correct, and I can log in to my MathWorks account. Could you advise me on how to resolve this issue?
Thanks a lot.

Yours sincerely,
Reda dalila invalid password or email MATLAB Answers — New Questions


TIFF from ImageJ not loading correctly in R2024b (worked in R2022b)
Matlab News

TIFF from ImageJ not loading correctly in R2024b (worked in R2022b)

PuTI / 2025-05-19

Hi,
I’ve been using MATLAB to analyze 3D confocal images saved as TIFF files. My process is to get the original images from NIS-Elements, then open them in ImageJ to crop just the droplet region, and save that as a TIFF. In MATLAB R2022b, this TIFF input would allow the program to do a full 3D reconstruction of the droplet and calculate the data I need.
But now that I’m using R2024b, the same TIFF files don’t load properly. The 3D reconstruction step stops and doesn’t work, and I get no data even when I test it using the exact same TIFF files and inputs that worked previously.
I’m wondering if MATLAB R2024b handles TIFF files differently than R2022b? Are there specific settings I should use when saving TIFFs from ImageJ to make sure they work with the latest MATLAB? Or would this be entirely a different problem?
Thanks! tiff, 3d reconstruction, matlab2024b, image proces MATLAB Answers — New Questions
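If it helps with debugging, a small diagnostic along these lines can show what R2024b actually sees in the file. The file name is a placeholder, and the idea is simply to compare the reported page count, bit depth, and color type between a TIFF that works and one that doesn't, since ImageJ can save TIFF stacks with metadata that different readers interpret differently:

```matlab
% Inspect what MATLAB sees in the (hypothetical) ImageJ-exported TIFF
info = imfinfo('droplet.tif');
fprintf('%d pages, %d-bit, %s\n', numel(info), info(1).BitDepth, info(1).ColorType);

% Read the full stack page by page, preserving the stored numeric class
firstPage = imread('droplet.tif', 1);
stack = zeros(info(1).Height, info(1).Width, numel(info), 'like', firstPage);
for k = 1:numel(info)
    stack(:,:,k) = imread('droplet.tif', k, 'Info', info);
end
```

If the page count or bit depth differs from what the R2022b-era files reported, that would point to how the file was saved rather than to a change in MATLAB.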


Regarding using trainnet, testnet in binary image classification (size difference between network output and test data output)
Matlab News

Regarding using trainnet, testnet in binary image classification (size difference between network output and test data output)

PuTI / 2025-05-19

Hello, fellow MATLAB users,

I'm trying to build a simple binary classification network. The model is designed to check whether an image contains a certain object or not.

The input datastore combines an image datastore and a label datastore, as shown:
imdsTrain = imageDatastore(trainingDataTbl.imageFileName);
imdsTrain.Labels = trainingDataTbl.existence;
imdsTrain.Labels = categorical(imdsTrain.Labels)
labelsTrain = categorical(trainingDataTbl.existence);
ldsTrain = arrayDatastore(labelsTrain);
cdsTrain = combine(imdsTrain, ldsTrain);
(I know imdsTrain already holds the label data, but I modified it while debugging the error; it doesn't matter here.)
Each label is one of two categories: True or False.

The network structure, a simple CNN, is as follows:
fcn = @(X) X(:,:,1);
bClayers = [
imageInputLayer([800 800 3],"Name","imageinput")
functionLayer(fcn, "Name","gray")
convolution2dLayer([5 5],8,"Name","conv","Padding","same")
reluLayer("Name","relu")
maxPooling2dLayer([8 8],"Name","maxpool","Padding","same","Stride",[8 8])
convolution2dLayer([3 3],16,"Name","conv_1","Padding","same")
reluLayer("Name","relu_1")
maxPooling2dLayer([5 5],"Name","maxpool_1","Padding","same","Stride",[5 5])
fullyConnectedLayer(2,"Name","fc")
softmaxLayer("Name","softmax")];

I set the options as below:
options = trainingOptions("adam", ...
GradientDecayFactor=0.9, ...
SquaredGradientDecayFactor=0.999, ...
InitialLearnRate=0.001, ...
LearnRateSchedule="none", ...
MiniBatchSize=1, ...
L2Regularization=0.0005, ...
MaxEpochs=4, ...
BatchNormalizationStatistics="moving", ...
DispatchInBackground=false, ...
ResetInputNormalization=false, ...
Shuffle="every-epoch", ...
VerboseFrequency=20, ...
CheckpointPath=tempdir);

I set MiniBatchSize to 1 because, for a reason I don't understand, an error came up when I executed the trainnet function:
net = trainnet(cdsTrain, bClayers, "crossentropy", options);
The error message says that the size of the predictions (the network output) does not match the size of the targets (the ground-truth labels), and the target size is affected by MiniBatchSize:
Error using validateTrueValues (line 54): The sizes of the predictions and targets must match. Size of predictions: 2(C) × 1(B). Size of targets: 2(C) × 16(B).
(Translated from the original Korean error message.)
I have no idea why this error occurs. How can I adjust MiniBatchSize or modify the code so that it runs successfully?

I trained with a mini-batch size of 1 anyway, and another problem happens at test time:
metricValues = testnet(net, cdsTest, "accuracy");
Even though I build the training data and test data with the same size and format, the code fails with an error saying that the sizes of the network output and the targets (the test data) must match:
Error using testnet (line 40): For the metric "Accuracy", the sizes of the network output and the targets must be the same.
(Translated from the original Korean error message.)

How can I fix this code?

I hope someone can respond to my question.

Thank you for reading. binaryclassification, cnn, trainnet, deeplearning, testnet MATLAB Answers — New Questions
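One possible culprit, offered as a guess from the code rather than a confirmed diagnosis: fcn = @(X) X(:,:,1) indexes away not just the channel dimension but every trailing dimension, including the batch dimension, so the "gray" layer emits a single observation no matter how large the mini-batch is. That would explain predictions of size 2(C) × 1(B) against targets of 2(C) × 16(B), and why the same mismatch reappears in testnet, which uses its own batch size. Keeping the batch dimension explicit may fix both errors:

```matlab
% Select the first channel but keep all remaining dimensions (SSCB layout),
% so a mini-batch of B images stays a mini-batch of B single-channel images.
fcn = @(X) X(:,:,1,:);   % was @(X) X(:,:,1), which also collapsed the batch dimension
```

With the batch dimension preserved, MiniBatchSize can be raised back above 1 in trainingOptions.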


Microsoft Build 2025: Welcoming the Era of AI Agents and Building an Open Agentic Web
Microsoft

Microsoft Build 2025: Welcoming the Era of AI Agents and Building an Open Agentic Web

hilfan / 2025-05-19

In a hurry? Listen to an AI-generated audio summary of this news via Microsoft 365 Copilot. Find the full transcript here.

The era of AI agents has arrived. Thanks to major breakthroughs in reasoning and memory, AI models are now more capable and more efficient. We are beginning to see how AI systems can help solve problems in new and more innovative ways.

For example, 15 million developers already use GitHub Copilot, with features such as agent mode and code review streamlining how they write, check, deploy, and troubleshoot code. Hundreds of thousands of customers use Microsoft 365 Copilot for research, brainstorming, and building solutions. More than 230,000 organizations (including 90% of the Fortune 500) have used Copilot Studio to build AI agents and automations.

Companies such as Fujitsu and NTT DATA use Azure AI Foundry to build and manage AI applications and agents that help prioritize sales leads, speed up proposal creation, and surface client insights.

Meanwhile, Stanford Health Care uses Microsoft's healthcare agent orchestrator to build and test AI agents that reduce administrative burden while accelerating workflows in tumor board preparation.

Developers are at the center of all this change. For 50 years, Microsoft has empowered them with tools and platforms to turn ideas into reality and accelerate innovation at every stage. With AI-driven automation and ever more connected cloud integration, developers are the driving force behind a new generation of digital transformation.

So what comes next?

We imagine a world in which AI agents operate across contexts: for individuals, teams, organizations, and entire end-to-end business processes. This is a new vision for the internet: an open agentic web, in which AI agents can make decisions and carry out tasks on behalf of users and organizations.

At Microsoft Build, we showed concrete steps toward this vision across our platforms, products, and infrastructure. We are equipping developers with the latest models and coding agents, introducing enterprise-grade AI agents, and making Azure AI Foundry, GitHub, and Windows the best places to build. We are also backing open protocols and accelerating scientific discovery with AI, all so that developers and organizations can deliver the next big breakthrough.

Here is a quick look at some of today's announcements:

Transforming Software Development with AI

AI is fundamentally changing how we write, deploy, and maintain code. Developers now use AI to stay in the flow longer and shift their focus to more strategic tasks. As the software development lifecycle transforms, we are introducing new capabilities across GitHub, Azure AI Foundry, and Windows so that developers can work faster, think bigger, and build at greater scale.

  • GitHub Copilot coding agent and the latest updates to GitHub Models: GitHub Copilot is evolving from an in-editor assistant into an agentic AI partner, with the first asynchronous coding agent integrated directly into the GitHub platform. We are bringing prompt management, lightweight evaluations, and enterprise-grade controls to GitHub Models, so teams can experiment with the best models without leaving GitHub. Microsoft is also open-sourcing GitHub Copilot Chat in VS Code. The AI capabilities of the GitHub Copilot extension are now part of the same open-source repository that powers the world's most popular development tool. As the home of more than 150 million developers, this effort reinforces our commitment to open, collaborative, AI-powered software development. Learn more about the GitHub Copilot updates.
  • Introducing Windows AI Foundry: For developers, Windows remains one of the most open and widely used platforms, offering scale, flexibility, and growing opportunity. Windows AI Foundry delivers a unified, reliable platform that supports the full AI development lifecycle, from training to inference. With simple model APIs for vision and language tasks, developers can manage and run open-source LLMs through Foundry Local, or bring their own models to convert, fine-tune, and deploy across client devices and the cloud. Windows AI Foundry is available to try today. Visit the Windows Developer Blog for more information.
  • Azure AI Foundry Models and new tools for model evaluation: Azure AI Foundry is a unified platform for developers to design, customize, and manage AI applications and agents. Through Azure AI Foundry Models, we are bringing xAI's Grok 3 and Grok 3 mini models into our ecosystem, hosted and billed directly by Microsoft. Developers can now choose from more than 1,900 AI models hosted by partners and by Microsoft, while managing secure data integration, model customization, and enterprise-grade governance. We are also launching new tools such as the Model Leaderboard, which ranks the best-performing AI models across categories and tasks, and Model Router, designed to select the optimal model for a given request or task in real time. Learn more about Azure AI Foundry Models.

Advancing the Capabilities and Security of AI Agents

AI agents are changing not only how developers build solutions but also how individuals, teams, and companies get work done. At Build, we introduced the latest ready-to-use agents, building blocks for custom agents, multi-agent capabilities, and new models that help developers and organizations build and deploy agents securely to significantly boost productivity.

  • With the general availability of Azure AI Foundry Agent Service, Microsoft is delivering new capabilities that let professional developers orchestrate multiple specialized agents to handle complex tasks. This includes bringing Semantic Kernel and AutoGen together into a single developer-focused SDK, plus support for Agent-to-Agent (A2A) and the Model Context Protocol (MCP). To increase developers' confidence in the agents they build, we introduced new capabilities in Azure AI Foundry Observability, which provides end-to-end monitoring of performance, quality, cost, and safety metrics, accessible through a unified dashboard with easy-to-use detailed tracing. Learn more about deploying enterprise-grade AI agents with Azure AI Foundry Agent Service.
  • Discover, protect, and govern in Azure AI Foundry: Microsoft Entra Agent ID, now in preview, automatically assigns a unique identity in the Entra directory to agents that developers create in Microsoft Copilot Studio or Azure AI Foundry. This helps enterprises manage agents securely from the start and prevents the "agent sprawl" that could otherwise create blind spots. Applications and agents built with Foundry also benefit from Microsoft Purview's data security and compliance controls. In addition, Foundry offers more advanced governance tools to set risk parameters, run automated evaluations, and produce detailed reports. Learn more about Microsoft Entra Agent ID and Azure AI Foundry's integration with Microsoft Purview Compliance Manager.
  • Introducing Microsoft 365 Copilot Tuning and multi-agent orchestration:
    With Copilot Tuning, customers can use their own company data, workflows, and processes to train models and create agents in a simple, low-code way. These agents can perform specific tasks with high accuracy, securely, and within the boundaries of the Microsoft 365 service. For example, a law firm can create an agent that generates documents reflecting the firm's expertise and house style. In addition, new multi-agent orchestration in Copilot Studio lets multiple agents connect and work together, combining their skills to tackle broader, more complex tasks. Read the Microsoft 365 blog to learn how to access these new tools, including details on the general availability of the Microsoft 365 Copilot Wave 2 Spring Release, which starts rolling out today.

Supporting an Open Agentic Web

To realize the future of AI agents, Microsoft continues to advance open standards and shared infrastructure in order to deliver unique capabilities to customers.

  • Supporting the Model Context Protocol (MCP): Microsoft is delivering broad support for MCP across its agent platforms and frameworks, including GitHub, Copilot Studio, Dynamics 365, Azure AI Foundry, Semantic Kernel, and Windows 11. Microsoft and GitHub have also joined the MCP Steering Committee to help drive secure adoption of this open protocol at scale. Microsoft is introducing two new contributions to the MCP ecosystem: an updated authorization specification that lets people use their existing trusted sign-in methods to grant agents and LLM-powered apps access to data and services, such as personal storage or subscription services; and a design for an MCP server registry service that lets anyone run centralized, up-to-date repositories of public or private MCP servers. For more information, visit the GitHub repository. As we continue to expand MCP capabilities, our primary focus is on building a secure foundation. Read more about this approach in Securing the Model Context Protocol: Building a Safe Agentic Future on Windows.
  • A new open project called NLWeb: Microsoft is introducing NLWeb, an open project that we believe can play a role in the agentic web similar to the one HTML plays today. NLWeb lets websites provide a conversational interface using the model and data of their choice, so users can interact with site content directly in a richer, more semantic way. Every NLWeb endpoint also functions as an MCP server, so websites can make their content discoverable and accessible to AI agents if they choose. Learn more here.

Accelerating Scientific Discovery with AI

Science is one of the most important arenas for applying AI to humanity's biggest challenges, from drug discovery to sustainability. At Build, we introduced Microsoft Discovery, an extensible platform designed to help researchers transform the entire discovery process with agentic AI. It helps R&D departments across industries bring new products to market faster, while accelerating and expanding the end-to-end discovery process for the scientific community. Learn more here.

This is just a glimpse of the features and updates we are announcing at Build. We look forward to seeing everyone who has registered, virtually or in person, for the keynotes, live coding deep dives, hackathon, and more; many sessions are also available on demand.

You can also get the full picture of today's announcements by exploring the Book of News, the official compendium of all of today's news.

 

Microsoft 365 Copilot Gets Viva Insights Service Plans
News

Microsoft 365 Copilot Gets Viva Insights Service Plans

Tony Redmond / 2025-05-19

Two Workplace Analytics Service Plans to Enable Viva Insights

Microsoft message center notification MC1009917 (last updated 25 April 2025, Microsoft 365 roadmap item 471002) announced the inclusion of Viva Insights in the Microsoft 365 Copilot license. The mechanism used is the addition of two "Workplace Analytics" service plans to join the existing eight service plans (Table 1) that make up the Copilot license.

Service Plan | Service Plan SKU | Service Plan Part Number
Microsoft Copilot with Graph-grounded chat (Biz Chat) | 3f30311c-6b1e-48a4-ab79-725b469da960 | M365_COPILOT_BUSINESS_CHAT
Microsoft 365 Copilot in Productivity Apps | a62f8878-de10-42f3-b68f-6149a25ceb97 | M365_COPILOT_APPS
Microsoft 365 Copilot in Microsoft Teams | b95945de-b3bd-46db-8437-f2beb6ea2347 | M365_COPILOT_TEAMS
Power Platform Connectors in Microsoft 365 Copilot | 89f1c4c8-0878-40f7-804d-869c9128ab5d | M365_COPILOT_CONNECTORS
Graph Connectors in Microsoft 365 Copilot | 82d30987-df9b-4486-b146-198b21d164c7 | GRAPH_CONNECTORS_COPILOT
Copilot Studio in Copilot for Microsoft 365 | fe6c28b3-d468-44ea-bbd0-a10a5167435c | COPILOT_STUDIO_IN_COPILOT_FOR_M365
Intelligent Search (Semantic search) | 931e4a88-a67f-48b5-814f-16a5f1e6028d | M365_COPILOT_INTELLIGENT_SEARCH
Microsoft 365 Copilot for SharePoint | 0aedf20c-091d-420b-aadf-30c042609612 | M365_COPILOT_SHAREPOINT
Workplace Analytics (backend) | ff7b261f-d98b-415b-827c-42a3fdf015af | WORKPLACE_ANALYTICS_INSIGHTS_BACKEND
Workplace Analytics (user) | b622badb-1b45-48d5-920f-4b27a2c0996c | WORKPLACE_ANALYTICS_INSIGHTS_USER

Table 1: Microsoft 365 Copilot Service Plans

The last update from Microsoft said that updates to add the Viva Insights service plans completed in mid-April 2025.

Viva Insights and Microsoft 365 Copilot

According to Microsoft, access to Workplace Analytics means that "IT admins and analysts can tailor advanced prebuilt Copilot reports with their business data or create custom reports with organizational attributes, expanded Microsoft 365 Copilot usage metrics, and more granular controls." The data is exposed in Viva Insights (web), the Viva Insights Teams app (Figure 1), and the Viva Insights mobile apps.

Copilot Dashboard in the Viva Insights Teams app.
Figure 1: Copilot Dashboard in the Viva Insights Teams app

Everyone running a Copilot deployment is intimately aware of the need to track and understand how people use AI in different apps. The API behind the Copilot usage report in the Microsoft 365 admin center delivers sparse information. It’s possible to enhance the usage report data with audit data and use the result to track down people who don’t make use of expensive licenses, but that requires custom code. Hence the insights reported in the Copilot Dashboard in Viva Insights.

A note in the announcement says that access to the Copilot Dashboard now requires a minimum of 50 Viva Insights (Copilot) licenses. As obvious from Figure 1, my tenant has fewer than 50 licenses, but can still use Viva Insights because it’s not a new tenant.

What Service Plans Do

As you’re probably aware, a license (product, or SKU) is something that Microsoft sells to customers. A service plan enables or disables specific functionality within a license. For example, the Copilot license includes the Copilot Studio in Copilot for Microsoft 365 service plan, which in turn allows users to create agents in Copilot Studio. If you don’t want people to be able to access Copilot Studio, you can disable the service plan.

Disabling a service plan can be done by updating a user’s licenses through the Microsoft 365 admin center. Options are available to do this through User Accounts or License Details (Figure 2).

Amending service plans for a user’s Microsoft 365 Copilot license.
Figure 2: Amending service plans for a user’s Microsoft 365 Copilot license

If you use group-based licensing, you can amend the options for the Copilot license to remove service plans. However, this affects every user in the group, so you might end up with one group to assign “full” Copilot licenses and another to assign “restricted” licenses.

Be Careful When Disabling Copilot Service Plans

One potential issue with some Copilot service plans is that you’re never quite sure what removing a service plan will do. Removing the Microsoft 365 Copilot in Productivity Apps service plan seems straightforward because it disables the Copilot options in the Office desktop apps (all platforms). But disabling the Intelligent Search service plan will mess up any app that uses Copilot to search.

Blocking Copilot Studio is problematic. Removing the service plan only removes the ability of a user to sign in to use Copilot Studio. They can still sign in for a 60-day trial, just like anyone else with an email address who doesn’t have a Copilot Studio license.

Disabling Copilot Service Plans with PowerShell

Disabling service plans through a GUI can rapidly become tiresome. I wrote a PowerShell script (downloadable from GitHub) to demonstrate how to use the Set-MgUserLicense cmdlet from the Microsoft Graph PowerShell SDK to disable a Copilot service plan. Another variation on removing service plans is explained here.

The script checks for group-based license assignment of Copilot licenses and, if found, creates an array of excluded accounts that it won’t process. It then scans for accounts with a Microsoft 365 Copilot license and, if an account isn’t excluded, runs Set-MgUserLicense to disable the Copilot Studio service plan. It’s just an example of using PowerShell to automate a license management operation and is easily amended to process any of the Copilot service plans. Enjoy!
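To illustrate the core call such a script makes, here is a minimal, hedged sketch using the Microsoft Graph PowerShell SDK. The SKU part number and the service-plan name pattern below are assumptions for illustration; check the values your tenant reports (for example, via Get-MgSubscribedSku) before disabling anything.

```powershell
# Sketch: disable one service plan in a user's Copilot license.
# Assumptions: the SKU part number and the plan-name pattern below
# must be verified against your own tenant before use.
Connect-MgGraph -Scopes "User.ReadWrite.All"

$UserId = "user@contoso.com"    # hypothetical account

# Find the Copilot license assigned to the account
$License = Get-MgUserLicenseDetail -UserId $UserId |
    Where-Object {$_.SkuPartNumber -eq "Microsoft_365_Copilot"}   # assumed SKU name

# Pick the service plan to disable (name pattern is an assumption)
$Plan = $License.ServicePlans |
    Where-Object {$_.ServicePlanName -match "COPILOT_STUDIO"}

# Reassign the license with the plan added to the disabled set
Set-MgUserLicense -UserId $UserId -AddLicenses @(
    @{SkuId = $License.SkuId; DisabledPlans = @($Plan.ServicePlanId)}
) -RemoveLicenses @() | Out-Null
```

Note that DisabledPlans replaces the existing set: a production script should merge any plans that are already disabled into the array, because passing a single plan re-enables the others.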


Stay updated with developments across the Microsoft 365 ecosystem by subscribing to the Office 365 for IT Pros eBook. We do the research to make sure that our readers understand the technology. The Office 365 book package includes the Automating Microsoft 365 with PowerShell eBook.


Optimal decimation to Log Simulation Data
Matlab News

Optimal decimation to Log Simulation Data

PuTI / 2025-05-18

Hello everyone,
I have an inverter model and I want to calculate its switching losses. Although I could use the MATLAB function ee_getPowerLossSummary, I want to implement my own power loss analysis in post-processing. If I modulate the inverter at 5 kHz, a switching event happens every 200 microseconds. If I want to log simulation data to the workspace, what should the decimation be, keeping these things in mind?
I am using a variable-step solver, because I couldn’t use a fixed step with the N-channel MOSFET, where the PWM is provided by the Three-phase Two-Level PWM Generator with sample time (1/10*Fsw).
If I run the simulation for only 7 seconds, the total number of steps could be up to 7M, keeping in mind the step size of 1 microsecond.
Is there any way to run that model with a fixed-step solver, or what should the decimation be so that my simulation speed is optimal while I can still capture the switching events?

Looking forward to hearing from the experts.
Thank you!

power_electronics_control MATLAB Answers — New Questions
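For the numbers given in the question, the relationship between switching period, solver step, and decimation can be sketched directly. This is only back-of-envelope arithmetic; the 20-points-per-period target is an assumption, not a rule:

```matlab
% Back-of-envelope decimation check for the setup described above.
Fsw  = 5e3;        % switching frequency (Hz)
Tsw  = 1/Fsw;      % switching period = 200 microseconds
hMax = 1e-6;       % assumed maximum solver step (s)

stepsPerPeriod = Tsw/hMax;          % about 200 solver steps per switching period
pointsNeeded   = 20;                % assumed minimum logged points per period
maxDecimation  = floor(stepsPerPeriod/pointsNeeded)   % = 10 with these numbers

% Note: with a variable-step solver, a decimation of N keeps every Nth
% *solver step*, not every Nth microsecond, so the logged samples are not
% uniformly spaced. Logging through a block with an explicit sample time
% (e.g. a Rate Transition feeding To Workspace) gives uniform samples instead.
```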


I need to use a scope to display the current i and the power P as functions of the voltage V, with the curves obtained for various irradiance levels and temperatures
Matlab News

I need to use a scope to display the current i and the power P as functions of the voltage V, with the curves obtained for various irradiance levels and temperatures

PuTI / 2025-05-18

1. Context
As part of my energy modeling project, I developed a simulation model in Simulink:
A photovoltaic (PV) panel model based on the single-diode approach, accounting for irradiance and temperature effects.
The model generates time-domain outputs under standard test conditions.
2. Objectives
I need to extract the I–V and P–V curves of the PV panel for different environmental conditions:
Irradiance (G): 200, 400, 600, 800, 1000 W/m²
Temperature (T): 0°C, 25°C, 50°C
3. Current model status
The Simulink PV model includes:
Computation of Iph, Irs, I0
Equivalent PV cell circuit
Buck-Boost converter with MPPT and PWM control
Time-based curves: Pin(t), Pout(t)
Inputs: Irradiance (G), Temperature (T)
However, it does not currently output I–V or P–V curves directly.
4. Issues encountered
No direct I–V and P–V output
No voltage sweep mechanism to generate these curves
No variation tracking across different G and T conditions
5. Request for help
I’d appreciate guidance on how to:
Add a voltage sweep mechanism (controlled voltage source) to the PV model
Automatically extract I–V and P–V curves for different G and T values
Create a simulation script that loops over multiple irradiance and temperature settings
Export the results (ideally to .mat or .csv)
Thank you in advance for any suggestions or shared examples!

photovoltaic panel, current, voltage, power, scope MATLAB Answers — New Questions
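One way to approach the sweep-and-export part of the request is a script that loops over G and T with Simulink.SimulationInput and collects the logged voltage and current. This is only a sketch: the model name 'pv_model', the workspace variable names G and T, and the logged signal names 'V' and 'I' are all assumptions to be replaced with your own.

```matlab
% Hypothetical sweep over irradiance and temperature for a Simulink PV model.
G_vals = [200 400 600 800 1000];   % W/m^2
T_vals = [0 25 50];                % degrees C
results = struct('G', {}, 'T', {}, 'V', {}, 'I', {}, 'P', {});

for G = G_vals
    for T = T_vals
        simIn = Simulink.SimulationInput('pv_model');   % assumed model name
        simIn = setVariable(simIn, 'G', G);             % assumed model variables
        simIn = setVariable(simIn, 'T', T);
        out = sim(simIn);
        V = out.logsout.get('V').Values.Data;           % assumed logged signals
        I = out.logsout.get('I').Values.Data;
        results(end+1) = struct('G', G, 'T', T, 'V', V, 'I', I, 'P', V.*I); %#ok<SAGROW>
    end
end

save('pv_curves.mat', 'results');   % or writematrix(...) for .csv export
```

Plotting I against V (rather than against time) for each results entry then gives the I–V family, one curve per (G, T) pair.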


Break in and break away points on Root Locus
Matlab News

Break in and break away points on Root Locus

PuTI / 2025-05-18

Hi,

I’m busy developing a controller, but for some reason my plot does not cut through the real axis and I’m not able to find the locations where the damping ratio is 0.59.

Herewith the code, block diagram and RLocus plot:

G=tf([0.151 0.1774], [1 0.739 0.921 0.25])
rlocus(G)

Any assistance will be appreciated.

Thanks
Niel

matlab, root locus MATLAB Answers — New Questions
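One way to locate the gain where the complex branches reach a damping ratio of 0.59 is to sample the locus with rlocus(G, K) and compute each pole's damping directly; sgrid overlays the damping line on the plot. A sketch (the gain range and tolerance are assumptions):

```matlab
G = tf([0.151 0.1774], [1 0.739 0.921 0.25]);

rlocus(G)
sgrid(0.59, [])            % overlay the zeta = 0.59 damping line

% Sample the locus over a range of gains; r has one column of poles per gain
K = linspace(0.01, 50, 5000);
r = rlocus(G, K);
zeta = -real(r) ./ abs(r); % damping ratio of every sampled pole

% Ignore real poles (zeta = 1 by construction) and find the gain whose
% complex poles come closest to zeta = 0.59
zeta(abs(imag(r)) < 1e-6) = NaN;
[~, idx] = min(min(abs(zeta - 0.59), [], 1, 'omitnan'));
Kbest = K(idx)
```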


External Mode Connection Issue with C2000 LaunchPad and Speedgoat System
Matlab News

External Mode Connection Issue with C2000 LaunchPad and Speedgoat System

PuTI / 2025-05-17

Dear MathWorks Support Team,
I am reaching out to request your assistance with an issue we are facing during the implementation of a controller model using the C2000 Blockset. As advised by Speedgoat support, this problem appears to be related to the C2000 Blockset, and they have recommended that we consult you directly.
Project Setup Overview:
We are working with two models:
Plant model – implemented on the Speedgoat FPGA
Controller model – implemented on a LaunchPad development kit (https://www.speedgoat.com/products/launchpad-development-kit) (TI LaunchXL-F28379D)
The two systems communicate via analog signals using the IO334 interface.
We have successfully confirmed the following:
The plant model is transmitting correct signals to the LaunchPad, verified using an oscilloscope via SMA connectors on the LaunchPad.
The LaunchPad receives the input signals correctly.
The plant model is also responding correctly when tested with a signal generator.
Issue Description:
The controller model builds, deploys, and starts successfully on the LaunchPad. However, when we attempt to use Monitor & Tune (external mode), we encounter the following error:
"External Mode Open Protocol Connect command failed: Could not connect to target application. XCP internal error: timeout expired."
We have attached relevant screenshots and the build report for your review.
Setup Details:
The LaunchPad is connected to the PC via USB.
It is also connected to the Speedgoat system via the SAMTEC connector.
We have used the blocks recommended in the documentation for the LaunchPad.
We would greatly appreciate your help in identifying the cause of this issue and guiding us through any additional steps or configurations needed to enable external mode communication.
Thank you very much for your support.
Best regards,
Varun Shahi

c2000 blockset issue MATLAB Answers — New Questions


Transfer history to MATLAB 2025a
Matlab News

Transfer history to MATLAB 2025a

PuTI / 2025-05-17

Hi everybody.
I bought a new computer and I want to transfer the MATLAB history I had on the old one. The new computer has MATLAB 2025a installed, while the old one has MATLAB 2024b.
I have transferred histories in the past and it was as simple as copying History.xml from one computer to another. However, with MATLAB 2025a, history is no longer saved in History.xml. How can I transfer the command history from the old computer (and MATLAB 2024b) to the new one with MATLAB 2025a?

matlab MATLAB Answers — New Questions


how to validate mscohere?
Matlab News

how to validate mscohere?

PuTI / 2025-05-17

I would like to validate the use of mscohere for impact test data, using an instrumented hammer that measures force and an accelerometer to measure response. I ran some artificial cases – see attached pdf. Single input, single output. I tried to force mscohere to use only 3 windows of data, one for each impact test. The first half of results are for sinusoidal input and output, and the second half are for a simulated impact input. There are 2 time domain plots in there. The inputs are the 3 input vectors stacked vertically, and the 3 output vectors stacked vertically. Sample rate is 1000 Hz, with a 4 second window and 0.25 Hz resolution. I am using a rectangular window:
[coherence, frequencies2] = mscohere(coherenceInput, coherenceOutput, rectwin(windowLength), 0, windowLength, sampleRateHz);
The results look about right to me, but I would like to validate them against some known or analytical results. Are there any suitable examples available?
Also, how do I check that it is getting the absolute basics right – for example, is it using the 3 windows of data that I want it to use?

mscohere MATLAB Answers — New Questions
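One known analytical case that can serve as a validation: for y = a*x + n, with x and n independent white noise, the magnitude-squared coherence equals a²·Sx/(a²·Sx + Sn) at every frequency, so the mscohere estimate can be compared against a closed-form value. A sketch using the same three-window, 4-second, 1 kHz setup (note that with only 3 averages the estimate is biased upward and scattered, which is itself instructive for the impact-test case):

```matlab
fs  = 1000;                 % sample rate (Hz), matching the setup above
win = 4*fs;                 % 4-second rectangular window
N   = 3*win;                % exactly three windows, no overlap

x = randn(N,1);             % input: white noise
n = randn(N,1);             % independent measurement noise
y = 0.5*x + 0.5*n;          % known mix: signal power 0.25, noise power 0.25

[cxy, f] = mscohere(x, y, rectwin(win), 0, win, fs);

% Analytical coherence: 0.25/(0.25 + 0.25) = 0.5 at every frequency.

% Window-count sanity check: Welch segment count with zero overlap
numSegments = floor(N/win)  % should be 3, i.e. one segment per impact
```

Padding N by a few extra samples and checking that numSegments (and the coherence estimate) is unchanged is a simple way to confirm that mscohere really is consuming exactly the three stacked windows.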


Constraints to a Second Order Curve Fit
Matlab News

Constraints to a Second Order Curve Fit

PuTI / 2025-05-17

Given this set of data, is it possible to perform a 2nd order curve fit with the set constraint that the leading coefficient must be positive? Using an unconstrained curve fit (both "polyfit" and "fit" were used), the data produces a curve with a rather small negative leading coefficient. For reference, the data is as follows:
x = 150, 190, 400, 330, 115, 494
y = 1537, 1784, 3438, 2943, 1175, 4203
The given outputs using both methods largely agreed, as shown below:
fit_eq =
-0.0007 8.3119 255.8074
eq =
Linear model Poly2:
eq(x) = p1*x^2 + p2*x + p3
Coefficients (with 95% confidence bounds):
p1 = -0.0007088 (-0.003588, 0.00217)
p2 = 8.312 (6.51, 10.11)
p3 = 255.8 (25.01, 486.6)

curve fitting, matlab, polyfit MATLAB Answers — New Questions
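A constrained quadratic fit is a linear least-squares problem with a bound on one coefficient, which lsqlin handles directly; the sketch below assumes the Optimization Toolbox is available. Note that because the unconstrained optimum here has a slightly negative leading coefficient, the constrained solution will sit on the bound (p1 = 0), i.e. it degenerates to a straight-line fit:

```matlab
x = [150 190 400 330 115 494]';
y = [1537 1784 3438 2943 1175 4203]';

% Design matrix for p1*x^2 + p2*x + p3
A = [x.^2, x, ones(size(x))];

% Bound constraints: p1 >= 0, p2 and p3 free
lb = [0; -Inf; -Inf];
ub = [];

p = lsqlin(A, y, [], [], [], [], lb, ub);
% For this data, p(1) comes out at the bound (0), since the
% unconstrained least-squares solution has p1 slightly negative.
```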

