Month: May 2024
Customer Name not showing up in Title of Invite
We have separate booking pages for each type of meeting (we don't want the client to have to choose the type of meeting). When a client books a meeting, their name does not show up in the title or in the body of the calendar meeting, so we don't know who we're meeting with. What am I doing wrong, and how can I fix it so the client's name shows in the title of the calendar invite?
Labs Update 5/30
Hello TSPs,
As you are likely aware, Worldwide Learning continues to experience impacts to training labs as work progresses on Microsoft’s Secure Future Initiative. These updates are important to ensure that Microsoft continues to be a market leader in protecting both partner and customer data. We know how critical these labs are for your success and we appreciate your patience and understanding as we work to navigate the challenges of the evolving security threat landscape.
As of this week, all M365 tenant types are available for classes. Please note that increased security measures require a longer lead time to provision tenants. Depending on the course and quantity of tenants, Authorized Lab Hosters (ALHs) may not be able to immediately meet partner demand. We are working on resolving the issue and hope to have everything normalized within a week. Please keep collaborating with your ALH to prepare for future classes.
D365 tenants remain offline due to the continued security work. Our team continues to make progress on resolving the D365 issue but there is no current ETA for D365 tenant availability. This work impacts the following courses:
MB-210T01: Microsoft Dynamics 365 Sales
MB-230T01: Microsoft Dynamics 365 Customer Service
MB-240T00: Microsoft Dynamics 365 Field Service
MB-910T00: Microsoft Dynamics 365 Fundamentals (CRM)
We sincerely apologize for any inconvenience or disruption these issues may cause to your business and your customers. We value your partnership and support, and we are doing our best to mitigate the impact and provide you with the best possible lab experiences for your learners. We will continue to keep you updated on the progress and the resolution of these issues here in the Forum and on the monthly partner Community Calls.
Dan
Able to chat with some guest accounts and not others
We have two guest accounts, both people from the same external organization. Let's call them Jane Doe and John Smith from contoso.com. If I open a new chat and type Jane Doe, Teams finds her guest account and offers Jane Doe (Guest) as the match for the name, and all works fine. If I type John Smith, it does not find him. If I enter John Smith's email address johns(at)contoso.com it is able to open a chat, but lists him as John Smith (External) and of course the options for the chat are restricted (can't add a new tab, for example). I do not see any settings differences between the two accounts; both guest accounts were created by the invitation method. The biggest difference is that Jane's account is about 4 years old, whereas John's is only 1 week old.
Addressing common Entra ID Protection deployment and maintenance issues
Entra ID tenants face threats from bad actors who use password spray attacks, multifactor authentication spamming, and social phishing campaigns. Many organizations do not prioritize protecting Entra ID because they worry about affecting their end users. One straightforward way to protect Entra ID is to use risk-based conditional access policies, which combine conditional access policies with the risk signals from Entra ID Protection. In this blog, I will discuss some of the mistakes we see organizations make that delay deployment and leave their tenants insecure. This blog will answer questions about Entra ID tenants that use third-party identity providers to authenticate, reducing false positives, minimizing user impact, and migrating from the old identity protection policies.
First, let us make sure a few things are understood.
Entra ID Protection has a license requirement.
A risk-based conditional access policy is a conditional access policy that leverages the user risk or sign-in risk condition.
Blocking on either user risk or sign-in risk may require monitoring and manual remediation by at least a Security Operator.
The change-password grant control only remediates user risk, and it requires the user to be registered for self-service password reset or they will be blocked.
With cloud authentication using Azure multifactor authentication (MFA), sign-in risk can be remediated by multifactor authentication; this requires the user to be registered for MFA or they will be blocked.
If a user risk policy only requires multifactor authentication, the user will be bombarded with MFA prompts, resulting in a poor user experience.
Combining user risk and sign-in risk in the same policy means both must be met before the policy applies. (Remember, an MFA prompt does not remediate user risk.)
The workbook referenced in this blog is designed to show potential impact and help troubleshoot the deployment or usage of risk-based conditional access policies. It requires the Entra ID SignInLogs to be sent to Azure Monitor. To learn more: Impact Analysis Risk-Based Access Policies
Issue 1: Implementing sign-in risk policies before reducing false positives.
One way to lower the number of false positives is to mark all the IP addresses of the organization's network egress as trusted. For assistance, administrators can go to the workbooks section of Entra ID and find the workbook called "Impact Analysis Risk-Based Access Policies". Open that workbook and go to the "IP addresses not listed as trusted network" section. There you can see a list of IP addresses from the existing sign-in logs where multiple users from the organization have logged in. Use the autonomous system number (ASN) to check who owns the IP ranges, decide whether they are reliable, and create a named location with the "mark as trusted location" option.
How to create a named location
From the workbook: all the IP addresses that have sign-ins by multiple users.
Some of the IP addresses in this list may belong to third-party proxy solutions. If those solutions cannot provide a source anchor IP address, they should be defined as a trusted named location. Organizations that require MFA from untrusted IP addresses should then consider a separate conditional access policy that requires MFA for that new trusted named location. Defining trusted networks is not something you do once; organizations' networks change frequently. Make sure to regularly review the report and set up new trusted networks.
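If you prefer to script this step, the same trusted named location can be created through the Microsoft Graph API. The sketch below is a minimal illustration, assuming an already-acquired access token with the Policy.ReadWrite.ConditionalAccess permission; the display name and CIDR range are placeholders for your own egress ranges.

# Minimal sketch: create a trusted named location via Microsoft Graph.
# Assumes a valid access token (e.g., acquired with MSAL) holding the
# Policy.ReadWrite.ConditionalAccess permission.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token

named_location = {
    "@odata.type": "#microsoft.graph.ipNamedLocation",
    "displayName": "Corporate egress - HQ",  # placeholder name
    "isTrusted": True,  # the "mark as trusted location" option
    "ipRanges": [
        {
            "@odata.type": "#microsoft.graph.iPv4CidrRange",
            "cidrAddress": "203.0.113.0/24",  # placeholder range from the ASN review
        }
    ],
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/namedLocations",
    headers=headers,
    json=named_location,
)
resp.raise_for_status()
print("Created named location:", resp.json()["id"])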
Issue 2: Implementing user risk policies before remediating the last several years of low, medium, and high user risk.
I once spoke to an organization that had over 3,000 high-risk users. Think about how bad the user experience would have been if they all had to change their passwords the moment the policy was applied. Since the day this feature became available to the tenant, Identity Protection has been marking users it determines are risky with a low, medium, or high risk level. The only way to clear the risk for a user is for an administrator to manually dismiss it or for the user to change or reset their password in Entra ID. This means that when a user risk policy is turned on, many users may immediately trigger the new risk-based conditional access policy and have a bad experience. And if too many users have an unpleasant experience, it usually looks like an outage, and the policy is often reversed.
There are a few things that have been added to help clean this up (a minimal sketch of the underlying Graph calls follows this list):
Starting March 31st, all low user risks older than 6 months will start to age out. Plan for change – Microsoft Entra ID Identity protection: "Low" risk age out
Organizations syncing password hashes can now leverage the new "Allow on-premises password change to reset user risk" feature.
A script has also been published to clean out all the old user risk. GitHub: IdentityProtectionTools
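For context on what such a cleanup does under the hood, here is a minimal sketch of the Microsoft Graph calls involved. It is an illustration of the riskyUsers API, not the published IdentityProtectionTools script; it assumes a token with the IdentityRiskyUser.ReadWrite.All permission, and paging via @odata.nextLink is omitted for brevity.

# Minimal sketch: bulk-dismiss accumulated user risk via Microsoft Graph.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token

# Find users still flagged at risk (only the first page is handled here).
resp = requests.get(
    f"{GRAPH}/identityProtection/riskyUsers",
    headers=headers,
    params={"$filter": "riskState eq 'atRisk'"},
)
resp.raise_for_status()
user_ids = [user["id"] for user in resp.json()["value"]]

# Dismiss the risk in small batches; the dismiss action limits how many
# ids a single call accepts (the batch size of 60 is an assumption here).
for i in range(0, len(user_ids), 60):
    requests.post(
        f"{GRAPH}/identityProtection/riskyUsers/dismiss",
        headers=headers,
        json={"userIds": user_ids[i : i + 60]},
    ).raise_for_status()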
Issue 3: Making the implementation of the policies too complex.
When I work with organizations, we usually start with a plan to deploy the two Microsoft-recommended policies (a sketch of the first one as a Graph API call follows the list):
Require all user sign-ins to all cloud apps with medium or high sign-in risk to require multi-factor authentication.
Require all user sign-ins to all cloud apps with high user risk to change password.
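As a rough illustration, the first of these policies could be created through the Microsoft Graph conditional access API as sketched below, starting in report-only mode so impact can be reviewed before enforcement. The display name is a placeholder, and the token is assumed to carry the Policy.ReadWrite.ConditionalAccess permission.

# Minimal sketch: create the sign-in risk policy via Microsoft Graph,
# in report-only mode so its impact can be assessed before enforcement.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token

policy = {
    "displayName": "Require MFA for medium or high sign-in risk",  # placeholder
    "state": "enabledForReportingButNotEnforced",  # switch to "enabled" after review
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "signInRiskLevels": ["medium", "high"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies", headers=headers, json=policy
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])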
If organizations applied these two policies, it would lower the chances of a bad actor successfully accessing the tenant. What I see instead is that admins get too ambitious, end up with 10+ new risk-based conditional access policy scenarios that they either neglect to implement or cannot verify the actual impact of, and then give up. The advantage of these two scenarios is that the "Impact Analysis Risk-Based Access Policies" workbook uses existing sign-in logs to show the number of users who signed in successfully that would have been affected if a policy were in place:
User risk scenarios / High risk users not prompted for password change.
Sign-in risk & trusted network scenarios / Medium or high risk sign-ins not remediated using multifactor authentication.
From the workbook: this shows whether the recommended policies are in place or whether gaps may exist.
The plan is to begin with the basic and essential protections, then add the more complex and tricky situations to assess. Think about tighter block scenarios for admin portals, members with privileged roles, security information registration, and requiring a compliant device when forcing a password change.
Issue 4: Believing their third-party (federated) identity solution is all they need to protect Entra ID.
Some accounts and objects in Entra ID are not secured by third-party identity providers. These include B2B (business-to-business) guest users, poorly managed shared mailboxes, and accounts targeted with stolen tokens used directly against Entra ID. Many tenants have poor hygiene and limited monitoring, which allows bad actors to use cloud accounts that authenticate directly to Entra ID. To protect Entra ID from these attacks, it is better to use risk-based conditional access policies in addition to what your third-party solution provides.
When the third-party identity provider (IdP) performs multifactor authentication, the federatedIdpMfaBehavior setting should be configured so that Entra ID can send the user back to the IdP for MFA and the IdP can tell Entra ID that MFA was performed.
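For reference, this setting lives on the domain's federation configuration and can be set through Microsoft Graph. The sketch below assumes a token with the Domain.ReadWrite.All permission and a placeholder domain name; enforceMfaByFederatedIdp is the value that sends users back to the federated IdP for MFA and accepts the MFA claim it returns.

# Minimal sketch: set federatedIdpMfaBehavior on a federated domain.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <access-token>"}  # placeholder token
domain = "contoso.com"  # placeholder federated domain

# Look up the domain's federation configuration id (assumes one exists).
fed = requests.get(
    f"{GRAPH}/domains/{domain}/federationConfiguration", headers=headers
)
fed.raise_for_status()
fed_id = fed.json()["value"][0]["id"]

# Redirect users to the federated IdP for MFA and trust the claim it returns.
requests.patch(
    f"{GRAPH}/domains/{domain}/federationConfiguration/{fed_id}",
    headers=headers,
    json={"federatedIdpMfaBehavior": "enforceMfaByFederatedIdp"},
).raise_for_status()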
More information about federatedIdpMfaBehavior setting.
#AzureAD Identity Protection adds support for federated identities!
The "Impact Analysis Risk-Based Access Policies" workbook will show whether sign-in risk is currently being remediated by multifactor authentication from a third-party (federated) identity provider, which is a fantastic way to know the policy is working.
From the workbook, if accounts are sent back to a federated identity provider to remediate the risk, then these will not be 0:
Issue 5: The tenant is configured to use the legacy identity protection policies.
A change scheduled for July 2024 will no longer allow changes to the legacy policies. Microsoft recommends leveraging conditional access policies when applying conditions around risk, which makes it easier to troubleshoot and to enforce a sign-in frequency. If your organization is leveraging the old Identity Protection policies, it is easy to migrate to risk-based conditional access policies.
Refer to the October 2023 announcement for information about migrating. What's new in Microsoft Entra
The April 2024 announcement covers timelines. What’s new in Microsoft Entra
From the workbook, if the legacy policies are enabled these two will not be 0.
Deploying and maintaining Entra ID Protection is crucial for organizations to protect against threats from bad actors. By avoiding the common mistakes above and following best practices, organizations can effectively secure their Entra ID tenants. It is important to regularly review and update policies to ensure the continued security of the tenant. Take the first step in securing your organization by implementing risk-based conditional access policies and following the recommendations outlined in this blog.
Thank you.
Chad Cox
Additional References
Workbook: Impact analysis of risk-based access policies
Azure AD Mailbag: Identity protection
Using Admin State to Control Your Azure Load Balancer Backend Instances
Today, Azure Load Balancer distributes incoming traffic across healthy backend pool instances. It accomplishes this by using health probes to send periodic requests to the instances and check for valid responses. Results from the health probe then determine which instances can receive new or continued connections and which ones cannot.
You might want to override the health probe behavior for some of the virtual machines in your Load Balancer backend pool. For example, you might want to take an instance out of rotation for maintenance or testing, or you might even want to force an instance to accept new connections even if the health probe marks it as unhealthy. In these cases, you can use our newly introduced Azure Load Balancer feature called administrative state (admin state). With admin state, you can set a value of UP, DOWN, or NONE on each backend pool instance. This value will affect how the load balancer handles new and existing connections to the instance, regardless of the health probe results.
What is Admin State?
Admin State is an Azure Load Balancer feature that lets you set the state of each individual backend pool instance to UP, DOWN, or NONE. This value overrides the health probe behavior for the respective instance and determines whether the load balancer allows it to accept new and existing connections. Below are the definitions of each state and its effect on connections to the backend instance:
Admin State: UP
New connections: Load Balancer disregards the configured health probe's response and always considers the backend instance eligible for new connections.
Existing connections: Load Balancer disregards the configured health probe's response and always allows existing connections to persist to the backend instance.

Admin State: DOWN
New connections: Load Balancer disregards the configured health probe's response and does not allow new connections to the backend instance.
Existing connections: Load Balancer disregards the configured health probe's response; existing connections are handled according to the protocol below:
TCP: Established TCP connections to the backend instance persist.
UDP: Existing UDP flows move to another healthy instance in the backend pool.
Note: This is similar to probe-down behavior.

Admin State: NONE (blank)
New connections: Load Balancer defaults to the health probe's response.
Existing connections: Load Balancer defaults to the health probe's response.
Note: Admin state only works when you have a health probe configured on the load balancer rules. Admin state also does not work with inbound NAT rules.
How to use Admin State?
You can use admin state in different ways depending on your scenario and preference. You can set admin state when you:
Create a new backend pool
Add a new instance to a backend pool
Update an existing instance in a backend pool
You can also remove the admin state from an existing instance in a backend pool by setting the value to NONE. This can be done via the Azure portal, PowerShell, or the CLI; a REST-level sketch follows.
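At the REST level, admin state is a property on each address in the backend pool. The sketch below is a rough illustration against the ARM API; the adminState property name and api-version are assumptions taken from the public preview documentation, all resource names are placeholders, and token acquisition is omitted, so verify against the current API reference before use.

# Minimal sketch: set admin state on one backend pool address via ARM REST.
import requests

SUB, RG, LB, POOL = "<sub-id>", "<rg>", "<lb-name>", "<pool-name>"  # placeholders
url = (
    "https://management.azure.com"
    f"/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.Network/loadBalancers/{LB}"
    f"/backendAddressPools/{POOL}"
)
headers = {"Authorization": "Bearer <arm-access-token>"}  # placeholder token
params = {"api-version": "2023-09-01"}  # assumed version

# Read the pool, flip one address to DOWN, and write the pool back.
pool = requests.get(url, headers=headers, params=params).json()
for addr in pool["properties"]["loadBalancerBackendAddresses"]:
    if addr["name"] == "webserver-1":  # placeholder instance name
        addr["properties"]["adminState"] = "Down"  # "Up", "Down", or "None"

requests.put(url, headers=headers, params=params, json=pool).raise_for_status()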
Why use Admin State?
Previously, to take a backend instance (i.e., a virtual machine) out of rotation, customers used Network Security Groups (NSGs) to block traffic from Azure Load Balancer's health probe or from clients' IPs and ports, or they closed the ports on the virtual machines (VMs) in the load balancer's backend pool. This process was complex and added management overhead. Now with admin state, customers can simply set the state value on the backend pool instance, reducing the overhead and complexity of routine maintenance, patching, or applying fixes.
Let’s see how one of our customers, Contoso, uses admin state with their web servers.
Contoso’s use cases of admin state
Context
One of our customers, Contoso, leverages Azure Load Balancer to distribute traffic to their web servers hosted on Azure VMs. They have a custom health probe that checks the availability of the web servers by sending HTTP requests to a specific URL and expecting a 200 OK response before allowing connections to the servers.
Issue
However, they notice that the health probe sometimes marks a web server as unhealthy because of transient network issues or application errors, even though the web server is still functional (i.e. “healthy”). This prompts their load balancer to stop sending new connections to that web server, which reduces the capacity, availability and performance of their web application.
Solution
To fix this issue, Contoso uses Azure Load Balancer's admin state feature to force the load balancer to send new connections to the web servers regardless of the health probe results. They accomplish this by setting the admin state value of each backend pool instance (i.e., VM) to UP, which means the load balancer always considers the web server healthy and eligible for new connections. It also allows existing connections to persist. Now Contoso can avoid losing traffic to health probe false positives and make sure their web application can handle the expected load.
Maintenance & Testing
Contoso also wants to do maintenance and testing on their active web servers to keep them up to date with the latest software. They decide to use the admin state feature to accomplish this without affecting the traffic flow. They set the admin state value of the web server they want to take out of rotation to DOWN, which means the load balancer does not allow new connections to that web server and handles existing connections according to the protocol behavior described above. Thus, they are able to safely update and troubleshoot the web server without impacting the availability and performance of their web application.
Get Started
We are truly excited to bring you the Azure Load Balancer admin state feature in public preview. With this feature, you can override the health probe behavior on your backend pool instances, giving you more control over your load balancer. This is useful for maintenance, testing, and even maintaining availability when transient networking issues arise.
To learn more about the admin state feature, visit the following links:
Overview of admin state concepts
How to manage admin state
We hope you can take advantage of this feature and we welcome your feedback. Please feel free to leave a comment below.
Exploring Copilot for Security to Automate Incident Triage
When speaking with Copilot for Security customers, automation is often brought up as a topic of exploration. Customers are eager to extend their existing SOAR investments or workflows to include Copilot because they recognize the capabilities this new technology brings and believe it has the potential to further increase productivity.
Today, Copilot for Security offers two ways of performing automations: 1) promptbooks, which are prompts chained together to achieve a specific task, and 2) a Logic Apps connector to fuse the power of Copilot for Security directly into your workflows. In this post, we will explore how the Logic Apps connector and its set of capabilities can be leveraged to triage an incident, a common action taken by nearly every Security Operations Center (SOC).
Note: This post builds on the original release blog of the connector where a phishing email analysis was performed.
(SIEM + SOAR + GAI) = Next-Gen Automation
For this demonstration, I am going to use Microsoft Sentinel (SIEM), which includes access to Logic Apps through the Automation and Playbook capabilities, and Copilot for Security. Included in the product is a set of curated Microsoft Promptbooks, including one to triage a Sentinel incident. Running this within the standalone experience will give us a rough sense of what to expect and confidence that we can emulate it within a Logic App using our connector.
While this workflow does not touch on every aspect of incident triage, it provides a good foundation to operate from. Specifically, this logic will summarize the incident, collect reputation data for a subset of indicators, identify authentication methods of the identities impacted, list devices associated with those identities and their compliance status, and write an executive report. I am going to keep the core prompts and extend a few to apply more specifically to Sentinel once within the playbook.
Within Sentinel, I can create a playbook from an incident trigger in the “Automations” section of the product.
Once set up, I can leverage the low-code/no-code editor to input my workflow. I’ve mimicked much of the promptbook using the Copilot for Security connector. Each step contains the prompt I plan to run and any context from the incident. Like the promptbooks, Copilot for Security will create a session for this playbook, so each prompt gets the benefit of the broader session context and is stored within the product for later analysis or reasoning.
Each of my prompts helps to answer a common question an analyst may pose, but I still need to bring this information back into Sentinel. Logic Apps offers a Sentinel connector that can be used to perform actions on our original incident. Here, I get creative in a few ways using generative AI. First, I leverage the session information and have Copilot attempt to classify the incident as "high", "medium", or "low" based on all the information contained in the responses, forcing the model to return a label. This is fed into a switch statement, which in turn updates the incident status and severity.
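Outside of the Logic Apps designer, the pattern is easy to see in plain code. The sketch below is only an illustration of the idea, not the playbook itself: clamp the model's free-text answer to one of the three labels, then branch on the label to pick the incident update. The labels and the mapping values are assumptions.

# Illustrative sketch of "force a label, then switch on it".
def classify_severity(copilot_response: str) -> str:
    """Clamp a free-text model answer to exactly 'high', 'medium', or 'low'."""
    answer = copilot_response.strip().lower()
    for label in ("high", "medium", "low"):
        if label in answer:
            return label
    return "medium"  # conservative fallback if the model strays off-label

# The switch: map each label onto the Sentinel incident update.
SEVERITY_ACTIONS = {
    "high": {"severity": "High", "status": "Active"},
    "medium": {"severity": "Medium", "status": "Active"},
    "low": {"severity": "Low", "status": "New"},
}

update = SEVERITY_ACTIONS[classify_severity("High - malicious IP and file download")]
print(update)  # {'severity': 'High', 'status': 'Active'}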
Next, I have Copilot for Security explain the reasoning behind the classification and output the data as a bullet point list. This output, paired with the session summary is used to create an HTML comment on the incident, giving an analyst a clear explanation of the steps that Copilot performed when triaging the incident and justification for the label.
Finally, I have Copilot suggest tags for the incident based again on the session information. These are used to tag the incident, adding a dynamic categorization element.
This playbook is configured to run on every incident generated in my workspace automatically. Here’s an example set of outputs where we can see the incident has been automatically classified as “high” severity, marked active, shows signs of a malicious IP and file download and includes the Copilot report as a comment. Naturally, there’s room for improvement on some of the outputs, but this can easily be done through basic prompt tuning.
Augmenting the Security Organization
At the end of last year, I briefly explored how SOAR could benefit from GAI. Notably, I called out natural language as processing instructions, influenced decision making, dynamic content, and better human-in-the-loop features. This demonstration of triaging an incident hits on a lot of these categories:
Natural language questions to be answered about the incident, bridging multiple products and data sources.
Natural language responses summarized and “reasoned” over.
Dynamic content created in the form of a classification, tags and summary of the investigation performed.
Influenced decision making by using the model to suggest the severity based on the session content.
Better human-in-the-loop, because this runs on every incident before an analyst needs to be involved.
Functionality like this will augment how security teams run their SOCs, especially as foundation models increase in their accuracy and capabilities. Imagine a world where Copilots are triaging every incident in full then using that information to inform a dynamic prioritization process in real-time. Incidents with clear evidence and decision-making data are automatically actioned and closed whereas ones requiring expert consultation are put into a Teams channel via a series of natural language questions posed by the model and answered by the analyst. In this new SOC, defenders are afforded more time to do more engaging and complex work to protect the organization.
Parting Thoughts
We are living in exciting times in security and IT operations. Generative AI is still rapidly forming and new discoveries are constantly being shared. I strongly encourage every professional and customer I speak with to explore this space, perform experiments and try out new ideas. The Copilot for Security team is constantly looking for new use cases and user feedback. This demonstration of triaging an incident is just one of many workflows we are working on and you should expect a whole lot more!
If you're interested in replicating this automation or forming your own, check out our getting started documents for Copilot for Security. You can get up and running within minutes and deploy as little as a single Security Compute Unit (SCU). Also be sure to bookmark our GitHub repository filled with prompt starters, promptbooks, and Logic Apps just like this one.
https://learn.microsoft.com/en-us/copilot/security/get-started-security-copilot
https://github.com/Azure/Copilot-For-Security
Leading Successful Tech User Groups: Insights from MVPs
Leading a tech user group can be a challenging yet rewarding experience. In this article, we will explore the journey of a user group leader, from the initial challenges of growing the group to the key factors that contributed to its sustained growth and engagement.
We are highlighting Internet of Things and Microsoft Azure German MVP Damir Dobric, AI United States MVP Adam Wisniewski, and Data Platform MVP Bernat Agulló Roselló. By leveraging key factors such as support, coordination, planning, engagement, technology, and valuable content, Damir, Adam, and Bernat each overcame initial challenges and achieved sustained growth and engagement in the successful tech user groups they founded and led.
In founding and leading successful tech user groups, Damir, Adam, and Bernat each had their own unique experiences. Damir founded Azure Meetup Frankfurt, the first Azure group in Germany, at the request of Scott Guthrie, while Bernat became a member of the Power BI Barcelona user group in 2021. In 2023, he joined the organizers' team, benefiting from the knowledge of the attendees and the group. Meanwhile, Adam has been leading user groups, including Tampa XR, since 2018 and has always found them to be an excellent way to connect with like-minded individuals and learn from each other.
Power BI Barcelona user group
Every MVP found unique ways to grow their groups. Damir brought in people from the .NET User Group and various companies, while Bernat reached out through social media. Adam focused on making content his attendees would like. He learned that just creating a group doesn’t mean people will join. So, he made a plan to draw in and keep members interested, got extra help, and met different needs. Damir kept promoting cloud technology, even though it wasn’t popular at first due to data security concerns. His efforts paid off when Azure became widely used. Bernat started his group to keep in touch with past event-goers. To get more people, he invited Power BI users from Barcelona on LinkedIn with a message that got their attention.
MVP Damir Dobric
Leading a tech user group provided Damir, Adam, and Bernat with valuable learning experiences and personal growth. Damir relished the opportunity to network with influential professionals and enthusiasts, finding the reciprocal learning process enriching. Adam, on the other hand, gained a great deal of knowledge from exploring topics in more depth, having rich discussions with members, and thinking through ways to keep up with the ever-moving tech industry. Bernat, meanwhile, learned about the importance of teamwork, flexibility, and taking breaks, while also building a motivated core team to lead the group.
Damir, Adam, and Bernat each shared their insights on nurturing a successful tech user group. Damir emphasized the importance of keeping the group engaged with high-quality, current information. Adam suggested focusing on a subject you’re passionate about and forming a dedicated team to manage events. He also highlighted the need for continuous member recruitment and the avoidance of overcommitment. Bernat advised establishing a committed core team to lead the group, advocating for shared leadership rather than solo efforts. He also underscored the significance of consistent member recruitment and the necessity of taking breaks when needed.
MVP Adam Wisniewski
In conclusion, leading a tech user group can be a challenging yet rewarding experience. The journey of a user group leader involves overcoming initial challenges, developing strategies for growth and engagement, and taking advantage of personal growth and learning opportunities. With the right approach, leading a tech user group can be a fulfilling and enriching experience.
Improving choice of guesses in my fit
Hi everyone,
I do some fitting on my data, and my initial guesses are not good enough.
Here is an example plot of my data:
And this is the code of the function that fits my data:
%fit function
function [Xmax,beta] = fit_function(x,y,peaks,t_peaks,peak_num,check_fit)
% ymax calculated analytically with Wolfram Mathematica
ymax = @(b) (2-b(2)/b(1))*(2*b(1)/b(2)-1)^(-b(2)/2/b(1));
modelfun = @(b,x) b(3)/ymax(b)*exp((x-b(4))*(b(1)-b(2))).*sech(b(1)*(x-b(4)));
% bguess is [alpha, beta, amplitude, x offset];
% alpha and beta control the slope of the peak, one on each side.
bguess = [60, 15, peaks(peak_num), x(t_peaks(peak_num))];
beta = nlinfit(x, y, modelfun, bguess);
% x location of the fitted maximum, from the fitted coefficients
Xmax = (log(-1+(2*beta(1)/beta(2)))/(2*beta(1)))+beta(4);
% check fit:
if check_fit==1
    fittedCurve = modelfun(beta, x);
    hold on;
    plot(x, fittedCurve, 'm');
    legend('Original Data','Fitted Curve');
end
end
In this fit, I have 4 initial guesses. The first two (call them alpha and beta) are chosen as constant numbers, based on my attempts to see what would be suitable. The other two depend on the data, and the code chooses the appropriate guesses from it (these are the amplitude and x offset).
So, if you look at the code, you see that 60 and 15 are the constant guesses, and "peaks(peak_num)" and "x(t_peaks(peak_num))" are the guesses that vary according to the maximum point of the data.
The problem is that alpha and beta, the two that are constant, are sometimes suitable and sometimes unsuitable.
I want to use guesses that change according to the data, exactly like the last two guesses for the amplitude and x offset. I am thinking about the derivative of the peaks or something like that, but I have only managed to make a mess of it.
Do you have ideas for good data-driven guesses, or how to do something that will be appropriate for any attempt at fitting my data?
thank you all (:
Embedded code for the randi function: it generates 625 arrays of some values. I want to know why 625 arrays of random values are generated in the initialization function
y = randi([0, 11],uint32)
How do I access student submissions in MATLAB Grader?
Display all the traffic going through a CAN channel, App Designer
Hi,
I'm building an app with MATLAB App Designer and I would like to display on a panel (I use a TextArea for now) all the traffic going through a CAN channel.
I did manage to display the received messages (using receive() and canSignalTimetable()), but I can't manage to do the same with the transmitted ones.
Is there a way to do it? Maybe something easier than keeping in memory all the transmitted messages and updating a timer while checking errors on the canChannel, knowing that if there is an error in transmitted messages, it seems you can't know which message wasn't sent successfully…
How to create Graphic with variable data (filtered)
Hi everybody,
I have an Excel spreadsheet with the sales forecasts of 160 products. I have the quantity sold for each product every month for the last 5 years.
With charts, I can see the sales forecast for each product. However, creating 160 charts is too much for the Excel spreadsheet.
Therefore, I was wondering how to create just one chart with a filter (with a search bar, for instance) so I can select the product whose trend I want to see and have the chart automatically display the sales of only that product.
How do I do that?
Thank you so much for your help!
How to assign co-owner to classwork/assignments in Teams for Education?
Hello,
Trying to utilize Teams to run an IT training program for my office, I created a Class-template Teams team. I made some fellow colleagues Owners of the team in hopes that they could also contribute other materials to the program, but since they are in different departments, they would also like to participate in the quizzes and assignments I've made for IT. Is there a way to assign classwork/assignments to other Owners?
Thanks
Shared Dataset parameter default value automatically set as “=Nothing”
I create SSRS reports using a shared dataset that has predefined parameters. When I add the dataset to a report, the parameters also get added automatically (as expected). Some of the parameters are optional. For an optional parameter, the default value is automatically set to "Specify values" with the expression "=Nothing". Because of this, each time I edit and upload a new version of the report, the parameter value defaults to nothing. To keep the existing parameter value selection intact, I want the first option, "No default value", to be selected by default. Please let me know if there is an option to have "No default value" selected by default instead of "Specify values" combined with "=Nothing" at the time of adding a shared dataset to a report.
Styles appear in different language / Styles Glitch
For some reason my Styles section is in a different language. This appears even in a new document. I've checked the language settings and everything is set to English. The only fix I have found is to right click > Modify > Format > Font and, making no other changes, just click OK (the font is set to Montserrat for body paragraphs). It fixes it temporarily, but if I close the document and open Word again, the styles go back to the other language.
How do I permanently change this?
How to add rows based on qty value from another field with product tied to specific PO
Trying to add a number of rows with a specific value based on a number of products. Instead of having users circle the item, I want it to automatically insert the product going down, based on the qty field.
In this case I need to find a way to have three rows with product "Fine" and five rows with product "Flake".
Timeline for Copilot “auto-complete” feature
Does anyone know when this will be released?
This information comes from the Microsoft blog – Copilot, but I have not found any information on when this new feature will be rolled out.
“If you’ve got the start of a prompt, Copilot will offer to auto-complete it to get to a better result, suggesting something more detailed to help ensure you get what you’re looking for. That not only speeds things up, it offers you new ideas for how to leverage Copilot’s power.”
Branding kit – product icons
I downloaded the Microsoft 365 branding kit from the Microsoft FastTrack web site as I’ve read that this is the only way to get access to the Microsoft 365 product icons. However, the branding kit only contains PDFs of the product icons, not actual graphic images. Is there a way to get the actual images?
Using Microsoft Viva Pulse and Glint for Employee Reviews – 30, 60, 90 & Biannual Reviews
Hi MS Viva Community,
I have been looking into Microsoft Viva across the board, but specifically at Pulse and Glint as a solution for deploying and tracking employee reviews. We have been using a combination of Forms, Excel, and BI to carry out this process. I'm wondering if Pulse and Glint can be a cleaner, more turnkey solution than what we have been doing.