Category: News
Unable to fetch more than 5000 records from filtered view
I have a SharePoint list and created filtered views of the list, which contains more than 5,000 records. When I try to retrieve the records, I get an error: “The attempted operation is prohibited because it exceeds the list view threshold”. Can anyone help me get the data using pagination, batch by batch in a loop? If possible, please share a Python code snippet.
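Since the question asks for a Python snippet: one common approach is to page through the list via the SharePoint REST API with $top, following the odata.nextLink the server returns for each batch. The sketch below is an assumption-laden illustration, not a drop-in solution: it separates the paging loop from the transport so it can be shown without a live server, and get_page would wrap an authenticated requests.get(url).json() call in real use.

```python
def fetch_all_items(get_page, first_url):
    """Collect all list items by following server-driven paging links.

    get_page: a callable that takes a URL and returns the parsed JSON
    body as a dict. In real use, wrap an authenticated HTTP GET, e.g.
    with the requests library:
        get_page = lambda url: session.get(
            url, headers={"Accept": "application/json;odata=nometadata"}).json()
    """
    items, url = [], first_url
    while url:
        data = get_page(url)
        items.extend(data.get("value", []))
        # SharePoint returns the next batch's URL while more rows remain
        url = data.get("odata.nextLink")
    return items

# Demo with a stubbed transport (no real server involved):
pages = {
    "page1": {"value": [{"Id": 1}, {"Id": 2}], "odata.nextLink": "page2"},
    "page2": {"value": [{"Id": 3}]},
}
all_items = fetch_all_items(pages.get, "page1")
```

A real first URL would look something like https://contoso.sharepoint.com/sites/MySite/_api/web/lists/getbytitle('MyList')/items?$top=1000 (site and list names here are placeholders). Keep $top at or below 5,000, and note that the columns used in the view's filter generally must be indexed, or the threshold error can persist even with paging.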
How to change SP online site domain
Hi,
I wanted to change the SP Online domain URL from https://contoso.sharepoint.com to https://contoso.com.
Please suggest whether this requires a higher license, or whether it is possible at all.
Thanks,
Deepak
Different meeting stage for host and guest
Environment: teams web and desktop (new version 1415/24031414721), asp core 6, third party cookies
host = user that starts teams app, logs in (authenticates) and invokes shareAppContentToStage
guest = all others users
1. manifest loads sidebar via manifest config (for example configurationURL: https://test.de/sidebar)
2. sidebar contains a Login button which authenticates against https://test.de/login
3. successful login (cookie authentication) in the iFrame => the sidebar is redirected to https://test.de/host
4. javascript of host calls “microsoftTeams.meeting.shareAppContentToStage(handleShareScreenAction, `https://test.de/guestorhost`);”
5. endpoint /guestorhost checks if request is authenticated (in this example only user with sidebar logged in)
5.1 If request authenticated => redirect to ‘https://test.de/hoststage‘
5.2 If request NOT authenticated => redirect to ‘https://test.de/gueststage‘
Expected result: host should always get /hoststage, guest should always get /gueststage
Current result: sometimes it works properly; sometimes the host gets /gueststage or nothing
My guess is that third-party cookies are not working reliably: sometimes they are sent and sometimes not.
Microsoft Store latest changes with app downloads
Just wondering if any other IT admins are dealing with the recent changes to the Microsoft Store app downloads and how instead of launching the full MS Store, it now launches a containerized version. While this does lead to a better user experience, the unfortunate side effect is it bypasses our current restriction to block the MS Store from users.
We currently control all our MS Store apps in Intune via the Company Portal and have the “block MS Store” policy enabled. However, the change where you can download the apps via a self-contained EXE on the website now bypasses this block, presumably because the containerized version of the app installer is not referencing any of these policies.
Can Microsoft please address this? We don’t really want to block apps.microsoft.com but if this behavior isn’t changed this might be the end result.
Why comments are not imported into Planner from Trello with apps4.Pro?
Hello, I am using the apps4.Pro tool to transfer content from my team’s Trello to a new Planner plan. Every time I do this with the administrator, the comments that were made in Trello are not imported into Planner, even though the tool’s documentation confirms that comments will also be transferred. We exported the Trello cards with all their content into a JSON file, which is normally supported by apps4.Pro. Comments are correctly saved in the JSON file exported from Trello and are accessible via a key named “text”. Is there a particular format for Planner to recognize and import comments?
Decommissioning a single, no-longer-used Exchange Server 2013
About 1.5 years ago we moved all the email databases to Microsoft 365 and have used the cloud solution ever since. This was, and is, not a hybrid connection between the on-premises Exchange server and Microsoft 365. Now I want to delete all mailboxes on the on-premises server and uninstall it.
For this I am looking at this tutorial: https://techcommunity.microsoft.com/t5/exchange-team-blog/decommissioning-exchange-server-2013/ba-p/3613793. With some differences (e.g., disabling and deleting mailboxes instead of migrating them), this seems to be a good approach. But another tutorial suggests that a single server should be left for management (though that tutorial assumes a hybrid installation with Exchange 2016/2019): https://www.alitajran.com/remove-last-exchange-server/
My question is: How do I remove (or at least minimize) a no-longer-used Exchange Server 2013 installation on Windows Server 2012 R2?
Thank you in advance.
MCP Certification Transcript not Found on my MCID
Hello,
I received my MCP in 2006, but I have not logged in to the platform for many years. I was able to recover the account using my MCID, but despite merging the account with Microsoft Learn several days ago, I could not find my MCP SQL transcript anywhere.
Is there a way to retrieve a copy of my transcript?
Thank you for your help
Power Query only returning 500,000 rows of data into Excel
I have a Power Query that connects to an Azure Log Analytics workspace and pulls back data, which I then use to populate an Excel spreadsheet and generate graphs and pivot tables.
I have just noticed that the number of records returned into Excel caps out at 500,000, and I know that there are more than 500,000 records.
Is there a limit? I can’t figure out if it’s my query, or something else.
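For what it's worth, the Azure Monitor Log Analytics query API enforces per-query result caps (500,000 rows is the commonly documented limit, which matches the number observed here), so a frequent workaround is to split one big query into smaller time windows and run one query per window. A minimal sketch of the windowing logic, independent of any query client:

```python
from datetime import datetime, timedelta

def time_windows(start, end, step):
    """Yield consecutive (window_start, window_end) pairs covering [start, end)."""
    cur = start
    while cur < end:
        nxt = min(cur + step, end)
        yield cur, nxt
        cur = nxt

# Demo: one day split into 6-hour windows
windows = list(time_windows(datetime(2024, 1, 1), datetime(2024, 1, 2),
                            timedelta(hours=6)))
```

Each (window_start, window_end) pair would then be applied to the query (for example via a `where TimeGenerated between(...)` clause in KQL) and the partial result sets concatenated, keeping each query under the cap.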
Improving RAG performance with Azure AI Search and Azure AI prompt flow in Azure AI Studio
Content authored by: Arpita Parmar
Introduction
If you’ve been delving into the potential of large language models (LLMs) for search and retrieval tasks, you’ve probably encountered Retrieval Augmented Generation (RAG) as a valuable technique. RAG enriches LLM-generated responses by integrating relevant contextual information, particularly when connected to private data sources. This integration empowers the model to deliver more accurate and contextually rich responses.
Challenges with RAG evaluation
Evaluating RAG poses several challenges, requiring a multifaceted approach. Evaluating both response quality and retrieval effectiveness is essential to ensuring optimal performance.
Traditional evaluation metrics for RAG applications, while useful, have certain limitations that can impact their effectiveness in accurately assessing RAG performance. Some of these limitations include:
Inability to fully capture user intent: Traditional evaluation metrics often focus on lexical and semantic aspects but may not fully capture the underlying user intent behind a query. This can result in a disconnect between the metrics used to evaluate RAG performance and the actual user experience.
Reliance on ground truth: Many traditional evaluation metrics rely on the availability of a pre-defined ground truth to compare system-generated responses against. However, establishing ground truth can be challenging, particularly for complex queries or those with multiple valid answers. This can limit the applicability of these metrics in certain scenarios.
Limited applicability across different query types: Traditional evaluation metrics may not be equally effective across different query types, such as fact-seeking, concept-seeking, or keyword queries. This can result in an incomplete or skewed assessment of RAG performance, particularly when dealing with diverse query types.
Overall, while traditional evaluation metrics offer valuable insights into RAG performance, they are not without their limitations. Incorporating user feedback into the evaluation process adds another layer of insight, bridging the gap between quantitative metrics and qualitative user experiences. Therefore, adopting a multifaceted approach that considers retrieval quality, relevance of response to retrieval, user intent, ground truth availability, query type diversity, and user feedback is essential for a comprehensive and accurate evaluation of RAG systems.
Improving RAG Application’s Retrieval with Azure AI Search
When evaluating RAG applications, it is crucial to accurately assess retrieval effectiveness and to tune retrieval relevance. Since the retrieved data is key to a successful implementation of the RAG pattern, integrating Azure AI Search as the retrieval system can significantly enhance the quality of your results. Although Azure AI Search offers keyword (full-text), vector, and hybrid search capabilities, this post focuses on hybrid search. The hybrid search approach can be particularly beneficial in scenarios where retrieval performance is varied or insufficient. By integrating both keyword and vector-based search techniques, hybrid search can improve the accuracy and completeness of the retrieved documents, which in turn can positively impact the relevance of the generated responses.
The hybrid search process in Azure AI Search involves the following steps:
Keyword search: An initial keyword index search finds documents containing the query terms using the BM25 ranking algorithm.
Vector search: In parallel, vector search uses dense vector representations to map the query to semantically similar documents, leveraging embeddings in vector fields with the Hierarchical Navigable Small World (HNSW) or exhaustive k-nearest neighbors (KNN) algorithm.
Result merging: The results from both keyword and vector searches are merged using a Reciprocal Rank Fusion (RRF) algorithm.
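The merging step above can be sketched in a few lines. This is a generic Reciprocal Rank Fusion implementation for illustration, not Azure AI Search's internal code; the constant k=60 is the value commonly used in the RRF literature and is an assumption here.

```python
def rrf_merge(rankings, k=60):
    """Merge ranked result lists with Reciprocal Rank Fusion.

    score(doc) = sum over each ranking of 1 / (k + rank), where rank is
    the 1-based position of doc in that ranking. k=60 is the constant
    commonly used in the RRF literature (an assumption, not necessarily
    Azure's internal value).
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Demo: "d2" ranks high in both lists, so it wins the fused ranking
keyword_results = ["d1", "d2", "d3"]   # BM25 order
vector_results = ["d2", "d4", "d1"]    # vector-similarity order
merged = rrf_merge([keyword_results, vector_results])
```

Documents that appear near the top of both lists accumulate the largest fused scores, which is why RRF rewards agreement between the keyword and vector rankings.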
Enhancing Retrieval: Quality of retrieval & relevance tuning
When tuning for retrieval relevance and quality of retrieval, there are several strategies to consider:
Document processing: Experiment with chunk size and overlap to preserve context or continuity between chunks.
Document understanding: Embeddings play a pivotal role in enabling pipelines to understand documents in relation to user queries. By transforming documents and queries into dense vector representations, embeddings facilitate the measurement of semantic similarity between them. Consider selecting an appropriate embedding model. For example, higher-dimensional embeddings can store more context information but may require more computational resources, while smaller-dimensional embeddings are more efficient but may sacrifice some context.
Vector search configuration: Adjust the efConstruction parameter for HNSW to change the internal composition of the proximity graph, i.e., the way the search algorithm organizes information internally. Think of this configuration like building a map: the parameter helps the algorithm decide how many landmarks to use and how far apart they should be, which affects how quickly and accurately it finds relevant information.
Query-time parameters: Increase the number of results (k), which determines how many search results are returned for each query. Increasing k means the system will provide more potential matches, which can be useful when trying to find the best answer among many possibilities.
Enhancing hybrid search with Semantic re-ranking: To further enhance the quality of search results, a semantic re-ranking step can be added. Also known as L2, this layer takes a subset of the top L1 results and computes higher-quality relevance scores to reorder the result set. The L2 ranker can significantly improve the ranking of results already found by the L1, critical for RAG applications to ensure the best results are in the top positions. In Azure Search, this is done using a semantic ranker developed in partnership with Bing, which leverages vast amounts of data and machine learning expertise. The re-ranking step helps optimize relevance by ensuring that the most related documents are presented at the top of the list.
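The semantic-similarity measurement that embeddings enable (mentioned under "Document understanding" above) is typically cosine similarity between the query vector and each document vector. A minimal sketch with toy vectors; real embedding models produce hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy 3-dimensional "embeddings" (illustrative values, not model output)
query_vec = [0.2, 0.9, 0.1]
doc_vec = [0.25, 0.85, 0.05]
similarity = cosine_similarity(query_vec, doc_vec)   # close to 1.0: very similar
```

Vector search ranks documents by exactly this kind of score, so the quality of the embedding model directly determines how well semantic closeness maps to topical relevance.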
By unifying these retrieval techniques and configurations, hybrid search can handle queries more effectively compared to using just keywords or vectors alone. It excels at finding relevant documents even when users query with concepts, abbreviations or phraseology different from the documents.
A recent Microsoft study highlights that hybrid search with semantic re-ranking outperforms standalone retrieval methods such as dense and sparse passage retrieval across diverse question-answering tasks.
According to this study, key advantages with hybrid search with semantic re-ranking include:
Higher answer recall: Returning higher quality answers more often across varied question types.
Broader query coverage: Handling abbreviations and rare terms that vector search struggles with.
Increased precision: Merged results combining keyword statistics and semantic relevance signals.
Now that we’ve covered retrieval tuning, let’s turn our attention to evaluating generation and streamlining the RAG pipeline evaluation process. Azure AI prompt flow offers a comprehensive framework to streamline RAG evaluation.
Azure AI prompt flow
Prompt flow streamlines RAG evaluation with a multifaceted approach by efficiently comparing prompt variations, integrating user feedback, and supporting both traditional and AI-generated metrics that don’t require ground truth data. It ensures tailored responses for diverse queries, simplifying retrieval and response evaluation while providing comprehensive insights for improved RAG performance.
Both Azure AI Search and Azure AI prompt flow are available in Azure AI Studio, a unified platform for responsibly developing and deploying generative AI applications. The one-stop-shop platform enables developers to explore the latest APIs and models, access comprehensive tooling to support the generative AI development lifecycle, design applications responsibly, and deploy and scale models, flows and apps at scale with continuous monitoring.
With Azure AI Search, developers can connect models to their protected data for advanced fine-tuning and contextually relevant retrieval augmented generation. With Azure AI prompt flow, developers can orchestrate AI workflows with prompt orchestration, interactive visual flows, and code-first experiences to build sophisticated and customized enterprise chat applications.
Here is a video of how to build and deploy an enterprise chat application with Azure AI Studio.
Evaluating RAG applications in prompt flow revolves around three key aspects:
Prompt variations: Prompt variation testing, informed by user feedback, ensures tailored responses for diverse queries, enhancing user intent understanding and addressing various query types effectively.
Retrieval evaluation: This involves assessing the accuracy and relevance of the retrieved documents.
Response evaluation: The focus is on measuring the appropriateness of the LLM-generated response when provided with the context.
Below is the table of evaluation metrics for RAG applications in Prompt flow.
Metric Type | AI Assisted / Ground Truth Based | Metric | Description
Generation | AI Assisted | Groundedness | Measures how well the model’s generated answers align with information from the source data (user-defined context).
Generation | AI Assisted | Relevance | Measures the extent to which the model’s generated responses are pertinent and directly related to the given questions.
Retrieval | AI Assisted | Retrieval Score | Measures the extent to which the model’s retrieved documents are pertinent and directly related to the given questions.
Generation | Ground Truth Based | Accuracy, Precision, Recall, F1 score | Compares the RAG system’s responses to a set of predefined, correct answers; measures the ratio of shared words between the model generation and the ground truth answers.
There are three AI-assisted metrics available in prompt flow that do not require ground truth. Traditional, ground-truth-based metrics are useful while testing RAG applications in development, but AI-assisted metrics offer enhanced capabilities for evaluating user responses, especially in situations where ground truth data is unavailable. These metrics provide valuable insights into the performance of the RAG application in real-world scenarios, enabling a more comprehensive assessment of user interactions and system behavior. The three metrics are:
Groundedness: Groundedness ensures that the responses from the LLM align with the context provided and are verifiable against the available sources. It confirms factual accuracy and ensures that the conversation remains grounded when all responses meet this criterion.
Relevance: Relevance measures the appropriateness of the generated answers to the user’s query based on the retrieved documents. It assesses whether the response provides sufficient information to address the question and adjusts the score accordingly if the answer lacks relevance or contains unnecessary details.
Retrieval Score: The retrieval score reflects the quality and relevance of the retrieved documents to the user’s query. It breaks down the user query into intents, assesses the presence of relevant information in the retrieved documents, and calculates the fraction of intents with affirmative responses to determine relevance.
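The fraction-of-intents idea behind the retrieval score can be illustrated as below. Note that this sketch covers only the final aggregation step: in prompt flow the per-intent judgments come from an LLM grader and the reported metric uses its own scale, so the function and its input format are illustrative assumptions.

```python
def retrieval_score(intent_covered):
    """Fraction of the query's intents that the retrieved documents answer.

    intent_covered maps each intent (a sub-question extracted from the
    user query) to True/False. In prompt flow these judgments come from
    an LLM grader; supplying them directly here is a simplification.
    """
    if not intent_covered:
        return 0.0
    return sum(intent_covered.values()) / len(intent_covered)

# Demo: two of three intents are answered by the retrieved documents
score = retrieval_score({
    "what is RAG": True,
    "how is it evaluated": True,
    "what does it cost": False,
})
```

A low score therefore signals that retrieval missed part of what the user was actually asking, even if each returned document is individually on-topic.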
Groundedness, relevance, and the retrieval score, along with prompt variant testing from prompt flow, collectively provide insights into the performance of RAG applications. Together they enable refinement of RAG applications, addressing challenges such as information overload, incorrect responses, and insufficient retrieval, and ensuring more accurate responses throughout the end-to-end evaluation process.
Potential scenarios to evaluate RAG workflows
Now, let’s explore 3 potential scenarios to evaluate RAG workflows and how prompt flow and Azure AI Search help in evaluating those scenarios.
Scenario 1: Successful Retrieval and Response
This scenario entails the seamless integration of relevant contextual information with accurate and appropriate responses generated by the RAG application: both the retrieval and the response are good.
In this scenario, all three metrics perform optimally. Groundedness ensures factual accuracy and verifiability, relevance ensures the appropriateness of the answer to the query, and the retrieval score reflects the quality and relevance of the retrieved documents.
Scenario 2: Inaccurate Response, Insufficient Retrieval
Here, despite the retrieval of relevant documents, the response from the LLM is inaccurate. Groundedness may suffer if the response lacks verifiability against the provided sources. Relevance may also be compromised if the response does not adequately address the user’s query. The retrieval score might indicate successful document retrieval but fails to capture the inadequacy of the response.
To address this challenge, Azure AI Search retrieval tuning can be leveraged to enhance the retrieval process, ensuring that the most relevant and accurate documents are retrieved. By fine-tuning the search parameters discussed above in section “Enhancing Retrieval: Quality of retrieval & relevance tuning,” Azure AI Search can significantly improve the retrieval score, thereby increasing the likelihood of obtaining relevant documents for the given query.
Additionally, you can refine the LLM’s prompt by incorporating a conditional statement within the prompt template, such as “if relevant content is unavailable and no conclusive solution is found, respond with ‘unknown’.” Leveraging prompt flow, which allows for the evaluation and comparison of different prompt variations, you can assess the merit of various prompts and select the most effective one for handling such situations. This approach ensures accuracy and honesty in the model’s responses, acknowledging its limitations and avoiding the dissemination of inaccurate information.
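Such a conditional fallback can be expressed directly in the prompt template. The template text below is an illustrative example, not the exact wording prompt flow uses:

```python
# Illustrative template; the exact wording is an example, and the variable
# names (context, question) are placeholders for your flow's inputs.
PROMPT_TEMPLATE = """Answer the question using ONLY the context below.
If relevant content is unavailable and no conclusive answer can be found,
respond with exactly: unknown

Context:
{context}

Question:
{question}
"""

def build_prompt(context, question):
    return PROMPT_TEMPLATE.format(context=context, question=question)

prompt = build_prompt("Azure AI Search supports hybrid retrieval.",
                      "What retrieval modes does Azure AI Search support?")
```

Each such variant can then be registered in prompt flow and compared against alternatives using the metrics above to see which wording best suppresses fabricated answers.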
Scenario 3: Incorrect Response, Varied Retrieval Performance
In this scenario, the retrieval of relevant documents is followed by an inaccurate response from the LLM. Groundedness may be maintained if the responses remain verifiable against the provided sources. However, relevance is compromised as the response fails to address the user’s query accurately. The retrieval score might indicate successful document retrieval, but the flawed response highlights the limitations of the LLM.
Evaluation in this scenario involves several key steps facilitated by Azure AI prompt flow and Azure AI Search:
Acquiring Relevant Context: Embedding a user query to search a vector database for pertinent chunks is crucial. The success of retrieval relies on the semantic similarity of these chunks to the query and their ability to provide relevant information for generating accurate responses (see section “Enhancing Retrieval: Quality of retrieval & Relevance Tuning”).
Optimizing Parameters: Adjusting parameters such as retrieval type (hybrid, vector, keyword), chunk size, and K value is necessary to enhance RAG application performance. (see section “Enhancing Retrieval: Quality of retrieval & Relevance Tuning”).
Prompt Variants: Utilizing prompt flow, developers can test and compare various prompt variations to optimize response quality. By iterating prompt templates and LLM selections, prompt flow enables rapid experimentation and refinement of prompts, ensuring that the retrieved content is effectively utilized to produce accurate responses. (see section “How to evaluate RAG with Azure AI prompt flow”).
Refining Response Generation Strategies: Moreover, exploring different text extraction techniques and embedding models alongside experimenting with chunking strategies can further improve overall RAG performance. (see section “Enhancing Retrieval: Quality of retrieval & Relevance Tuning”).
How to evaluate RAG with Azure AI prompt flow
In this section, let’s walk through the step-by-step process of testing RAG using prompt variants with the prompt flow using metrics such as groundedness, relevance, and retrieval score.
Prerequisite: Build RAG using Azure Machine Learning prompt flow.
1. Prepare Test Data: Ideally, you should prepare a test dataset of 50-100 samples but for this article we will prepare a test dataset with a few samples. Save this as a csv file.
2. Add test data to Azure AI Studio: In your AI Studio project, under Components, select Data -> New data.
3. Select Upload files/folders and upload the test data from a local drive. Click on Next, provide a name to your data location and click on Create.
4. Once the test data is uploaded you can see its details.
5. Evaluate the flow: Under Tools -> Evaluation, click on New evaluation. Choose Conversation with context and select a flow you want to evaluate. Here we are testing two variants of prompt: Variant_0 and Variant_1. Click on Next.
6. Configure the test data. Click on Next.
7. Under Select Metrics, RAG metrics are automatically selected based on the scenario you have chosen. Refer to more details of metrics. Choose your Azure OpenAI Service instance and model and click on Next.
8. Review and finish. Click on Submit.
9. Once the evaluation is complete it will be displayed under Evaluations.
10. Check the results by clicking on the evaluation. You can compare the two variants of prompts by comparing their metrics to see which prompt variant is performing better.
11. You can check the result of individual prompt variant evaluation metrics under the Output tab -> Metrics dashboard.
12. Also under the Output tab, you can see a detailed view of the metrics under Detailed metrics result.
13. Under the Trace tab, you can trace how many tokens were generated and the duration for each test question.
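The test dataset from step 1 can be assembled with the standard csv module. The column names here (question, answer, context) are an assumption; align them with the input mapping your evaluation flow expects:

```python
import csv
import io

# Hypothetical column names and sample rows; match them to your flow's inputs.
rows = [
    {"question": "What is RAG?",
     "answer": "Retrieval Augmented Generation grounds LLM answers in retrieved context.",
     "context": "RAG enriches LLM responses with relevant contextual information."},
    {"question": "What does the retrieval score measure?",
     "answer": "How relevant the retrieved documents are to the query.",
     "context": "The retrieval score reflects the quality of retrieved documents."},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["question", "answer", "context"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()
# For the upload step, write to disk instead:
#   with open("rag_test_data.csv", "w", newline="", encoding="utf-8") as f:
#       ... same DictWriter calls against f ...
```

For a meaningful evaluation, scale this up toward the 50-100 samples recommended above rather than the two shown here.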
Conclusion
The integration of Azure AI Search into the RAG pipeline can significantly improve retrieval scores. This enhancement ensures that retrieved documents are more aligned with user queries, thus enriching the foundation upon which responses are generated. Furthermore, by integrating Azure AI Search and Azure AI prompt flow in Azure AI Studio, developers can test and optimize response generation to improve groundedness and relevance. This approach not only elevates RAG application performance but also fosters more accurate, contextually relevant, and user-centric responses.
Microsoft Tech Community – Latest Blogs –Read More
8 May 2024: Copilot Business Case Builder – how to calculate the benefits
Copilot Business Case Builder – how to calculate business benefits for the customer’s executive team: webinar on 8 May 2024
Webinar on 8 May 2024, 9:00–10:00.
Register using this link.
Do your customers need justification beyond time savings? Does it feel like your customers are hesitating on investment decisions?
A unique opportunity for everyone to come and hear the best tips for selling the Copilot for Microsoft 365 solution. In this webinar we give you the keys to the sales challenges, as Microsoft’s Business Case Builder guru Benny van Well explains a new way to calculate the value and benefits of Copilot for the customer.
Customers (and you) need more than time savings. In the webinar, Benny explains why and how business benefits are calculated so that the customer’s executive team can make an investment decision. Copilot discussions should always be taken to the executive team, not to the customer’s IT function.
By the end of this webinar, all participants will have the readiness and understanding needed to justify the Copilot investment decision to the customer.
Before the webinar, familiarize yourself with this material: Microsoft Business Case Builder
This webinar will, exceptionally, be held in English!
The recording will be available afterwards via the same registration link in Cloud Champion!
How can I plot the two complete circles vertically rather than horizontally?
clc
A = [-1 0 0 0 0 0 0 0 0 -1 1 -1 1 -1 2 -2 3 -3 4 -5 5 -7 8 -10 12 -16 20 -34 53 -30]';
B = [3262 131 -375 563 -639 602 -486 345 -218 124 -64 31 -13 5 -2 1 zeros(1,15)]';
C = zeros(29,1);
AA = [8 0 1 -1 3 -8 21 -52 126 -307 738 -1771 4215 -10047 23743 -56327 ...
      132493 -313806 736630 -1749066 4111518 -9852368 23316548 -57140296 ...
      137506208 -357384160 896199040 -3046175232 9340706816 -10404635648]';
BB = [-76625208 858156 -3341452 4741591 -7006134 8310705 -9026788 8857093 ...
      -7988619 6701862 -5230164 3847242 -2655485 1743048 -1080089 641116 ...
      -360810 195865 -101116 50743 -24261 11394 -5098 2281 -969 430 -179 97 -47 8]';
CC = [29 0 1 -1 1 -1 zeros(1,24)]';
a = 1 ; %RADIUS
L=.1;
akm=2;gamma=0.3;arh=10; %beta1=beta2=1,a1=1,a2=2,arh=10,delta=0.5,u2=-1
alphaa=sqrt(((2+akm).*akm./(gamma.*(2+akm))).^2+arh.^2);
betaa=(2.*akm.*arh.^2./gamma).^(0.25);
alpha1=sqrt((alphaa.^2+sqrt(alphaa.^4-4.*betaa.^4))./2);
alpha2=sqrt((alphaa.^2-sqrt(alphaa.^4-4.*betaa.^4))./2);
dd=6;
c =-a/L;
b =a/L;
m =a*200; % NUMBER OF INTERVALS
%[x,y]=meshgrid((c+dd:(b-c)/m:b),(c:(b-c)/m:b)');
[x,y]=meshgrid((c+dd:(b-c)/m:b),(0:(b-c)/m:b)');
mask = sqrt(x.^2+y.^2) < (a-0.1); % logical mask: grid points inside the inner circle
x(mask) = 0; y(mask) = 0;         % ([I,J]=find(...) with x(I,J)=0 would zero whole row/column blocks, not just the found points)
r=sqrt(x.^2+y.^2);
t=atan2(y,x);
r2=sqrt(r.^2+dd.^2-2.*r.*dd.*cos(t));
zet=(r.^2-r2.^2-dd.^2)./(2.*r2.*dd);
warning on
psi1=0;
for i=2:7
Ai=A(i-1);Bi=B(i-1);Ci=C(i-1);AAi=AA(i-1);BBi=BB(i-1);CCi=CC(i-1);
%psi1=-psi1-(Ai.*r.^(-i-1)+r.^(-3./2).*besselk(i-1./2,r.*alpha1).*Bi+r.^(-3./2).*besselk(i-1./2,r.*alpha2).*Ci).*legendreP(i-1,cos(t))-(AAi.*r2.^(-i-1)+r2.^(-3./2).*besselk(i-1./2,r2.*alpha1).*BBi+r2.^(1./2).*besselk(i-1./2,r2.*alpha2).*CCi).*legendreP(i-1,zet);
psi1=psi1+(Ai.*r.^(-i+1)+r.^(1./2).*besselk(i-1./2,r.*alpha1).*Bi+r.^(1./2).*besselk(i-1./2,r.*alpha2).*Ci).*gegenbauerC(i,-1./2, cos(t))+(AAi.*r2.^(-i+1)+r2.^(1./2).*besselk(i-1./2,r2.*alpha1).*BBi+r2.^(1./2).*besselk(i-1./2,r2.*alpha2).*CCi).*gegenbauerC(i,-1./2,zet);
end
hold on
%[DH1,h1]=contour(x,y,psi1,25,'-k','LineWidth',1.1); %,psi2,'--k',psi2,':k'
%[DH1,h1]=contour(x,y,psi1);
%p1=contour(x,y,psi1,[0.3 0.3],'k','LineWidth',1.1); %,'ShowText','on'
%p2=contour(x,y,psi1,[0.4 0.4],'r','LineWidth',1.1);
%p3=contour(x,y,psi1,[0.5 0.5],'g','LineWidth',1.1);
%p4=contour(x,y,psi1,[0.6 0.6],'b','LineWidth',1.1);
%p5=contour(x,y,psi1,[0.7 0.7],'c','LineWidth',1.1);
%p6=contour(x,y,psi1,[0.8 0.8],'m','LineWidth',1.1);
%p7=contour(x,y,psi1,[0.9 0.9],'y','LineWidth',1.1);
p1=contour(x,y,psi1,[0.01 0.01],'k','LineWidth',1.1); %,'ShowText','on'
p2=contour(x,y,psi1,[0.05 0.05],'r','LineWidth',1.1);
p3=contour(x,y,psi1,[0.1 0.1],'g','LineWidth',1.1);
p4=contour(x,y,psi1,[0.4 0.4],'b','LineWidth',1.1);
p5=contour(x,y,psi1,[0.6 0.6],'c','LineWidth',1.1);
p6=contour(x,y,psi1,[0.8 0.8],'m','LineWidth',1.1);
%clabel(DH1,h1,'FontSize',10,'Color','red')
%%%%%%%%%%%%%%% $\frac{\textstyle a_1+a_2}{\textstyle h}=6.0,\;$
hold on
t3 = linspace(0,pi,1000);
h2=0;
k2=0;
rr2=2;
x2 = rr2*cos(t3)+h2;
y2 = rr2*sin(t3)+k2;
set(plot(x2,y2,'-k'),'LineWidth',1.1);
fill(x2,y2,'w')
hold on
t2 = linspace(0,pi,1000);
h=dd;
k=0;
rr=1;
x1 = rr*cos(t2)+h;
y1 = rr*sin(t2)+k;
set(plot(x1,y1,'-k'),'LineWidth',1.1);
fill(x1,y1,'w')
%axis square;
axis('equal')
box on
%set(gca,'XTick',[], 'YTick', [])
axis on
xticklabels([])
yticklabels([])
legend('0.01','0.05','0.1','0.4','0.6','0.8','Location','northwest')
%title('$\frac{\beta_1}{a_1\mu}=\frac{a_1\beta_2}{\mu}=1.0,\;R_{H}=1.0,\;\frac{a_2}{a_1}=2.0$','Interpreter','latex','FontSize',12,'FontName','Times New Roman','FontWeight','Normal')
%title('$(a)\;\; R_{H}=1.0,\;\frac{\kappa}{\mu}=4.0$','Interpreter','latex','FontSize',12,'FontName','Times New Roman','FontWeight','Normal')
stream, two circles, vertical axis MATLAB Answers — New Questions
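For what it's worth, the usual fix for this kind of orientation problem is to swap the two plotted coordinates, so geometry laid out along the horizontal axis appears along the vertical axis (in MATLAB terms, plot(y2,x2) instead of plot(x2,y2), with the contour axes exchanged the same way). A minimal sketch of the idea in plain Python (the radii 2 and 1 and the offset dd = 6 are taken from the question; the helper names are illustrative):

```python
import math

def semicircle(radius, cx=0.0, cy=0.0, n=100):
    """Points of an upper semicircle centered at (cx, cy)."""
    return [(cx + radius * math.cos(t), cy + radius * math.sin(t))
            for t in (math.pi * i / (n - 1) for i in range(n))]

# Two semicircles originally laid out along the horizontal axis,
# centered at x = 0 and x = dd (dd = 6 as in the question).
dd = 6
big = semicircle(2.0, cx=0.0)
small = semicircle(1.0, cx=dd)

# To draw them vertically, swap the coordinates before plotting:
# plot(y, x) instead of plot(x, y).
big_vertical = [(y, x) for (x, y) in big]
small_vertical = [(y, x) for (x, y) in small]

# The circle centers now sit on the vertical axis at heights 0 and dd.
print(big_vertical[0], small_vertical[0])  # → (0.0, 2.0) (0.0, 7.0)
```

The same one-line swap applies to every plot, fill, and contour call in the MATLAB script.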
Change in the gradient of points in 3D space with respect to their neighbours.
Hello MATLAB Community,
Is there any function in MATLAB to find the gradient change of a point in 3D space with respect to its neighbouring points?
I have surface coordinates of approximately 15000 points (contained in an .STL file). Now I want to find the gradient change of each point with respect to its neighbours.
I am attaching the .STL file, which contains the coordinates of all the surface points.
It can be read in MATLAB using
TR=stlread('aggrgate_1.stl');
trimesh(TR); gradient change, stl MATLAB Answers — New Questions
Issue with integration using trapz
Hello,
I’m trying to implement integration using trapz, but the resulting quantity after integration is negligibly small and doesn’t increase, which confuses me. Am I doing something wrong?
Can you take a look and advise me on any changes?
num=15;
Iq(1)=eps;
for i=2:11
Iq(i)=trapz(X(1:i),X(1:i).*jd0.*(1-(X(1:i)-x0).^2).^num);
end
Here, x0=1
rgds,
rc trapz MATLAB Answers — New Questions
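Two things are worth checking here. First, (1-(X-x0).^2).^num with num = 15 collapses toward zero as soon as X moves away from x0 = 1 (and flips sign once |X-x0| > 1, since the exponent is odd), so very small results may be a property of the integrand rather than a bug in trapz. Second, each loop pass recomputes the whole prefix integral from scratch, which is correct but easy to misread. A plain-Python sketch of the same cumulative integral (X and jd0 are not given in the question, so sample values are assumed):

```python
def trapezoid(xs, ys):
    """Composite trapezoidal rule over the sample points (xs, ys)."""
    return sum((xs[i + 1] - xs[i]) * (ys[i] + ys[i + 1]) / 2.0
               for i in range(len(xs) - 1))

x0, num, jd0 = 1.0, 15, 1.0                      # x0 and num from the question
X = [0.8 + 0.04 * i for i in range(11)]          # assumed grid around x0
f = [x * jd0 * (1.0 - (x - x0) ** 2) ** num for x in X]

# Cumulative integral, like Iq(i) = trapz(X(1:i), ...) in the MATLAB loop.
Iq = [trapezoid(X[:i], f[:i]) for i in range(2, len(X) + 1)]

# On a grid close to x0 the integrand stays positive and the cumulative
# sums grow; on a grid far from x0 the high power crushes the integrand,
# which is the likely cause of the tiny, non-incrementing values observed.
print(Iq)
```

If X in the real data lies well away from 1, the factor (1-(X-1)^2)^15 alone explains the negligible values.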
Specialized Power Systems Multimeter Block
Is there a way to extract numerical results from the phasor analysis of the multimeter block?
What is the output format of this block and how to separate the output measurements into individual items? specialized power system, multimeter measurements MATLAB Answers — New Questions
Office Activations per user with devices specified
Hi,
Is it possible to get a report over all users who have activated Office including the name of the device (Windows, Apple, Android device name)?
I know that I can go through each user one by one to get that information, but having a report would be useful when searching for a specific computer name.
Thank you for your reply 🙂
Regards,
José
Multiple conditions case
Working on an Excel workbook where I want to see a value (Column1) in Column 12 if the value in Column 3 = C1, and if the value in Column 2 is >=C31, and if the value in Column 2 is <=D3. Only when all three conditions are true do I want to see the value in Column 1; otherwise I want to see "NO" in Column 1.
I tried the formula (IFS([@Column3]='Shift Pattern'!C1,[@Column1],[@Column2]>'Shift Pattern'!$C$3,[@Column1],[@Column2]<'Shift Pattern'!$D$3,[@Column1])
The above formula shows the Column 1 value even if only one of the conditions is true and ignores the others.
Can you please tell me how I can apply the conditions mentioned above?
Either I am applying the formula wrong or I am applying the wrong formula. Which is the case?
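The behaviour described is exactly how IFS works: it evaluates its condition/value pairs independently and returns the value for the first condition that is true, so any single true condition is enough. When all three conditions must hold at once, the usual pattern is IF with AND, e.g. =IF(AND([@Column3]='Shift Pattern'!C1,[@Column2]>='Shift Pattern'!$C$3,[@Column2]<='Shift Pattern'!$D$3),[@Column1],"NO"). The intended logic, sketched in Python for clarity (the function and argument names are placeholders, not Excel syntax):

```python
def pick_value(col1, col2, col3, c1, c3_low, d3_high):
    """Return col1 only when ALL three conditions hold, else "NO".

    Mirrors =IF(AND(Column3=C1, Column2>=C3, Column2<=D3), Column1, "NO")
    rather than IFS, which returns on the FIRST true condition alone.
    """
    if col3 == c1 and c3_low <= col2 <= d3_high:
        return col1
    return "NO"

print(pick_value("Shift A", 5, "X", c1="X", c3_low=3, d3_high=7))  # → Shift A
print(pick_value("Shift A", 9, "X", c1="X", c3_low=3, d3_high=7))  # → NO
```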
add additional horizontal line on graph
Hello,
I want to add an additional horizontal line to this graph; it would be linear and at y=24.8. It seems a simple problem, but I can't figure out how to do it. I have attached my graph and table data.
Thanks
Azure AD Assessment Tool from Microsoft not working anymore because of “disabled” enterprise app
Hi everyone,
I was using https://github.com/AzureAD/AzureADAssessment for some time to easily get a good list of all high-privileged users and enterprise apps.
But it does not work anymore because Microsoft disabled their own enterprise app due to service violations.
Creating my own app seems to be easy with the help of a user here:
This application has been disabled by Microsoft · Issue #89 · AzureAD/AzureADAssessment (github.com)
But I end up with:
Original exception: AADSTS7000218: The request body must contain
the following parameter: 'client_assertion' or 'client_secret'.
I already selected "Allow public client flows" and added the Redirect URI "https://login.microsoftonline.com/common/oauth2/nativeclient"
Can anyone help me out or do i need another tool?
BR
Stephan
Monitor SharePoint access
Is there a way to monitor or get alerts when a SharePoint site changes its permissions? For example, if someone new gets added to a SharePoint group or the permissions for the site changes. I’ve tried using Microsoft Purview alerts, but after setting up a few alerts several days ago, it doesn’t seem to be working. I’m not sure if these alerts just aren’t working or I set it up wrong? Is there some other tool I can look into? The only other thing I can think of is a flow to run a report or maybe a Power BI report showing the users and groups.
Deploy a Gradio Web App on Azure with Azure App Service: a Step-by-Step Guide
Context
Gradio is an open-source Python package that you can use for free to create a demo or web app for your machine learning model, API, Azure AI Services integration, or any Python function. You can run Gradio in Python notebooks or from a script. A Gradio interface can automatically create a public link, so you can easily share your demo or web app using Gradio's sharing features. A share link usually looks like this: https://07ff8706ab.gradio.live . This link uses the Gradio Share Servers, but these servers only forward your local server and do not keep any data sent through your app. Share links are valid for 72 hours. For a more stable way to host a demo app, we suggest using Azure App Service. App Service is a Platform as a Service (PaaS) offering from Microsoft. It allows us to host web applications, REST APIs, and backend services for mobile applications, built with multiple programming languages and frameworks including .NET, Java, and Python. This document gives you a detailed guide on how to get your Gradio application working on Azure. Up we go!
Run your project locally
Any IDE will work, but we recommend using VS Code, because it has many features that make it easy to create a virtual environment, deploy your project to Azure, and run a local server. Download the Visual Studio Code installer for Windows. When the download is done, run the installer (VSCodeUserSetup-{version}.exe). It will take a minute or less. VS Code will be installed in C:\Users\{Username}\AppData\Local\Programs\Microsoft VS Code by default. During the installation, don't forget to select the "Add Open with Code Action" option.
As an example, we will show this basic gradio app that shows the hello message to user.
import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("Hi friends!")

demo.launch(share=True)
You can run this code as a Python terminal or as a Jupyter notebook cell.
However, to get the Gradio app working on App Service, we need to mount the gradio.Blocks onto a FastAPI application.
Begin by creating a virtual environment. To achieve this, you need to install the library first.
pip install virtualenv
To create a venv for your project, follow these steps in your terminal: make a new project folder, use cd to go to the project folder, and run this command
cd my-project
python -m venv myenv
myenv\Scripts\activate
Alternatively, you can create a venv in VS Code using the command palette: Ctrl + Shift + P -> Python: Create Environment
Now install the libraries
pip install gradio
pip install fastapi
And rewrite your initial gradio code. Create main.py and add the following code :
from fastapi import FastAPI
import gradio as gr

app = FastAPI()

with gr.Blocks() as demo:
    gr.Markdown("Hi friends!")

app = gr.mount_gradio_app(app, demo, path="/")
You are now ready to run your FastAPI application with Uvicorn (plain python main.py would exit without starting a server, since main.py defines no entry point):
uvicorn main:app --reload
Please note, that when you need to use secrets in your code, you should use the environment variables.
import os
import gradio as gr

with gr.Blocks() as demo:
    my_secret_key = os.environ["MY_SECRET_KEY"]
    gr.Markdown("Hi friends!")

demo.launch(share=True)
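One caveat about the snippet above: os.environ["MY_SECRET_KEY"] raises a KeyError at startup when the variable is missing. A slightly more defensive pattern (a sketch; require_env is a helper name of my own, not part of Gradio or FastAPI):

```python
import os

def require_env(name: str) -> str:
    """Read a required environment variable, failing with a clear message."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"Environment variable {name} is not set; "
                           "configure it in launch.json or App Service settings.")
    return value

# Harmless local default so the app still starts during development.
os.environ.setdefault("MY_SECRET_KEY", "dev-placeholder")
my_secret_key = require_env("MY_SECRET_KEY")
```

This way a missing secret produces an actionable error instead of a bare KeyError traceback.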
One way to set the environment variable is through the terminal or PC settings, but a better way is to set up a debug profile in VS Code, which makes development easier. In your .vscode folder, put a launch.json file with this content:
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python Debugger: FastAPI",
            "type": "debugpy",
            "request": "launch",
            "module": "uvicorn",
            "args": [
                "main:app",
                "--reload"
            ],
            "jinja": true,
            "env": {
                "MY_SECRET_KEY": "<my secret key value>"
            }
        }
    ]
}
This will enable you to launch your local app using Run > Start Debugging.
Deploy to Azure App Service
Because Azure App Services run in the Linux environment, you need to install the gunicorn package, as this is what the startup command relies on instead of uvicorn.
pip install gunicorn
Use the following command to make a requirements file:
pip freeze > requirements.txt
This will create a file that displays all the packages and their dependencies with their versions, something like this:
aiofiles==23.2.1
altair==5.3.0
annotated-types==0.6.0
anyio==4.3.0
attrs==23.2.0
certifi==2024.2.2
charset-normalizer==3.3.2
click==8.1.7
colorama==0.4.6
contourpy==1.2.1
cycler==0.12.1
fastapi==0.110.1
ffmpy==0.3.2
filelock==3.13.4
fonttools==4.51.0
fsspec==2024.3.1
gradio==4.26.0
gradio_client==0.15.1
gunicorn==21.2.0
Make a new folder called deploy and open it in VSCode. Paste the main.py and requirements.txt files in this folder.
Some of the tutorials suggest creating a Docker image that can then run on App Service. But this is not required. You can also deploy code directly from a local workspace to App Service without making a Docker image.
Before you start, make sure you have the Azure Tools extension pack installed and that you are logged into Azure from VS Code. Then go to the Azure portal to create the resource: sign in to the Azure portal and type app services in the search bar at the top of the portal. Choose App Services under the Services heading in the menu that appears below the search bar.
On the App Services page, select + Create, then select + Web App from the drop-down menu.
On the Create Web App page, fill out the form as follows.
Resource Group → Select Create new and use your RG name.
Name → your-app-name. This name must be unique across Azure.
Runtime stack → Python 3.11.
Region → Any Azure region near you.
App Service Plan → Under Pricing plan, select Explore pricing plans to select a different App Service plan.
The App Service plan determines the amount of resources (CPU/memory) that your app can use and how much you pay for them. For this example, under Dev/Test, choose the Basic B1 plan. The Basic B1 plan costs a small amount from your Azure account but performs better than the Free F1 plan. When done, select Select to confirm your changes.
At the bottom of the screen on the main Create Web App page, choose the Review + create option. This will bring you to the Review page. To create your App Service, select Create.
Now, in VS Code, sign in to Azure using the command palette (Ctrl + Shift + P)
Then open the Azure extension in VSCode:
Now go to your Web App resource that you made earlier > Right Click > Deploy to Web App
This will start the deployment job
After the deployment is finished, go to the Azure portal, find your Web App, open its settings, and enter the environment variables
And then type the secret name and value as they appear in your local settings in VSCode.
To finish, go to Settings > Configuration > Startup Command and type in this command
python -m gunicorn main:app -k uvicorn.workers.UvicornWorker
To make the web app work properly and recognize the secrets, you have to restart it after setting the environment variables.
To see if the app service is functioning, go to Overview > Default Domain, and you can use this link to access your Web App.
There you have it, your Azure Web App is ready to go. I hope this article was useful.