Month: July 2024
How To Revert System Permissions to Default After Locking Myself Out?
I tried to delete a file by taking ownership and got carried away with the settings. Now I’m locked out of my own drive and my system doesn’t operate properly: I can’t manage my screen brightness, download apps, or uninstall apps, and only a handful of settings are working. I have gone through a dozen restarts hoping it would go back to normal, with no luck. So how do I restore the system permissions and ownership settings?
Booking 30-minute meetings only on the hour.
Hello,
I work for a company with a centralized hiring system, so we complete many online interviews. Currently I use a personal booking page, but I am switching to a shared booking page because I want the option to add required questions for my applicants to fill out prior to booking an interview.
However, I am running into an issue with scheduling on the shared booking page. My interview slots are 30 minutes, but on my personal booking page I had it set to limit start times to 1-hour intervals, so it would ONLY book at the top of the hour and give me 30 minutes between interviews to prep for the next. I have my Outlook calendar specifically set up so people can only book at 9am, 10am, 11am, 2pm, 3pm, and 4pm. I do not want Bookings to give them the option to book at 9:30, 10:30, etc., because then it becomes too much. Even if I set the time increments to 1 hour while keeping the 30-minute interview time, Bookings still offers the option to schedule on the half hour. Is there a way to get the best of both worlds, with both the required questions and 1-hour intervals?
Sentinel Region migration
Looking for any documentation describing Sentinel region migration, i.e., from one Azure region to another.
Is it possible to move Log Analytics workspace data to the new region?
Will that data still be usable (can we query it)?
What other major factors should we consider while migrating? Which things can we automate, and which need to be done manually, when migrating analytics rules, Logic Apps, workbooks, automation rules, watchlists, data connectors, parsers, etc.?
Sync is down on edge canary android
Good morning,
Why hasn’t syncing been working on Edge Android Canary for a while now? First came the inability to enable password synchronization; now (as of today) all synchronization options are completely blocked: disabled by default and impossible to enable again. Please explain the reasons for this choice, or deploy a fix quickly. Thank you very much, Microsoft Edge team!
Licensing for guest users in Entra ID
Hi,
We have Active Directory Premium P1 licenses in our tenant and I’d like to know how licensing works for guest users in Entra ID. We are enforcing MFA through Conditional Access and I’m trying to figure out whether the guest users will need this license for MFA enforcement. I know that if there’s a subscription attached to Entra ID, guest licensing is based on monthly active users (MAU), but there is no subscription in our Entra ID yet. I’m a Global Administrator in our tenant and I don’t see any subscriptions here.
Any advice would be appreciated.
Building the Ultimate Nerdland Podcast Chatbot with RAG and LLM: Step-by-Step Guide
Large Language Models (LLMs) have become a hot topic in the tech world. For those of us in Belgium (or the Netherlands) with a passion for technology and science, one of the go-to resources is the monthly podcast “Nerdland,” which delves into a variety of subjects from bioscience and space exploration to robotics and artificial intelligence.
Recognizing the wealth of knowledge contained in over 100 episodes of “Nerdland,” a simple thought came to our minds: why not develop a chatbot tailored for “Nerdland” enthusiasts? This chatbot would leverage the content of the podcasts to engage and inform users. Want to know more about the project with Nerdland? Visit aka.ms/nerdland
This chatbot enables the Nerdland community to interact with the Nerdland content on a whole new level. On top of that, it democratizes the wealth of information in all these podcasts. Since the podcast is in Dutch, its audience around the world is quite limited. Now we can expose this information in dozens of languages, because these LLMs are capable of multi-language conversations out of the box.
In this blog post, I’ll explore the technical details and architecture of this exciting project. I’ll discuss the LLMs utilized, the essential components, the process of integrating podcast content into the chatbot, and the deployment strategy used to ensure the solution is scalable, secure, and meets enterprise standards on Azure.
RAG Principles
If you’re reading this article, it’s likely you’ve already experimented with ChatGPT or other Generative AI (GenAI) models. You may have observed two notable aspects:
Large Language Models (LLMs) excel at crafting responses that are both articulate and persuasive.
However, the content provided by an LLM might be entirely fictional, a phenomenon known as “hallucinations.”
LLMs are often employed for data retrieval, offering a user-friendly interface through natural language. To mitigate the issue of hallucinations, it’s crucial to ground the LLM’s responses in a specific dataset, such as one you privately own. This grounding ensures that the LLM’s outputs are based on factual information.
To utilize LLMs with your proprietary data, a method called Retrieval Augmented Generation (RAG) is used. RAG combines the natural language prowess of LLMs with data they haven’t been explicitly trained on.
Several components are essential for this process:
Indexing: Your data must be organized in a way that allows for easy retrieval. Indexing structures your data into “documents,” making key data points readily searchable.
Depending on your data size, you may need to split it into smaller pieces before indexing. The reason is that LLMs only accept a limited number of input tokens (for reference: GPT-3.5 Turbo allows 4,096 tokens, while the newest GPT-4o allows up to 128,000 input tokens). Given that LLMs have a finite context window and tokens have a cost, chunking optimizes token usage. Typically, data is chunked in fixed increments (e.g., 1,024 characters), although the optimal size may vary depending on your data.
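As a minimal sketch of this chunking step (the sizes here are illustrative defaults, not the values used in the project; a small overlap keeps sentences cut at a boundary intact in at least one chunk):

```python
def chunk_text(text: str, chunk_size: int = 1024, overlap: int = 128) -> list[str]:
    """Split text into fixed-size character chunks with a small overlap,
    so content cut at a chunk boundary still appears whole in one chunk."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Example: a 2,500-character transcript yields three overlapping chunks.
transcript = "x" * 2500
print([len(c) for c in chunk_text(transcript)])  # [1024, 1024, 708]
```

Splitting on character counts is the simplest strategy; splitting on sentence or paragraph boundaries often retrieves better, at the cost of variable chunk sizes.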
Intent Recognition: The user’s query is processed by an LLM to extract the “intent,” a condensed version of the original question. Querying the index with this intent often produces more relevant results than using the full prompt. The index is then searched using the intent, yielding the top n documents.
Once the relevant documents are identified, they are fed into a Large Language Model. The LLM then crafts a response in natural language, drawing upon the information from these documents. It ensures that the answer is not only coherent but also traces back to the original data sources indexed, maintaining a connection to the factual basis of the information.
Keyword matching is a fundamental aspect of data retrieval, yet it has its limitations. For instance, a search for “car” might yield numerous references to “cars” but overlook related terms like “vehicles” due to the lack of direct word correlation. To enhance search capabilities, it’s not just exact keyword matches that are important, but also the identification of words with similar meanings.
This is where vector space models come into play. By mapping words into a vector space, “car” and “vehicle” can be positioned in close proximity, indicating their semantic similarity, while “grass” would be positioned far from both. Such vector representations significantly refine the search results by considering semantically related terms.
Embedding models are the tools that facilitate the translation of words into their vector counterparts. These pretrained models, such as the ADA model, encode words into vectors.
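The “close proximity” of semantically similar vectors is usually measured with cosine similarity. A toy sketch with hand-made 3-dimensional vectors (a real embedding model such as ADA produces vectors with on the order of 1,500 dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings": car and vehicle point in nearly the same direction,
# grass points elsewhere.
car, vehicle, grass = [0.9, 0.1, 0.0], [0.85, 0.15, 0.05], [0.0, 0.2, 0.9]

print(cosine_similarity(car, vehicle) > cosine_similarity(car, grass))  # True
```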
Integrating vector search with the Retrieval Augmented Generation (RAG) model introduces additional steps:
Initially, our index consists solely of textual documents. To enable vector-based searching, our data must also be vectorized using the pretrained embedding models. These vectors are then indexed, transforming our textual index into a vector database.
Incoming queries are likewise converted into vectors through an embedding model. This conversion allows for a dual search approach within our vector database, leveraging both keyword and vector similarities.
Finally, the top ‘n’ documents from the index are processed by the LLM, which synthesizes the information to generate a coherent, natural language response.
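The retrieve-then-generate flow above can be sketched end to end. This is a stand-in only: the `score` function below fakes the hybrid keyword + vector search with simple word overlap, and the assembled string is what would be sent to the LLM:

```python
def score(query: str, doc: str) -> float:
    """Stand-in for hybrid keyword + vector scoring: word-overlap ratio."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q)

def retrieve_and_prompt(query: str, docs: list[str], top_n: int = 2) -> str:
    """Retrieve the top-n documents and assemble the grounded LLM prompt."""
    top = sorted(docs, key=lambda d: score(query, d), reverse=True)[:top_n]
    context = "\n".join(f"- {d}" for d in top)
    return (
        "Answer using ONLY the sources below.\n"
        f"Sources:\n{context}\n"
        f"Question: {query}"
    )

docs = [
    "Episode 42 covered space exploration and the Artemis program.",
    "Episode 57 discussed robotics in surgery.",
    "Episode 61 was about fermentation.",
]
print(retrieve_and_prompt("which episode covered space exploration", docs, top_n=1))
```

Grounding the answer in the retrieved sources, as the prompt instructs, is exactly what keeps the LLM from hallucinating.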
RAG for Nerdland Assistant
Our Nerdland Assistant is a Retrieval Augmented Generation (RAG) chatbot, uniquely crafted to harness the rich content from the podcast archives. To achieve this, we’ve combined a suite of Azure components, each serving a distinct purpose in the chatbot’s architecture:
Container Apps: These are utilized to host custom logic in the form of containers in a serverless way, ensuring both cost-effectiveness and scalability.
Logic Apps: These facilitate the configuration of workflows, streamlining the process with ease and efficiency.
Azure OpenAI: This serves as a versatile API endpoint, granting access to a range of OpenAI models including GPT-4, ADA, and others.
AI Search: At the core of our chatbot is an index/vector database, which enables the sophisticated retrieval capabilities necessary for the RAG model.
Storage Accounts: A robust storage solution is essential for housing our extensive podcast library, ensuring that the data remains accessible and secure.
The journey of each Nerdland Assistant episode begins when a new MP3 file is uploaded to an Azure storage account. This triggers a series of automated workflows.
A Logic App is triggered, instructing a Container App to convert the stereo MP3 to mono format, a requirement for the subsequent speech-to-text conversion.
Another Logic App initiates the OpenAI Whisper transcription batch API, which processes the MP3s with configurations such as language selection and punctuation preferences.
A third Logic App monitors the transcription progress and, upon completion, stores the results back in the storage account.
A fourth Logic App calls upon a Python Scrapy framework-based Container App to scrape additional references from the podcast’s shownotes page.
The final Logic App sets off our custom indexer, hosted in a Container App, which segments the podcast transcripts into smaller chunks and uploads them to Azure AI.
Each chunk is then crafted into an index document, enriched with details like the podcast title, episode sponsor, and scraped data.
These documents are uploaded to Azure AI Search, which employs the ADA model to convert the text into vector embeddings, effectively transforming our index into a vector database.
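The enrichment step above can be sketched as follows. The field names are illustrative only; the real schema has to match the AI Search index definition, and the episode metadata here is made up:

```python
def build_index_documents(episode: dict, chunks: list[str]) -> list[dict]:
    """Turn transcript chunks into index documents, each enriched with
    episode-level metadata so every search hit can cite its source."""
    return [
        {
            "id": f"{episode['id']}-chunk-{i}",
            "content": chunk,
            "title": episode["title"],
            "sponsor": episode["sponsor"],
            "shownotes_refs": episode["refs"],
        }
        for i, chunk in enumerate(chunks)
    ]

episode = {"id": "ep-100", "title": "Nerdland #100", "sponsor": "ExampleCorp",
           "refs": ["https://example.com/shownotes"]}
docs = build_index_documents(episode, ["chunk one ...", "chunk two ..."])
print(docs[0]["id"], docs[1]["id"])  # ep-100-chunk-0 ep-100-chunk-1
```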
Azure AI Search harnesses the ADA model to automatically convert textual documents into vector embeddings, through configuration. While Azure AI Search is our chosen platform, alternatives like Azure Cosmos DB, Azure PostgreSQL, or Elasticsearch could also serve this purpose.
With our vector database now established atop our podcast episodes, we’re ready to implement Retrieval Augmented Generation (RAG). There are several approaches to this, such as manual implementation, using libraries like LangChain or Semantic Kernel, or leveraging Azure OpenAI APIs and Microsoft OpenAI SDKs.
The choice of method depends on the complexity of your project. For more complex systems involving agents, plugins, or multimodal solutions, LangChain or Semantic Kernel might be more suitable. However, for straightforward applications like ours, the Azure OpenAI APIs are an excellent match.
We’ve crafted our own RAG backend using the Azure OpenAI APIs, which simplifies the process by handling all configurations in a single stateless request. This API abstracts much of the RAG’s complexity and requires the following parameters:
LLM Selection: For the Nerdland Copilot, we’re currently utilizing GPT-4o.
Embedding Model: Such as ADA, to vectorize the input.
Parameters: These include settings like temperature (to adjust the creativity of the LLM’s responses) and strictness (to limit responses to the indexed data).
Vector Database: In our case, this is the AI Search, which contains our indexed data.
Document Retrieval: The number of documents the vector database should return in response to a query.
System Prompt: This provides additional instructions to the LLM on the desired tone and behavior, such as “answer informally and humorously, act like a geeky chatbot.”
User Prompt: The original question posed by the user.
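Put together, the parameters above map onto the request body sent to the Azure OpenAI chat completions endpoint with the “on your data” extension. The field names and values below are indicative only (verify them against the API version you deploy with), and the endpoint and index name are placeholders:

```python
def build_rag_request(user_prompt: str) -> dict:
    """Assemble an illustrative 'on your data' chat completions request body."""
    return {
        "messages": [
            {"role": "system",
             "content": "Answer informally and humorously, act like a geeky chatbot."},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.3,  # lower values mean less "creative" answers
        "data_sources": [{
            "type": "azure_search",  # our vector database
            "parameters": {
                "endpoint": "https://<search-resource>.search.windows.net",
                "index_name": "nerdland-episodes",
                "top_n_documents": 5,  # documents retrieved per query
                "strictness": 3,       # how tightly to limit answers to indexed data
            },
        }],
    }

body = build_rag_request("What did Nerdland say about space exploration?")
print(body["data_sources"][0]["type"])  # azure_search
```

Because the request is stateless, every call carries its full configuration, which is what makes this API so simple to operate behind a thin backend.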
The backend we created in step 9, deployed on a Container App, is abstracted by an API Management layer which enhances security, controls the data flow, and offers potential enhancements like smart caching and load balancing for OpenAI.
To maintain a record of chat interactions, we’ve integrated a Redis cache that captures chat history via session states, effectively archiving the chat history of the Large Language Model (LLM). This server-side implementation ensures that the system prompts remain secure from any end-user modifications.
The final touch to our backend is its presentation through a React frontend hosted on an Azure Static Web App. This interface not only provides a seamless user experience but also offers the functionality for users to view and interact with the sources referenced in each LLM-generated response.
This entire setup is fully scripted as Infrastructure as Code. We utilize Bicep and the Azure Developer CLI to template the architecture, ensuring that our solution is both robust and easily replicable.
LLM Configuration
The quality of the LLM’s answers is significantly shaped by several factors: the system prompt, the LLM parameters (such as temperature and max tokens), the chunk size, and the robustness of the indexing method.
Each of these parameters strongly influences the outcome, and the only way to improve the results is to assess them while varying the parameters. To structure this process, you can make use of PromptFlow, which lets you repeat this LLM tweaking cycle while keeping track of the quality of the results per configuration.
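The tweaking loop amounts to a sweep over configurations. A hypothetical sketch, where `evaluate` is a stand-in for a real quality metric (in PromptFlow this would be an evaluation flow run over a test set of questions):

```python
import itertools

def evaluate(temperature: float, chunk_size: int) -> float:
    """Stand-in quality metric; a real one would score answers on a test set."""
    return round(1.0 - abs(temperature - 0.3) - abs(chunk_size - 1024) / 4096, 3)

# Sweep a small grid of configurations and keep the score per configuration.
grid = itertools.product([0.0, 0.3, 0.7], [512, 1024, 2048])
results = {cfg: evaluate(*cfg) for cfg in grid}
best = max(results, key=results.get)
print(best)  # (0.3, 1024)
```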
Responsible AI
When deploying an application that makes use of generative AI, adhering to Responsible AI principles is crucial. These Microsoft principles guide the ethical and safe use of AI technologies: https://learn.microsoft.com/en-us/legal/cognitive-services/openai/overview
A key advantage of utilizing Azure OpenAI endpoints is the built-in safety content filters provided by Microsoft. These filters function by processing both the input prompts and the generated outputs through a sophisticated array of classification models. The goal is to identify and mitigate the risk of producing any content that could be deemed harmful.
Future of the project and GenAI
Triggered by the above? Feel free to explore it yourself at github.com/azure/nerdland-copilot.
The journey of developing this custom Assistant was a time-constrained endeavor that laid the groundwork for a basic yet functional system. The possibilities for expansion are endless, with potential future enhancements including:
Integration of models like GPT-4o: Enabling speech-based interactions with the bot, offering a more dynamic and accessible user experience.
Data enrichment: Incorporating a broader spectrum of (external) data to enrich the chatbot’s knowledge base and response accuracy.
Quality optimization: Embedding LLMOps (for example with: PromptFlow) into the application’s core to fine-tune the LLM’s performance, coupled with leveraging real-time user feedback for continuous improvement.
Incorporating graph libraries would enable the AI to present answers that are not only informative but also visually compelling, particularly for responses that involve statistical analysis. This would make data interpretation more intuitive for users.
Embracing the adage that “a picture is worth a thousand words,” integrating the ability for the AI to communicate through images and videos could improve the way we interact with the technology.
The concept of creating a podcast from scratch by combining various topics is an exciting prospect. It would allow for the generation of unique and diverse content, tailored to the interests and preferences of the individual. A possible way of achieving this is with “agents”: an agent is an LLM dedicated to one specific task (a specific model, prompt, etc.), and this scenario requires a setup where multiple agents work together.
Want to know more? Contact us!
Other sources
https://aka.ms/nerdland
https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/use-your-data?tabs=ai-search
https://github.com/Azure-Samples/azure-search-openai-demo
https://github.com/Azure/Vector-Search-AI-Assistant (another example, with AKS and Cosmos DB)
Microsoft Tech Community – Latest Blogs –Read More
Logic Apps Aviators Newsletter – July 2024
In this issue:
Ace Aviator of the Month
Customer Corner
News from our product group
News from our community
Ace Aviator of the Month
July’s Ace Aviator: Diogo Formosinho
What is your role and title? What are your responsibilities associated with your position?
I work as an Integration Developer at DevScope. My primary responsibility is to develop solutions based on client requirements. This involves analyzing client needs, designing and implementing integration solutions, ensuring that data flows between different systems. My role also emphasizes collaboration with cross-functional teams to ensure that our solutions are aligned with overall business objectives.
Can you provide some insights into your day-to-day activities and what a typical day in your role looks like?
As a developer, my day-to-day activities vary depending on the clients I’m working with. Recently, I’ve been focused on a client based in Canada, which has shaped my daily routine. My morning is dedicated to development and working on active projects. This is when I’m most focused and productive. Whether it’s creating logic apps or testing, the morning hours are crucial for making significant progress on the project. I often start by reviewing my tasks and setting goals for what I want to achieve by lunchtime.
The afternoon is reserved for meetings and collaboration. Working with a Canadian client means I need to sync my schedule to accommodate time zone differences. These meetings are essential for aligning and discussing project updates, and addressing any issues that arise. After meetings, I use the remaining time to make necessary changes based on the feedback received. This can involve tweaking, fixing bugs, or refining what is necessary. The late afternoon is a good time for this kind of work as it allows me to address immediate concerns and ensure that the project stays on track.
By balancing focused development time with collaborative meetings, I ensure that I stay productive and responsive.
What motivates and inspires you to be an active member of the Aviators/Microsoft community?
My motivation to share with the community comes from the potential of technology. The collaborative environment, where knowledge is shared and collective problem-solving thrives, constantly inspires me. Being part of a community that values growth, learning, and mutual support motivates me to keep sharing things that may help other professionals in their work.
Looking back, what advice do you wish you would have been told earlier on that you would give to individuals looking to become involved in STEM/technology?
The advice I wish I had received earlier is to have an open mind. It’s crucial to view failures and setbacks as learning opportunities rather than endpoints. Maintaining curiosity, continuously expanding your knowledge, and asking questions can greatly accelerate your learning curve.
What are some of the most important lessons you’ve learned throughout your career that surprised you?
One surprising lesson is the critical role of documentation. Initially, I believed that having a deep understanding of Logic Apps and their capabilities was sufficient. However, I quickly learned that clear, detailed documentation is indispensable. Good documentation not only helps in maintaining and scaling applications but also aids in troubleshooting and onboarding new team members. It ensures that the logic behind each app is transparent and accessible, which is crucial for long-term project sustainability and team collaboration.
Imagine you had a magic wand that could create a feature in Logic Apps. What would this feature be and why?
If I had a magic wand, I would create a feature in Logic Apps that enables integration with a wider range of AI and machine learning models. As someone with a master’s degree in artificial intelligence engineering, I understand the value of incorporating advanced analytics and predictive capabilities into workflows. This feature would allow users to easily integrate AI-powered insights into their applications without needing extensive data science expertise. By democratizing access to these insights, users could unlock new levels of efficiency, innovation, and decision-making.
Customer Corner:
SPAR NL readies for the future of retail with Azure Integration Services
Check out this customer success story with SPAR, a leading retail innovator, and how they’re revolutionizing their operations with Microsoft Azure Integration Services. Azure Logic Apps plays a crucial role in SPAR’s digital transformation journey by automating complex workflows and facilitating real-time data exchange between internal systems and external partners. Read more about how this integration has not only enhanced operational efficiency but also improved agility, enabling SPAR to respond swiftly to market demands and customer needs.
News from our product group:
Azure Logic Apps Community Standup – June 2024
Missed June’s Community Standup live last week? Catch up here in this recording and mark your calendar for July’s standup on the 26th.
Event Grid Trigger: Validation handshake Failed on Event subscription deployment
Seeing a failed-validation error on Event Grid subscription deployment while handling validation requests in workflows with Event Grid triggers? Check out this article for a solution.
Retrieve a Consumption Logic App workflow definition from deletion
Learn more about a recovery method for Consumption where you can retrieve the definition after deletion.
Logic App Standard Storage issues investigation using Slots
Having issues with inaccessible storage? We might have an option to help using Slots.
Azure Logic Apps PeekLock caching and Service Bus queue Lockduration
Check out this article about optimizations when integrating the “When messages are available in a queue (peek-lock)” Logic App trigger with an Azure Service bus queue.
Announcing: Public Preview of Resubmit from an Action in Logic Apps Consumption Workflows
We are excited to introduce Resubmit Action in the Consumption SKU. Read more about this long-awaited feature in this article.
Announcing: General Availability of Azure API Center extension for Visual Studio Code
Read more about our exciting news about Azure API Center extension for Visual Studio Code now being generally available.
Announcement: Introducing .NET C# Inline Action for Azure Logic Apps (Standard) – Preview
Check out this article about our new capability that allows developers to write .NET C# script right within the Logic Apps designer in Azure Portal.
Templates for Azure Logic Apps Standard: Seeking Your Feedback on UI Wireframes
Wanting to preview the new Templates for Azure Logic Apps? We’re looking for your feedback!
Announcement!! Azure OpenAI and Azure AI Search connectors are now Generally Available (GA)
We are thrilled to announce the general availability of Azure OpenAI and AI Search connectors for Logic Apps. Read more here.
Announcing the Public Preview of the Azure Logic Apps Rules Engine!
Learn how to effectively implement Mission Critical Solutions with the new Azure Logic Apps Rules Engine
Integration Environment Update: Introducing Unified Monitoring and Business Process Tracking Update
Check out this article about our exciting new capability in Integration Environment that allows you to monitor Azure Integration Services.
Announcement: Introducing .NET 8 Custom Code support for Azure Logic Apps (Standard) – Preview
We are excited to announce that we now support .NET 8 for custom code in Logic App Standard.
Logic Apps Standard – New Hybrid Deployment Model (Preview)
Read about our exciting new Hybrid Deployment Model for Logic Apps Standard that allows you to run Logic Apps workloads on customer managed infrastructure.
Integrate GPT4o (Azure Open AI) in Teams channel via Logic App with image supportability
Read this post to learn how to upgrade to GPT-4o with capability for image processing.
Advanced Scenarios with the 3270 Design Tool: Arrays and Screen collection with Robert Beardsworth
Watch this video with Harold Campos and Robert Beardsworth as they discuss the 3270 Design tool and demonstrate advanced scenarios to handle Arrays and Screens collection.
News from our community:
Remove Wasteful Processing in Logic Apps
Video by Mike Stephenson
Watch Mike discuss a scenario with Logic Apps where you can have wasteful processing of unchanged records. Learn how to optimize the cost of the Logic App to keep it efficient.
Friday Fact: New Logic App Designer (in GA) Enables Copy and Paste Actions
Post/Video by Luís Rigueira
Read this post or watch the video by Luis about how to use the copy and paste actions with the new generally available Designer in Logic Apps.
Generative AI Capabilities for Logic Apps Standard with Azure OpenAI and AI Search Connectors
Post by Steef-Jan Wiggers
Read this article from Steef-Jan discussing the general availability of Azure OpenAI and Azure AI Search connectors for Logic Apps Standard.
Post/Video by Diogo Formosinho
Learn from this month’s Ace Aviator Diogo about how to manage rate limits when it comes to External APIs.
Friday Fact: Consistency in Logic App Trigger Names Ensures Successful Resubmissions
Post/Video by Luís Rigueira
Read or watch this simple yet important tip and trick from Luis when it comes to testing your Logic Apps on Azure Portal.
Post/Video by Sandro Pereira
Implementing advanced routing scenarios? Learn about this trick from Sandro when it comes to using Conditional Split with Custom Expressions inside Logic Apps.
Get Started with Azure Logic Apps Standard using Visual Studio Code | Local Development with VS Code
Video by Sri Gunnala
Learn how to kickstart with Azure Logic Apps Standard in VS Code including prerequisites, a simple example workflow, and deployment to Azure in this getting started video from Sri.
Integration Insider: Integration Modernization with AIS – From Lift & Shift to Full Modernization
Video by Derek Marley and Tim Bieber
Watch another edition of Integration Insider where Derek and Tim reveal a new Integration Anti-Pattern emerging that enterprise organizations need to be aware of and how to combat it.
Post by Sandro Pereira
Read about a solution Sandro found for an error in a Logic Apps deployment related to misconfigured managed identities in API connections.
Psychtoolbox- Error Using Screen
Hi everyone, I’m encountering an issue with a section of code from a script. It keeps closing out with an error, and I’m not sure why. The error message reads as follows:
Error using Screen
Usage:
Screen(‘DrawTexture’, windowPointer, texturePointer [,sourceRect] [,destinationRect] [,rotationAngle] [, filterMode] [, globalAlpha] [, modulateColor] [, textureShader] [, specialFlags] [,
auxParameters]);
Error in memcongr (line 802)
Screen(‘DrawTexture’, window, leftStimuli{iTrial}, [], LeftPatch{iTrial});
The variables leftStimuli & LeftPatch are defined as:
leftStimuli = cell(1,nruns);
LeftPatch = cell(1,length(rightselection{oldtrialsperrun}));
Also, these variables are defined as:
oldtrialsperrun:
oldtrialsperrun = 5;
luretrialsperrun= 3;
nTrialsTotal = oldtrialsperrun + luretrialsperrun;
rightselection
images_encoding_stim = Shuffle(stimuli);
leftStimuli = cell(1,nruns);
RightStimuli = cell(1,nruns);
% Ensure the equality of the number of stimuli
total_stimuli = length(images_encoding_stim);
half_stimuli = floor(total_stimuli / 2);
% Divide into two groups: right position and left position
rightselection = images_encoding_stim(1:half_stimuli);
leftselection = images_encoding_stim(half_stimuli+1:end);
clear half_stimuli
% Divide for each run (here you get a code with the stimuli you need to present on the right and left sides)
for irun = 1:nruns
stimuliPerRunRight{irun} = {rightselection(1+(irun-1)*oldtrialsperrun : irun*oldtrialsperrun)};
stimuliPerRunLeft{irun} = {leftselection(1+(irun-1)*oldtrialsperrun : irun*oldtrialsperrun)};
end
Thank you very much for everything. I’ve been stuck with this error for a couple of months. I’m very willing and open to answering questions about any part of the code in order to get it working.Hi everyone, I’m encountering an issue with a section of code from a script. It keeps closing out with an error, and I’m not sure why. The error message reads as follows:
psychtoolbox, screen MATLAB Answers — New Questions
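One possible cause, sketched here purely as an assumption: Screen('DrawTexture') expects a numeric texture pointer returned by Screen('MakeTexture'). Because stimuliPerRunLeft{irun} = {leftselection(...)} wraps each run's selection in an extra pair of braces, the element eventually indexed as leftStimuli{iTrial} may be a cell array, or an empty placeholder left over from the cell(1,nruns) preallocation, rather than a texture handle. A minimal diagnostic before the failing call:

```matlab
% Hedged diagnostic sketch: verify that a valid texture pointer is passed.
% Variable names follow the original script; the unwrap step assumes the
% extra cell layer came from the { ... } wrapping in stimuliPerRunLeft.
tex = leftStimuli{iTrial};
if iscell(tex)
    tex = tex{1};   % unwrap the extra cell layer
end
assert(~isempty(tex) && isnumeric(tex), ...
    'leftStimuli{%d} is not a texture pointer from Screen(''MakeTexture'').', iTrial);
Screen('DrawTexture', window, tex, [], LeftPatch{iTrial});
```

If the assertion fires with an empty value, the cells were preallocated but never filled with the output of Screen('MakeTexture').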
What is the RoadRunner Maximum Map Size?
What is the maximum building area in RoadRunner? The default grid is 2000 x 2000 m; can I make that larger? roadrunner, mapsize, map, size, grid, gridsize MATLAB Answers — New Questions
My code runs in script and not in app design
Hi, I wrote code in a script and it works, but when I copy the exact code into App Designer it gives me the error: "Error using ./ Arrays have incompatible sizes for this operation." Help pls appdesigner, app designer MATLAB Answers — New Questions
ismember returning false for 0.6000 == 0.6
Hello,
I have a column of data that was created by using
A = 0.05:0.01:0.9
Secondly I am trying to obtain just the values of
B = [0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9]
However when I run
[C idx] = ismember(B,A)
it returns the logical array C and the index vector idx:
[1 1 1 1 1 0 1 1 1]
[6 16 26 36 46 0 66 76 86]
I have checked the workspace and confirmed that the value 0.6000 exists within A, and even when I index it explicitly, the comparison returns false:
A(56)
returns
0.6000
and
A(56) == 0.6
returns logical 0.
Repeating this for the other values in B results in logical 1s as array C describes.
Thank you for any help you can provide!
ismember, logical array, floating point MATLAB Answers — New Questions
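The behavior above is the classic floating-point pitfall: the colon range 0.05:0.01:0.9 accumulates rounding error, so A(56) differs from the literal 0.6 by roughly 1e-16 and exact comparison fails. A minimal sketch using ismembertol, which compares within a tolerance instead of bit-for-bit:

```matlab
A = 0.05:0.01:0.9;        % A(56) equals 0.6 only up to rounding error
B = [0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9];

abs(A(56) - 0.6)          % tiny but nonzero, which is why == and ismember fail

[C, idx] = ismembertol(B, A);   % tolerance-based version of ismember
% With the default tolerance, C is all true and idx(6) is 56.
```

The same idea applies anywhere exact equality is tested against values built by repeated addition: compare with a tolerance rather than ==.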
Can an ADF Pipeline trigger upon source table update?
Hi,
Is it possible for an Azure Data Factory Pipeline to be triggered each time the source table changes?
Let’s say I have a ‘copy data’ activity in a pipeline. The activity copies data from TableA to TableB. Can the pipeline be configured to execute whenever source TableA is updated (a record deleted, changed, a new record inserted, etc..)?
Thanks.
Read More
Why did the new Teams remove the feature that lets you know who has already joined a meeting?
In the classic Teams, you used to be able to see who had joined a meeting that already started. They’ve removed this in the new Teams.
Is there a workaround? Not sure why they’d remove helpful key features like this….
I’ve tried to use Teams classic, but it is now defunct 🙂
Read More
Copilot in Excel: Unlocking Insights from Data
You’ve seen how Copilot in Excel can help write complex formulas. Today, let’s delve into a dataset containing US birth data from 2000 – 2014 to learn how Copilot in Excel can help us format data, analyze data, and create visualizations.
1. First, we’d like to ask Copilot to format our data for better readability
We’d like to go from this:

Day of Week    Births
6              9083
7              8006

To this:

Day of Week    Births
Saturday       9,083
Sunday         8,006
Prompt: “Convert the days of week into words. For example, 1 is Monday. Additionally, add thousand separators into the birth column.”
2. Next, let’s ask Copilot a question about our data
Prompt: “What are the top 10 days with the lowest birth rate and give a rationale.”
With this prompt, Copilot creates a table with the days with the lowest birth rate, and it gives us an explanation that December 25th is a major US holiday, which means hospitals may have limited staff and schedule fewer elective births.
3. Finally, we’d like Copilot to help us create a visualization of our data to help us uncover more insights
One of the most powerful ways to understand data is through visualization. Copilot makes it easy to create compelling visuals that can highlight trends and patterns in the birth data.
Prompt: “Create a line graph that graphs the number of births by year. Grouped by the days of the week.”
Here we see that Copilot creates a line graph for us, with each line representing a day of the week. We can see that weekdays have higher birth rates than weekends.
*Disclaimer: If you try these types of prompts and they do not work as expected, it is most likely due to our gradual feature rollout process. Please try again in a few weeks.
Your feedback helps shape the future of Excel. Please leave a comment below with thoughts or questions on content related to this blog post. Additionally, please let us know how you like a particular feature and what we can improve upon—“Give a compliment” or “Make a suggestion”. You can also submit new ideas or vote for other ideas via Microsoft Feedback.
Subscribe to our Excel Blog and the Insiders Blog to get the latest updates. Stay connected with us and other Excel fans around the world – join our Excel Community and follow us on X, formerly Twitter.
Microsoft Tech Community – Latest Blogs –Read More
2024 Microsoft Nonprofit Partner of the Year Awards announced
The Microsoft Partner of the Year Awards celebrate outstanding achievement and innovation by partners. These partners play a crucial role in delivering transformative solutions that address complex challenges and drive success for customers around the globe.
The rigorous selection process focuses on partners who have demonstrated excellence in innovating and implementing customer solutions built on Microsoft technology. The importance of their work cannot be overstated: they extend what technology can do, empowering organizations to save time and reach farther.
Winner
Valorem Reply: Pioneering technology for social good
Valorem Reply is deeply committed to nonprofits. Their team of Microsoft-dedicated practitioners is uniquely equipped to understand and tackle the challenges faced by their customers, accelerating value from tech investments.
As part of the Global Reply network and a Microsoft Cloud Solutions Partner, Valorem Reply enables nonprofits with innovative technology. Their strategic focus on harnessing Microsoft technology to imagine and create cutting-edge solutions is a testament to their belief in using the best tools to forge a better future by empowering nonprofit organizations.
The collaboration between Valorem Reply and their customer CARE illustrates the transformative power of technology. CARE, a global humanitarian organization, collects and analyzes structured and unstructured survey data across 100+ countries to determine the preparedness of high-risk countries for upcoming emergencies.
By developing an OpenAI sentiment analysis application, Valorem Reply equipped CARE with a powerful tool to enhance emergency readiness and decision-making, reducing human error and delivery times, and providing timely insights for risk mitigation. The sentiment analysis application is a strategic tool for emergency preparedness, enabling CARE to make informed decisions that could make the difference between life and death.
The customer outcomes speak for themselves:
20% higher accuracy in AI-analyzed data, enabling informed decisions and enhancing business strategies.
40% reduction in qualitative survey data analysis time and resource requirements.
Our winner’s journey is one of passion, expertise, and unwavering dedication to making a positive impact. It’s a story of how technology, when aligned with humanitarian goals, can become a force for good, transforming lives and shaping a more equitable and safe future for all. This is the essence of Valorem Reply—a company that truly embodies the spirit of innovation for social good.
Finalists
Exigo Tech: Strong partnerships for mission-driven success
Exigo Tech champions the transformative power of technology for nonprofits, partnering closely with clients like Samaritans of Singapore (SOS) to revolutionize their information technology (IT) systems. Their commitment is to create strong partnerships with clients to produce successful technological outcomes.
SOS provides confidential emotional support to individuals considering suicide. SOS was overwhelmed with requests for emotional support across various channels, which delayed their ability to assist those in need. To address this, Exigo Tech helped SOS implement Microsoft Dynamics 365 to centralize and streamline communication, allowing SOS to quickly categorize and assign cases to the right mental health professionals. This efficient system ensured timely support, with Dynamics 365 omnichannel customer service module doubling the response rate and providing 24/7 virtual support, demonstrating the critical role of technology in mental health services.
KPMG: Powered Enterprise for nonprofit transformation
KPMG leverages their Powered Enterprise suite to thoughtfully transition clients to the cloud, focusing on transforming functions with technology as the key enabler. This preconfigured and tested model includes leading practices, detailed job descriptions optimized for Dynamics 365, and a wealth of artifacts to drive organizational change. It allows rapid delivery of Microsoft solutions with minimal configuration, ensuring speed to value, reduced risks, and future-proof integrity and security. Collaborating closely with Microsoft, KPMG enhances the Microsoft Cloud for Nonprofit, benefiting organizations worldwide.
One such organization is the Australian Red Cross (ARC). ARC needed to unify and update legacy systems that had created data silos and disparate platforms; Microsoft Cloud for Nonprofit enables a unified system that allows for faster data insights and scalable growth. Using KPMG’s Powered Enterprise approach, the solution was implemented in just nine months and provided ARC with clarity, direction, and a de-risked approach at every step.
Wipfli: Amplifying impact through AI innovation
Wipfli is revolutionizing global change by harnessing Microsoft AI capabilities across Azure, Modern Work, and Business Applications and Power Platform. Their innovative approach includes AI algorithms in Fundraising and Engagement, Dynamics 365 Copilot, and Microsoft Fabric, which are pivotal in data strategy and AI-powered data aggregation and visualization. This enables organizations to leverage technology for impactful change and societal advancement.
Wipfli helped Junior Achievement (JA) lay AI foundations with Fabric. JA leveraged Microsoft technology to optimize its curriculum for 4.4 million students, streamline operations, and foster community engagement while ensuring data privacy for its one million volunteers. This scalable model has been successfully implemented by JA and can be adopted by other national organizations to improve efficiency in collaboration with local branches.
Wipfli brought together complex environments from 102 local offices, strategically scaling JA’s global impact by creating a unified organization. The firm also employed Power Pages, Nonprofit Common Data Model, Microsoft Cloud for Nonprofit, and Dynamics 365 Customer Insights (marketing module) to develop a new volunteer portal. This innovation has resulted in significant operational improvements for Junior Achievement, including:
The elimination of a three-month manual data validation process, now handled efficiently through the new system.
A drastic reduction in the time required to organize 100,000 annual events—from a multi-day manual process to digital registration that takes less than five minutes.
These advancements have not only enhanced JA’s operational capabilities but also set a precedent for other organizations seeking similar transformations. Wipfli’s use of Microsoft tools has proven to be a game-changer in the nonprofit sector, enabling organizations to focus more on their mission and less on administrative burdens.
The winners and finalists of the 2024 Nonprofit Partner of the Year Award are shining examples of how technology can be leveraged for social good, improving lives, and fostering a more equitable and sustainable world. Their dedication to the global nonprofit community and transformation using Microsoft tools is changing the world, one impactful solution at a time.
Congratulations to all the nominees, finalists, and winners of the Partner of the Year 2024 in the nonprofit category. Your commitment to excellence and innovation is truly inspiring and sets a benchmark for others in the industry.
Microsoft Tech Community – Latest Blogs –Read More
Can an example be provided of using an attention mechanism on time series sequence data, and how to use it with an LSTM?
I have large time series sequence data, and I want to use an attention mechanism for this data, and also concatenate the output of the attention mechanism with an LSTM. Can anyone help me in this regard by working through an example? deep learning, attention mechanism MATLAB Answers — New Questions
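As a hedged starting point (the layer choices and sizes below are assumptions, and selfAttentionLayer requires Deep Learning Toolbox R2023a or later), a self-attention layer can be placed ahead of an LSTM in a sequence network:

```matlab
numFeatures = 8;    % hypothetical number of input channels
numHidden   = 64;   % hypothetical number of LSTM hidden units
numClasses  = 3;    % hypothetical number of output classes

layers = [
    sequenceInputLayer(numFeatures)
    selfAttentionLayer(4, 32)                   % 4 heads, 32 key/query channels
    lstmLayer(numHidden, 'OutputMode', 'last')  % LSTM consumes the attended sequence
    fullyConnectedLayer(numClasses)
    softmaxLayer];
```

To concatenate the attention output with an LSTM branch instead of chaining them, build a layerGraph (or dlnetwork) with two branches from the same input and join them with concatenationLayer; both branches must then emit sequences of matching format.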
Can’t get autoreply working!
Hello
Please, I need your help on this issue.
One of my clients is having an issue where he can’t get autoreply working.
Read More
How to Fix QuickBooks Error 3180 When Saving Sales Receipts or Invoices
I’m encountering QuickBooks Error 3180 when saving sales receipts or invoices, and it mentions an invalid tax code issue. Despite reviewing and updating my tax settings and QuickBooks, the problem persists. Can anyone provide a detailed solution to fix this error? Any assistance would be greatly appreciated!
Read More
I’m having problems with the taskbar
The individual taskbar icons are invisible, and I can’t run a Windows update. I tried restarting explorer.exe. Also, my autostart programs don’t work. Support advised me to contact the Windows Insider Program.
Read More
Troubleshooting QuickBooks Error 193 During Installation or Update
I’m experiencing QuickBooks Error 193 when trying to install or update QuickBooks Desktop. This error indicates a problem with the installation process, and I’ve already tried restarting my computer and reinstalling the software without success. Could someone provide detailed steps or solutions to resolve this issue? Any help would be appreciated!
Read More