Category: Microsoft
Exploring an AI Agent-Driven Auto Insurance Claims RAG Pipeline
Introduction:
In this post, I explore a recent experiment aimed at creating a RAG pipeline tailored for the insurance industry, specifically for handling automobile insurance claims, with the goal of potentially reducing processing times.
I also showcase the implementation of Autogen AI Agents to enhance search retrieval through agent interaction and function calls on sample auto insurance claims documents, a Q&A use case, and how this workflow can substantially reduce the time required for claims processing.
RAG workflows, in my opinion, represent a novel data stack, distinct from traditional ETL processes. Although they encompass data ingestion and processing similar to traditional ETL in data engineering, they introduce additional pipeline stages such as chunking, embedding, and loading data into vector databases, diverging from the standard Lakehouse or data warehouse pipelines.
Each stage of the RAG application workflow is pivotal to the accuracy and pertinence of the downstream LLM application. One of these stages is the chunking method, and for this proof of concept, I chose to test a page-based chunking technique that leverages the document’s layout without relying on third-party packages.
Key Services and Features:
By leveraging enterprise-grade features of Azure AI services, I can securely integrate Azure AI Document Intelligence, Azure AI Search, and Azure OpenAI through private endpoints. This integration ensures that the solution adheres to best practice cybersecurity standards. In addition, it offers secure network isolation and private connectivity to and from virtual networks and associated Azure services.
Some of these services are:
Azure AI Document Intelligence and the prebuilt-layout model.
Azure AI Search Index and Vector database configured with the HNSW search algorithm.
Azure OpenAI GPT-4o model.
Page-based Chunking technique.
Autogen AI Agents.
Azure OpenAI embedding model (text-embedding-ada-002).
Azure Key Vault.
Private Endpoints integration across all services.
Azure Blob Storage.
Azure Function App (this serverless compute platform can be replaced with Microsoft Fabric or Azure Databricks).
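To make the HNSW-configured vector index concrete, the sketch below builds an index definition as a plain Python dictionary mirroring the Azure AI Search REST schema. The index name, field names, and the 1536-dimension vector size are illustrative assumptions, not the exact schema used in this experiment:

```python
# Hypothetical index definition mirroring the Azure AI Search REST schema.
# Names ("claims-index", "contentVector") and the 1536 dims are illustrative.
index_definition = {
    "name": "claims-index",
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True},
        {"name": "page_number", "type": "Edm.Int32", "filterable": True},
        {"name": "content", "type": "Edm.String", "searchable": True},
        {
            "name": "contentVector",
            "type": "Collection(Edm.Single)",
            "searchable": True,
            "dimensions": 1536,  # depends on the embedding model
            "vectorSearchProfile": "claims-profile",
        },
    ],
    "vectorSearch": {
        "algorithms": [
            {
                "name": "claims-hnsw",
                "kind": "hnsw",
                # HNSW graph parameters: neighbors per node, build and
                # query-time search breadth, and distance metric
                "hnswParameters": {"m": 4, "efConstruction": 400,
                                   "efSearch": 500, "metric": "cosine"},
            }
        ],
        "profiles": [{"name": "claims-profile", "algorithm": "claims-hnsw"}],
    },
}
```

The profile ties the vector field to the HNSW configuration, which is what makes approximate nearest-neighbor search available at query time.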
Document Extraction and Chunking:
The sample claim documents are form templates with fields detailing the accident location, a description of the incident, vehicle information for the involved parties, and any injuries sustained. Thanks to the folks at LlamaIndex for providing the sample claims documents. Below is a sample of the forms template.
The claim documents are PDF files housed in Azure Blob Storage. Data ingestion begins from the container URL of the blob storage using the Azure AI Document Intelligence Python SDK.
This implementation of a page-based chunking method utilizes the markdown output from the Azure AI Document Intelligence SDK. The SDK, set up with the prebuilt-layout extraction model, extracts page content, including forms and text, into markdown, preserving the document’s specific structure, such as paragraphs and sections, and its context.
The SDK facilitates the extraction of documents page by page, via the pages collection of the documents, allowing for the sequential organization of markdown output data. Each page is preserved as an element within a list of pages, streamlining the process of efficiently extracting page numbers for each segment. More details about the document intelligence service and layout model can be found at this link.
The snippet below illustrates the process of page-based extraction, preprocessing of page elements, and their assignment to a Python list:
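A minimal sketch of that process is shown here. The real pipeline obtains the full markdown `content` and per-page spans from the Document Intelligence analyze result; in this sketch those inputs are stubbed with inline sample data so the splitting logic runs standalone, and the function name and sample text are illustrative:

```python
# Sketch of page-based chunking. In the real pipeline, `content` and the
# per-page (offset, length) spans come from the analyze result of the
# prebuilt-layout model with markdown output, e.g. result.content and the
# spans on each entry in result.pages.

def split_into_page_chunks(content, page_spans):
    """Slice the full markdown output into one chunk per page."""
    chunks = []
    for page_number, (offset, length) in enumerate(page_spans, start=1):
        chunks.append({
            "page_number": page_number,
            "content": content[offset:offset + length].strip(),
        })
    return chunks

# Inline sample standing in for a two-page analyze result
sample_content = (
    "# Claim Form\n\nAccident location: Main St.\n\n# Page 2\n\nInjuries: none."
)
first = sample_content.index("\n\n# Page 2")
spans = [(0, first), (first, len(sample_content) - first)]
pages = split_into_page_chunks(sample_content, spans)
```

Keeping one chunk per page means the page number travels with each chunk for free, which later becomes a metadata field in the index.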
Each page’s content is used as the value of the content field in the vector index, alongside other metadata fields. Each page is its own chunk and is embedded before being loaded into the vector database. The following snippet demonstrates this operation:
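A minimal, illustrative sketch of that operation follows. The Azure OpenAI embeddings call and the Azure AI Search upload are stubbed so the chunk-to-document mapping runs standalone; the field names are assumptions, not the exact schema used here:

```python
# Sketch of the embed-and-load step with the service calls stubbed out.

def build_search_documents(chunks, embed_fn):
    """Map page chunks to vector-store documents, one document per page."""
    return [
        {
            "id": str(chunk["page_number"]),
            "page_number": chunk["page_number"],
            "content": chunk["content"],
            "contentVector": embed_fn(chunk["content"]),
        }
        for chunk in chunks
    ]

# Stub embedding. The real call is along the lines of:
#   client.embeddings.create(model=deployment, input=text).data[0].embedding
# followed by SearchClient.upload_documents(documents=docs).
def fake_embed(text):
    return [float(len(text))] * 4

docs = build_search_documents(
    [{"page_number": 1, "content": "Accident at Main St."}], fake_embed
)
```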
Define Autogen AI Agents and Agent Tool/Function:
The concept of an AI Agent is modeled after human reasoning and the question-and-answer process. The agent is driven by a Large Language Model (its brain), which assists in determining whether additional information is required to answer a question or if a tool needs to be executed to complete a task.
In contrast, non-agentic RAG pipelines incorporate meticulously designed prompts that integrate context information (typically through a context variable within the prompt) sourced from the vector store before initiating a request to the LLM for a response. AI agents possess the autonomy to determine the “best” method for accomplishing a task or providing an answer. This experiment presents a straightforward agentic RAG workflow. In upcoming posts, I will delve into more complex, agent-driven RAG solutions. More details about Autogen Agents can be accessed here.
I set up two Autogen agent instances designed to engage in a question-and-answer chat conversation with each other to carry out search tasks based on the input messages. To enable the agents to search and fetch query results from the Azure AI Search vector store via function calls, I authored a Python function that is associated with both agents. The AssistantAgent, which is configured to suggest the function call, and the UserProxyAgent, which is tasked with executing the function, are both instances of Autogen’s ConversableAgent class.
The user agent begins a dialogue with the assistant agent by asking a question about the search documents. The assistant agent then gathers and synthesizes the response according to the system message prompt instructions and the context data retrieved from the vector store.
The snippets below provide the definition of Autogen agents and a chat conversation between the agents. The complete notebook implementation is available in the linked GitHub repository.
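As a rough illustration of the retrieval function the agents call, the sketch below substitutes an in-memory page list and naive keyword scoring for the real Azure AI Search vector query, so the shape of the tool is runnable on its own; the function name, data, and registration comments are illustrative:

```python
# Illustrative retrieval tool. The production version vectorizes the query
# and searches the Azure AI Search index; here a keyword overlap score
# stands in for vector similarity. Registration with Autogen would look
# roughly like:
#   assistant.register_for_llm(description="Search claims")(search_claims)
#   user_proxy.register_for_execution()(search_claims)

PAGES = [
    {"page_number": 1, "content": "Accident location: Main St. Driver: J. Doe."},
    {"page_number": 2, "content": "Injuries sustained: none reported."},
]

def search_claims(query, top=2):
    """Return the most relevant page chunks as a context string for the LLM."""
    def score(page):
        # Naive keyword overlap in place of vector similarity
        q = set(query.lower().split())
        return len(q & set(page["content"].lower().split()))
    ranked = sorted(PAGES, key=score, reverse=True)[:top]
    return "\n---\n".join(
        f"[page {p['page_number']}] {p['content']}" for p in ranked
    )

context = search_claims("Were any injuries sustained?")
```

The string returned by the tool becomes the grounding context the assistant agent synthesizes its answer from, which is why the retrieval logic has such a direct effect on answer quality.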
Last Thoughts:
The assistant agent correctly answered all six questions, aligning with my assessment of the documents’ information and ground truth. This proof of concept demonstrates the integration of pertinent services into a RAG workflow to develop an LLM application, which aims to substantially decrease the time frame for processing claims in the auto insurance industry scenario.
As previously stated, each phase of the RAG workflow is crucial to the response quality. The system message prompt for the Assistant agent needs precise crafting, as it can alter the response outcomes based on the set instructions. Similarly, the custom retrieval function’s logic plays a significant role in the agent’s ability to locate and synthesize responses to the messages.
The accuracy of the responses has been assessed manually. Ideally, this process should be automated.
In an upcoming post, I intend to explore the automated evaluation of the RAG workflow. Which methods can be utilized to accurately assess and subsequently refine the RAG pipeline?
Both the retrieval and generative stages of the RAG process require thorough evaluation.
What tools can we use to accurately evaluate the end-to-end phases of a RAG workflow, including extraction, processing, and chunking strategies? How can we compare various chunking methods, such as the page-based chunking described in this article versus the recursive character text split chunking option?
How do we compare the retrieval results of an HNSW vector search algorithm against the KNN exhaustive algorithm?
What kind of evaluation tools are available and what metrics can be captured for agent-based systems?
Is a one-size-fits-all tool available to manage these? We will find answers to these questions.
Moreover, I would like to examine how this and other RAG and generative AI workflows can be reviewed to ensure alignment with the standards of fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability defined in the Responsible AI framework for building and developing these systems.
Microsoft Tech Community – Latest Blogs –Read More
Microsoft Defender for Identity: the critical role of identities in automatic attack disruption
In today’s digital landscape, cyber-threats are becoming increasingly sophisticated and frequent. Advanced attacks are often multi-workload and cross-domain, requiring organizations to deploy robust security solutions to counter this complexity and protect their assets and data. Microsoft Defender XDR offers a comprehensive suite of tools designed to prevent, detect and respond to these threats. With speed and effectiveness being the two most important elements in incident response, Defender XDR tips the scale back to defenders with automatic attack disruption.
What is automatic attack disruption?
Automatic attack disruption is an AI-powered capability that uses the correlated signals in Microsoft Defender XDR to stop and prevent further damage of in-progress attacks. What makes this disruption technology so differentiated is our ability to recognize the intent of an attacker and accurately predict, then stop, their next move with an extremely high level of confidence. This includes automated response actions such as containing compromised devices, disabling compromised user accounts, or disabling malicious OAuth apps. The benefits of attack disruption include:
Disruption of attacks at machine speed: with an average time of 3 minutes to disrupt ransomware attacks, attack disruption changes the speed of response for most organizations.
Reduced Impact of Attacks: by minimizing the time attackers have to cause damage, attack disruption limits the lateral movement of threat actors within your network, reducing the overall impact of the threat. This means less downtime, fewer compromised systems, and lower recovery costs.
Enhanced Security Operations: attack disruption allows security operations teams to focus on investigating and remediating other potential threats, improving their efficiency and overall effectiveness.
The role of Defender for Identity
While attack disruption occurs at the Defender XDR level, it’s important to note that Microsoft Defender for Identity delivers critical identity signals and response actions to the platform. At a high level, Defender for Identity helps customers better protect their identity fabric through identity-specific posture recommendations, detections, and response actions. These are correlated with the other workload signals in the Defender platform and attributed to a high-fidelity incident. Within the context of attack disruption, Defender for Identity enables user-specific response actions including:
Disabling user accounts: When a user account is compromised, Defender for Identity can automatically disable the account to prevent further malicious activities. Whether the identity in question is managed in Active Directory on-premises or Entra ID in the cloud, Defender is able to take immediate action and help contain the threat and protect your organization’s assets.
Resetting passwords: In cases where a user’s credentials have been compromised, Defender for Identity can force a password reset. This ensures that the attacker can no longer use the compromised credentials to access your systems.
Microsoft Defender XDR’s automatic disruption capability is a game-changer in the world of cybersecurity. Powered by Microsoft Security intelligence and leveraging AI and machine learning, it provides real-time threat mitigation, reduces the impact of attacks, and enhances the efficiency of security operations. However, to fully realize the benefits of automatic disruption, it’s essential to include Defender for Identity in your security strategy, filling a critical need in your defenses.
Use this quick installation guide to deploy Defender for Identity.
Azure Communication Services at the DEVIntersection Conference
Join us for the DEVintersection Conference from September 10 to 12, 2024, in Las Vegas, Nevada. This event gathers technology enthusiasts from across the globe. Whether you are a developer, IT professional, or business leader, this conference provides an exceptional chance to explore the latest in cloud technology.
Our experts from Azure Communication Services will be there at the event and will be hosting the following sessions.
Take Your Apps to the Next Level: Azure OpenAI, Communication, and Organizational Data Features
Sept 10, 14:00 – 15:00 | Lab | Grand Ballroom 118 | Dan Wahlin
Many of us are building Line of Business (LOB) apps that can integrate data from custom APIs and third-party data sources. That’s great, but when was the last time you sat down to think through how you can leverage the latest technologies to take the user experience to the next level?
In this session, Dan Wahlin introduces different ways to enhance customer experience by adding calling and SMS. You can integrate organizational data to minimize user context shifts, leverage the power of AI to enhance customer productivity, and take your LOB apps to the next level.
Register and add this session.
Bridging the Gap: Integrating Custom Applications with Microsoft Using Azure
Sept 11, 15:30 – 16:30 | Session | Grand Ballroom 122 | Milan Kaur
Discover how Azure can facilitate seamless integration between non-Teams users and Microsoft Teams. If you’re invested in Teams and seeking to develop audio-video solutions to connect your custom third-party applications with Teams, this session is for you. Join us to explore the possibilities and streamline collaboration beyond internal Teams.
Register and add this session.
Beyond chatbots: add multi-channel communication to your AI apps
Sept 12, 08:30 – 09:30 | Session | Grand Ballroom 122 | Milan Kaur
Unlock the potential of conversational AI with Azure!
In this session, discover how to extend your bot’s functionality beyond standard chat interactions. We’ll learn together how to add voice and other messaging channels such as WhatsApp to build pro-code AI bots grounded in custom data.
Register and add this session.
About our speakers
Dan Wahlin
Principal Cloud Developer Advocate
Milan Kaur
Senior Product Manager
Dan Wahlin is a Principal Cloud Developer Advocate at Microsoft focusing on Microsoft 365 and Azure integration scenarios. In addition to his work at Microsoft, Dan creates training courses for Pluralsight, speaks at conferences and meetups around the world, and offers webinars on a variety of technical topics.
Twitter: @DanWahlin
Milan is a seasoned software engineer turned product manager passionate about building innovative communication tools. With over a decade of experience in the industry, she has a deep understanding of the challenges and opportunities in the field of cloud communications.
LinkedIn: @milankaurintech
Unlock Analytics and AI for Oracle Database@Azure with Microsoft Fabric and OCI GoldenGate
The strategic partnership between Oracle and Microsoft has redefined the enterprise cloud landscape. Oracle Database@Azure seamlessly integrates Oracle’s database services with Microsoft’s Azure cloud platform, empowering businesses to maintain the performance, security, and reliability of Oracle databases while modernizing with Azure’s extensive cloud services.
As companies strive to accelerate their digital transformation, reduce complexity, and optimize their cloud strategies, data remains central to their success. High-quality data underpins effective business insights and serves as the foundation for AI innovation. Now in public preview, customers have the opportunity to use OCI GoldenGate—a database replication and heterogeneous data integration service—to sync their data estates with Microsoft Fabric. This integration unlocks new prospects for data analytics and AI applications by unifying diverse datasets, allowing teams to identify patterns and visualize opportunities.
A Unified Platform for Data and AI
Microsoft Fabric is an AI-powered real-time analytics and business intelligence platform that consolidates data engineering, integration, warehousing, and data science into one unified solution. By simplifying the complexity and cost of integrating analytics services, Microsoft Fabric provides a seamless experience for data professionals across various roles.
Microsoft Fabric integrates tools like Azure Synapse Analytics and Azure Data Factory into a cohesive Software as a Service (SaaS) platform, featuring seven core workloads tailored to specific tasks and personas. This platform enables organizations to manage their entire data lifecycle within a single solution, streamlining the process of building, managing, and deploying data-driven applications. With its unified architecture, Microsoft Fabric reduces the complexity of managing a data estate and simplifies billing by offering a shared pool of capacity and storage across all workloads. It also enhances data management and protection with robust governance and security features.
A key highlight of Microsoft Fabric is its integration with native generative AI services, such as Copilot, which enables richer insights and more compelling visualizations. This AI-driven approach can significantly impact business growth by improving decision-making and collaboration across teams. With Power BI and Synapse workloads built in and native integration with Azure Machine Learning, you can accelerate the deployment of AI-powered solutions, making it an essential tool for organizations looking to advance their data strategies.
OCI GoldenGate integration with Microsoft Fabric
OCI GoldenGate is a real-time data integration and replication solution that ensures high availability, disaster recovery, and transactional integrity across diverse environments. When integrated with Microsoft Fabric, OCI GoldenGate adds significant value by enabling seamless, real-time data synchronization between Oracle databases and the AI-powered analytics platform of Fabric. This ensures that data professionals can work with the most up-to-date information across their data ecosystem, enhancing the accuracy and timeliness of insights.
OCI GoldenGate’s ability to support complex data transformations and migrations allows organizations to leverage Microsoft Fabric’s advanced analytics and AI capabilities without disruption, driving faster, more informed decision-making and enabling businesses to unlock new levels of innovation.
Get started
Enhance your data strategy and drive more informed decision-making by leveraging your existing Microsoft and Oracle investments with Oracle Database@Azure by integrating it with Microsoft Fabric. Get started today through the Azure Marketplace!
Read the Oracle CloudWorld blog: https://aka.ms/OCWBlog24
Learn more about Microsoft Fabric at https://aka.ms/fabric
Learn more about Oracle Database@Azure: https://aka.ms/oracle
Technical documentation: Overview – Oracle Database@Azure | Microsoft Learn
To set up OCI GoldenGate, you can refer to the documentation here – Implement OCI GoldenGate on an Azure Linux VM – Azure Virtual Machines | Microsoft Learn
Get skilled: https://aka.ms/ODAA_Learn
Announcing availability of Oracle Database@Azure in Australia East
Microsoft and Oracle are excited to announce that we are expanding the general availability of Oracle Database@Azure for the Azure Australia East region.
Customer demand for Oracle Database@Azure continues to grow – that’s why we’re announcing plans to expand regional availability to a total of 21 regions around the world. Oracle Database@Azure is now available in six Azure regions – Australia East, Canada Central, East US, France Central, Germany West Central, and UK South. To meet growing global demand, the service will soon be available in more regions, including Brazil South, Central India, Central US, East US 2, Italy North, Japan East, North Europe, South Central US, Southeast Asia, Spain Central, Sweden Central, United Arab Emirates North, West Europe, West US 2, and West US 3. In addition to the 21 primary regions, we will also add support for disaster recovery in a number of other Azure regions including Brazil Southeast, Canada East, France South, Germany North, Japan West, North Central US, South India, Sweden South, UAE Central, UK West, and West US.
As part of the continued expansion of Oracle services on Azure, we have new integrations with Microsoft Fabric and Microsoft Sentinel and support for Oracle Autonomous Recovery Service. Visit our sessions at Oracle CloudWorld and read our blog to learn more.
Learn more: https://aka.ms/oracle
Technical documentation: Overview – Oracle Database@Azure | Microsoft Learn
Get skilled: https://aka.ms/ODAA_Learn
Day zero support for iOS/iPadOS 18 and macOS 15
With Apple’s recent announcement of iOS/iPadOS 18.0 and macOS 15.0 Sequoia, we’ve been working hard to ensure that Microsoft Intune can provide day zero support for Apple’s latest operating systems so that existing features work as expected.
We’ll continue to upgrade our service and release new features that integrate elements of support for the new operating system (OS) versions.
Apple User Enrollment with Company Portal
With iOS/iPadOS 18, Apple no longer supports profile-based User Enrollment. Due to these changes, Intune will end support for Apple User Enrollment with Company Portal shortly after the release of iOS/iPadOS 18, and you’ll need to use an alternate management method for enrolling devices. We recommend enrolling devices with account-driven User Enrollment for similar functionality and an improved user experience. For those looking for a simpler enrollment experience, try the new web-based device enrollment for iOS/iPadOS.
Please note, device enrollment with Company Portal will remain unaffected by these changes.
Impact to existing devices and profiles:
After Intune ends support for User Enrollment with Company Portal:
Existing enrolled devices are not impacted and will continue to be enrolled.
Users won’t be able to enroll new devices if they’re targeted with this enrollment type profile.
Intune technical support will only be provided for existing devices enrolled with this method. We won’t provide technical support for any new enrollments.
New settings and payloads
We’ve continued to invest in the data-driven infrastructure that powers the settings catalog, enabling us to provide day zero support for new settings as they’re released by Apple. The Apple settings catalog has been updated to support all of the newly released iOS/iPadOS and macOS settings for both declarative device management (DDM) and mobile device management (MDM) so that your team can have your devices ready for day zero. New settings for DDM include:
Disk Management
External Storage: Control the mount policy for external storage
Network Storage: Control the mount policy for network storage
Safari Extension Settings
Allowed Domains: Control the domain and sub-domains that the extension can access
Denied Domains: Control the domain and sub-domains that the extension cannot access
Private Browsing: Control whether an extension is allowed in Private Browsing
State: Control whether an extension is allowed, disallowed, or configurable by the user
Software Update Settings
Allow Standard User OS Updates: Control whether a standard user can perform Major and Minor software updates
Software Update Settings > Automatic updates
Allowed: Specifies whether automatic downloads of available updates can be controlled by the user
Download: Specifies whether automatic downloads of available updates can be controlled by the user
Install OS Updates: Specifies whether automatic install of available OS updates can be controlled by the user
Install Security Update: Specifies whether automatic install of available security updates can be controlled by the user
Software Update Settings > Deferrals
Combined Period In Days: Specifies the number of days to defer a major or minor OS software update on the device
Major Period In Days: Specifies the number of days to defer a major OS software update on the device
Minor Period In Days: Specifies the number of days to defer a minor OS software update on the device
System Period In Days: Specifies the number of days to defer system or non-OS updates. When set, updates only appear after the specified delay, following the release of the update
Notifications: Configure the behavior of notifications for enforced updates
Software Update Settings > Rapid Security Response
Enable: Control whether users are offered Rapid Security Responses when available
Enable Rollback: Control whether users are offered Rapid Security Response rollbacks
Recommended Cadence: Specifies how the device shows software updates to the user
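For readers unfamiliar with DDM, these settings reach the device as a JSON declaration. The fragment below is a rough illustration only; the payload key names are inferred from the settings listed above, and Intune generates the actual declarations for you from the settings catalog, so check Apple’s declarative device management schema before relying on any of them:

```json
{
  "Type": "com.apple.configuration.softwareupdate.settings",
  "Identifier": "com.example.softwareupdate.settings",
  "Payload": {
    "AutomaticActions": {
      "Download": "Allowed",
      "InstallOSUpdates": "Allowed",
      "InstallSecurityUpdate": "Allowed"
    },
    "Deferrals": { "CombinedPeriodInDays": 14 },
    "Notifications": true,
    "RapidSecurityResponse": { "Enable": true, "EnableRollback": false }
  }
}
```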
New settings for MDM include:
Extensible Single Sign On (SSO) > Platform SSO
Authentication Grace Period: The amount of time after a ‘FileVault Policy’, ‘Login Policy’, or ‘Unlock Policy’ is received or updated that unregistered local accounts can be used
FileVault Policy: The policy to apply when using Platform SSO at FileVault unlock on Apple Silicon Macs
Login Policy: The policy to apply when using Platform SSO at the login window
Non Platform SSO Accounts: The list of local accounts that are not subject to the ‘FileVault Policy’, ‘Login Policy’, or ‘Unlock Policy’
Offline Grace Period: The amount of time after the last successful Platform SSO login a local account password can be used offline
Unlock Policy: The policy to apply when using Platform SSO at screensaver unlock
Extensible Single Sign On Kerberos
Allow Password: Allow the user to switch the user interface to Password mode
Allow SmartCard: Allow the user to switch the user interface to SmartCard mode
Identity Issuer Auto Select Filter: A string with wildcards that can be used to filter the list of available SmartCards by issuer, e.g., “*My CA2*”
Start In Smart Card Mode: Control if the user interface will start in SmartCard mode
Restrictions
Allow ESIM Outgoing Transfers
Allow Personalized Handwriting Results
Allow Video Conferencing Remote Control
Allow Genmoji
Allow Image Playground
Allow Image Wand
Allow iPhone Mirroring
Allow Writing Tools
System Policy Control
Enable XProtect Malware Upload
With the upcoming Intune September (2409) release, the new DDM settings will be:
Math
Calculator
Basic Mode
Add Square Root
Scientific Mode – Enabled
Programmer Mode – Enabled
Input Modes – Unit Conversion
System Behavior – Keyboard Suggestions
System Behavior – Math Notes
New MDM settings for Intune’s 2409 (September) release include:
System Extensions
Non Removable System Extensions
Non Removable System Extensions UI
Web Content Filter
Hide Deny List URLs
More information on configuring these new settings using the settings catalog can be found at Create a policy using settings catalog in Microsoft Intune.
Updates to ADE Setup Assistant screens within enrollment policies
With Intune’s September (2409) release, there’ll be six new Setup Assistant screens that admins can choose to show or hide when creating an Automated Device Enrollment (ADE) policy. These include three iOS/iPadOS and three macOS Skip Keys that will be available for both existing and new enrollment policies.
Emergency SOS (iOS/iPadOS 16+)
The IT admin can choose to show or hide the iOS/iPadOS Safety (Emergency SOS) setup pane that is displayed during Setup Assistant.
Action button (iOS/iPadOS 17+)
The IT admin can choose to show or hide the iOS/iPadOS Action button configuration pane that is displayed during Setup Assistant.
Intelligence (iOS/iPadOS 18+)
The IT admin can choose to show or hide the iOS/iPadOS Intelligence setup pane that is displayed during Setup Assistant.
Wallpaper (macOS 14+)
The IT admin can choose to show or hide the macOS Sonoma wallpaper setup pane that is displayed after an upgrade. If the screen is hidden, the Sonoma wallpaper will be set by default.
Lockdown mode (macOS 14+)
The IT admin can choose to show or hide the macOS Lockdown Mode setup pane that is displayed during Setup Assistant.
Intelligence (macOS 15+)
The IT admin can choose to show or hide the macOS Intelligence setup pane that is displayed during Setup Assistant.
For more information refer to Apple’s SkipKeys | Apple Developer Documentation.
Updates to supported vs. allowed versions for user-less devices
We previously introduced a new model for enrolling user-less devices (or devices without a primary user) for supported and allowed OS versions to keep enrolled devices secure and efficient. The support statements have been updated to reflect the changes with the iOS/iPadOS 18 and upcoming macOS 15 releases:
Support statement for supported versus allowed macOS versions for devices without a primary user.
If you have any questions or feedback, leave a comment on this post or reach out on X @IntuneSuppTeam. Stay tuned to What’s new in Intune for additional settings and capabilities that will soon be available!
LLM Load Test on Azure (Serverless & Managed-Compute)
Introduction
In the ever-evolving landscape of artificial intelligence, the ability to efficiently load test large language models (LLMs) is crucial for ensuring optimal performance and scalability. llm-load-test-azure is a powerful tool designed to facilitate load testing of LLMs running in various Azure deployment settings.
Why Use llm-load-test-azure?
The ability to load test LLMs is essential for ensuring that they can handle real-world usage scenarios. By using llm-load-test-azure, developers can identify potential bottlenecks, optimize performance, and ensure that their models are ready for deployment. The tool’s flexibility, comprehensive feature set, and support for various Azure AI models make it an invaluable resource for anyone working with LLMs on Azure.
Some scenarios where this tool is helpful:
You set up an endpoint and need to determine the number of tokens it can process per minute and the latency expectations.
You implemented a Large Language Model (LLM) on your own infrastructure and aim to benchmark various compute types for your application.
You intend to test real token throughput and conduct a stress test on your premium provisioned throughput units (PTUs).
Key Features
llm-load-test-azure is packed with features that make it an indispensable tool for anyone working with LLMs on Azure. Here are some of the highlights:
Customizable Testing Dataset: Generate a custom testing dataset tailored to settings similar to your use case. This flexibility ensures that the load tests are as relevant and accurate as possible.
Load Testing Options: The tool supports customizable concurrency, duration, and warmup options, allowing users to simulate various load scenarios and measure the performance of their models under different conditions.
Support for Multiple Azure AI Models: Whether you’re using Azure OpenAI, Azure OpenAI Embedding, Azure Model Catalog serverless (MaaS), or managed-compute (MaaP), llm-load-test-azure has you covered. The tool’s modular design enables developers to integrate new endpoints with minimal effort.
Detailed Results: Obtain comprehensive statistics such as throughput, time-to-first-token, time-between-tokens, and end-to-end latency in JSON format, providing valuable insights into the performance of your models.
Getting Started
Using llm-load-test-azure is straightforward. Here’s a quick guide to get you started:
Generate Dataset (Optional): Create a custom dataset using the generate_dataset.py script. Specify the input and output lengths, the number of samples, and the output file name.
python datasets/generate_dataset.py --tok_input_length 250 --tok_output_length 50 --N 100 --output_file datasets/random_text_dataset.jsonl
--tok_input_length: The token length of the input; minimum 25.
--tok_output_length: The token length of the output.
--N: The number of samples to generate.
--output_file: The name of the output file (default: random_text_dataset.jsonl).
Run the Tool: Execute the load_test.py script with the desired configuration options. Customize the tool’s behavior using a YAML configuration file, specifying parameters such as output format, storage type, and warmup options.
load_test.py [-h] [-c CONFIG] [-log {warn,warning,info,debug}]

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
                        config YAML file name
  -log {warn,warning,info,debug}, --log_level {warn,warning,info,debug}
                        Provide logging level. Example: --log_level debug (default: warning)
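The post does not show the config file itself, so here is a minimal sketch of what one might look like. The key names are an assumption inferred from the "load_options" block echoed back in the tool's JSON results; verify them against the sample config in the repository before use.

```yaml
# Hypothetical config.yaml; key names inferred from the JSON results,
# not taken from the repository itself
load_options:
  type: constant     # constant concurrency for the whole run
  concurrency: 8     # number of parallel requests
  duration: 20       # test length in seconds
```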
Results
The tool produces comprehensive statistics such as throughput, time-to-first-token, time-between-tokens, and end-to-end latency in JSON format, providing valuable insights into the performance of your Azure LLM endpoint.
Example of the JSON output:
{
  "results": [  # stats on a request level
    ...
  ],
  "config": {  # the run settings
    ...
    "load_options": {
      "type": "constant",
      "concurrency": 8,
      "duration": 20
    }
    ...
  },
  "summary": {  # overall stats
    "output_tokens_throughput": 159.25729928295627,
    "input_tokens_throughput": 1592.5729928295625,
    "full_duration": 20.093270540237427,
    "total_requests": 16,
    "complete_request_per_sec": 0.79,  # number of completed requests / full_duration
    "total_failures": 0,
    "failure_rate": 0.0,
    # time per output token
    "tpot": {
      "min": 0.010512285232543946,
      "max": 0.018693844079971312,
      "median": 0.01216195583343506,
      "mean": 0.012808671338217597,
      "percentile_80": 0.012455177783966065,
      "percentile_90": 0.01592913103103638,
      "percentile_95": 0.017840550780296324,
      "percentile_99": 0.018523185420036312
    },
    # time to first token
    "ttft": {
      "min": 0.4043765068054199,
      "max": 0.5446293354034424,
      "median": 0.46433258056640625,
      "mean": 0.4660029411315918,
      "percentile_80": 0.51033935546875,
      "percentile_90": 0.5210948467254639,
      "percentile_95": 0.5295632600784301,
      "percentile_99": 0.54161612033844
    },
    # inter-token latency
    "itl": {
      "min": 0.008117493672586566,
      "max": 0.01664590356337964,
      "median": 0.009861880810416522,
      "mean": 0.010531313198552402,
      "percentile_80": 0.010261738599844314,
      "percentile_90": 0.013813444118403915,
      "percentile_95": 0.015781731761280615,
      "percentile_99": 0.016473069202959836
    },
    # time to ack
    "tt_ack": {
      "min": 0.404374361038208,
      "max": 0.544623851776123,
      "median": 0.464330792427063,
      "mean": 0.46600091457366943,
      "percentile_80": 0.5103373527526855,
      "percentile_90": 0.5210925340652466,
      "percentile_95": 0.5295597910881042,
      "percentile_99": 0.5416110396385193
    },
    "response_time": {
      "min": 2.102457046508789,
      "max": 3.7387688159942627,
      "median": 2.3843793869018555,
      "mean": 2.5091602653265,
      "percentile_80": 2.4795608520507812,
      "percentile_90": 2.992232322692871,
      "percentile_95": 3.541854977607727,
      "percentile_99": 3.6993860483169554
    },
    "output_tokens": {
      "min": 200,
      "max": 200,
      "median": 200.0,
      "mean": 200.0,
      "percentile_80": 200.0,
      "percentile_90": 200.0,
      "percentile_95": 200.0,
      "percentile_99": 200.0
    },
    "input_tokens": {
      "min": 2000,
      "max": 2000,
      "median": 2000.0,
      "mean": 2000.0,
      "percentile_80": 2000.0,
      "percentile_90": 2000.0,
      "percentile_95": 2000.0,
      "percentile_99": 2000.0
    }
  }
}
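These summary fields are internally consistent, which makes a quick sanity check easy. The sketch below uses Python's standard json module with figures copied from the sample above:

```python
import json

# A slice of the "summary" object above, trimmed to the fields we need
summary = json.loads("""
{
  "output_tokens_throughput": 159.25729928295627,
  "full_duration": 20.093270540237427,
  "total_requests": 16
}
""")

# complete_request_per_sec is defined as completed requests / full_duration
rate = summary["total_requests"] / summary["full_duration"]  # ~0.796, reported as 0.79

# Every request in this run returned 200 output tokens, so the output
# token throughput should equal 200 * rate
tokens_per_sec = 200 * rate
assert abs(tokens_per_sec - summary["output_tokens_throughput"]) < 1e-6
```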
Conclusion
llm-load-test-azure is a powerful and versatile tool that simplifies the process of load testing large language models on Azure. Whether you’re a developer or AI enthusiast, this repository provides the tools you need to ensure that your models perform optimally under various conditions. Check out the repository on GitHub and start optimizing your LLMs today!
Bookmark this GitHub link: maljazaery/llm-load-test-azure (github.com)
Acknowledgments
Special thanks to Zack Soenen for code contributions, Vlad Feigin for feedback and reviews, and Andrew Thomas, Gunjan Shah and my manager Joel Borellis for ideation and discussions.
The llm-load-test-azure tool is derived from the original load test tool, openshift-psap/llm-load-test (github.com). Thanks to its creators.
Disclaimer
This tool is unofficial and not a Microsoft product. It is still under development, so feedback and bug reports are welcome.
Microsoft Tech Community – Latest Blogs –Read More
Microsoft at Open Source Summit Europe 2024
Join Microsoft at Open Source Summit Europe, from September 16 to 18, 2024. This event gathers open source developers, technologists, and community leaders to collaborate, share insights, address challenges, and gain knowledge—advancing open source innovation and ensuring a sustainable ecosystem. Open Source Summit features a series of events focused on the most critical technologies, topics, and issues in the open source community today.
Register for Open Source Summit Europe 2024 today!
Attend Microsoft sessions
Attend a Microsoft session at Open Source Summit Europe to learn more about Microsoft’s contributions to open source communities, gain valuable insights from industry experts, and stay up to date on the latest open source trends. Be sure to add these exciting sessions to your event schedule.
Monday, September 16, 2024
Session
Speakers
Time
The Open Source AI Definition is (Almost) Ready
Justin Colannino, Microsoft
Stefano Maffulli, Open Source Initiative
2:15 PM to 2:55 PM CEST
Tuesday, September 17, 2024
Session
Speakers
Time
Keynote: OSS Security Through Collaboration
Ryan Waite, Open Source Strategy and Incubations, Microsoft
9:50 AM to 10:05 AM CEST
Linux Sandboxing with Landlock
Mickaël Salaün, Senior Software Engineer, Microsoft
11:55 AM to 12:35 PM CEST
Danielle Tal, Microsoft; Mauro Morales, Spectro Cloud; Felipe Huici, Unikraft GmbH; Richard Brown, SUSE; Erik Nordmark, Zededa
11:55 AM to 12:35 PM CEST
Wednesday, September 18, 2024
Session
Speakers
Time
Panel: Why Open Source AI Matters for Europe
Justin Colannino, Microsoft; Sachiko Muto, OpenForum; Stefano Maffulli, Open Source Initiative; Cailean Osborne, The Linux Foundation
11:55 AM to 12:35 PM CEST
Open-Source Software Engineering Education
Stephen Walli, Principal Programmer Manager, Microsoft
3:10 PM to 3:50 PM CEST
Visit us at the Microsoft booth and experience exciting sessions and demos
Come visit us at booth D3 to engage with fellow open source enthusiasts at Microsoft, experience live demos on the latest open source technologies, and discuss the future of open source. You can also catch exciting sessions in the booth to learn more about a wide range of open source topics, including the following and more:
.NET 9
Azure Kubernetes Service
Flatcar Container Linux
Headlamp
Inspektor Gadget and eBPF observability
Linux on Azure
PostgreSQL
WebAssembly
We hope to see you in Vienna next week!
Learn more about Linux and open source at Microsoft
Open Source at Microsoft – explore the open source projects, programs, and tools at Microsoft.
Linux on Azure – learn more about building, running, and deploying your Linux applications in Azure.
10 more AI terms you should know
Jakarta, September 4, 2024 – Since generative artificial intelligence (AI) surged in popularity in late 2022, most of us have gained a basic understanding of the technology and of how it uses everyday language to make interacting with computers easier. Some of us have even dropped jargon like “prompt” and “machine learning” into relaxed coffee chats with friends. In late 2023, Microsoft summarized 10 AI terms you should know, but as AI evolves, so does its vocabulary. Do you know the difference between large and small language models? Or what the “GPT” in ChatGPT stands for? Here are ten more advanced AI terms you should know.
Reasoning/planning
Computers that use AI can now solve problems and complete tasks by using patterns they have learned from historical data to make sense of information, a process similar to reasoning or logical thinking. The most advanced AI systems can go a step further and tackle increasingly complex problems through planning: devising the sequence of actions that must be carried out to reach a particular goal.
For example, imagine asking an AI program to help you plan a trip to a theme park. You write: “I want to visit six different rides at theme park X, including the water ride at the hottest time of day on Saturday, October 5.” Based on that goal, the AI system can break it down into smaller steps to build a schedule, using reasoning along the way to make sure you don’t visit the same ride twice and that you can take the water ride between 12 p.m. and 3 p.m.
Training/inference
There are two steps in building and using an AI system: training and inference. Training is the process of educating an AI system: it is given a dataset and learns to perform tasks or make predictions based on that data. For example, it might be given a list of recently sold home prices in a neighborhood, along with the number of bedrooms and bathrooms in each house and many other variables. During training, the system adjusts its internal parameters, values that determine how much weight to give each variable and how it influences the sale price. Inference is when the AI system uses those learned patterns and parameters to produce a price prediction for a home that comes on the market in the future.
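The split is easy to see in miniature. Below is a toy sketch, with invented numbers and a single variable instead of many, that “trains” a linear model on past sales and then runs “inference” on an unseen listing:

```python
# Training: fit price = w * bedrooms + b to historical sales
# (bedrooms, sale price) pairs -- invented for illustration
homes = [(2, 250_000), (3, 320_000), (4, 390_000), (5, 460_000)]
n = len(homes)
mean_x = sum(x for x, _ in homes) / n
mean_y = sum(y for _, y in homes) / n

# Least-squares estimates: these are the "internal parameters"
# that the training step adjusts
w = sum((x - mean_x) * (y - mean_y) for x, y in homes) / \
    sum((x - mean_x) ** 2 for x, _ in homes)
b = mean_y - w * mean_x

# Inference: apply the learned parameters to a new 6-bedroom listing
predicted = w * 6 + b  # 530000.0 with this data
```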
Small language model (SLM)
Small language models, or SLMs, are pocket-sized versions of large language models (LLMs). Both use machine learning techniques to help them recognize patterns and relationships so they can generate realistic, everyday-language responses. While LLMs are enormous and demand substantial compute power and memory, SLMs such as Phi-3 are trained on smaller, curated datasets and have fewer parameters, making them more compact and even usable offline, without an internet connection. That makes them a good fit for devices such as laptops or phones, where you might want to ask a simple question about pet care but don’t need detailed information on how to train a guide dog.
Grounding
Generative AI systems can compose stories, poems, and jokes, and answer research questions. But sometimes they struggle to separate fact from fiction, or their training data is out of date, so they can give inaccurate responses, an occurrence known as hallucination. Developers work to help AI interact with the real world accurately through grounding: the process of connecting and anchoring a model to real-world data and examples to improve accuracy and produce output that is more contextually relevant and personalized.
Retrieval Augmented Generation (RAG)
When developers give an AI system access to a grounding source to help it be more accurate and current, they use a method called Retrieval Augmented Generation, or RAG. The RAG pattern saves time and resources by providing additional knowledge without having to retrain the AI program.
It’s as if you were Sherlock Holmes: you have read every book in the library but still can’t crack the case, so you climb to the attic, unroll a few ancient scrolls, and voilà, you find the missing piece of the puzzle. For another example, if you run a clothing company and want a chatbot that can answer questions specific to your products, you can apply the RAG pattern over your product catalog to help customers find the perfect green sweater from your store.
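In code, the pattern boils down to “retrieve, then augment the prompt.” The toy sketch below uses naive keyword overlap in place of a real embedding search, and the catalog lines are invented; in production, retrieval would go through a search index and the final prompt would be sent to a model endpoint:

```python
# Invented product catalog standing in for a grounding source
catalog = [
    "Green wool sweater, sizes S-XL, $49",
    "Blue denim jacket, sizes M-XXL, $89",
    "Green cotton t-shirt, sizes S-L, $19",
]

def retrieve(query, docs, k=2):
    """Rank documents by keyword overlap with the query (a crude
    stand-in for vector search) and keep the top k."""
    def words(s):
        return set(s.lower().replace(",", "").split())
    return sorted(docs, key=lambda d: -len(words(query) & words(d)))[:k]

question = "do you have a green sweater"
context = retrieve(question, catalog)

# Augment the prompt with the retrieved context before calling the LLM
prompt = ("Answer using only this context:\n"
          + "\n".join(context)
          + "\nQuestion: " + question)
```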
Orchestration
AI programs have a lot to do while processing a user’s request. To make sure the system performs all of these tasks in the right order to produce the best response, they are coordinated by an orchestration layer.
For example, if you ask Microsoft Copilot “who was Ada Lovelace” and then ask “when was she born” in your next prompt, the AI orchestrator stores your chat history so it can see that “she” in the second prompt refers to Ada Lovelace.
The orchestration layer can also follow the RAG pattern by searching the internet for fresh information to add to the context and help the model produce a better answer. It’s like a conductor cueing the violins, then the flutes and oboes, following the sheet music to produce the sound the composer had in mind.
Memory
Today’s AI models technically have no memory. But AI programs can be given instructions that help them “remember” information by following specific steps with each interaction: temporarily storing previous questions and answers in a chat and then including that context in the current request to the model, or using grounding data from the RAG pattern to make sure a response draws on the latest information. Developers are experimenting with orchestration layers to help AI systems know whether they need to remember details only briefly (short-term memory, like jotting something on a sticky note) or whether it would be more useful to remember something for longer by storing it in a more permanent location.
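The sticky-note idea can be sketched in a few lines. The roles and the prompt format below are invented for illustration; real orchestration layers are considerably more involved:

```python
history = []  # the app's short-term memory; the model itself keeps none

def build_request(user_message, max_messages=10):
    """Record the new message, then fold recent history into one prompt
    so the model sees the prior turns as context."""
    history.append(("user", user_message))
    recent = history[-max_messages:]  # trim, like discarding old sticky notes
    return "\n".join(f"{role}: {text}" for role, text in recent)

prompt1 = build_request("Who was Ada Lovelace?")
history.append(("assistant", "A 19th-century mathematician."))
prompt2 = build_request("When was she born?")
# prompt2 now includes the earlier exchange, so "she" can be resolved
```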
Transformer models and diffusion models
People have been teaching AI systems to understand and generate language for decades, but one breakthrough that accelerated recent progress is the transformer model. Among generative AI models, transformers are the ones that understand context and nuance best and fastest. Fluent storytellers, they attend to patterns in data and weigh the importance of different inputs, which helps them quickly predict what comes next and thus generate text. The transformer is even the T in ChatGPT: Generative Pre-trained Transformer. Diffusion models, commonly used for image generation, add a different twist by working more gradually and methodically, diffusing pixels from random positions until they are distributed in a way that forms the image requested in the prompt. Diffusion models keep making small changes until they produce output that matches what the user needs.
Frontier models
Frontier models are large-scale systems that push the boundaries of AI and can perform a wide variety of tasks with broad new capabilities. They can be so advanced that we are sometimes surprised by what they can accomplish. Technology companies including Microsoft formed the Frontier Model Forum to share knowledge, set safety standards, and help everyone understand these powerful AI programs so that they are developed safely and responsibly.
GPU
A GPU, which stands for Graphics Processing Unit, is essentially a turbocharged calculator. GPUs were originally designed to render smooth, spectacular graphics in video games and have since become the muscle of computing. The chips contain many small cores, networks of circuits and transistors, that work on math problems together, which is known as parallel processing. That is essentially what AI does: solving huge numbers of calculations at scale so it can communicate in human language and recognize images or sounds. For that reason, AI platforms need lots of GPUs, for both training and inference. In fact, today’s most advanced AI models are trained on vast arrays of interconnected GPUs, sometimes tens of thousands of them spread across giant data centers, like those Microsoft runs in Azure, which are among the most powerful computers ever built.
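The parallel-processing idea is easy to demonstrate even on a CPU, though a GPU does it with thousands of cores at once. A toy sketch in Python (threads are used purely for illustration; real numeric speedups come from processes, native code, or the GPU itself):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """Sum of squares over [lo, hi): one independent chunk of the job."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

# Split one large calculation into chunks that can run side by side
chunks = [(0, 250_000), (250_000, 500_000),
          (500_000, 750_000), (750_000, 1_000_000)]
with ThreadPoolExecutor() as pool:
    total = sum(pool.map(partial_sum, chunks))

# The chunked result matches doing the whole job in one pass
assert total == sum(i * i for i in range(1_000_000))
```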
Learn more about the latest AI news on Microsoft Source, and follow our news in Indonesia via this page.
-END-
Conditional formatting using formula
Hi,
I’m looking to apply a conditional format to a table (Table1) which highlights the row where a cell matches a cell within another table (Table2).
I’ve had a look online; the only thing I can find is a formula which works if I refer to an array of cells rather than another table in the workbook:
=MATCH(A2,Array1,0)
This only highlights a single cell, even if I try to apply the conditional format to the whole of Table1.
Can anyone help?
Thanks
New Outlook:
Can’t sign in to the New Outlook. Besides, my Hotmail is blocked and I cannot access my mail.
Migrating to 365 with 2 domains
I have a client that has two different domains (old and new). Example: old email: email address removed for privacy reasons; new email: email address removed for privacy reasons. It looks like their provider created aliases for the new domain. The problem is they still get email going to the old address that gets forwarded(?) to the new one. I want to migrate over to 365. I’m pretty sure the migration will transfer over their email history using the new email, but I’m not sure how the forwarding will work. Can I create aliases for the old email in 365 to do the same?
Upcoming marketplace webinars available in September
Whether you are brand new to marketplace or have already published multiple offers, our Mastering the Marketplace webinar series has a variety of offerings to help you maximize the marketplace opportunity. Check out these upcoming webinars in September:
▪ Creating your first offer in Partner Center (9/5): Learn how to start with a new SaaS offer in the commercial marketplace; set up the required fields in Partner Center and understand the options and tips to get you started faster!
▪ Creating Plans and Pricing for your offer (9/10): Learn about the payouts process lifecycle for the Microsoft commercial marketplace, how to view and access payout reporting and what payment processes are supported within Partner Center. We will review the payouts process lifecycle for the Azure Marketplace; how to register and the registration requirements; general payout processes from start to finish; and, how to view and access payout reporting.
▪ AI and the Microsoft commercial marketplace (9/12): Through the Microsoft commercial marketplace, get connected to the solutions you need—from innovative AI applications to cloud infra and everything in between. Join this session to learn what’s on our roadmap and see how the marketplace helps you move faster and spend smarter.
▪ Developing your SaaS offer (9/12): In this technical session, learn how to implement the components of a fully functional SaaS solution including how to implement a SaaS landing page and webhook to subscribe to change events, and how to integrate your SaaS product into the marketplace.
Find our complete schedule here: https://aka.ms/MTMwebinars
#ISV #maximizemarketplace #Azure #MSMarketplace #MSPartners
Formula returning dash when I add a new cell
Extremely frustrating: I use this sheet to track my side-job pay, and it glitches every time I try to edit it and returns 0. I am trying to add August to the gross pay total.
Tasks
When I open Tasks I get “The task owner has restricted this action,” and “This list cannot be modified as it no longer exists.” I am horrified as I use it every day. I can’t modify the task in any way. How can I fix this?
A generalisation of the MAP lambda helper function
Discussion topic. Your thoughts are welcome.
On Saturday I finally bit the bullet and completed a MAPλ Lambda function that generalises the in-built MAP Lambda helper function. As examples, I tried problems of generating the Kronecker product of two matrices and then one of generating variants of an amortisation table.
The original amortisation schedule uses SCAN to calculate closing balances step by step from opening balances. Having returned the closing balances as an array, the principal is inserted at the first element to give opening balances. An array calculation based on the same code is used to return other values of interest using HSTACK.
Following that, I created the array of loan terms {10, 15, 20} (yrs) and used the formula
= MAPλ(variousTerms, AmortisationTableλ(principal, rate, startYear))
to generate
as a single spilt range.
I have posted a copy of MAPλ on GitHub
A version of Excel MAP helper function that will return an array of arrays (github.com)
The intention is that the function can be used without knowing how it works but you are, of course, welcome to try to pick through it.
Update Error for Windows 11 Insider Preview (10.0.26120.1542)
Hi!
When the update Windows 11 Insider Preview (10.0.26120.1542) started, it reached 1% and suddenly stopped.
I tried to run the Windows Update troubleshooter from Settings, but it showed error 0x803C010A and didn’t proceed either.
Anyone solved this problem?
Thanks
How to sync Outlook Notes with Gmail account
I have Outlook 2021 desktop installed on my PC. I would like to sync the Outlook Notes:
with my Google Workspace account. Is this possible?
Default SQL Server Connection for SSMS
SQL 2019 – SSMS 19.3.4.0
I was always wrongly under the impression that SSMS required a server connection in the Object Explorer to run a script against. We have databases with the same names on 2 servers as we’re preparing for migration and I accidentally ran a script on server B, even though there appeared to be no connection open to server B. Only Server A was connected in the object explorer. I was then shocked to find that any new sql script I opened was connected to server B which had been closed out in Object Explorer.
What controls the default server for a script when it is opened via File / Open in SSMS? What is the best way to lock a script to a specific server, or to make it more obvious which server it is being applied to? I may need to get used to looking in the bottom right where the SQL Server name is displayed, but I’d like to make it more foolproof.
I see activating SQLCMD Mode on the Query Menu is one option, but I wonder what the downside to this might be such that it is not default behaviour.