Month: August 2024
Speech Recognition for Alphanumeric
Hi,
I am using Azure Communication Services with Cognitive Services for handling voice call scenarios (STT and TTS). One of our customer use cases requires alphanumeric input in a workflow. The Azure Speech recognizer performs well for numbers and other patterns. However, when the user spells out letters for alphanumeric values, the recognition success rate is very low.
For example, the product ID pattern is like “P-43246”. In most cases, “P” is recognized as “D”, “B”, or “3”.
I have tested this on both mobile phone networks and VoIP. The success rate is significantly lower on mobile networks.
Are there any settings available to improve the recognition success rate?
Azure Services used:
ACS Phone Number
Azure Cognitive Service
Event Grid Subscriptions
Thanks,
Aravind
Optimizing Query Performance with Work_Mem
work_mem plays a crucial role in optimizing query performance in Azure Database for PostgreSQL. By allocating sufficient memory for sorting, hashing, and other internal operations, you can improve overall database performance and responsiveness, especially under heavy load or in complex query scenarios. Fine-tuning work_mem based on workload characteristics is key to achieving optimal performance in your PostgreSQL environment.
Understanding work_mem
Purpose:
Memory for Operations: work_mem sets the maximum amount of memory that can be used by operations such as sorting, hashing, and joins before PostgreSQL writes data to temporary disk files. This includes the operations needed for:
ORDER BY: Sort nodes are introduced in the plan when ordering cannot be satisfied by an index.
DISTINCT and GROUP BY: These can introduce Aggregate nodes with a hashing strategy, which require memory to build hash tables, and potentially Sort nodes when the Aggregate is parallelized.
Merge Joins: When sorting of one or both of the relations being joined cannot be satisfied via indexes.
Hash Joins: To build hash tables.
Nested Loop Joins: When memoize nodes are introduced in the plan because the estimated number of duplicates is high enough that caching results of lookups is estimated to be cheaper than doing the lookups again.
Default Value: The default work_mem value is 4 MB (or 4096 KB). This means that any operation can use up to 4 MB of memory. If the operation requires more memory, it will write data to temporary disk files, which can significantly slow down query performance.
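You can check the value currently in effect for your session with:
SHOW work_mem;   -- 4MB on a default installation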
Concurrent Operations:
Multiple Operations: A single complex query may involve several sorts or hash operations that run in parallel. Each operation can utilize the work_mem allocated, potentially leading to high total memory consumption if multiple operations are occurring simultaneously.
Multiple Sessions: If there are several active sessions, each can also use up to the work_mem value for their operations, which further increases memory usage. For example, if you set work_mem to 10 MB and have 100 concurrent connections, the total potential memory usage for sorting and hashing operations could reach 1,000 MB (or 1 GB).
Impact of Disk Usage:
Spilling to Disk: When the memory allocated for an operation exceeds work_mem, PostgreSQL writes data to temporary files on disk. Disk I/O is significantly slower than memory access, which can lead to degraded performance. Therefore, optimizing work_mem is crucial to minimize disk spills.
Disk Space Considerations: Excessive disk spills can also lead to increased disk space usage, particularly for large queries, which may affect overall database performance and health.
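To see when spills actually happen, one option (a minimal sketch; this parameter is not required by the workflow described later) is to log temporary file creation and then review the PostgreSQL logs:
SET log_temp_files = 0;   -- log every temporary file; a positive value acts as a size threshold in kB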
Hash Operations:
Sensitivity to Memory: Hash-based operations (e.g., hash joins, hash aggregates) are particularly sensitive to memory availability. PostgreSQL can use a hash_mem_multiplier to allow these operations to use more memory than specified by work_mem. This multiplier can be adjusted to allocate a higher memory limit for hash operations when needed.
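For example, the following is an illustrative session-level setting, not a blanket recommendation; it lets hash-based nodes use up to twice the work_mem limit:
SHOW hash_mem_multiplier;        -- inspect the current value
SET hash_mem_multiplier = 2.0;   -- hash operations may use up to 2 x work_mem in this session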
Adjusting work_mem at Different Levels
Server Parameter:
Affects all connections unless overridden.
Configured globally via REST APIs, the Azure CLI, or the Azure portal. For more information, read Server parameters in Azure Database for PostgreSQL – Flexible Server.
Session Level:
Adjusted using SET work_mem = '32MB';
Affects only the current session.
Reverts to default after the session ends.
Useful for optimizing specific queries.
Role or user level:
Set using ALTER ROLE username SET work_mem = '16MB';
Applied automatically upon user login.
Tailors settings to user-specific workloads.
Database Level:
Set using ALTER DATABASE dbname SET work_mem = '20MB';
Affects all connections to the specified database.
Function, Procedure Level:
Adjusted within a stored procedure/function using SET work_mem = '64MB';
Valid for the duration of the procedure/function execution.
Allows fine-tuning of memory settings for specific operations; a consolidated sketch of all the levels follows this list.
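As a consolidated sketch of the levels above (dbname, username, and report_heavy_sort are placeholder names):
-- Session level
SET work_mem = '32MB';

-- Role level (applied at login)
ALTER ROLE username SET work_mem = '16MB';

-- Database level
ALTER DATABASE dbname SET work_mem = '20MB';

-- Function level: the setting applies only while the function runs
CREATE OR REPLACE FUNCTION report_heavy_sort() RETURNS void
LANGUAGE plpgsql
SET work_mem = '64MB'
AS $$
BEGIN
  -- expensive ORDER BY / GROUP BY work goes here
  PERFORM 1;
END;
$$;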
Server Parameter: work_mem
The formula provided, work_mem = Total RAM / Max Connections / 16, is a guideline to ensure that the memory is distributed effectively without over-committing resources. Refer to the official Microsoft documentation on managing high memory utilization in Azure Database for PostgreSQL here.
Breaking Down the Formula
Total RAM:
This is the total physical memory available on your PostgreSQL server. It’s the starting point for calculating memory allocation for various PostgreSQL operations.
Max Connections:
This is the maximum number of concurrent database connections allowed. PostgreSQL needs to ensure that each connection can operate efficiently without causing the system to run out of memory.
Division by 16:
The factor of 16 is a conservative estimate to prevent overallocation of memory. This buffer accounts for other memory needs of PostgreSQL and the operating system.
If your server has a significant amount of RAM and you are confident that other memory requirements (e.g., operating system, cache, other processes) are sufficiently covered, you might reduce the divisor (e.g., to 8 or 4) to allocate more memory per operation.
Analytical workloads often involve complex queries with large sorts and joins. For such workloads, increasing work_mem by reducing the divisor can improve query performance significantly.
Step-by-Step Calculation of work_mem
Total RAM:
The server has 512 GB of RAM.
Convert 512 GB to MB: 512 * 1024 = 524,288 MB
Max Connections:
The server allows up to 2000 maximum connections.
Base Memory Per Connection:
Divide the total RAM by the number of connections: 524,288 / 2000 = 262.144 MB
Apply the Conservative Factor (Divide by 16):
262.144 / 16 = 16.384 MB
You should set work_mem to approximately 16 MB (rounded from 16.384 MB).
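To reproduce the guideline directly on a running server, here is a small sketch; the 512 GB figure is the example value above, so substitute your instance's memory:
SELECT pg_size_pretty(
         (512::bigint * 1024 * 1024 * 1024)            -- total RAM in bytes (example value)
         / current_setting('max_connections')::bigint   -- divide by max_connections
         / 16                                           -- conservative factor
       ) AS suggested_work_mem;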
If you need help setting up server parameters or require more information, refer to the official documentation at Azure PostgreSQL Flexible Server Server Parameters. This resource provides comprehensive insights into the server parameters and their configuration.
Query Execution with EXPLAIN ANALYZE
Fine-Tune work_mem with EXPLAIN ANALYZE
To determine the optimal work_mem value for your query, you’ll need to analyze the EXPLAIN ANALYZE output to understand how much memory the query is using and where it is spilling to disk. Here’s a step-by-step guide to help you:
Execute the query with EXPLAIN ANALYZE to get detailed execution statistics:
EXPLAIN (ANALYZE, BUFFERS)
SELECT
*
FROM DataForWorkMem
WHERE time BETWEEN '2006-01-01 05:00:00+00' AND '2006-03-31 05:10:00+00'
ORDER BY name;
Analyze the Output
Look for the following details in the output:
Sort Operation: Check if there is a Sort operation and whether it mentions "external sort" or "external merge". This indicates that the sort operation needed more memory than work_mem allowed and had to spill to disk.
Buffers Section: The Buffers section shows the amount of data read from and written to disk. High values here may indicate that increasing work_mem could reduce the amount of data spilled to disk.
Here is the output generated by the above query:
Gather Merge  (cost=8130281.85..8849949.13 rows=6168146 width=47) (actual time=2313.021..3848.958 rows=6564864 loops=1)
  Workers Planned: 2
  Workers Launched: 1
  Buffers: shared hit=72278, temp read=97446 written=97605
  ->  Sort  (cost=8129281.82..8136992.01 rows=3084073 width=47) (actual time=2296.884..2726.374 rows=3282432 loops=2)
        Sort Key: name
        Sort Method: external merge  Disk: 193200kB
        Buffers: shared hit=72278, temp read=97446 written=97605
        Worker 0:  Sort Method: external merge  Disk: 196624kB
        ->  Parallel Bitmap Heap Scan on dataforworkmem  (cost=88784.77..7661339.18 rows=3084073 width=47) (actual time=206.138..739.962 rows=3282432 loops=2)
              Recheck Cond: (("time" >= '2006-01-01 05:00:00+00'::timestamp with time zone) AND ("time" <= '2006-03-31 05:10:00+00'::timestamp with time zone))
              Rows Removed by Index Recheck: 62934
              Heap Blocks: exact=15199 lossy=17800
              Buffers: shared hit=72236
              ->  Bitmap Index Scan on dataforworkmem_time_idx  (cost=0.00..86934.32 rows=7401775 width=0) (actual time=203.416..203.417 rows=6564864 loops=1)
                    Index Cond: (("time" >= '2006-01-01 05:00:00+00'::timestamp with time zone) AND ("time" <= '2006-03-31 05:10:00+00'::timestamp with time zone))
                    Buffers: shared hit=5702
Planning:
  Buffers: shared hit=5
Planning Time: 0.129 ms
Execution Time: 4169.774 ms
Let’s break down the details from the execution plan:
Gather Merge
Purpose: Gather Merge is used to combine results from parallel workers. It performs an order-preserving merge of the results produced by each of its child node instances.
Cost and Rows:
Planned Cost: 8130281.85..8849949.13
This is the estimated cost of the operation.
Planned Rows: 6168146
This is the estimated number of rows to be returned.
Actual Time: 2313.021..3848.958
The actual time taken for the Gather Merge operation.
Actual Rows: 6564864
The actual number of rows returned.
Workers:
Planned: 2
The planned number of parallel workers for this operation.
Launched: 1
The number of workers that were actually used.
Buffers
Shared Hit: 72278
This represents the number of buffer hits for shared buffers.
Temp Read: 97446
This indicates the amount of temporary disk space read.
Approximately 798.3 MB (97,446 blocks × 8 KB per block)
Temp Written: 97605
This indicates the amount of temporary disk space written.
Approximately 799.6 MB (97,605 blocks × 8 KB per block)
Sort Node
Sort:
Cost: 8129281.82..8136992.01
The estimated cost for the sorting operation includes both the startup cost and the cost of retrieving all available rows from the operator.
The startup cost represents the estimated time required to begin the output phase, such as the time needed to perform the sorting in a sort node.
Rows: 3084073
The estimated number of rows returned.
Actual Time: 2296.884..2726.374
The actual time taken for the sorting operation.
The first number represents the startup time for the operator, i.e., the time it took to begin executing this part of the plan. The second number represents the total time elapsed from the start of the execution of the plan to the completion of this operation. The difference between these two values is the actual duration that this operation took to complete.
Actual Rows: 3282432
The actual number of rows returned.
Sort Method
External Merge:
This indicates that an external merge sort was used, meaning that the sort could not be handled entirely in memory and required temporary files.
Disk:
Main Process: 193200 kB
The amount of disk space used by the main process for sorting.
Worker 0: 196624 kB
The amount of disk space used by the worker process for sorting.
To optimize PostgreSQL query performance and avoid disk spills, set the work_mem to cover the total memory usage observed during sorting:
Main Process Memory Usage: 193200 kB
Worker Memory Usage: 196624 kB
Total Memory Required: 389824 kB (approximately 380 MB)
Recommended work_mem Setting: 380 MB
This setting ensures that the sort operation can be performed entirely in memory, improving query performance and avoiding disk spills.
Increasing work_mem to 380 MB at the session level resolved the issue. The execution plan confirms that this memory allocation is now adequate for your sorting operations. The absence of temporary read/write stats in the Buffers section suggests that sorting is being managed entirely in memory, which is a favorable result.
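For reference, the session-level change is simply:
SET work_mem = '380MB';
followed by re-running the same EXPLAIN (ANALYZE, BUFFERS) query.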
Here is the updated execution plan:
Gather Merge  (cost=4944657.91..5664325.19 rows=6168146 width=47) (actual time=1213.740..2170.445 rows=6564864 loops=1)
  Workers Planned: 2
  Workers Launched: 1
  Buffers: shared hit=72244
  ->  Sort  (cost=4943657.89..4951368.07 rows=3084073 width=47) (actual time=1207.758..1357.753 rows=3282432 loops=2)
        Sort Key: name
        Sort Method: quicksort  Memory: 345741kB
        Buffers: shared hit=72244
        Worker 0:  Sort Method: quicksort  Memory: 327233kB
        ->  Parallel Bitmap Heap Scan on dataforworkmem  (cost=88784.77..4611250.25 rows=3084073 width=47) (actual time=238.881..661.863 rows=3282432 loops=2)
              Recheck Cond: (("time" >= '2006-01-01 05:00:00+00'::timestamp with time zone) AND ("time" <= '2006-03-31 05:10:00+00'::timestamp with time zone))
              Heap Blocks: exact=34572
              Buffers: shared hit=72236
              ->  Bitmap Index Scan on dataforworkmem_time_idx  (cost=0.00..86934.32 rows=7401775 width=0) (actual time=230.774..230.775 rows=6564864 loops=1)
                    Index Cond: (("time" >= '2006-01-01 05:00:00+00'::timestamp with time zone) AND ("time" <= '2006-03-31 05:10:00+00'::timestamp with time zone))
                    Buffers: shared hit=5702
Planning:
  Buffers: shared hit=5
Planning Time: 0.119 ms
Execution Time: 2456.604 ms
It confirms that:
Sort Method: “quicksort” or “other in-memory method” instead of “external merge.”
Memory Usage: The allocated work_mem (380 MB) is used efficiently.
Execution Time: Decreased to 2456.604 ms from 4169.774 ms.
Adjusting work_mem Using pg_stat_statements Data
To estimate the memory needed for a query based on the temp_blks_read parameter from PostgreSQL’s pg_stat_statements, you can follow these steps:
Get the Block Size:
PostgreSQL uses a default block size of 8KB. You can verify this by running:
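SHOW block_size;   -- 8192 on a default installation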
Calculate Total Temporary Block Usage:
Sum the temp_blks_read to get the total number of temporary blocks used by the query.
Convert Blocks to Bytes:
Multiply the total temporary blocks by the block size (usually 8192 bytes) to get the total temporary data in bytes.
Convert Bytes to a Human-Readable Format:
Convert the bytes to megabytes (MB) or gigabytes (GB) as needed.
To identify queries that might benefit from an increased work_mem setting, use the following query to retrieve key performance metrics from PostgreSQL’s pg_stat_statements view:
SELECT
query,
calls,
total_exec_time AS total_time,
mean_exec_time AS mean_time,
stddev_exec_time AS stddev_time,
rows,
local_blks_written,
temp_blks_read,
temp_blks_written,
blk_read_time,
blk_write_time
FROM
pg_stat_statements
ORDER BY
total_exec_time DESC
LIMIT 10;
Example Calculation
Suppose we have the following values from the pg_stat_statements:
temp_blks_read: 5000
block_size: 8192 bytes
Calculation:
Total Temporary Data (bytes) = 5000 × 8192 = 40,960,000 bytes
Total Temporary Data (MB) = 40,960,000 / (1024 × 1024) = 39.06 MB
This estimate indicates that to keep operations in memory and avoid temporary disk storage, work_mem should ideally be set to a value higher than 39 MB.
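For example, the session running that query could be given some headroom; the 48MB value below is illustrative, not a rule:
SET work_mem = '48MB';   -- a little above the ~39 MB estimate
-- run the query, then optionally restore the previous value
RESET work_mem;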
Here is a query that provides the total amount of temporary data in megabytes for each query recorded in pg_stat_statements. This information can help identify which queries might benefit from an increase in work_mem to potentially improve performance by reducing temporary disk usage.
SELECT
query,
total_temp_data_bytes / (1024 * 1024) AS total_temp_data_mb
FROM
(
SELECT
query,
temp_blks_read * 8192 AS total_temp_data_bytes
FROM pg_stat_statements
) sub;
Using Query Store to Determine work_mem
PostgreSQL’s Query Store is a powerful feature designed to provide insights into query performance, identify bottlenecks, and monitor execution patterns.
Here is how to use Query Store to analyze query performance and estimate the disk storage space required for temporary blocks read (temp_blks_read).
Analyzing Query Performance with Query Store
To analyze query performance, Query Store offers execution statistics, including temp_blks_read, which indicates the number of temporary disk blocks read by a query. Temporary blocks are used when query results or intermediate results exceed available memory.
Retrieving Average Temporary Blocks Read
Use the following SQL query to get the average temp_blks_read for individual queries:
SELECT
query_id,
AVG(temp_blks_read) AS avg_temp_blks_read
FROM query_store.qs_view
GROUP BY query_id;
This query calculates the average temp_blks_read for each query. For example, if query_id 378722 shows an average temp_blks_read of 87,348, this figure helps you understand its temporary storage usage.
Estimating Disk Storage Space Required
Estimate disk storage based on temp_blks_read to gauge temporary storage impact:
Know the Block Size: PostgreSQL’s default block size is 8 KB.
Calculate Disk Space in Bytes: Multiply the average temp_blks_read by the block size:
Space (bytes) = avg_temp_blks_read × Block Size (bytes)
Space (bytes) = 87,348 × 8192 = 715,554,816 bytes
Convert Bytes to Megabytes (MB):
Space (MB) = 715,554,816 / (1024 × 1024) ≈ 682 MB
Consider adjusting work_mem at the session level or within stored procedures/functions to optimize performance.
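A minimal sketch of that adjustment, assuming the ~682 MB estimate above plus some headroom (validate the value against available memory and concurrency before using it):
SET work_mem = '700MB';
-- run the statement behind query_id 378722 here
RESET work_mem;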
Query Store is an invaluable tool for analyzing and optimizing query performance in PostgreSQL. By examining metrics like temp_blks_read, you can gain insights into query behavior and estimate the disk storage required. This knowledge enables better resource management, performance tuning, and cost control, ultimately leading to a more efficient and reliable database environment.
Best Practices for Setting work_mem
Monitor and Adjust: Regularly monitor the database’s performance and memory usage. Tools like pg_stat_statements and pg_stat_activity can provide insights into how queries are using memory.
Incremental Changes: Adjust work_mem incrementally and observe the impact on performance and resource usage. Make small adjustments and evaluate their effects before making further changes.
Set Appropriately for Workloads: Tailor work_mem settings based on the types of queries and workloads running on your database. For example, batch operations or large sorts might need higher settings compared to simple, small queries.
Consider Total Memory: Calculate the total memory usage, considering the number of concurrent connections and operations, to ensure it does not exceed available physical RAM.
Balancing work_mem involves understanding your workload, monitoring performance, and adjusting settings to optimize both memory usage and query performance.
New on Azure Marketplace: July 18-24, 2024
We continue to expand the Azure Marketplace ecosystem. For this volume, 153 new offers successfully met the onboarding criteria and went live. See details of the new offers below:
Get it now in our marketplace
Access Patient Flow Manager: Access Patient Flow Management provides real-time bed occupancy updates, improving patient care, reducing risk, and saving time. It interfaces with existing patient administration systems and departmental solutions, standardizes data capture, and digitally manages bed supply and demand.
ACSC-Compliant Red Hat Enterprise Linux 7: Foundation Security offers an ACSC-compliant Red Hat Enterprise Linux 7 virtual machine image with built-in security controls to protect sensitive data. The image is regularly updated and ideal for organizations needing a secure and compliant environment. Foundation Security’s team of experts provides ongoing support, and its solutions are trusted by Fortune 500 companies.
ACSC Essential Eight-Compliant Red Hat Enterprise Linux 8 (RHEL 8): Foundation Security offers an ACSC Essential Eight-compliant RHEL 8 virtual machine image with built-in security controls to protect sensitive data. The preconfigured image reduces the time and resources required for security implementation and is regularly updated to keep up with the latest threats and compliance regulations. Foundation Security’s experienced team provides ongoing support, and its solutions are used by several Fortune 500 companies.
ACSC Essential Eight-Compliant Rocky Linux 8: Foundation Security offers an ACSC Essential Eight-compliant Rocky Linux 8 virtual machine image with built-in security controls to protect sensitive data. The preconfigured image reduces the time and resources required for security implementation and is regularly updated to keep up with the latest threats and compliance regulations. Foundation Security’s experienced team provides ongoing support.
ACSC Essential Eight-Compliant Rocky Linux 9: Foundation Security offers an ACSC Essential Eight-compliant Rocky Linux 9 virtual machine image with hundreds of built-in security controls. This preconfigured image reduces the time and resources required for security implementation, ensuring the confidentiality, integrity, and availability of sensitive data. Foundation Security’s experienced team provides ongoing support, making it an ideal solution for organizations that need a secure and compliant environment.
ACSC ISM-Compliant Red Hat Enterprise Linux 8 (RHEL 8): This preconfigured ACSC ISM-compliant RHEL 8 virtual machine image is designed to meet Australian government security standards, reducing time and resources required for security implementation and compliance efforts. Foundation Security updates the image regularly to address evolving threats and compliance requirements, with ongoing support provided by a team of experts.
ACSC ISM-Compliant Red Hat Enterprise Linux 9 (RHEL 9): The ACSC ISM-compliant RHEL 9 virtual machine image is a preconfigured solution that aligns with Australian government security standards. It reduces the time and resources required for security implementation and compliance efforts. Foundation Security updates the image regularly to address evolving threats and compliance requirements, providing ongoing support to address any security concerns or compliance queries.
ACSC ISM-Compliant Rocky Linux 8: The ACSC ISM-compliant Rocky Linux 8 virtual machine image is preconfigured with security controls aligned with Australian government standards. Foundation Security updates the image to address evolving threats and compliance requirements, providing ongoing support to meet the highest security standards required by the ACSC ISM.
ACSC ISM-Compliant Rocky Linux 9: Foundation Security offers an ACSC ISM-compliant Rocky Linux 9 virtual machine image with built-in security controls to align with Australian government standards. This preconfigured image reduces the time and resources required for security implementation and compliance efforts. The team provides ongoing support, and its solutions are trusted by various Australian government agencies and contractors.
AffableBPM AI-Based Data Analytics Copilot: AffableBPM’s Data Analytics is powered by Microsoft Azure OpenAI to convert your questions into database searches, presenting the results in an intuitive visual format. It provides instant insights without the need for complex tools or technical skills. It is perfect for quickly making decisions and enhances productivity by bypassing traditional setup and configuration steps.
AI Anomaly Detection: AI Anomaly Detection monitors databases and business indicators to detect anomalies. Users are notified via email or preferred channels. The app monitors database schema and business indicators and sends notifications with interactive data visualizations and AI-generated descriptive analytics.
Apache Solr on Ubuntu: Apache Solr is an open-source search platform that excels in handling large volumes of data efficiently. It facilitates full-text search, supports advanced features such as faceted search and hit highlighting, and can handle diverse document types. Solr integrates seamlessly with various programming languages and frameworks, making it a cornerstone technology for organizations looking to enhance search functionality and improve user experience.
CCN Advanced Level-Compliant Red Hat Enterprise Linux 9 (RHEL 9): Foundation Security offers a preconfigured CCN Advanced Level-compliant RHEL 9 virtual machine image fortified with numerous security controls to meet the rigorous standards set by the CCN for high-security environments. The image is designed and implemented to meet the highest security standards required by the CCN Advanced Level profile, and regularly updated to address evolving threats and compliance requirements. Foundation Security also provides ongoing support to address any security concerns or compliance queries.
CCN Basic Level-Compliant Red Hat Enterprise Linux (RHEL 9): Foundation Security offers a preconfigured CCN Basic Level-compliant RHEL 9 virtual machine image that meets Spanish government standards for public or low-sensitivity information systems. This reduces the time and effort required for basic security implementation and compliance. The image is regularly updated to address common threats and evolving compliance requirements, and Foundation Security offers support to address security concerns or compliance questions.
CCN Intermediate Level-Compliant Red Hat Enterprise Linux 9 (RHEL 9): Foundation Security offers a CCN Intermediate Level-compliant RHEL 9 virtual machine image with robust security controls for moderately sensitive environments in Spain. Foundation Security’s expertise in Spanish security frameworks ensures compliance with government standards, reducing complexity and time for implementation.
CJIS-Compliant Red Hat Enterprise Linux 7 (RHEL 7): Foundation Security offers a preconfigured CJIS-compliant RHEL 7 virtual machine image with numerous security controls to meet CJIS Security Policy requirements. Foundation Security’s team of experts provides ongoing support to address security concerns and compliance queries.
CJIS-Compliant Red Hat Enterprise Linux 8 (RHEL 8): Foundation Security offers a preconfigured CJIS-compliant RHEL 8 virtual machine image with numerous security controls to meet CJIS Security Policy requirements. Foundation Security’s team of experts provides ongoing support to address concerns and compliance queries.
Debian 10 with Minecraft Bedrock Game Server: Virtual Pulse offers a simplified solution for hosting a Minecraft Bedrock game server on Debian 10. The image provides a user-friendly interface and comprehensive documentation to guide users through every step of the configuration process, allowing them to focus on enjoying the game rather than troubleshooting technical issues. The solution is designed for both enthusiasts and server administrators seeking a robust and customizable hosting solution.
Docker: ATH Infosystems offers this image providing Docker, a containerization platform that simplifies application development, deployment, and scaling. Docker provides a consistent and isolated environment, runs on any system, optimizes resource utilization, and enhances security.
EMQX: ATH Infosystems has configured this image providing EMQX on CentOS. EMQX is designed for large-scale IoT deployments. It offers reliable communication, advanced security features, and easy customization through a plugin-based architecture.
Fedora 40 with Trusted Launch: Ntegral has configured this virtual machine image containing Fedora Server 40, a stable and flexible Linux operating system suitable for organizations and individuals. It offers the latest open-source technology, modularity, easy administration, and advanced identity management. Ntegral has optimized and packaged it for Azure, ensuring it is always up-to-date and secure.
fieldWISE: fieldWISE by Vassar Labs uses GIS, remote sensing, AI, machine learning, and data analytics to provide timely insights up to the agriculture field level. It offers customizable and scalable products to help growers plan, monitor, optimize, protect, and earn the most from their fields. The platform benefits food manufacturing, agriculture inputs, insurance, and government institutions.
Flask: Flask is a lightweight and flexible web framework for Python, offering easy-to-use tools and libraries for building web applications quickly and efficiently. It follows the WSGI specification and supports extension with various libraries and frameworks. ATH Infosystems has configured this virtual machine image containing Flask on CentOS 8.5.
Health AI247: This AI-powered database built on Microsoft Azure allows medical professionals to access patient records and research symptoms via efficient workflows. By entering an ID number, doctors can retrieve medical records and provide informed care.
HyperStream Data Processor: HyperStream Data Processor is a high-performance platform for real-time data processing and analytics. It offers advanced tools for processing large data streams, enabling organizations to gain immediate insights and make data-driven decisions.
Jitsi: Jitsi is an open-source video conferencing tool that offers secure and flexible online meetings and video calls with high-quality audio and video, robust security features, screen sharing, integration with various tools, and customization options. ATH Infosystems has configured this image with Jitsi on CentOS 8.5.
Linux Stream 9 Minimal with OpenVPN: OpenVPN provides a reliable solution for secure remote access, catering to diverse user personas and addressing the growing need for enhanced privacy and security. It encrypts data transmission, protecting against cyber threats and unauthorized access to sensitive data. Virtual Pulse has packaged this image for easy installation on Microsoft Azure.
MongoDB on AlmaLinux 8: MongoDB on AlmaLinux 8 offers a flexible approach to data storage and management, allowing developers to work with unstructured and volatile data. Its ability to store data in the BSON document format makes it ideal for a variety of applications, from mobile apps to big data analytics. Tidal Media has configured and provides this image.
MongoDB on AlmaLinux 9: MongoDB is an open-source NoSQL database that offers flexibility, scalability, and high performance. It supports multiple programming languages and platforms, has a dynamic schema, and reduces the complexity of database management. MongoDB is ideal for modern applications and businesses of all sizes and is accessible even to small companies and startups. Tidal Media has configured and provides this image.
MongoDB on Debian 11: MongoDB on Debian 11 is a flexible and scalable NoSQL database that allows for easy handling of complex data structures. It integrates with various development frameworks and languages and provides comprehensive security features, automated backup, and recovery solutions. MongoDB is ideal for modern applications that require dynamic and robust data management solutions. Tidal Media has configured and provides this image.
MongoDB on Oracle Linux 8: MongoDB is a flexible and scalable database that allows for efficient storage and processing of data of any size and type. Its replication system ensures data reliability and availability, while its intuitive interface and natural integration with modern programming languages make application development fast and convenient. It easily scales both vertically and horizontally, making it easy and flexible to manage data. Tidal Media has configured and provides this image.
MongoDB on Red Hat Enterprise Linux 8: MongoDB is a flexible and high-performance database that can store and process various types of data. It offers powerful tools for data aggregation, indexing, replication, and sharding, making it suitable for projects of any scale. Tidal Media has configured and provides this image.
MongoDB on Rocky 8: MongoDB on Rocky 8 is a flexible and high-performance database management system that stores information as documents and collections, making data management easier and query processing faster. It offers data replication, indexing on any field, GridFS technology, load balancing, and support for ACID transactions across multiple documents. With MongoDB, businesses can efficiently process large volumes of data and have a reliable and efficient database for any task. Tidal Media has configured and provides this image.
MongoDB on SUSE 15 SP5: MongoDB on SUSE 15 SP5 is a reliable and scalable database solution for modern applications. It offers enhanced security features, stability, and enterprise-grade support. This solution is perfect for enterprises seeking a dependable database system to handle complex and data-intensive workloads. Tidal Media has configured and provides this image.
MongoDB on Ubuntu 22.04 LTS: MongoDB on Ubuntu 22.04 is a NoSQL database solution that offers high performance, scalability, and flexibility for managing vast amounts of unstructured data. It supports real-time analytics, content management, and more, making it ideal for developers, data scientists, and systems administrators. Tidal Media has configured and provides this image.
MongoDB on Ubuntu 24.04 LTS: MongoDB is a flexible and scalable database that allows you to store and manage data as documents. It has no strict data schema, making it ideal for projects that require rapid adaptation to changing needs. MongoDB simplifies the process of storing and retrieving data, resulting in increased performance and reduced overhead. It is an ideal choice for various types of applications, including big data analytics, web development, and mobile applications. Tidal Media has configured and provides this image.
Neo4j: Neo4j is a scalable graph database management system with ACID transactions, horizontal scaling, and seamless integration with programming languages and analytics tools. ATH Infosystems has configured this image containing Neo4j on CentOS 8.5.
NeuralNet Integrator: NeuralNet Integrator is an AI platform that integrates and deploys neural network models across various applications. It offers tools for developing, training, and managing neural networks, ensuring optimal performance and scalability.
Next.js: Next.js is an open-source framework for building modern web applications with powerful features like server-side rendering, static site generation, and built-in CSS. ATH Infosystems has configured this image providing Next.js on CentOS 8.5.
Oracle Linux 8.10 for Arm64 Architecture: Oracle Linux Server 8.10 is a reliable, secure, and performant enterprise operating system that brings the latest open-source innovations and business-critical performance and security optimizations. It delivers virtualization, management, and cloud-native computing tools, as well as application binary compatibility with Red Hat Enterprise Linux. Ntegral has configured this image.
OSPP-Compliant Red Hat Enterprise Linux 7 (RHEL 7): Foundation Security offers an OSPP-compliant RHEL 7 virtual machine image with numerous security controls to meet the comprehensive requirements of the Operating System Protection Profile. Foundation provides ongoing support to address any security concerns or compliance queries.
OSPP-Compliant Red Hat Enterprise Linux 8 (RHEL 8): Foundation Security offers an OSPP-compliant RHEL 8 virtual machine image with numerous security controls to meet the comprehensive requirements of the Operating System Protection Profile. Foundation provides ongoing support to address any security concerns or compliance queries.
OSPP-Compliant Red Hat Enterprise Linux 9 (RHEL 9): Foundation Security offers an OSPP-compliant RHEL 9 virtual machine image with hardened security controls for organizations requiring high assurance in their operating systems. The preconfigured image reduces time and resources needed for security implementation and evaluation, with ongoing support from experts in compliance standards.
OSPP-Compliant Rocky Linux 8: Foundation Security offers an OSPP-compliant Rocky Linux 8 virtual machine image with numerous security controls to meet the comprehensive requirements of the Operating System Protection Profile. Foundation provides ongoing support to address any security concerns or compliance queries.
OSPP-Compliant Rocky Linux 9: Foundation Security offers an OSPP-compliant Rocky Linux 9 virtual machine image with numerous security controls to meet the comprehensive requirements of the Operating System Protection Profile. Foundation provides ongoing support to address any security concerns or compliance queries.
PCI-Compliant Red Hat Enterprise Linux 7 (RHEL 7): Foundation Security offers a preconfigured RHEL 7 virtual machine image with numerous security controls to meet the latest PCI DSS standard. Foundation provides ongoing support to address security concerns and compliance queries.
PCI-Compliant Red Hat Enterprise Linux 8 (RHEL 8): Foundation Security offers a preconfigured RHEL 8 virtual machine image with numerous security controls to establish a PCI-compliant environment. Foundation provides ongoing support to address security concerns and compliance queries.
PCI-Compliant Red Hat Enterprise Linux 9 (RHEL 9): Foundation Security offers a preconfigured RHEL 9 virtual machine image with numerous security controls to meet the latest PCI DSS standard. Foundation provides ongoing support to address security concerns and compliance queries.
PCI-Compliant Rocky Linux 8: This Rocky Linux 8 virtual machine image is designed to meet the latest security standards for companies handling payment card data. It includes numerous security controls and is regularly updated to address evolving threats and compliance requirements. Foundation Security’s team of experts provides ongoing support to ensure a consistently secure and compliant platform.
PCI-Compliant Rocky Linux 9: This Rocky Linux 9 virtual machine image is designed to meet the latest security standards for organizations handling payment card data. It includes numerous security controls and is regularly updated to address evolving threats and compliance requirements. Foundation Security’s team of experts provides ongoing support to ensure a consistently secure and compliant platform.
phpMyAdmin: ATH Infosystems has configured this image providing phpMyAdmin on CentOS 8.5. phpMyAdmin is an open-source, web-based administration tool for managing MySQL and MariaDB databases via a user-friendly interface for various database management tasks.
PortalTalk Governance Solution for Microsoft Teams: PortalTalk by QS Solutions streamlines administrative duties with automated site provisioning, offers robust access control, and empowers Microsoft Teams channel owners to manage their domains autonomously while IT staff retain comprehensive control over the system. It enhances an organization’s security stance and simplifies administrative processes, delivering a secure and compliant environment for Teams and SharePoint document management.
Prometheus on Ubuntu: Anarion has configured this image providing Prometheus, an open-source monitoring and alerting toolkit used to collect and store metrics as time series data. Prometheus employs a powerful query language called PromQL, allowing for complex aggregations and transformations of data.
Quantive StrategyAI Gold (US): Quantive StrategyAI is an AI-powered tool that helps businesses plan, execute, and adapt their strategies quickly. It provides real-time insights, digital collaboration tools, and flexible executive dashboards to track business KPIs and goals.
Quantive StrategyAI Gold (UK): Quantive StrategyAI is an AI-powered tool that helps businesses plan, execute, and adapt their strategies quickly. It provides real-time insights, digital collaboration tools, and flexible executive dashboards to track business KPIs and goals. This offer is available in the United Kingdom.
Red Hat Enterprise Linux 8.10 with Trusted Launch: Red Hat Enterprise Linux includes built-in security features like SELinux and mandatory access controls. Configured by Ntegral, this Trusted Launch virtual machine helps protect against advanced attacks.
Redgate Flyway Enterprise: Flyway Enterprise simplifies and accelerates database delivery with automation, object-level version control, and flexible deployment options. It supports multiple database platforms and integrates with common CI and release tools.
Redgate Monitor Enterprise: Redgate Monitor enhances productivity and efficiency, simplifies collaboration, and boosts skills portability. It ensures operational continuity, reduces security risks, ensures compliance, and saves time on manual database tasks.
Redgate Test Data Manager: Redgate Test Data Manager streamlines the data provisioning workflow, enabling developers and testers to self-serve dedicated, compliant copies of production environments within seconds. It automates the delivery of high-quality test data as part of your CI/CD pipeline, and simplifies data security with automated data discovery, classification, and masking practices.
RH-CCP Compliant Red Hat Enterprise Linux 7 (RHEL 7): Foundation Security offers a preconfigured RHEL 7 virtual machine image with numerous security controls to meet the requirements of the Red Hat Common Criteria Profile. Foundation provides ongoing support to address security concerns and compliance queries.
RH CCP-Compliant Red Hat Enterprise Linux 8 (RHEL 8): Foundation Security offers a preconfigured RHEL 8 virtual machine image with numerous security controls to meet the requirements of the Red Hat Common Criteria Profile. Foundation provides ongoing support to address security concerns and compliance queries.
Rocky Linux 9.4 Generation 2 VM: Rinne Labs offers a lightweight and secure Rocky Linux 9.4 image built from the official ISO with only essential packages for optimal performance. The image is updated with the latest security patches and updates, making it ideal for rapid deployment of web applications, efficient development and testing environments, stable and secure server infrastructure, data analytics, and machine learning.
Rocky Linux 8.10 on Arm64 Architecture: Rocky Linux is a premier Linux distribution for enterprise cloud environments, offering additional security and compliance. Ntegral has packaged this image to work out of the box.
Smart RDM Data-Driven Decision Support System for Manufacturing: This Smart RDM offer delivers a decision support system that provides real-time insights and action recommendations for manufacturing processes. It analyzes past execution history and data analytics to suggest the best performing solution scenarios. The system also allows for what-if scenarios and digital twin modeling to test multiple scenarios in a safe environment.
Smart RDM Energy Efficiency Management for Manufacturing: ConnectPoint helps you optimize energy consumption in manufacturing processes through real-time data insights, predictive analytics, and a decision support system. Via this offer, you can use Smart RDM to calculate energy costs, set alarms for abnormal consumption, and maximize the use of renewable sources.
Standard System Security Compliant Red Hat Enterprise Linux 7 (RHEL 7): Foundation Security offers this RHEL 7 virtual machine image that incorporates industry-standard practices for system security. The preconfigured image reduces the complexity and time required for security implementation and is regularly updated to address evolving threats.
Standard System Security-Compliant Red Hat Enterprise Linux 8 (RHEL 8): Foundation Security offers this RHEL 8 virtual machine image with preconfigured security controls to establish a secure environment based on industry-standard practices. The image is regularly updated to address evolving threats and incorporates the latest security best practices.
STIG-Compliant Red Hat Enterprise Linux 8 (RHEL 8): Foundation Security offers this RHEL 8 virtual machine image fortified with hundreds of security controls to meet Department of Defense standards. Foundation provides ongoing support to address security concerns and compliance queries.
STIG-Compliant Red Hat Enterprise Linux 9 (RHEL 9): Foundation Security offers this RHEL 9 virtual machine image fortified with hundreds of security controls to meet Department of Defense standards. Foundation provides ongoing support to address security concerns and compliance queries.
STIG-Compliant Rocky Linux 8: Foundation Security’s Rocky Linux 8 virtual machine image is preconfigured with hundreds of security controls to meet Department of Defense standards. It reduces complexity and time required for security implementation and is regularly updated to address evolving threats and compliance requirements.
STIG-Compliant Rocky Linux 9: Foundation Security offers this Rocky Linux 9 virtual machine image fortified with hundreds of security controls to meet Department of Defense standards. Foundation provides ongoing support to address security concerns and compliance queries.
Sullexis Hierarchy Management Powered by LinqIQ: LinqIQ helps manage hierarchies for analytics, AI, and data management platforms. It creates and maintains hierarchies based on how businesses interact with customers, vendors, and partners. LinqIQ can be customized and connected to existing systems, with a user-friendly interface for easy adjustments. It can be implemented quickly on Microsoft Azure.
Webmin Server on Oracle Linux 9: Webmin Server provides a web-based interface that simplifies system administration tasks for IT professionals. It offers a comprehensive suite of tools and features for managing Unix-like systems, including user account management, software package installation, and system monitoring. With its intuitive interface and remote access capabilities, Webmin enhances productivity and reduces errors. Tidal Media has configured this image providing Webmin on Oracle Linux 9.
Webmin Server on Red Hat Enterprise Linux 9: Webmin Server provides a user-friendly interface for managing servers remotely. It offers powerful server management features, including user and group creation, network configuration, and database management. With Webmin, you can monitor system resources, configure firewalls, and manage configuration files to keep your server secure and performing optimally. Tidal Media has configured this image providing Webmin on Red Hat Enterprise Linux 9.
Go further with workshops, proofs of concept, and implementations
AltaML AI: 8-Week Proof of Concept: AltaML’s engagement includes ideation, feasibility assessment, and AI/ML model experimentation using Microsoft Azure AI. AltaML efficiently de-risks the ML process, hastens ROI realization, and supports informed decisions for full-scale deployment.
Application Modernization and Migration to Azure: Implementation: Click2Cloud offers migration and modernization services, evaluating existing custom applications and providing detailed information on migration costs to Microsoft Azure.
Database Migration to Azure: Implementation: Click2Cloud’s Database Migration Service helps you migrate, innovate, and modernize data using AI on Microsoft Azure. The solution ensures a seamless, efficient transition, eliminating physical infrastructure and end-of-life software issues. Key deliverables include a business value assessment report and proof of concept.
Migrate Legacy Data to Azure: 4-Week Proof of Concept: T-Systems Managed Application Retirement Services (M.A.R.S.) is a consultancy-to-cloud capability for sunsetting legacy applications. It enables the transfer of data to a single platform, eliminating business and security risks, and reducing overall costs.
Rackspace Managed XDR Powered by Microsoft Sentinel: Rackspace Managed XDR, built on Microsoft Sentinel, offers advanced threat detection capabilities, certified security analysts, and AI-assisted remediation for detection and response to cybersecurity threats across your digital estate. It integrates with over 300 security technologies and log sources, conducts proactive threat hunts, and speeds up containment and eradication of threats through cloud-native security orchestration and automated response.
Unisys Cloud Transformation: Implementation: Unisys Cloud Transformation offers a secure and phased approach to Azure migrations and modernization. Unisys begins with workshops to gather information about your business case and technical requirements. Experts design and build Azure target environments to host your applications.
Contact our partners
ACSC Essential Eight-Compliant Red Hat Enterprise Linux 9 (RHEL 9)
App and Infrastructure: 3-Day Assessment
BreachRisk Copilot for Security
Cognizant – Oracle Databases to Oracle Database@Azure: Migration
Copilot for Microsoft 365: 1-Week Assessment
GravityZone Small Business Security
HCLTech Cloud Security Foundation (CsaaS) for Azure
Imperium Co-Managed Service for Microsoft Dynamics 365 (SaaS)
Imperium Co-Managed Service for Microsoft Fabric (SaaS)
Infisical Secured and Supported by HOSSTED
Kelvin Autonomous Operations Software
Linux Stream 9 Minimal with iPerf3 Server
Linux Stream 9 with iPerf3 Server
Managed Service Provider (MSP) for Azure
Metric Insights BI Portal – Virtual Machine Image
Octave Immersive Data Service: 2- to 3-Week Assessment
PacketFabric Network Solutions
PositivityTech Financial Services Industry Benchmark Platform
Rocky 8.10 Generation 2 with Support by Rinne Labs
Rocky 8.6 Generation 2 with Support by Rinne Labs
Rocky Linux 8.10 Generation 2 with Support by Rinne Labs
Rocky Linux 8.6 Generation 2 with Support by Rinne Labs
Senseye Predictive Maintenance
STIG-Compliant Red Hat Enterprise Linux 7 (RHEL 7)
XENA VISION – Smart City Active Surveillance
Yobi Signal as a Service: Data Enrichment
ZingWorks Distribution Requirement Planning
This content was generated by Microsoft Azure OpenAI and then revised by human editors.
How to use all cores for running a Simulink model?
Hi,
I modeled a thermal-fluid network in Simulink with Simscape modules and I want to use all cores in order to speed up my simulation.
I am using a workstation with 64 physical cores and 128 logical cores, but I get the same run time as when I run the model on my laptop with 6 physical cores and 12 logical cores. What should I do so that Simulink uses the full capacity of my workstation to run the model?
I would appreciate it if you could help me.
MSSQL – availability group issues
Hello,
I have a cluster set up with an availability group, and a database was part of this AG. Something happened, and I can still see the database in the AG, but with an exclamation mark.
I tried to alter the availability group and remove it, but the error was that the database is not part of the AG.
Can someone tell me how I can remove that database from the AG?
Thanks,
Daniel
Excel copy and paste error
Usually, when I copy something in Excel, it highlights the cell and keeps it highlighted until I finish pasting it in Excel or until I copy something else, even in another app or browser.
But now, when I copy something from Excel, paste it in the browser (URL bar), copy something else from the opened page, and then try to paste the browser data back into Excel, the initially copied cell is still highlighted, and it pastes that data only, not the browser data. I then have to paste the browser data from the clipboard.
Can someone confirm this issue or suggest a solution?
>= TODAY function returning the wrong values
Hi! I’m trying to create a conditional formatting formula that highlights a row in green if both the cell in column E = “y” and the date in column D is any date after or including today. This is the formula I thought would work – I have also tried it with the IF function around it, both versions below:
=AND($E3 = "y", $D3>=TODAY())
=IF(AND($E3 = "y", $D3>=TODAY()), TRUE, FALSE)
It’s currently highlighting cells with dates from last year and all sorts, I’m super confused as to why it’s not working as expected. Help would be majorly appreciated! We’re using it to track when we have new staff incoming – so when I type in “y” and the start date is still incoming, it’s green and we can review data easily, then when the start date has passed it drops all conditional formatting (it’s to stop us getting confused essentially!)
I have checked that all the cells with dates in are selected as a ‘long date’ format FYI!
To note: I have two other (working haha) conditional formatting rules in here:
=AND(ISBLANK($E3), ISBLANK($A3)) to turn a row grey if there is nothing in both column A and E (so it’s grey until we add a name in (which goes in column A))
=IF(ISERROR(SEARCH("y",$E2)),TRUE,FALSE) to turn a row red if it doesn’t have a y in column E (i.e. if we have a new staff member incoming but haven’t prepped for them)
Accelerate the development of Generative AI applications with GitHub Models
The first step in developing a generative AI application is choosing a model, and how to choose one is key. This includes:
Comparing the output of different models for the same prompt when application development is combined with business scenarios.
Quickly comparing and switching between multiple models.
Seeing how different models adapt to new application frameworks and solutions so that projects can be completed more effectively.
The release of GitHub Models makes it much easier for developers and development teams to select models while building applications and to create applications based on different application frameworks. Let’s take a look at how I use GitHub Models to complete development in different scenarios.
Model comparison
In GitHub Models, the provided playground lets us compare different models against the same prompt.
Let’s take a look at a comparison between Phi-3-mini and Mistral Nemo.
Judging from the output, the two models are evenly matched.
Quick comparison and switching of multiple models
Above, we switched models in the playground to compare them under the same prompt. For development, a more direct approach may be required. With the Azure AI Inference SDK you can quickly switch between different models; select Code to get Python, JavaScript, or REST access methods.
If we choose Phi-3-mini, for example, we can obtain the access method under Code.
Of course, you can also open the programming environment directly and seamlessly through GitHub Codespaces.
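To make this concrete, here is a minimal Python sketch (my own illustration, not code from the article) that calls the GitHub Models endpoint through the azure-ai-inference package and switches models simply by changing the model name. The model identifier strings and the GITHUB_TOKEN environment variable are assumptions; take the exact values from the Code tab of the model you selected.

import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# GitHub Models endpoint; authenticate with a GitHub personal access token
# (assumed here to be stored in the GITHUB_TOKEN environment variable).
client = ChatCompletionsClient(
    endpoint="https://models.inference.ai.azure.com",
    credential=AzureKeyCredential(os.environ["GITHUB_TOKEN"]),
)

# Switching models only requires changing the model name string.
# The identifiers below are examples; copy the exact IDs from the Code tab.
for model_name in ["Phi-3-mini-4k-instruct", "Mistral-Nemo"]:
    response = client.complete(
        model=model_name,
        messages=[
            SystemMessage(content="You are a helpful assistant."),
            UserMessage(content="Summarize what GraphRAG does in one sentence."),
        ],
    )
    print(model_name, "->", response.choices[0].message.content)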
Adaptation to different application frameworks
Generative AI applications combine different application frameworks with models, such as GraphRAG. We can use the REST interface provided by GitHub Models to test model options other than GPT-4o, such as the latest Meta Llama 3.1 405B Instruct. Deploying this model locally is limited by computing power, which makes it hard for individuals and small teams to adopt. But with the interface provided by GitHub Models, we can run the test very simply in a local environment.
Configure the environment
Install the GraphRAG Python library
pip install graphrag -U
Create a GraphRAG project
mkdir -p ./ragmd/input
python -m graphrag.index --init --root ./ragmd
Modify settings.yaml
encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  model: meta-llama-3.1-405b-instruct
  model_supports_json: true # recommended if this is available for your model.
  max_tokens: 4000
  api_base: https://models.inference.ai.azure.com
parallelization:
  stagger: 0.3
async_mode: threaded # or asyncio
embeddings:
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    model: jinaai
    api_base: http://localhost:5146/v1
Note: configure your GitHub token in the .env file.
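For example, the .env file in the project root only needs one line; the value below is a placeholder for your own GitHub personal access token, which GraphRAG picks up via ${GRAPHRAG_API_KEY} in settings.yaml.

GRAPHRAG_API_KEY=<your GitHub personal access token>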
Run
python -m graphrag.index --root ./ragmd
Test Results
python -m graphrag.query --root ./ragmd --method global "What's GraphRAG"
Through GitHub Models, we can quickly use the provided models for model comparison and for testing an application development environment, so model and application testing can be completed more efficiently even where local computing power is limited.
Learning Resources
Sign Up https://gh.io/models
Introducing GitHub Models: A new generation of AI engineers building on GitHub https://github.blog/news-insights/product-news/introducing-github-models/
Understand Phi-3 https://aka.ms/phi-3cookbook
Learn about GraphRAG https://microsoft.github.io/
Microsoft Tech Community – Latest Blogs
Demo Bytes: Storage Replica, Failover Clustering, and Winget
Windows Server 2025 is the most secure and performant release yet! Download the evaluation now!
Looking to migrate from VMware to Windows Server 2025? Contact your Microsoft account team!
The 2024 Windows Server Summit was held in March and brought three days of demos, technical sessions, and Q&A, led by Microsoft engineers, guest experts from Intel®, and our MVP community. For more videos from this year’s Windows Server Summit, please find the full session list here.
This article covers some demos of Windows Server 2025.
Demo Bytes: Storage Replica
Demo time! Get an up-close look at the next generation of Storage Replica!
Storage Replica was first released in Windows Server 2016 and has come a long way. See how we’ve improved performance by enhancing logs and compression. Watch demos where we replace DFSR with this modern replication system that will replicate in-use files and protect your organization from disasters.
Demo bytes: Failover clustering | Installing packages with WinGet
More demos! First, we’ll look at the newest capabilities for failover clustering in Windows Server 2025. Find out how your organization can achieve high availability for manufacturing, retail, and AI scenarios. Then we’ll switch gears to WinGet, the command-line utility that enables you to install applications and other packages in Windows Server 2025 from the command line.
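As a quick illustration of what that looks like in practice (the package identifier below is just an example I chose, not one from the session), installing a package with WinGet from the command line is a one-liner:

winget search powershell
winget install --id Microsoft.PowerShell --source winget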
Microsoft Tech Community – Latest Blogs
Trend Micro’s security software prevents MATLAB’s access to “MATLABWindow.exe” file, inhibiting web-based applications from running.
Why am I unable to use web-based applications such as Simulink, App Designer, Simscape, etc.?
trendmicro, antivirus, exit, code, 1073741819, matlab.internal.cef.webwindow MATLAB Answers — New Questions
Accessing parameters in encrypted models
I have an encrypted Simulink model with some tunable parameters. I can change them by right-clicking on the model and choosing ‘Block Parameters (ModelReference)’. However, unlike other (nonencrypted) blocks, I can’t change them by using ‘set_param’. Is there any way of accessing these parameters from the command line? I’m using Simulink R2015b.
encrypted model, parameters, set_param MATLAB Answers — New Questions
App designer generated app freezes in function “createComponents(app)”
Hi
I’ve written an application using App Designer with hundreds of components. When I start the app it freezes while the components are being built in the App Designer-generated function createComponents. This happens before my own code starts to execute. The way to avoid this freeze is to add a breakpoint far down in the createComponents function. createComponents is called before registerApp and runStartupFcn. All of the mentioned functions are generated by App Designer before my own code starts to execute, and they are greyed out, which means I can’t make any edits.
Is this freeze problem known to this community and is there a fix for this?
Is there any way for me to edit the App Designer generated functions? I would like to add a pause instead of a breakpoint as a possible way to solve the freeze in createComponents function…
I’m running Matlab R2022a Update 8.
Most grateful for your hints and tips.
appdesigner, createcomponents MATLAB Answers — New Questions
New Outlook can’t find Microsoft 365 mail server on a new PC
Dear community,
first of all: I like the New Outlook very much. It worked fine for several months on my old laptop but now I have a new one and have to install it.
My company is using Microsoft 365 and our mail is handled by Exchange 365. Our mail addresses are like [email address removed for privacy reasons], where “our-company.com” is our domain name.
I installed Office on my new laptop from the Microsoft 365 website. When I tried to configure “New Outlook”, I got an error message saying that the mail server can’t be found and that I should try again. I tried to select the account type manually. Same behaviour. The “Troubleshooting” button also doesn’t help.
Then I configured Outlook Classic as a next step, and that works fine! Next I tried the switch “Change to new Outlook” (my original text is German), thinking that this would transfer the working configuration. Same problem: “mail server not found”.
I ended up in a situation where nothing works: “Outlook Classic” wants to start “New Outlook”, and “New Outlook” can’t find the mail server without giving me more options to change the configuration manually.
Why can’t “New Outlook” find our Microsoft 365 hosted Exchange? Outlook Classic and the rest of the Office apps, including Teams, work fine with my account.
Other networks (home, hotspot) didn’t help either, so it is not a problem with our company network.
I have to uninstall “New Outlook” to escape from this infinite loop.
Any help is appreciated!
Thank you,
Stefan
Runtime Broker – Consuming lots of memory.
Hello All,
I developed a UWP app years ago to download and store files and sync data from a service of mine.
This application has been stable and working without issues for a long time. However, a recent change in the behaviour of the solution means it is almost continually streaming data from the service, and a side effect is that Runtime Broker consumes so much memory that the system grinds to a halt; even stopping the traffic doesn’t free the memory.
Is there any way I can debug what is happening with this RuntimeBroker or understand why this is happening?
Fit data to lagged custom function
Hello,
I would like to ask if you can advise on the correct approach I can follow to estimate the parameters of a custom lagged function
(1) y(t)=c^2*a+y(t-1)*(a-1)
where c is a known constant.
to time series data (I can use a symbolic function to create (1)).
Thank you.
Best regards
Paolo
curve fitting, time series MATLAB Answers — New Questions
Simulink mask: show/hide the port of a custom Simulink block
Hi guys,
I am creating a mask for my custom block and I want to add a checkbox with the following behaviour:
checked: it shows a new port on the block
unchecked: it hides the new port on the block
The Integrator block has the same behaviour (see the images below):
I am struggling with how to implement this feature in the Checkbox callback because I can’t find the parameter to show/hide the port.
simulink, mask, integrator, custom MATLAB Answers — New Questions
Compilation Error encountered while running Polyspace Bug Finder.
Version : R2021a
language : C
C version : C11
Compiler : ti
Target : C28x
Project : TMS320F28374s
Error :
—
File D:CCS_workspacePolyspace testPatriotSafetyMonitor_SL1_SL2_20240710_058.000_Gen3C2000_18_12_2_LTS_Includestdlib.h line 294
Error: declaration is incompatible with "long __euclidean_div_i32byu32(long, unsigned long, unsigned long)" (declared at line 50 of "C:Program FilesPolyspaceR2021apolyspaceverifierextensionstitmw_builtinsc28x.h")
__euclidean_div_i32byu32(long numerator, unsigned long denominator);
^
When performing Bug Finder analysis, an incompatibility occurs between Polyspace’s C28x.h and my .h file. However, the same program does not exhibit these issues on some computers. I hope to identify the true cause to ensure the program is error-free. Could you please explain the role of C28x.h in the analysis? Additionally, is it reasonable to modify C28x.h?
polyspace, bug finder MATLAB Answers — New Questions
Effect size, statistical power of the test, and confidence interval (of hypothesis testing)
I am using the following two-sample tests for non-normal distributions:
chi2gof
kstest2
ranksum
kruskalwallis
All of them return a p-value, i.e. a p-value for chi2gof, a p-value for kstest2, a p-value for ranksum, and a p-value for kruskalwallis.
Since the p-value is not enough to understand the data/distributions (for example, please see Sullivan & Feinn (2012), Dunkler et al.(2020), Greenland (2016), and du Prel et al. (2009)), I would like to calculate the effect size, the statistical power of the test, and the confidence interval (of hypothesis testing).
I would use the following MATLAB functions:
meanEffectSize (two-sample effect size computations)
sampsizepwr (power of the test)
I have not found anything about the confidence interval.
However:
I am not sure if both functions, meanEffectSize and sampsizepwr, are "compatible" with (i) non-normal distributions and (ii) the 4 previously mentioned tests.
According to the documentation I found on sampsizepwr, it looks like the calculation of the statistical power of the test works only with normal distributions, but I am not sure.
Those 4 mentioned tests do not provide a confidence interval. Maybe other MATLAB functions do, but I was not able to find them.
I then have 2 questions:
Question 1. Is there anyone who could kindly enlighten me on the "compatibility" of meanEffectSize and sampsizepwr with (i) non-normal distributions and (ii) the 4 previously mentioned tests?
Question 2. Is there anyone who could kindly tell me if there are MATLAB functions to calculate the confidence intervals for the 4 mentioned statistical tests?
effect size, statistical power, confidence interval, chi2gof, kstest2, ranksum, kruskalwallis MATLAB Answers — New Questions
How Do You Increase Font Size in Favorites Menu of 365?
I want to increase the font size in all the folders in my Favorites menu of Outlook 365.
It is the menu on the left side of Outlook with the word Favorites at the very top.
I combed the Internet looking for the answer but could not find a solution.
Help please!
How Do You Convert PNG to an ICO on Windows 11?
Hey everyone!
I have a few images that I want to use as icons, and I know ICO is the preferred format for that. I’m looking for some help with converting PNG to ICO on Windows 11, as the default Photos app can’t do that.
I’ve tried three online PNG-to-ICO converters, but I’ve only gotten mixed results regarding quality and size. Are there specific programs or online services you recommend that maintain the image quality? Additionally, if there are settings I should be aware of to ensure the icon works properly in Windows, please let me know!
Thanks
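For what it’s worth, one scriptable option is a short Python sketch using the Pillow library (this assumes Python with Pillow is installed; the filenames are placeholders), which builds a multi-resolution .ico locally rather than relying on an online converter:

# pip install pillow
from PIL import Image

img = Image.open("logo.png")
# Embed several resolutions so Windows can pick the right one for each context.
img.save("logo.ico", sizes=[(16, 16), (32, 32), (48, 48), (256, 256)])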