Month: August 2024
New on Azure Marketplace: August 12-17, 2024
We continue to expand the Azure Marketplace ecosystem. For this volume, 143 new offers successfully met the onboarding criteria and went live. See details of the new offers below:
Get it now in our marketplace
biGENIUS-X: biGENIUS-X from biGENIUS AG is an automated data transformation tool that features a modern graphical user interface, advanced automation, and modeling wizards to quickly build or rearchitect data solutions. It supports customization and parallel development with Git.
Boardflare premium: Boardflare’s add-ins for Microsoft Excel offer AI functions like translation, fuzzy matching, and sentiment analysis. The base versions are free, with premium models available via subscription. Subscriptions are limited to Microsoft work or school accounts in certain countries.
Bocada Cloud – Standard Plan: Bocada Cloud is a SaaS platform for monitoring backups across your IT environment. Bocada Cloud centralizes data protection and offers automated data collection, reporting, and alerting with API-based connectors for accuracy.
CABEM Competency Manager Lite: Competency Manager by CABEM streamlines competency tracking across departments and locations with an evergreen skills matrix and continual audit readiness. It ensures efficient onboarding and reporting, and it integrates with learning management systems.
CentOS Stream 10 on Azure x86_64: This offer from Ntegral provides CentOS Stream 10 on a Microsoft Azure virtual machine. CentOS Stream 10 offers a stable, scalable environment ideal for developers and enterprises. Positioned upstream of Red Hat Enterprise Linux (RHEL), it provides early access to future RHEL releases.
Cien.ai Go-To-Market Suite: Cien’s offer helps B2B go-to-market teams by standardizing and enhancing customer relationship management data for AI-powered apps and analytics. Dashboards and heatmap analysis are two components of the plan. This offer is ideal for executives, revenue operations professionals, and management consultants seeking improved revenue insights.
Connected Sanctions & PEP Verification for AML Compliance Management: CORIZANCE’s platform offers AI-powered, real-time tracking of watch lists, sanctions (such as from the United Nations), and more to manage anti-money laundering and financial crime risks. Ensure compliance, enhance stakeholder confidence, and boost brand value with enhanced risk detection.
Crossware Email Signature: Crossware Email Signature for Office 365 offers businesses a centralized solution for managing consistent and compliant email signatures, disclaimers, and branding. It features a web-based signature designer, an advanced rule builder, and campaign management tools.
DataSphere Optimize: DataSphere Optimizer from Spektra Systems enhances data processing efficiency with advanced algorithms and real-time analytics. It features scalable infrastructure, a user-friendly interface, comprehensive security, and automated workflow management.
Elgg v6.0.2 on Ubuntu v20: This offer from Anarion Technologies provides Elgg with Ubuntu on a Microsoft Azure virtual machine. Elgg is an open-source social networking engine for creating custom social networking sites and online communities. Its flexibility and extensibility through plug-ins and a powerful API make it suitable for small-scale and large-scale social networks alike.
Gain Control of Cloud Expenses with Sirius: The Sirius cloud cost management platform from SGA, part of the FCamara Group, delivers continuous insights and recommendations to improve the visibility of your cloud spending and restore your sense of control over your budget. This offer is available only in Portuguese.
Helpdesk 365 – Enterprise: Helpdesk 365 from Apps 365 is a customizable ticketing system for SharePoint and Microsoft 365. It supports IT, HR, and finance requests, and it includes automation, chatbots, and multi-language support.
HyperData Flow Engine: HyperData Flow Engine from Spektra Systems optimizes data flows in complex systems. It features real-time data streaming, dynamic workload distribution, scalable architecture, advanced analytics, and automated workflow management. Ensure data security and compliance with HyperData Flow Engine.
Iostat on Debian 11: This offer from Apps4Rent provides Iostat along with Debian 11 on a Microsoft Azure virtual machine. Iostat is used for real-time system monitoring, oversight of input and output devices, and comprehensive performance metrics.
Iostat on Oracle Linux 8.8: This offer from Apps4Rent provides Iostat along with Oracle Linux 8.8 on a Microsoft Azure virtual machine. Iostat is used for real-time system monitoring, oversight of input and output devices, and comprehensive performance metrics.
Iostat on Ubuntu 20.04 LTS: This offer from Apps4Rent provides Iostat along with Ubuntu 20.04 LTS on a Microsoft Azure virtual machine. Iostat is used for real-time system monitoring, oversight of input and output devices, and comprehensive performance metrics.
Iostat on Ubuntu 22.04 LTS: This offer from Apps4Rent provides Iostat along with Ubuntu 22.04 LTS on a Microsoft Azure virtual machine. Iostat is used for real-time system monitoring, oversight of input and output devices, and comprehensive performance metrics.
Iostat on Ubuntu 24.04 LTS: This offer from Apps4Rent provides Iostat along with Ubuntu 24.04 LTS on a Microsoft Azure virtual machine. Iostat is used for real-time system monitoring, oversight of input and output devices, and comprehensive performance metrics.
Kylin secured and supported by Hossted: Hossted offers a repackaged Kylin deployment with instant setup, robust security, and a control dashboard. It includes continuous security scans and round-the-clock premium support.
Lucid Data Hub Enterprise Application: Lucid Data Hub is a generative AI platform that automates data integration and analytics for complex ERP systems like SAP, Oracle, and Microsoft Dynamics. It simplifies data management, enhances data quality, and accelerates insights, benefiting data engineers, analysts, scientists, business intelligence teams, and IT managers by addressing integration, preparation, and scalability challenges.
Palantir AIP: Palantir AIP is a secure platform for integrating AI into enterprise decision-making. It features an intuitive workflow builder for AI apps, end-to-end evaluation tools for production readiness, and an ontology SDK for development. The AIP Now repository offers prebuilt AI applications and examples for accelerated development.
Prefect Cloud: Prefect is a data workflow orchestration platform that helps developers build, observe, and react to data pipelines. It offers real-time visibility, identifies bottlenecks, and ensures performance. Automating over 200 million tasks monthly, Prefect enables faster, resilient code deployment, empowering companies to leverage data for a competitive edge.
Quantum Compute Core: Quantum Compute Core from Spektra Systems is a cutting-edge platform utilizing quantum computing for greater speed and advanced algorithms. It ensures data security with quantum-resistant encryption and features a user-friendly interface for real-time processing and decision-making.
Red Hat Enterprise Linux 8 (8.10 LVM) DISA STIG Benchmarks: This offer from Madarson IT provides a Red Hat Enterprise Linux 8 image preconfigured for compliance with the Defense Information Systems Agency’s Security Technical Implementation Guides (STIG). It ensures adherence to stringent security standards, mitigates vulnerabilities, and reduces cyber threats.
Red Hat Enterprise Linux 8 HIPAA (8.10 LVM): This offer from Madarson IT provides a Red Hat Enterprise Linux 8 image preconfigured for compliance with the Health Insurance Portability and Accountability Act (HIPAA). HIPAA establishes national standards for the protection of certain health information and mandates measures for safeguarding electronic health records.
Red Hat Enterprise Linux 8 NIST 800-171 (8.10 LVM) Benchmarks: This offer from Madarson IT provides a Red Hat Enterprise Linux 8 image preconfigured for compliance with NIST 800-171 standards, which protect controlled unclassified information in non-federal systems and organizations.
Red Hat Enterprise Linux 8 PCI DSS (8.10 LVM): This offer from Madarson IT provides a Red Hat Enterprise Linux 8 image preconfigured for compliance with PCI DSS, which concerns the protection of payment data. Madarson IT ensures images are up to date, secure, and ready to use, fostering trust and reducing vulnerabilities.
Salesbuildr: Salesbuildr is a sales and revenue operations platform for managed service providers using Autotask PSA, ConnectWise PSA, or Microsoft Dynamics 365. Ideal for midsized and large organizations, it standardizes products, creates competitive proposals, enables branded e-commerce storefronts, and identifies upsell and cross-sell opportunities.
Sanic on Debian 11: This offer from Apps4Rent provides Sanic along with Debian 11 on a Microsoft Azure virtual machine. Sanic is an asynchronous web framework for building scalable applications. It offers high performance, comprehensive routing, and middleware support.
Sanic on Ubuntu 20.04 LTS: This offer from Apps4Rent provides Sanic along with Ubuntu 20.04 LTS on a Microsoft Azure virtual machine. Sanic is an asynchronous web framework for building scalable applications. It offers high performance, comprehensive routing, and middleware support.
Sanic on Ubuntu 22.04 LTS: This offer from Apps4Rent provides Sanic along with Ubuntu 22.04 LTS on a Microsoft Azure virtual machine. Sanic is an asynchronous web framework for building scalable applications. It offers high performance, comprehensive routing, and middleware support.
Sanic on Ubuntu 24.04 LTS: This offer from Apps4Rent provides Sanic along with Ubuntu 24.04 LTS on a Microsoft Azure virtual machine. Sanic is an asynchronous web framework for building scalable applications. It offers high performance, comprehensive routing, and middleware support.
Sensors-as-a-Service: Sensors-as-a-Service by Ectron Corporation offers seamless integration of more than 32,000 industrial sensors with Microsoft-hosted analytics. It monitors machine functionality, energy usage, product quality, and other key performance indicators. Eliminate human error and enhance your operational efficiency with AI and machine learning.
SlashNext Cloud Email: SlashNext Integrated Cloud Email Security for Microsoft 365 combines AI, natural language processing, and computer vision for real-time threat detection. Use it to protect against business email compromise, account takeovers, spear phishing, and more. Setup is easy via the Microsoft Graph API, and it integrates with Microsoft Sentinel.
Ubuntu 20.04 with Apache Subversion (SVN) Server: This offer from Virtual Pulse S. R. O. provides Apache Subversion along with Ubuntu 20.04 on a Microsoft Azure virtual machine. Apache Subversion is a full-featured version control system for source code, web pages, documentation, and more.
Ubuntu 20.04 with GNOME Desktop: This offer from Nuvemnest provides GNOME along with Ubuntu 20.04 on a Microsoft Azure virtual machine. GNOME (GNU Network Object Model Environment) is an open-source desktop environment for Unix-like operating systems. You can use GNOME with Remote Desktop Protocol to access Linux desktops from a Windows machine.
Ubuntu 22.04 with GNOME Desktop: This offer from Nuvemnest provides GNOME along with Ubuntu 22.04 on a Microsoft Azure virtual machine. GNOME (GNU Network Object Model Environment) is an open-source desktop environment for Unix-like operating systems. You can use GNOME with Remote Desktop Protocol to access Linux desktops from a Windows machine.
Ubuntu 24.04 with GNOME Desktop: This offer from Nuvemnest provides GNOME along with Ubuntu 24.04 on a Microsoft Azure virtual machine. GNOME (GNU Network Object Model Environment) is an open-source desktop environment for Unix-like operating systems. You can use GNOME with Remote Desktop Protocol to access Linux desktops from a Windows machine.
Ubuntu Pro with 24×7 Support: Ubuntu Pro enhances Ubuntu Server LTS with advanced security, compliance features, and system management tools. It includes continual support, expanded security maintenance, kernel live patching, and automated security and compliance tasks. It’s ideal for production environments.
Websoft9 Applications Hosting Platform for ArangoDB: This preconfigured image offered by VMLab, an authorized reseller for Websoft9, provides ArangoDB 3.11 along with Docker and a cloud-native InfluxDB runtime on the Websoft9 Applications Hosting Platform. ArangoDB is a scalable graph database system.
Websoft9 Applications Hosting Platform for Bytebase: This preconfigured image offered by VMLab, an authorized reseller for Websoft9, provides Bytebase 2.17 along with Docker on the Websoft9 Applications Hosting Platform. Bytebase is a database CI/CD solution for developers and database administrators.
Websoft9 Applications Hosting Platform for InfluxDB: This preconfigured image offered by VMLab, an authorized reseller for Websoft9, provides InfluxDB along with Docker on the Websoft9 Applications Hosting Platform. InfluxDB is a popular open-source database for developers managing time-series data. Unlock real-time insights from time-series data at any scale in the cloud, on-premises, or at the edge.
Websoft9 Applications Hosting Platform for Redash: This preconfigured image offered by VMLab, an authorized reseller for Websoft9, provides Redash 10.1 along with Docker on the Websoft9 Applications Hosting Platform. Connect Redash to any data source (such as PostgreSQL, MySQL, Redshift, BigQuery, or MongoDB) to query, visualize, and share your data.
WizarD Core: WizarD from Systech Solutions empowers businesses with conversational AI, enabling natural language access to enterprise data warehouses. It utilizes generative AI and trained models to understand user intent and retrieve data directly from Snowflake. This tool provides business users and analysts direct access to data and insights, reducing IT dependency.
WizarD Doc Pro: The WizarD Document Processing Engine from Systech Solutions allows users to upload large text or PDF documents, perform optical character recognition on PDFs with images and charts, automatically index files for intelligent search, and interact with the data via a chat interface.
Zero Code AI Platform (AIPaaS): The AIPaaS platform from UCBOS offers no-code AI model building with features like data preparation, real-time scoring, and hyperparameter tuning. It supports predictive analytics, natural language processing, and computer vision.
Zero Code Application Composition Platform (aPaaS): aPaaS from UCBOS is a no-code application composition platform that enables rapid app development using a drag-and-drop builder, AI execution engine, and built-in integration tools. It supports mobile devices and the cloud, offers extensive customization, and ensures security and compliance.
Zero Code Enterprise-Centric Supply Chain Solutions (SCMPaaS): SCMPaaS from UCBOS offers no-code supply chain solutions to boost IT innovation, vendor independence, and business agility. Improve supply chain planning, foster supplier collaboration, and streamline procurement systems and logistics management.
Zero Code Semantic Integration & Orchestration Platform (iPaaS): The iPaaS platform from UCBOS functions as intelligent middleware to connect your disparate data sources, augment your enterprise systems with real-time analytics, and orchestrate advanced business process engines.
Go further with workshops, proofs of concept, and implementations
Altron’s Data Estate Modernization: Unlock your data’s potential with this offer from Altron Digital Business. Using Microsoft Azure services, Altron Digital Business will study your on-premises data estate, then design and build custom architecture for a modern analytics platform. After the deployment, Altron Digital Business will provide six months of support.
Application Migration to Microsoft Entra ID: 12-Week Implementation: This service from Modern Methodologies aids medium-size to large enterprises in migrating their identity provider to Microsoft Entra ID. Modern Methodologies will focus on SSO infrastructure migration, security, and compliance, with optional technical support for application modifications.
Assortment Intelligence Implementation: Sigmoid will implement a suite of assortment planning solutions so you can optimize your product mix, enhance inventory management, and boost sales. Sigmoid will use Azure Data Lake Storage to supply the storage layer and Azure Data Factory to orchestrate data integration pipelines. Microsoft Purview will be used for unified data governance, and Azure Machine Learning will integrate and analyze large data sets.
Azure Virtual Desktop Design and Deployment: Using Azure Virtual Desktop, The Partner Masters will build a virtual desktop infrastructure solution that enables remote work and meets your specific business needs. The Partner Masters will supply a design and configuration guide, along with a documented plan of how to get your team trained and certified to maintain the solution.
CMMC Workshop: 2-Hour Discovery Workshop: Coretek’s workshop for federal defense contractors will address Cybersecurity Maturity Model Certification 2.0 compliance. Coretek will review Microsoft’s proven architecture and multiple approaches to CMMC readiness, including solutions using Microsoft Azure and Microsoft 365 licensing options, and determine GCC or GCC-High requirements.
Consulting Service on Belake.ai: Dataside Solucoes em Dados LTDA will help clients integrate Belake.ai with Microsoft Azure and tools such as Azure OpenAI and Azure Cognitive Search. Belake.ai uses generative AI to convert natural language questions into detailed charts and visualizations. Specialized support will be offered for integrating Microsoft Power BI Embedded.
CoreConversations: Core BTS will implement its CoreConversations AI tool, which deploys proprietary conversational agents to unlock your company’s data potential. Simply pose questions to the AI and receive immediate, data-driven answers that facilitate smarter decision-making and process optimization.
Easy Migration Azure: Cloud Continuity will implement Easy Migration, a solution to smoothly migrate your applications and data to Microsoft Azure. Benefits include adaptable infrastructure, heightened security, management simplification, and potential savings of up to 50 percent through greater efficiency. This service is available only in Spanish.
Fabric Copilot: 1-Day Workshop: When using the Copilot capabilities of Microsoft Fabric, it’s essential to ensure that your semantic model follows best practices for modeling. In this workshop, iLink Systems will take one of your reports and review the AI features that are applicable to you. You’ll learn how to utilize the out-of-the-box AI visuals for Microsoft Power BI and how to update semantic models for optimal use with Copilot.
Fabric Accelerator: 4-Day Workshop: This hands-on workshop from HSO will give you a comprehensive understanding of how a data and analytics solution within Microsoft Fabric can benefit your organization. An action plan will ensure that all participants can effectively apply the workshop learnings. A high-level action plan for implementing Microsoft Fabric will also be drafted.
Federation Service: In this engagement, Avanade will implement flexible microservices that work with existing integration systems, such as Azure API Management, MuleSoft API Management, Microsoft Fabric, and Microsoft Azure Data Fabric.
Fortress Security Solution: Fortress-G from KAMIND IT is a comprehensive managed security offering in which KAMIND IT will set up your Microsoft environments and establish the correct security posture to defend your assets. This will include CMMC Level 2 and NIST-800-171 compliance, mobile device management, and more.
Nagarro’s XPerience360: Centralized Low-Code MDM Solution: Nagarro will implement its XPerience360 Platform so your company can consolidate data from various sources to create a unified customer profile using Microsoft Azure Synapse Analytics. The XPerience360 Platform enhances data quality, decision-making, and analytics through features like data deduplication, segmentation, and low-code development.
Planogram Assortment Optimization: 8-Week Implementation: Sigmoid will implement Microsoft tools, including Azure Data Factory, Azure Machine Learning, and Microsoft Power BI, to optimize store-specific assortments and planograms. This can increase sales, reduce inventory costs, and streamline your planning.
UST Insight for Microsoft Fabric: UST’s workshop will help businesses integrate Microsoft Fabric into their data strategy, enhancing efficiency and innovation. This four-hour session will include strategic insights, live demos, and tailored use cases. It’s intended for data-driven enterprises in finance, healthcare, retail, and manufacturing.
Contact our partners
CABIE: The Super Slick Customs Process
Click Armor Enterprise Security Awareness Training
Composable Architecture Platform (CAP)
Copilot Studio in a Day Workshop
Copilot User Empowerment Training
Cysana Malware Detector and Ransomware Blocker
Data Strategy: 2-Week Assessment
Devart ODBC Driver for Mailjet
Devart ODBC Driver for NexusDB
Devart ODBC Driver for QuestDB
Devart ODBC Driver for SendGrid
Devart ODBC Driver for ServiceNow
Devart ODBC Driver for ShipStation
Devart ODBC Driver for Shopify
Endpoint Privilege Manager miniOrange
Entra ID Connector for IntelliTime (Contact Me Offer)
Evolution CMS v3.1.27 on Ubuntu v20
Infosys Cobalt Cloud FinOps Assessment
KUARIO Personalized Payment for Self-Services
Lubyc for Employee Personal Business Profile
Lubyc for Employee Professional Profile
Managed Services for Microsoft Azure
Microsoft Azure Cloud Migration: 6-Week Assessment
Microsoft Entra ID Conditional Access Framework Review
MIM Migration to Entra ID Assessment
On Power BI Dashboard in a Day Workshop
Penetration Testing: 4-Week Assessment
Pico Manufacturing Process Error Proofing Platform
Power Platform Training: Automate Flow in a Day
ScriptString.AI – Utility Data Management
SFTP Gateway Enterprise Solution
Squid Proxy Server on AlmaLinux 8
Stacknexus for Microsoft Outlook
Syntho: AI-Generated Synthetic Data Platform
Tokiota Cloud Managed Service SSGG
Ubuntu 18.04 with Extended Lifecycle Support
Ubuntu 24.04 with Apache Subversion (SVN) Server
UNIFYSecure Managed Security Service for XDR MDR SOC
This content was generated by Microsoft Azure OpenAI and then revised by human editors.
Microsoft Tech Community – Latest Blogs
How can I save my figure to eps AND keep white margins (from my defined figure and axes position)?
I am producing multiple figures which I then need to vertically align in LaTeX. I set my figure and axes positions like this:
% Set figure total dimension
set(gcf,'Units','centimeters')
set(gcf,'Position',[0 0 4.5 5.8])
% Set size and position of axes plotting area within figure dimensions. To
% keep vertical axes aligned for multiple figures, keep the horizontal
% position consistent
set(gca,'Units','centimeters')
set(gca,'Position',[1.5 1.3 2.85 4])
Some of my figures have a ylabel and some don't, which, thanks to the set positions above, does not affect the format of the figure. However, when I save my figure to EPS using
saveas(gcf,'filename','epsc')
the EPS file is saved with the tightest fit, ignoring my set positions. How can I get it to save while conserving my set formatting?
I've tried saving to .png, but the quality is massively reduced (even when using the export_fig package). Is there a simple solution?
I am on macOS.
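A minimal sketch of one possible workaround (an editor's assumption, not part of the original question, and not verified on macOS): instead of saveas, keep the paper size tied to the on-screen figure size and export with print and its -loose flag, which writes the loose (figure-sized) bounding box instead of a tight crop.
% Sketch: export to EPS while keeping the figure-level margins
set(gcf,'PaperPositionMode','auto')        % paper size follows the figure Position
print(gcf,'-depsc','-loose','filename')    % '-loose' keeps the uncropped bounding box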
Tags: saveas, eps, nocrop, vertical alignment, figure position, export
MATLAB Answers — New Questions
License Manager Error -9 Your username does not match the username in the license file.
License checkout failed.
License Manager Error -9
Your username does not match the username in the license file.
To run on this computer, you must run the Activation client to reactivate your license.
Troubleshoot this issue by visiting:
https://www.mathworks.com/support/lme/R2019b/9
Diagnostic Information:
Feature: MATLAB
License path: /home/alex/.matlab/R2019b_licenses:/usr/local/MATLAB/R2019b/licenses/license.dat:/usr/local/MATLAB/R2019b/licenses/license_thinkpad-p73_40871338_R2019b.lic
Licensing error: -9,57.
I am a student and I want to install MATLAB under Ubuntu, but I always get this problem. Why is MATLAB so unfriendly to its users?
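A minimal sketch of the usual remedy (assuming a default Linux install path, which may differ on your machine): License Manager Error -9 means the license file is bound to a different username, so re-run the activation client and activate for the Linux account you actually use to start MATLAB.
# Sketch: reactivate so the license is tied to the current Linux username
whoami                                            # note this username
/usr/local/MATLAB/R2019b/bin/activate_matlab.sh   # reactivate, entering the username reported above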
Tags: ubuntu
MATLAB Answers — New Questions
How can I integrate an RTOS into STM32F407xx automatic code generation?
Hi, I'm trying to integrate real-time operating system (RTOS) capability for the STM32F407VG Discovery board target via automatic code generation from a Simulink model.
I found "ST Discovery Board Support from Embedded Coder", which is based on the STM32 Standard Peripheral Libraries, but I use the Simulink + STM32CubeMX + Keil 5 toolchain based on the STM32 HAL libraries.
I know that STM32CubeMX supports FreeRTOS packages. How can I integrate FreeRTOS into my automatic code generation?
I found "ST Discovery Board Support from Embedded Coder" that is based on STM32 Standard Peripheral Libraries, but I use the Simulink+STM32CubeMx+Keil5 toolchain based on STM32 HAL libraries.
I know that STM32CubeMx supports freeRTOS packages. How can I integrate freeRTOS in my automatic code generation? Hi, I’m trying to integrate the Real Time Operating System capability for the Stm32f407vg discovery board Target by automatic generation code of a simulink model.
I found "ST Discovery Board Support from Embedded Coder" that is based on STM32 Standard Peripheral Libraries, but I use the Simulink+STM32CubeMx+Keil5 toolchain based on STM32 HAL libraries.
I know that STM32CubeMx supports freeRTOS packages. How can I integrate freeRTOS in my automatic code generation? st, stm32cubemx, keil MATLAB Answers — New Questions
Onboarding domain computers by GPO deployment: policies created in the Defender portal are not deployed
Hi
I onboarded computers using Group Policy Deployment and set additional GPO settings described in this document: Onboard Windows devices to Microsoft Defender for Endpoint via Group Policy – Microsoft Defender for Endpoint | Microsoft Learn
Then I created endpoint security policies in the Defender portal and assigned them to the All Users and All Computers groups. I see that these policies are not deployed to the computers. The "Policy sync" option in the computer menu is greyed out (disabled). I don't know why.
Perhaps setting additional Defender settings by GPO means that I cannot use endpoint security policies in the Defender portal? We don't use Intune or MDM. We have only a Defender for Endpoint P1 license and synchronize domain user and computer accounts with Microsoft Entra.
Thank you for your help
Tomasz
Combobox return blank for 1st option
Hi,
I'm creating a non-VBA combo box with the first option being "Select", as I need an option to have a null value. I'm referencing the combo box in another formula, so I can't have "Select" return 1. Is there any way, if the option is set to "Select", for it to return blank or no value?
Thanks in advance.
Implementing Data Vault 2.0 on Fabric Data Warehouse
This article is authored by Michael Olschimke, co-founder and CEO at Scalefree International GmbH, and co-authored by @Trung_Ta, Senior BI Consultant at Scalefree.
The technical review was done by Ian Clarke and Naveed Hussain, GBBs (Cloud Scale Analytics) for EMEA at Microsoft.
Introduction
In the previous articles of this series, we discussed how to model Data Vault on Microsoft Fabric. Our initial focus was on the basic entity types (hubs, links, and satellites) and on advanced entity types, such as non-historized links and multi-active satellites. The third article modeled a more complete solution, including a typical modeling process, for Microsoft Dynamics CRM data.
But the model only serves a purpose: our goal for the entire blog series is to build a data platform using Microsoft Fabric on the Azure cloud. For that, we also have to load the source data into the target model. This is the topic of this article: how to load the Data Vault entities. For that reason, we continue with our discussion of the basic and advanced Data Vault entities. The Microsoft Dynamics CRM article should be considered an excursion to demonstrate what a more comprehensive model looks like and, just as important, how we get there.
Data Vault 2.0 Design Principles
Data Vault was developed by Dan Linstedt, the inventor of Data Vault and co-founder of Scalefree. It was designed to meet the challenging requirements of its first users in the U.S. government. These requirements led to certain design principles that remain the basis for the features and characteristics of Data Vault today.
One requirement is to perform insert-only operations on the target database. The data platform team might be required to delete records for legal reasons (e.g., GDPR or HIPAA), but updating records will never be done. The advantage of this approach is twofold: the performance of inserts is much faster than deleting or updating records. Also, inserting records is more in line with the task at hand: the data platform should capture source data and changes to the source data. Updating records means losing the old version of the record.
Another feature of the loading patterns in Data Vault is the ability to load the data incrementally. There is no need to load the same record twice into the Raw Data Vault layer. In subsequent layers, such as the Business Vault, there is also no need to touch the same record again or rerun a calculation on it. This is not true for every case, but it holds for most, and it requires diligent implementation practices. It is possible, though, and we have built many systems this way: in the best case, we touch data only if we haven't touched it before.
The insert-only, incremental approach leads to full restartability: if only records that have not been processed yet are loaded in the next execution of the load procedure, why not partition the data into independent subsets and then load partition by partition? If something goes wrong, just restart the process: it will continue loading whatever is not in the target yet.
And if the partitions are based on independent subsets, the next step is to parallelize the loads across multiple distributed nodes. This requires independent processes, meaning there should be no dependencies between the individual loading procedures of the Raw Data Vault (and all the other layers). That leads to highly performant, scalable distributed systems in the cloud that can process any data volume at any speed.
This design, and the modeling style of Data Vault, leads to many parallel jobs being executed. On the other hand, due to the standardization of the model (all hubs, links, and satellites are based on simple, repeating patterns), it is possible to standardize the loading patterns as well. Essentially, every hub is not only modeled in a similar way, using a generation template; its loading process is also based on a template. The same is true for links, satellites, and all special entity types.
The standardization and the sheer number of loading procedures to be produced then lead to the need for (and the possibility of) automation. There is no need to invent these tools: tools such as Vaultspeed are readily available for project teams to use and speed up their development. These tools help the team maintain the generated processes at scale.
It should also not matter which technology is used to load the Raw Data Vault: some customers prefer SQL statements, while others prefer Python scripts or ETL tools such as SQL Server Integration Services (SSIS). The Data Vault concepts for loading the entities are tool-agnostic and can even be performed in real-time scenarios, for example with Python or C# code.
These requirements concern the implementation of the Data Vault loading procedures. Additional, more business-focused requirements were discussed in the introductory article of this blog series.
Identifying Records
Another design choice made by the data modeler is how records should be identified in the Data Vault model. Three options are commonly used: sequences, hash keys, or business keys.
Sequences were used back in Data Vault 1.0 and remain an option today, though not a desired one: all of our clients who use sequences want to get rid of them by migrating to Data Vault 2.0 with hash keys or business keys to identify records. There are many issues with the use of sequences in Data Vault models. One is the required loading order: hubs must be loaded first, because their sequences are used in links and hub satellites to refer to the hub's business key. Only after the links are completely loaded can link satellites be loaded. This loading order is a particular issue in real-time systems, where it leads to unnecessary waiting states.
The bigger issue, however, is the implied requirement for lookups. To load links, the business key's sequence must first be determined by looking up the business key from the source against the hub, where a sequence is generated for each new business key loaded. This puts a lot of I/O pressure on the database at the disk level.
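To illustrate that lookup burden, here is a hypothetical, simplified sketch of a sequence-based link load (the table and column names are invented for illustration and are not part of the Fabric implementation shown later): every incoming row must join to both hubs to resolve the surrogate sequences before the link row can be inserted.
-- Hypothetical sketch of a sequence-based (Data Vault 1.0 style) link load:
-- the business keys must be looked up in both hubs to obtain their sequences.
INSERT INTO DV.store_employee_lnk (store_seq, employee_seq, load_datetime, record_source)
SELECT
hs.store_seq,
he.employee_seq,
stg.load_datetime,
stg.record_source
FROM stage.store_employee_crm AS stg
JOIN DV.store_hub AS hs ON hs.store_id = stg.store_id
JOIN DV.employee_hub AS he ON he.employee_id = stg.employee_id;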
Business keys are another option if the database engine supports joining on business keys (especially multi-part business keys) efficiently. This is true for many distributed databases but not for more traditional relational database engines, where the third (and default) option is used: hash keys. Since it is possible to mix database technologies (for example, a distributed database such as Microsoft Fabric in the Azure cloud and a Microsoft SQL Server on-premises), one might end up with an unnecessarily complex environment in which parts of the overall solution are identified by business keys and other parts by hash keys.
Therefore, most clients opt for hash keys in all databases, regardless of the actual capabilities. Hash keys offer consistent query performance across different databases and easy-to-formulate join conditions that don't span many columns. This is why we also decided to use hash keys in a previous article and throughout this blog series.
When using hash keys (or business keys), all Data Vault entities can be loaded in parallel, without any dependencies. Eventual consistency is reached as soon as the data of one batch (or real-time message) has been fully processed and loaded into the Raw Data Vault. Immediate consistency, however, is no longer possible when the entities are loaded in parallel, which is the recommendation. It is certainly possible (but beyond the scope of this article) to guarantee the consistency of query results, which is sufficient in analytical (and most operational) use cases.
It is also recommended to use a hash difference over the satellite payload to streamline delta detection in the loading procedures for Data Vault satellites. Satellites are delta-driven, i.e., only entirely new records and records in which at least one attribute has changed are loaded. To perform the delta detection, incoming records must be compared with the previous ones. Without a hash diff in place, this has to be done by comparing records column by column, which can hurt the performance of the loading processes. Therefore, it is highly recommended to perform change detection using the fixed-length hash difference.
Loading Raw Data Vault Entities
The next sections discuss how to load the Raw Data Vault entities, namely hubs, links, and satellites. We will keep the main focus on standard entities.
The following figure drafts a Data Vault model from a few data tables of the CRM source system of a retail business.
The following sections will present the loading patterns for objects that are marked in the above figure.
Data Vault stage
Before talking about loading the actual Data Vault entities, we must first explore the Data Vault stage objects, where incoming data payloads are prepared for the following loading processes.
In Fabric, it is recommended to create Data Vault stages as views. This leverages caching in Fabric Data Warehouse: while the same stage view can be used to populate different target objects (hubs, links, satellites, …), the query behind it is executed only once and its results are automatically stored in the database's cache, ready for access by subsequent loading procedures from the same stage. This technique practically eliminates the need to materialize stage objects, as in the traditional approach.
Data Vault stages also calculate hash key and hash difference values from the business keys and descriptive attributes, respectively. Note that within a stage object there may be more than one hash key, as well as more than one hash diff, if the stage object feeds data to multiple target hubs, links, and satellites, which is common practice.
Moreover, Data Vault stages prepare the insertion of so-called ghost records. These are artificially generated records added to Data Vault objects, containing default or dummy values. To read more about ghost records and their usage, please visit: Implementing Ghost Records.
The example code script below creates a Data Vault stage view from the source table store_address. In the next step, it will be used to load the hub Store and its satellite Store Address.
CREATE VIEW DV.stage_store_address_crm AS
WITH src_data AS (
SELECT
CURRENT_TIMESTAMP AS load_datetime,
'CRM.store_address' AS record_source,
store_id,
address_street,
postal_code,
country
FROM dbo.store_address
),
hash AS (
SELECT
load_datetime,
record_source,
store_id,
address_street,
postal_code,
country,
CASE
WHEN store_id IS NULL THEN '00000000000000000000000000000000'
ELSE CONVERT(CHAR(32), HASHBYTES('md5',
COALESCE(CAST(store_id AS VARCHAR), '')
), 2)
END AS hk_store_hub,
CONVERT(CHAR(32), HASHBYTES('md5',
COALESCE(CAST(address_street AS VARCHAR), '') + '|' +
COALESCE(CAST(postal_code AS VARCHAR), '') + '|' +
COALESCE(CAST(country AS VARCHAR), '')
), 2) AS hd_store_address_crm_lroc_sat
FROM src_data
),
ghost_records AS (
SELECT
CONVERT(DATETIME, '1900-01-01T00:00:00', 126) AS load_datetime,
'SYSTEM' AS record_source,
'??????' AS store_id,
'(unknown)' AS address_street,
'(unknown)' AS postal_code,
'??' AS country,
'00000000000000000000000000000000' AS hk_store_hub,
'00000000000000000000000000000000' AS hd_store_address_crm_lroc_sat
UNION
SELECT
CONVERT(DATETIME, '1900-01-01T00:00:00', 126) AS load_datetime,
'SYSTEM' AS record_source,
'XXXXXX' AS store_id,
'(error)' AS address_street,
'(error)' AS postal_code,
'XX' AS country,
'ffffffffffffffffffffffffffffffff' AS hk_store_hub,
'ffffffffffffffffffffffffffffffff' AS hd_store_address_crm_lroc_sat
),
final_select AS (
SELECT
load_datetime,
record_source,
store_id,
address_street,
postal_code,
country,
hk_store_hub,
hd_store_address_crm_lroc_sat
FROM hash
UNION ALL
SELECT
load_datetime,
record_source,
store_id,
address_street,
postal_code,
country,
hk_store_hub,
hd_store_address_crm_lroc_sat
FROM ghost_records
)
SELECT *
FROM final_select
;
This rather lengthy statement prepares the data and adds the ghost records: the first CTE, src_data, selects the data from the source table and adds the system attributes, such as the load date timestamp and the record source. The next CTE, hash, then adds the hash keys and hash differences required for the target model. Yes, this implies that the target model is already known, but we saw in the previous article how to derive the target model from the staged data in a data-driven Data Vault design. Once that is done, the target model for the Raw Data Vault is known and we can add the required hash keys (for hubs and links) and hash diffs (for satellites) to the staged data. In Fabric this is done only virtually; on other platforms, it might be necessary to actually add the hash values to the staging tables.
Another CTE, ghost_records, generates two records to be used as zero keys in hubs and links and as ghost records in satellites. This CTE populates the two records with default values for the descriptive attributes and the business keys. It is recommended to use default descriptions that one would expect to see for the unknown member in a dimension. The reason is that these two records will later be turned into two members of the dimension: the unknown member and the erroneous member.
The CTE final_select then unions the two datasets: the staging data provided by the CTE hash and the zero-key records provided by ghost_records. The loading processes for the Raw Data Vault then use this result as their input dataset.
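As a quick sanity check (an optional verification query, not part of the loading pattern itself), the two ghost records can be inspected directly from the stage view:
-- The two ghost/zero-key records added by the stage view
SELECT hk_store_hub, store_id, address_street, postal_code, country, record_source
FROM DV.stage_store_address_crm
WHERE record_source = 'SYSTEM';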
Loading Hubs
The first loading pattern we want to examine is the hub loading pattern. Since a hub contains a distinct list of business keys, a deduplication step must be carried out during the hub loading process to eliminate duplicates of the same business key coming from the Data Vault stage view. In the code script below, we aim to load only the very first copy of each incoming hash key/business key into the target hub entity. This is guaranteed by the ROW_NUMBER() window function and the WHERE condition rno (row number) = 1 at the end of the loading script.
In addition, we also perform a forward lookup to verify that the incoming hash key does not already exist in the target; only then is it inserted into the hub entity. This is done in the WHERE condition: [hash key from stage] NOT IN (SELECT DISTINCT [hash key] FROM [target hub]).
The example code script below loads the hub Store with the business key Store ID:
WITH dedupe AS (
SELECT
hk_store_hub,
load_datetime,
record_source,
store_id,
ROW_NUMBER() OVER (PARTITION BY hk_store_hub ORDER BY load_datetime ASC) AS rno
FROM DV.stage_store_address_crm
)
INSERT INTO DV.STORE_HUB
(
hk_store_hub,
load_datetime,
record_source,
store_id
)
SELECT
hk_store_hub,
load_datetime,
record_source,
store_id
FROM dedupe
WHERE rno = 1
AND hk_store_hub NOT IN (SELECT hk_store_hub FROM DV.STORE_HUB)
;
In the above statement, the CTE dedupe selects all business keys and adds a row number in order to select the first occurrence of each business key, including its record source and load date timestamp. Duplicates can exist for two reasons: multiple batches may exist in the staging area, or the same business key may appear multiple times in the same source within a single batch, for example when a customer has purchased multiple products across multiple transactions in a retail store.
The CTE is then input for the INSERT INTO statement into the hub entity. In the select from the CTE, a filter is applied to select only the first occurrence of the business key.
Loading Standard Links
The pattern is similar to hub loading: only the very first copy of each incoming link hash key is loaded into the target link entity. The link hash key is calculated from the combination of the business keys of the hubs connected by the link entity.
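As an illustration (a hypothetical excerpt; the stage view DV.stage_store_employee_crm itself is not shown in this article), the link hash key is computed in the stage the same way as a hub hash key, but over the concatenated business keys of both hubs:
-- Hypothetical stage excerpt: link hash key over both business keys
SELECT
store_id,
employee_id,
CONVERT(CHAR(32), HASHBYTES('md5',
COALESCE(CAST(store_id AS VARCHAR), '') + '|' +
COALESCE(CAST(employee_id AS VARCHAR), '')
), 2) AS hk_store_employee_lnk
FROM dbo.store_employee;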
The example code script below loads the link Store Employee with two Hub references from Hub Store and Hub Employee:
WITH dedupe AS (
SELECT
hk_store_employee_lnk,
hk_store_hub,
hk_employee_hub,
load_datetime,
record_source
FROM (
SELECT
hk_store_employee_lnk,
hk_store_hub,
hk_employee_hub,
load_datetime,
record_source,
ROW_NUMBER() OVER (PARTITION BY hk_store_employee_lnk ORDER BY load_datetime ASC) AS rno
FROM DV.stage_store_employee_crm
) s
WHERE rno = 1
)
INSERT INTO DV.store_employee_lnk
(
hk_store_employee_lnk,
hk_store_hub,
hk_employee_hub,
load_datetime,
record_source
)
SELECT
hk_store_employee_lnk,
hk_store_hub,
hk_employee_hub,
load_datetime,
record_source
FROM dedupe
WHERE hk_store_employee_lnk NOT IN (SELECT hk_store_employee_lnk FROM DV.store_employee_lnk)
;
Similar to the standard hub's loading pattern, the standard link's pattern starts with a deduplication step. Its goal is to insert only the very first occurrence of each combination of business keys referenced in the link relationship.
Then the main INSERT INTO … SELECT statement also includes a forward lookup against the target link entity, so that only unknown link hash keys are inserted into the Raw Data Vault.
Loading Non-Historized Links
In Data Vault 2.0, the recommended approach to modeling transactions, events, or non-changing data in general is to use non-historized link entities (also known as transactional links), discussed in our previous article on advanced Data Vault modeling.
The example code script below loads a non-historized link that contains transactions made in retail stores, with two hub references from Hub Store and Hub Customer:
WITH high_water_marking AS (
SELECT
hk_store_transaction_nlnk,
hk_store_hub,
hk_customer_hub,
load_datetime,
record_source,
transaction_id,
amount,
transaction_date
FROM DV.stage_store_transactions_crm
WHERE load_datetime > (
SELECT COALESCE(MAX(load_datetime), DATEADD(s, -1, CONVERT(DATETIME, '1900-01-01T00:00:00', 126)))
FROM DV.store_transaction_nlnk
)
),
dedupe AS (
SELECT
hk_store_transaction_nlnk,
hk_store_hub,
hk_customer_hub,
load_datetime,
record_source,
transaction_id,
amount,
transaction_date
FROM (
SELECT
hk_store_transaction_nlnk,
hk_store_hub,
hk_customer_hub,
load_datetime,
record_source,
transaction_id,
amount,
transaction_date,
ROW_NUMBER() OVER (PARTITION BY hk_store_transaction_nlnk ORDER BY load_datetime ASC) AS rno
FROM high_water_marking
) s
WHERE rno = 1
)
INSERT INTO DV.store_transaction_nlnk
(
hk_store_transaction_nlnk,
hk_store_hub,
hk_customer_hub,
load_datetime,
record_source,
transaction_id,
amount,
transaction_date
)
SELECT
hk_store_transaction_nlnk,
hk_store_hub,
hk_customer_hub,
load_datetime,
record_source,
transaction_id,
amount,
transaction_date
FROM dedupe
WHERE hk_store_transaction_nlnk NOT IN (SELECT hk_store_transaction_nlnk FROM DV.store_transaction_nlnk)
;
The main difference between the loading patterns for standard links and non-historized links lies in the so-called high-water marking logic in the first CTE of the same name. This logic only lets records through from the Data Vault stage object whose technical load_datetime occurs after the latest one found in the target link entity. This allows us to skip data records that have already been processed by previous loads, effectively reducing the workload on the data warehouse's side.
Loading Standard Satellites
Now to the more complicated loading patterns within a Data Vault 2.0 implementation: those for satellite entities. When querying from the stage, only records with a load date timestamp (LDTS) exceeding the latest load date in the target satellite are fetched for further processing.
Note that these queries are more elaborate than other examples you may find elsewhere. The reason is that they are optimized for loading all data from the underlying data lake, even multiple batches at once, in the right order. Especially for satellites, this poses a challenge, as Data Vault satellites are typically delta-driven in order to save storage and improve performance.
A few principles from the above loading patterns for hubs and links also apply to satellites, such as the deduplication logic. This eliminates hard duplicates (i.e., records with identical data attribute values) and, combined with the aforementioned filtering of old load date timestamps, reduces the amount of incoming data records.
WITH stg AS (
SELECT
hk_store_hub,
load_datetime,
record_source,
hd_store_address_crm_lroc_sat,
address_street,
postal_code,
country
FROM DV.stage_store_address_crm
WHERE load_datetime > (
SELECT COALESCE(MAX(load_datetime), DATEADD(s, -1, CONVERT(DATETIME, '1900-01-01T00:00:00', 126)))
FROM DV.store_address_crm_lroc_sat
)
),
dedupe_hash_diff AS (
SELECT
hk_store_hub,
load_datetime,
record_source,
hd_store_address_crm_lroc_sat,
address_street,
postal_code,
country
FROM (
SELECT
hk_store_hub,
load_datetime,
record_source,
hd_store_address_crm_lroc_sat,
COALESCE(LAG(hd_store_address_crm_lroc_sat) OVER (PARTITION BY hk_store_hub ORDER BY load_datetime), ”) AS prev_hd,
address_street,
postal_code,
country
FROM stg
) s
WHERE hd_store_address_crm_lroc_sat != prev_hd
),
dedupe_hard_duplicate AS (
SELECT
hk_store_hub,
load_datetime,
record_source,
hd_store_address_crm_lroc_sat,
address_street,
postal_code,
country
FROM (
SELECT
hk_store_hub,
load_datetime,
record_source,
hd_store_address_crm_lroc_sat,
address_street,
postal_code,
country,
ROW_NUMBER() OVER(PARTITION BY hk_store_hub ORDER BY load_datetime DESC) AS rno
FROM dedupe_hash_diff
) dhd
WHERE rno = 1
),
latest_delta_in_target AS (
SELECT
hk_store_hub,
hd_store_address_crm_lroc_sat
FROM (
SELECT
hk_store_hub,
hd_store_address_crm_lroc_sat,
ROW_NUMBER() OVER(PARTITION BY hk_store_hub ORDER BY load_datetime DESC) AS rno
FROM
DV.store_address_crm_lroc_sat
) s
WHERE rno = 1
)
INSERT INTO DV.store_address_crm_lroc0_sat
SELECT
hk_store_hub,
load_datetime,
record_source,
hd_store_address_crm_lroc_sat,
address_street,
postal_code,
country
FROM dedupe_hard_duplicate
WHERE NOT EXISTS (
SELECT 1
FROM latest_delta_in_target
WHERE latest_delta_in_target.hk_store_hub = dedupe_hard_duplicate.hk_store_hub
AND latest_delta_in_target.hd_store_address_crm_lroc_sat = dedupe_hard_duplicate.hd_store_address_crm_lroc_sat
)
;
The first CTE stg selects the batches from the staging table where the load date timestamp is not yet in the target satellite. Batches that are already processed in the past are ignored this way.
After that, the CTE dedupe_hash_diff removes non-deltas from the data flow: records that have not changed from the previous batch (identified by the load date timestamp) are removed from the dataset.
Next, the CTE dedupe_hard_duplicate removes those records from the dataset where hard duplicates exist. This statement assumes that a standard satellite is loaded, not a multi-active satellite.
The CTE latest_delta_in_target retrieves the latest delta for the hash key from the target satellite to perform the delta-check against the target.
Finally, the insert into statement selects the changed or new records from the sequence of CTEs and inserts them into the target satellite.
Calculating the Satellite’s End-Date
Typically, a Data Vault satellite contains not only a load date, but also a load end date. However, the drawback of a physical load end date is that it requires an update on the satellite. This is done in the load end-dating process after loading more data into the satellite.
This update is not desired. Nowadays, the alternative approach is to calculate the load end date virtually, in a view on top of the satellite's table. This view provides the same structure (all the attributes) as the underlying table and, in addition, the load end date, which is calculated using a window function.
CREATE VIEW DV.store_address_crm_lroc_sat AS
WITH enddating AS (
SELECT
hk_store_hub,
load_datetime,
COALESCE(
LEAD(DATEADD(ms, -1, load_datetime)) OVER (PARTITION BY hk_store_hub ORDER BY load_datetime),
CONVERT(DATETIME, '9999-12-31T23:59:59', 126)
) AS load_end_datetime,
record_source,
hd_store_address_crm_lroc_sat,
address_street,
postal_code,
country
FROM DV.store_address_crm_lroc0_sat
)
SELECT
hk_store_hub,
load_datetime,
load_end_datetime,
record_source,
CASE WHEN load_end_datetime = CONVERT(DATETIME, '9999-12-31T23:59:59', 126)
THEN 1
ELSE 0
END AS is_current,
hd_store_address_crm_lroc_sat,
address_street,
postal_code,
country
FROM enddating
;
In the CTE enddating the load end date is calculated using the LEAD function. Other than that, the CTE selects all columns from the underlying table.
The view's SELECT statement then calculates an is_current flag based on the load end date, which often comes in handy.
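For example (a small usage sketch based on the view above), downstream queries can restrict the satellite to the current version of each record:
-- Only the current (open) record per store, via the virtual end-dating view
SELECT hk_store_hub, address_street, postal_code, country, load_datetime
FROM DV.store_address_crm_lroc_sat
WHERE is_current = 1;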
Outlook and conclusion
This concludes our article on loading the standard entities of the Raw Data Vault. These patterns provide the foundation for the loading patterns of the advanced Data Vault entities, such as non-historized links or multi-active satellites, which require only minor modifications and which we typically discuss in our blog at https://www.scalefree.com/blog/ and in our training. In a subsequent article, we will demonstrate how to automate these patterns using Vaultspeed to improve the productivity (and therefore the agility) of your team. Before we get there, however, we will continue our journey downstream in the data platform architecture and discuss in our next article how to implement business rules in the Business Vault.
About the Authors
Michael Olschimke is co-founder and CEO at Scalefree International GmbH, a Big Data consulting firm in Europe. The firm empowers clients across all industries to take advantage of Data Vault 2.0 and similar Big Data solutions. Michael has trained thousands of data warehousing individuals from the industry, taught classes in academia, and published on these topics regularly.
Trung Ta is a senior BI consultant at Scalefree International GmbH. With over 7 years of experience in data warehousing and BI, he has been advising Scalefree’s clients of various sizes and across industries (banking, insurance, government, and more) in establishing and maintaining their data architectures. Trung’s expertise lies in Data Vault 2.0 architecture, modeling, and implementation, with a specific focus on data automation tools.
Microsoft Tech Community – Latest Blogs
How to solve XCP connection error
When I enter connect(xcpch), the error "Device not detected." appears.
Does anyone know the root cause and how to solve it?
Thank you in advance.
xcp, matlab, simulink MATLAB Answers — New Questions
How can I find the HumanActivityData data set, which contains over 380000 observations of five different physical human activities? (from Brian Hu)
Load Raw Sensor Data
The HumanActivityData data set contains over 380000 observations of five different physical human activities captured at a frequency of 10 Hz. Each observation includes x, y, and z acceleration data measured by a smartphone accelerometer sensor.
the humanactivitydata MATLAB Answers — New Questions
Where to find the generated simscape code
Hi, does anyone know where to find the generated Simscape code? I simply built a dummy model (shown below) and then used Simulink Coder to generate the C code. I already know that the Simulink Coder software generates code from the Simscape blocks separately from the Simulink blocks in my model, but where is this file stored, and which file contains the C code of the Simscape blocks?
simscape, code generation MATLAB Answers — New Questions
Automatically select the right number of bins (or combine the bins) for the expected frequencies in crosstab, in order to guarantee at least 5 elements per bin
I have two observed datasets, "x" and "y", and I want to compare the observed frequencies in bins of "x" against those of "y" using crosstab. To do so, I first need to place the elements of "x" and "y" into bins using the histcounts function. The resulting binned arrays, "cx" and "cy", are then compared to each other with a chi-square test, performed by crosstab. The chi-square test of independence is performed to determine if there is a significant association between the frequencies of "x" and "y" across the bins.
However, the chi-square test "is not valid for small samples, and if some of the counts (in the expected frequency) are less than five, you may need to combine some bins in the tails." In the following example, several bins of the observed frequencies "cx" and "cy" have zero elements, and I do not know whether they affect the expected frequencies calculated by crosstab.
Therefore, is there a way in crosstab to automatically select the right number of bins for the expected frequencies, or to combine them if some are empty, in order to guarantee at least 5 elements per bin?
rng default; % for reproducibility
a = 0;
b = 100;
nb = 50;
% Create two log-normal distributed random datasets, "x" and "y"
% (but we can use any randomly distributed data)
x = (b-a).*round(lognrnd(1,1,1000,1)) + a;
y = (b-a).*round(lognrnd(0.88,1.1,1000,1)) + a;
% Counts/frequency of "x" and "y"
cx = histcounts(x,'NumBins',nb);
cy = histcounts(y,'NumBins',nb);
[~,chi2,p] = crosstab(cx,cy)
crosstab, binning, binned, array, histcounts MATLAB Answers — New Questions
Text rendering issue on Windows Insider?
No text displayed. Does anyone know how to fix this issue?
How to make text underline like this
I have an image of a document that needs to be converted to word. Can anyone tell me how to do this?
It’s clearly not just an underline because there’s obviously more space between the line and the text. I also think it’s not just a bottom border, because when I try to reproduce it, it extends to the full width.
Tips to Reduce Your Electricity Bill
Lowering your electricity bill is both cost-effective and environmentally friendly. Here are some quick tips to help you save:
Upgrade to Energy-Efficient Appliances: Use ENERGY STAR-rated appliances to cut down on energy consumption.
Improve Insulation: Proper insulation reduces heating and cooling needs by keeping your home’s temperature stable.
Install a Programmable Thermostat: Set temperatures to save energy when you’re not at home.
Switch to LED Lighting: LEDs use less electricity and last longer than traditional bulbs.
Unplug Unused Devices: Avoid phantom energy use by unplugging devices when not in use.
Consider Renewable Energy: Look into solar panels or other renewable options to reduce grid reliance.
Monitor Your Usage: Use online tools or smart meters to track and manage your energy consumption.
For more details on managing your electricity bill and finding savings, visit the FESCO bill management website.
Filtering channel messages in a conversation by date range
I can use a filter query to retrieve chat messages for a specific date range. However, does the Graph API for channel messages in a conversation support filtering messages and replies by date range?
* I’m looking for filter options to retrieve messages and replies within a specific conversation in a teams channel such as the image below, and not the entire message of a channel.
The filter query that works for me for chat messages is this:
List messages in a chat documentation: https://learn.microsoft.com/en-us/graph/api/chat-list-messages?view=graph-rest-1.0&tabs=http
List chat messages filtered by last modified date range:
https://graph.microsoft.com/v1.0/chats/19:2da4c29f6d7041eca70b638b43d45437@thread.v2/messages?$top=2&$orderby=lastModifiedDateTime desc&$filter=lastModifiedDateTime gt 2022-09-22T00:00:00.000Z and lastModifiedDateTime lt 2022-09-24T00:00:00.000Z
There’s no filter query option for channel messages in a conversation that I can find for the below HTTP Request.
https://learn.microsoft.com/en-us/graph/api/chatmessage-get?view=graph-rest-1.0&tabs=http
HTTP Request: GET /teams/{team-id}/channels/{channel-id}/messages/{message-id}
https://learn.microsoft.com/en-us/graph/api/chatmessage-list-replies?view=graph-rest-1.0&tabs=http
HTTP Request: GET /teams/{team-id}/channels/{channel-id}/messages/{message-id}/replies
If the filter option is not supported, is there an alternative way to retrieve channel messages and replies in a conversation by date range?
Conditional Formatting degrees
How do I get the condition to change color if a value is less than 90°? The formatting doesn’t seem to recognize or understand the values in °.
Using groups to assign admin roles – works great except…
About a year ago we migrated our internal processes to using Entra ID security groups to manage Entra ID role assignment. It is mostly a good solution, but over time we started finding issues that Microsoft either can’t or is unwilling to fix. Their “solution” is always to “assign the role directly”, which isn’t scalable for an organization that doesn’t own entitlement to PIM. Below are the roles and functionality that are broken if roles are not directly assigned:
Exchange Administrator – Unable to download message trace logs
Groups Administrator / Global Administrator – Unable to configure group expiration policy
Power Platform Administrator / Global Administrator – Unable to elevate to Power Platform System Administrator role in environments
Do others have this issue? Is there any hope of MS actually fixing this, or are we going to have to switch our process back to direct role assignment by some other means?
Performing simple Azure Table Storage REST API operations using curl command.
The blog provides guidance on performing simple Table Storage REST API operations such as Create Table, Delete Table, Insert Entity, Delete Entity, Merge Entity, Get Table Storage Properties, Get Table Storage Stats, Query Tables, Query Entities, and Update Entity using the curl command.
Let us look at the command syntax for performing these REST API operations; we will use a SAS token as the authentication mechanism. Keep the pointers below in mind while performing the operations via the curl command:
Ensure the URL is formed correctly as per the operation you are trying to perform.
The mandatory headers need to be passed with correct values.
Ensure you append the SAS token to the URL with ‘?’ when the URL has no query string, or with ‘&’ when it already has one (for example, ?restype=service&comp=properties&<SAS_token>).
The HTTP verb can be GET, POST, PUT, MERGE, or DELETE, as specified by the REST API documentation.
So let’s begin:
Get Table Storage Properties:
This Rest API gets the properties of an Azure Table Storage account. Reference link for Rest API is:
Get Table Service Properties (REST API) – Azure Storage | Microsoft Learn
Syntax URL:
curl -X GET "https://<storageaccountname>.table.core.windows.net/?restype=service&comp=properties&<SAS_token>" -H "x-ms-date:2024-02-23T03:24Z" -H "x-ms-version:2020-04-08"
Output:
Get Table Storage Stats:
This Rest API retrieves statistics that are related to replication for Azure Table Storage. This operation works only on the secondary location endpoint when we have RAGRS replication enabled for the storage account. Reference link: Get Table Service Stats (REST API) – Azure Storage | Microsoft Learn
Syntax URL:
curl -X GET "https://storageaccount-secondary.table.core.windows.net/?restype=service&comp=stats&<SAS_token>" -H "x-ms-date:2024-02-23T03:24Z" -H "x-ms-version:2020-04-08"
Output:
Query tables:
This Rest API returns a list of tables under the specified account. Reference link: Query Tables (REST API) – Azure Storage | Microsoft Learn
Syntax URL:
curl -X GET "https://storageaccount.table.core.windows.net/Tables?<SAS_token>" -H "x-ms-date:2024-02-23T18:16:35Z" -H "x-ms-version:2020-04-08"
Output:
Create table:
This Rest API creates a new table in a storage account. Reference link: Create Table (REST API) – Azure Storage | Microsoft Learn
Syntax URL:
curl -X POST "https://storageaccount.table.core.windows.net/Tables?<SAS_token>" -H "x-ms-date:2024-02-23T18:16:35Z" -H "x-ms-version:2020-04-08" -H "Content-Length: 27" -H "Content-Type: application/json" -d "{\"TableName\":\"sampletable\"}"
Output:
Delete table:
This Rest API deletes a table in a storage account. Reference link: Delete Table (REST API) – Azure Storage | Microsoft Learn
Syntax URL:
curl -X DELETE "https://storageaccount.table.core.windows.net/Tables('sampletable')?<SAS_token>" -H "x-ms-date:2024-02-23T18:16:35Z" -H "x-ms-version:2020-04-08" -H "Content-Type: application/json"
Output:
Query Entities:
This Rest API queries entities in a table and includes the $filter and $select options. Reference link: Query Entities (REST API) – Azure Storage | Microsoft Learn
Syntax URL:
curl -X GET "https://storageaccount.table.core.windows.net/shswadsampletable1(PartitionKey='B',RowKey='1')?<SAS_token>" -H "x-ms-date:2024-02-24T10:14:50.2646880Z" -H "x-ms-version:2020-04-08"
Output:
Delete Entity Operation:
This Rest API deletes an existing entity in a table. Reference link: Delete Entity (REST API) – Azure Storage | Microsoft Learn
Syntax URL:
curl -X DELETE "https://storageaccount.table.core.windows.net/shswadsampletable1(PartitionKey%3D'B'%2C%20RowKey%3D'1')?<SAS_token>" -H "x-ms-date:2024-02-24T11:14:50.2646880Z" -H "x-ms-version:2020-04-08" -H "If-Match:*"
Output:
Insert Operation:
This Rest API inserts a new entity into a table. Reference link: Insert Entity (REST API) – Azure Storage | Microsoft Learn
Syntax URL:
curl -X POST "https://storageaccount.table.core.windows.net/test?<SAS_token>" -H "x-ms-date: 2024-02-25T10:39:50.2646880Z" -H "x-ms-version: 2020-04-08" -H "Accept: application/json;odata=nometadata" -H "Content-Type: application/json" -d "{\"RowKey\":\"bbb\",\"PartitionKey\":\"ssss\",\"Name\":\"aaa\",\"PhoneNumber\":\"111\"}"
Output:
Update operation:
This Rest API updates the existing entity in the Table Storage. Reference link: Update Entity (REST API) – Azure Storage | Microsoft Learn
Syntax URL:
curl -X PUT "https://storageaccount.table.core.windows.net/test(PartitionKey%3D'ssss'%2C%20RowKey%3D'bbb')?<SAS_token>" -H "x-ms-date: 2024-02-25T10:39:50.2646880Z" -H "x-ms-version: 2020-04-08" -H "Accept: application/json;odata=nometadata" -H "Content-Type: application/json" -H "If-Match: *" -d "{\"RowKey\":\"bbb\",\"PartitionKey\":\"ssss\",\"Name\":\"aaa\",\"PhoneNumber\":\"111\"}"
Output
Merge operation:
This Rest API operation updates an existing entity by updating the entity’s properties; it does not replace the existing entity. Note that the Merge Entity operation uses the MERGE HTTP verb. Reference link: Merge Entity (REST API) – Azure Storage | Microsoft Learn
Syntax URL:
curl -X MERGE "https://storageaccount.table.core.windows.net/test(PartitionKey%3D'ssss'%2C%20RowKey%3D'bbb')?<SAS_token>" -H "x-ms-version: 2020-04-08" -H "Accept: application/json;odata=nometadata" -H "Content-Type: application/json" -H "If-Match: *" -d "{\"RowKey\":\"bbb\",\"PartitionKey\":\"ssss\",\"Name\":\"aaa\",\"PhoneNumber\":\"11231\"}"
Output:
We hope this article helps you perform Table Storage operations using the curl command.
Happy Learning!
Microsoft Tech Community – Latest Blogs
rtwbuild doesn’t generate modelsources.txt in MATLAB 2023
I have used rtwbuild in MATLAB 2020a and it generated modelsources.txt.
MATLAB 2023a doesn’t generate this file. Please help me to solve it.
2023a, simulink MATLAB Answers — New Questions
RMS Analysis on BIN File in Simulink
I am trying to import a BIN file into Simulink and analyze it for RMS. I currently have the sine wave, RMS, and scope blocks, and I can generate a sample sine wave, but I have no way of importing my BIN files. They are large, and I attempted using the Binary File Reader block, and while it started creating a sine wave, it was taking too long to run. How do I make this process more efficient?
simulink, binary, statistics MATLAB Answers — New Questions