Category: News
Calendar synchronisation issue between Google Calendar and Outlook desktop (Mac)
Hello. Any new entry in my Google Calendar doesn't update my Mac Outlook, whereas the opposite direction works.
Can anybody help?
How to apply a NonNegative constraint in an ODE solver when defining the ODE as a structure
I am trying to perform a local sensitivity analysis on the parameters of my model using odeSensitivity, taking help from this page: https://in.mathworks.com/help/matlab/ref/odesensitivity.html. The problem is that when I used to code the ODE the simple way, I put non-negativity under the options, but now I am not able to do that. I tried setting F.NonNegative = ones(1, n_parameters), but that doesn't seem to work.
MATLAB Onramp course blank screen
I'm trying to do my MATLAB Onramp course, but when I try to enter commands the screen is completely blank and I can't do anything.
Plot contours from counts of a scatter plot
Hi everyone, I'm fairly new to MATLAB and I'm not sure how to obtain what I want.
I made this graphic where I'm trying to match some data with my simulation. The problem is that I'm simulating the same number of points as in the data set (about 100), but that has some statistical problems. So I wanted to simulate more points (about 1000) and use contour lines to show where I have different densities of simulated points. This is where I ran into trouble.
I looked into the contour(Z) function, but I'm not sure how to build the Z matrix. I have two vectors, x and y, with the coordinates of my simulated points (and another set of two vectors for the data).
I was thinking about using something like this:
Z = histcounts2(x, y, 'BinWidth', [n, n], 'XBinLimits', [x1 x2], 'YBinLimits', [y1 y2])
But I'm not sure which BinWidth to use to obtain what I want, or even if it's the best way to go about it.
Any pointers would be really appreciated,
Thanks to all.
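One possible approach (a minimal sketch; the number of bins and the variable names for the measured data are assumptions, not from the question) is to bin the simulated points with histcounts2 and then draw contour lines of the counts at the bin centres:
nbins = 25;                                   % assumed number of bins per axis
[Z, xedges, yedges] = histcounts2(x, y, nbins);
xc = (xedges(1:end-1) + xedges(2:end)) / 2;   % bin centres along x
yc = (yedges(1:end-1) + yedges(2:end)) / 2;   % bin centres along y
contour(xc, yc, Z.')                          % transpose so rows of Z follow y
hold on
scatter(x_data, y_data, 10, 'filled')         % overlay the measured data (assumed variable names)
hold off
A larger BinWidth (or fewer bins) smooths the contours; a smaller one shows more detail but needs more simulated points per bin.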
Amplitude of white noise in Simulink (MATLAB)
I want to add colored noise to the step signal. When I use white noise inside Simulink, it has an amplitude of 1/-1; I want to reduce this amplitude, like the pink noise, which has an amplitude of 0.05/-0.05.
white noise
pink noise
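A rough sketch of the scaling idea (the sample time and signal length below are assumptions, not from the model): multiplying the noise by 0.05 reduces its standard deviation from 1 to about 0.05, and in Simulink the equivalent is a Gain block of 0.05 placed after the noise source, or a correspondingly smaller noise power in the block:
Ts    = 0.01;                  % assumed sample time
n     = 1000;                  % assumed number of samples
u     = ones(n, 1);            % step signal
noise = 0.05 * randn(n, 1);    % white noise scaled down to a standard deviation of ~0.05
y     = u + noise;             % step with small additive noise
plot((0:n-1) * Ts, y)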
How to Activate Workplace Benefits Program (formerly Home Use)
We are recipients of a Microsoft 365 grant for nonprofits.
According to this web page, we should also have access to the Workplace Benefits Program, which gives our employees the option to subscribe to a personal version of Microsoft 365 at a 30% discount. I used to work for a different non-profit that received the Microsoft 365 grant for non-profits and I obtained my discounted Microsoft 365 subscription through the program.
When my organization’s employees at this non-profit (where I work now) try to obtain the benefit, they get an email (shown below) stating that Microsoft cannot offer the workplace benefits discount to my organization’s employees. It directs the employees to contact their HR or IT Admin (I’m the IT Admin, but I don’t know what’s wrong).
Has this happened to anyone else? Does anyone know who I can contact at Microsoft to try to get this straightened out?
Thanks,
Scott
How do I prevent the DocuSign sender (DocuSign account) from getting the signing request?
Hello All,
I have a Power Automate flow that should be doing the following:
- When a signature is required for a document, the flow should connect to DocuSign and send a signing request to the "Department approver" (note: there is a department list where each department has a department approver).
- So when the uploaded document's department is "Legal" (for example), the approver specified for this department will get a DocuSign email to sign the document.
- Once signed, the signed document will be published in the library.
I have got the above flow working. But the problem is this:
- The connection to DocuSign is in the name of User A, but the person who should sign the document should only be the department approver (say, User B).
- When I test the flow, DocuSign sends two emails (to both User A, who is the DocuSign account login user, and User B, who is the department approver). User A should not be getting the signing request. Only the department approver (recipient) should be signing the document.
Can you help me understand how and where I can tweak either the template in DocuSign or Power Automate, so that the sender does not sign the document and only the recipient (in this case the approver) receives the signing request?
Here is the template I have created in DocuSign (signing in as User A). The name and email I had given here are those of the user I am logged in to DocuSign with.
The person who should be signing the document is the department approver (that's specified in the "department approvers" column in the list below).
This is the section of the flow that asks if e-sign is required and, if so, creates an envelope with recipients.
Below is a more drilled-down view of the above flow steps, where I declare a variable to capture the department approver (from the department list I screenshotted above), then create an envelope using the template I created in DocuSign. Then I set a variable to capture the envelope ID. Then I update the properties of the uploaded document, setting the status to "pending signature" and the envelope ID column to the envelope ID variable.
Then there is another flow that looks for changes in an envelope's status, gets the document from the envelope, and then publishes the document to the library with a signed status.
Now when I execute these flows, they work. But DocuSign sends the signing request to both the sender (User A, who is the DocuSign account user) and the department approver (which I declared in a variable in the flow). Only the department approver should get the email, not the sender.
Please advise what changes I should make.
Data not showing in the column
Hello! On my SharePoint list, the data is not showing for a few columns. When I click on each item, I can see that data was entered, but I just don't know why it's not showing up.
Adding a comma after a group of text
Good morning, I was looking to add a comma and a space after every 4 characters.
For example,
qwerasdfzxcv ——> qwer, asdf, zxcv
I've been able to add just one comma, after "qwer", but not multiple, using
=REPLACE(A1,5,0,", ")
I'm sure I can probably tie in the same formula multiple times, but I have no clue how.
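One possible approach (a sketch that assumes an Excel version with dynamic-array functions such as SEQUENCE and TEXTJOIN) is to split the text into 4-character chunks with MID and rejoin them with a comma and a space:
=TEXTJOIN(", ", TRUE, MID(A1, SEQUENCE(ROUNDUP(LEN(A1)/4, 0), 1, 1, 4), 4))
For qwerasdfzxcv in A1, this returns qwer, asdf, zxcv.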
Super master
name”:”آدرس ایمیل به دلایل حفظ حریم خصوصی حذف شد”,”globalState”:”{“storage”:{“workbench.panel.markers.hidden”:”[{\”id “:”workbench.panel.markers.view\”,\”isHidden\”:false}]”,”workbench.panel.output.hidden”:”[{ \”id”:”workbench.panel.output\”,”isHidden\”:false}]”,”terminal.hidden”:” [{\”id\”:\”terminal\”,”isHidden\”:false}]””workbench.scm.views.state.hidden “:”[{\”id\”:”workbench.scm.\”,\” isHidden\ “:true}،{\”id \”:\”workbench.. scm\”,\”isHidden\”:false},{\”id\”:”workbench.scm.sync \”,”isHidden\”:false}]”,”workbench.view.search.state.hidden”:”[{\”id”:\”:\”workbench.view.search\”,”isHidden\”:false}]”,”workbench.explorer.views.state.hidden”:”[{\” id\”:”طرح کلی\”,”isHidden\”:false},{\”id\”:”زمان خط\”, \”isHidden\”:false},{\”id\”:\” workbench.explorer.openEditorsView\”,\”isHidden\” :true} ,{\\”id\”:\”workbench.explorer.emptyView\”,”isHidden\”:false}]”,”workbench.activity.pinnedViewlets2 “:”[{\”id\”:”workbench.view.explorer\”,\”pinned\”:true,\”visible\” “:true,\”order\”:0},{\”id\”:”workbench.view.search\”,”پین شده\” “:true,”visible\”:true,\”order”:1},{\”id\”:”workbench.view.scm \”,”پین شده\”:true,\” visible\”:true,\”order\”:2},{\”id\” “:”workbench.view.debug\ “,\”پین شده\”:true,\”visible\”:true,\”order\”: 3},{\\”id\”:”workbench.view.extensions\”,”pinned\”:true,\”visible\”: true، \”order\”:4},{\”id”:”workbench.view.remote\”,\”pinned\”: true، \”visible\”:true, \”order”:4},{\”id\”:”workbench.view.extension.test \”,\”پین شده”:true,\”visible\”:false,\”order\”:6},{\\ “id\” “:”workbench.view.extension.references-view\”,”pinned”:true,\” visible\”:false,\”order” \”:7}،{\”id\”:”workbench.. panel.chatSidebar\”,”pinned\”:true,\”visible” \”:false,\”order\”:100},{\”id”:”userDataProfiles\”,”پین شده”:true},{\”visible\”:true},{\”id\”:” workbench.view.editSessions\”,”پین شده\”:true, \”visible\”:false},{\”id\”:” workbench.view.sync\”,”pinned\ “:true,\”visible\”:false}]”,”اخیرا باز شده”:”{\”ورودی ها\”:[{“فضای کاری\”:{\\ “id\”:\”-2ad0bbb\”,”configPath\ “:\”tmp:/default.code-workspace\”}}]}”,”workbench.view.debug.state.hidden”:”[{\”id \”:”workbench.debug.welcome\”,”isHidden\”:false},{\”id\”:\”workbench.debug. variablesView\”,”isHidden\”:false},{\”id\”:”workbench.debug.watchExpressionsView\”,”isHidden” \”:false},{\ “id\”:”workbench.. debug.callStackView\”,”isHidden\”:false},{\” id\”:\\” میز کار. debug.loadedScriptsView\”,”isHidden\”:false}،{\”id\”:\”میز کار. 
debug.breakPointsView\”,”isHidden\”:false}]”,”workbench.telemetryOptOutShown”:”true”,”workbench.statusbar.hidden”:” [\”status.workspaceTrust.-2ad0bbb\”]”,”workbench.view.remote.state.hidden”:”[{\”id\”:\ “remoteHub.views.workspaceRepositories\”,” isHidden\”:false},{\”id\”:”remoteTargets\”,\”isHidden \”:false}]”” workbench.panel.pinnedPanels”:”[{\”id\”:\”workbench.panel.markers\”, \”name\”:\”مشکلات\”,”پین شده\”:true،\”سفارش\”:0، \”قابل مشاهده\ “:true}،{\”id\”:”workbench.panel.output\”,”name\”:”خروجی\”:true,\”سفارش”:true,”سفارش”:1, \”visible\”:true},{\”id\”: \”workbench.panel.repl\”,\\”name\”:”Debug Console\”,”pinned\”:true,\” order\”:2,”visible\”:false},{\”id\”:”workbench.panel.testResults\”,” name\”:\”نتایج تست”,”پین شده\”:true,\” ترتیب\”:3, “visible\” :false},{\”id\”:”terminal\”,”name\”:”Terminal\”,”پین شد \”:true،\”order”:3، \”visible\”:true}،{\”id\”:\” refactorPreview\ “”name\”:”Refactor Preview\”,”pinned\”:true,\”visible\ “:false}] “,”workbench.welcomePage.walkthroughMtadata”:”[[\”ms-vscode.remote-repositories#remoteRepositoriesWalkthrough\”,{“firstSeen\”:1695519464371,\”stepIDs\”:[ \”editCommitRepo\”,”createGitHubPullRequest\”,”continueOn\”,”openRepo\”,”RemoteIndicator\”] ,”manaullyOpened\”:false}]]”,”themeUpdatedNotificationShown “:”true”,”colorThemeData”:”{“id”: \”vs-dark vscode-theme-defaults-themes-dark_ modern-json\”,\\”label”:”Dark Modern\”,\”settingsId\” “:\”پیش فرض Dark Modern\”,”themeTokenColors\”:[{\\”settings”:{\”پیش زمینه\”:\”#D4D4D4\”}، \”scope\”:[\”meta.embedded\”,\”source.groovy.embedded\”, “string meta.image.inline.markdown\”,”variable.legacy.builtin.python\”]}،{\”settings\”:{\”fontStyle” \”: “مورب\”}، \”محدوده\”:”تأکید\”}،{\”تنظیمات\”:{\ “fontStyle\”:\”bold\”}، \” scope\”:\”stron
How and where to download the new Windows 11 24H2 ISO
I installed the Windows 11 ISO from the Insider channel about a month ago in the preview version. I know a new preview version has been released, but the link where I downloaded the previous one doesn’t show it. Where can I download the new Windows 11 24H2 preview ISO? I am part of the Insider program.
New on Azure Marketplace: August 12-17, 2024
We continue to expand the Azure Marketplace ecosystem. For this volume, 143 new offers successfully met the onboarding criteria and went live. See details of the new offers below:
Get it now in our marketplace
biGENIUS-X: biGENIUS-X from biGENIUS AG is an automated data transformation tool that features a modern graphical user interface, advanced automation, and modeling wizards to quickly build or rearchitect data solutions. It supports customization and parallel development with Git.
Boardflare premium: Boardflare’s add-ins for Microsoft Excel offer AI functions like translation, fuzzy matching, and sentiment analysis. The base versions are free, with premium models available via subscription. Subscriptions are limited to Microsoft work or school accounts in certain countries.
Bocada Cloud – Standard Plan: Bocada Cloud is a SaaS platform for monitoring backups across your IT environment. Bocada Cloud centralizes data protection and offers automated data collection, reporting, and alerting with API-based connectors for accuracy.
CABEM Competency Manager Lite: Competency Manager by CABEM streamlines competency tracking across departments and locations with an evergreen skills matrix and continual audit readiness. It ensures efficient onboarding and reporting, and it integrates with learning management systems.
CentOS Stream 10 on Azure x86_64: This offer from Ntegral provides CentOS Stream 10 on a Microsoft Azure virtual machine. CentOS Stream 10 offers a stable, scalable environment ideal for developers and enterprises. Positioned upstream of Red Hat Enterprise Linux (RHEL), it provides early access to future RHEL releases.
Cien.ai Go-To-Market Suite: Cien’s offer helps B2B go-to-market teams by standardizing and enhancing customer relationship management data for AI-powered apps and analytics. Dashboards and heatmap analysis are two components of the plan. This offer is ideal for executives, revenue operations professionals, and management consultants seeking improved revenue insights.
Connected Sanctions & PEP Verification for AML Compliance Management: CORIZANCE’s platform offers AI-powered, real-time tracking of watch lists, sanctions (such as from the United Nations), and more to manage anti-money laundering and financial crime risks. Ensure compliance, enhance stakeholder confidence, and boost brand value with enhanced risk detection.
Crossware Email Signature: Crossware Email Signature for Office 365 offers businesses a centralized solution for managing consistent and compliant email signatures, disclaimers, and branding. It features a web-based signature designer, an advanced rule builder, and campaign management tools.
DataSphere Optimize: DataSphere Optimizer from Spektra Systems enhances data processing efficiency with advanced algorithms and real-time analytics. It features scalable infrastructure, a user-friendly interface, comprehensive security, and automated workflow management.
Elgg v6.0.2 on Ubuntu v20: This offer from Anarion Technologies provides Elgg with Ubuntu on a Microsoft Azure virtual machine. Elgg is an open-source social networking engine for creating custom social networking sites and online communities. Its flexibility and extensibility through plug-ins and a powerful API make it suitable for small-scale and large-scale social networks alike.
Gain Control of Cloud Expenses with Sirius: The Sirius cloud cost management platform from SGA, part of the FCamara Group, delivers continuous insights and recommendations to improve the visibility of your cloud spending and restore your sense of control over your budget. This offer is available only in Portuguese.
Helpdesk 365 – Enterprise: Helpdesk 365 from Apps 365 is a customizable ticketing system for SharePoint and Microsoft 365. It supports IT, HR, and finance requests, and it includes automation, chatbots, and multi-language support.
HyperData Flow Engine: HyperData Flow Engine from Spektra Systems optimizes data flows in complex systems. It features real-time data streaming, dynamic workload distribution, scalable architecture, advanced analytics, and automated workflow management. Ensure data security and compliance with HyperData Flow Engine.
Iostat on Debian 11: This offer from Apps4Rent provides Iostat along with Debian 11 on a Microsoft Azure virtual machine. Iostat is used for real-time system monitoring, oversight of input and output devices, and comprehensive performance metrics.
Iostat on Oracle Linux 8.8: This offer from Apps4Rent provides Iostat along with Oracle Linux 8.8 on a Microsoft Azure virtual machine. Iostat is used for real-time system monitoring, oversight of input and output devices, and comprehensive performance metrics.
Iostat on Ubuntu 20.04 LTS: This offer from Apps4Rent provides Iostat along with Ubuntu 20.04 LTS on a Microsoft Azure virtual machine. Iostat is used for real-time system monitoring, oversight of input and output devices, and comprehensive performance metrics.
Iostat on Ubuntu 22.04 LTS: This offer from Apps4Rent provides Iostat along with Ubuntu 22.04 LTS on a Microsoft Azure virtual machine. Iostat is used for real-time system monitoring, oversight of input and output devices, and comprehensive performance metrics.
Iostat on Ubuntu 24.04 LTS: This offer from Apps4Rent provides Iostat along with Ubuntu 24.04 LTS on a Microsoft Azure virtual machine. Iostat is used for real-time system monitoring, oversight of input and output devices, and comprehensive performance metrics.
Kylin secured and supported by Hossted: Hossted offers a repackaged Kylin deployment with instant setup, robust security, and a control dashboard. It includes continuous security scans and round-the-clock premium support.
Lucid Data Hub Enterprise Application: Lucid Data Hub is a generative AI platform that automates data integration and analytics for complex ERP systems like SAP, Oracle, and Microsoft Dynamics. It simplifies data management, enhances data quality, and accelerates insights, benefiting data engineers, analysts, scientists, business intelligence teams, and IT managers by addressing integration, preparation, and scalability challenges.
Palantir AIP: Palantir AIP is a secure platform for integrating AI into enterprise decision-making. It features an intuitive workflow builder for AI apps, end-to-end evaluation tools for production readiness, and an ontology SDK for development. The AIP Now repository offers prebuilt AI applications and examples for accelerated development.
Prefect Cloud: Prefect is a data workflow orchestration platform that helps developers build, observe, and react to data pipelines. It offers real-time visibility, identifies bottlenecks, and ensures performance. Automating over 200 million tasks monthly, Prefect enables faster, resilient code deployment, empowering companies to leverage data for a competitive edge.
Quantum Compute Core: Quantum Compute Core from Spektra Systems is a cutting-edge platform utilizing quantum computing for greater speed and advanced algorithms. It ensures data security with quantum-resistant encryption and features a user-friendly interface for real-time processing and decision-making.
Red Hat Enterprise Linux 8 (8.10 LVM) DISA STIG Benchmarks: This offer from Madarson IT provides a Red Hat Enterprise Linux 8 image preconfigured for compliance with the Defense Information Systems Agency’s Security Technical Implementation Guides (STIG). It ensures adherence to stringent security standards, mitigates vulnerabilities, and reduces cyber threats.
Red Hat Enterprise Linux 8 HIPAA (8.10 LVM): This offer from Madarson IT provides a Red Hat Enterprise Linux 8 image preconfigured for compliance with the Health Insurance Portability and Accountability Act (HIPAA). HIPAA establishes national standards for the protection of certain health information and mandates measures for safeguarding electronic health records.
Red Hat Enterprise Linux 8 NIST 800-171 (8.10 LVM) Benchmarks: This offer from Madarson IT provides a Red Hat Enterprise Linux 8 image preconfigured for compliance with NIST 800-171 standards, which protect controlled unclassified information in non-federal systems and organizations.
Red Hat Enterprise Linux 8 PCI DSS (8.10 LVM): This offer from Madarson IT provides a Red Hat Enterprise Linux 8 image preconfigured for compliance with PCI DSS, which concerns the protection of payment data. Madarson IT ensures images are up to date, secure, and ready to use, fostering trust and reducing vulnerabilities.
Salesbuildr: Salesbuildr is a sales and revenue operations platform for managed service providers using Autotask PSA, ConnectWise PSA, or Microsoft Dynamics 365. Ideal for midsized and large organizations, it standardizes products, creates competitive proposals, enables branded e-commerce storefronts, and identifies upsell and cross-sell opportunities.
Sanic on Debian 11: This offer from Apps4Rent provides Sanic along with Debian 11 on a Microsoft Azure virtual machine. Sanic is an asynchronous web framework for building scalable applications. It offers high performance, comprehensive routing, and middleware support.
Sanic on Ubuntu 20.04 LTS: This offer from Apps4Rent provides Sanic along with Ubuntu 20.04 LTS on a Microsoft Azure virtual machine. Sanic is an asynchronous web framework for building scalable applications. It offers high performance, comprehensive routing, and middleware support.
Sanic on Ubuntu 22.04 LTS: This offer from Apps4Rent provides Sanic along with Ubuntu 22.04 LTS on a Microsoft Azure virtual machine. Sanic is an asynchronous web framework for building scalable applications. It offers high performance, comprehensive routing, and middleware support.
Sanic on Ubuntu 24.04 LTS: This offer from Apps4Rent provides Sanic along with Ubuntu 24.04 LTS on a Microsoft Azure virtual machine. Sanic is an asynchronous web framework for building scalable applications. It offers high performance, comprehensive routing, and middleware support.
Sensors-as-a-Service: Sensors-as-a-Service by Ectron Corporation offers seamless integration of more than 32,000 industrial sensors with Microsoft-hosted analytics. It monitors machine functionality, energy usage, product quality, and other key performance indicators. Eliminate human error and enhance your operational efficiency with AI and machine learning.
SlashNext Cloud Email: SlashNext Integrated Cloud Email Security for Microsoft 365 combines AI, natural language processing, and computer vision for real-time threat detection. Use it to protect against business email compromise, account takeovers, spear phishing, and more. Setup is easy via the Microsoft Graph API, and it integrates with Microsoft Sentinel.
Ubuntu 20.04 with Apache Subversion (SVN) Server: This offer from Virtual Pulse S. R. O. provides Apache Subversion along with Ubuntu 20.04 on a Microsoft Azure virtual machine. Apache Subversion is a full-featured version control system for source code, web pages, documentation, and more.
Ubuntu 20.04 with GNOME Desktop: This offer from Nuvemnest provides GNOME along with Ubuntu 20.04 on a Microsoft Azure virtual machine. GNOME (GNU Network Object Model Environment) is an open-source desktop environment for Unix-like operating systems. You can use GNOME with Remote Desktop Protocol to access Linux desktops from a Windows machine.
Ubuntu 22.04 with GNOME Desktop: This offer from Nuvemnest provides GNOME along with Ubuntu 22.04 on a Microsoft Azure virtual machine. GNOME (GNU Network Object Model Environment) is an open-source desktop environment for Unix-like operating systems. You can use GNOME with Remote Desktop Protocol to access Linux desktops from a Windows machine.
Ubuntu 24.04 with GNOME Desktop: This offer from Nuvemnest provides GNOME along with Ubuntu 24.04 on a Microsoft Azure virtual machine. GNOME (GNU Network Object Model Environment) is an open-source desktop environment for Unix-like operating systems. You can use GNOME with Remote Desktop Protocol to access Linux desktops from a Windows machine.
Ubuntu Pro with 24×7 Support: Ubuntu Pro enhances Ubuntu Server LTS with advanced security, compliance features, and system management tools. It includes continual support, expanded security maintenance, kernel live patching, and automated security and compliance tasks. It’s ideal for production environments.
Websoft9 Applications Hosting Platform for ArangoDB: This preconfigured image offered by VMLab, an authorized reseller for Websoft9, provides ArangoDB 3.11 along with Docker and a cloud-native InfluxDB runtime on the Websoft9 Applications Hosting Platform. ArangoDB is a scalable graph database system.
Websoft9 Applications Hosting Platform for Bytebase: This preconfigured image offered by VMLab, an authorized reseller for Websoft9, provides Bytebase 2.17 along with Docker on the Websoft9 Applications Hosting Platform. Bytebase is a database CI/CD solution for developers and database administrators.
Websoft9 Applications Hosting Platform for InfluxDB: This preconfigured image offered by VMLab, an authorized reseller for Websoft9, provides InfluxDB along with Docker on the Websoft9 Applications Hosting Platform. InfluxDB is a popular open-source database for developers managing time-series data. Unlock real-time insights from time-series data at any scale in the cloud, on-premises, or at the edge.
Websoft9 Applications Hosting Platform for Redash: This preconfigured image offered by VMLab, an authorized reseller for Websoft9, provides Redash 10.1 along with Docker on the Websoft9 Applications Hosting Platform. Connect Redash to any data source (such as PostgreSQL, MySQL, Redshift, BigQuery, or MongoDB) to query, visualize, and share your data.
WizarD Core: WizarD from Systech Solutions empowers businesses with conversational AI, enabling natural language access to enterprise data warehouses. It utilizes generative AI and trained models to understand user intent and retrieve data directly from Snowflake. This tool provides business users and analysts direct access to data and insights, reducing IT dependency.
WizarD Doc Pro: The WizarD Document Processing Engine from Systech Solutions allows users to upload large text or PDF documents, perform optical character recognition on PDFs with images and charts, automatically index files for intelligent search, and interact with the data via a chat interface.
Zero Code AI Platform (AIPaaS): The AIPaaS platform from UCBOS offers no-code AI model building with features like data preparation, real-time scoring, and hyperparameter tuning. It supports predictive analytics, natural language processing, and computer vision.
Zero Code Application Composition Platform (aPaaS): aPaaS from UCBOS is a no-code application composition platform that enables rapid app development using a drag-and-drop builder, AI execution engine, and built-in integration tools. It supports mobile devices and the cloud, offers extensive customization, and ensures security and compliance.
Zero Code Enterprise-Centric Supply Chain Solutions (SCMPaaS): SCMPaaS from UCBOS offers no-code supply chain solutions to boost IT innovation, vendor independence, and business agility. Improve supply chain planning, foster supplier collaboration, and streamline procurement systems and logistics management.
Zero Code Semantic Integration & Orchestration Platform (iPaaS): The iPaaS platform from UCBOS functions as intelligent middleware to connect your disparate data sources, augment your enterprise systems with real-time analytics, and orchestrate advanced business process engines.
Go further with workshops, proofs of concept, and implementations
Altron’s Data Estate Modernization: Unlock your data’s potential with this offer from Altron Digital Business. Using Microsoft Azure services, Altron Digital Business will study your on-premises data estate, then design and build custom architecture for a modern analytics platform. After the deployment, Altron Digital Business will provide six months of support.
Application Migration to Microsoft Entra ID: 12-Week Implementation: This service from Modern Methodologies aids medium-size to large enterprises in migrating their identity provider to Microsoft Entra ID. Modern Methodologies will focus on SSO infrastructure migration, security, and compliance, with optional technical support for application modifications.
Assortment Intelligence Implementation: Sigmoid will implement a suite of assortment planning solutions so you can optimize your product mix, enhance inventory management, and boost sales. Sigmoid will use Azure Data Lake Storage to supply the storage layer and Azure Data Factory to orchestrate data integration pipelines. Microsoft Purview will be used for unified data governance, and Azure Machine Learning will integrate and analyze large data sets.
Azure Virtual Desktop Design and Deployment: Using Azure Virtual Desktop, The Partner Masters will build a virtual desktop infrastructure solution that enables remote work and meets your specific business needs. The Partner Masters will supply a design and configuration guide, along with a documented plan of how to get your team trained and certified to maintain the solution.
CMMC Workshop: 2-Hour Discovery Workshop: Coretek’s workshop for federal defense contractors will address Cybersecurity Maturity Model Certification 2.0 compliance. Coretek will review Microsoft’s proven architecture and multiple approaches to CMMC readiness, including solutions using Microsoft Azure and Microsoft 365 licensing options, and determine GCC or GCC-High requirements.
Consulting Service on Belake.ai: Dataside Solucoes em Dados LTDA will help clients integrate Belake.ai with Microsoft Azure and tools such as Azure OpenAI and Azure Cognitive Search. Belake.ai uses generative AI to convert natural language questions into detailed charts and visualizations. Specialized support will be offered for integrating Microsoft Power BI Embedded.
CoreConversations: Core BTS will implement its CoreConversations AI tool, which deploys proprietary conversational agents to unlock your company’s data potential. Simply pose questions to the AI and receive immediate, data-driven answers that facilitate smarter decision-making and process optimization.
Easy Migration Azure: Cloud Continuity will implement Easy Migration, a solution to smoothly migrate your applications and data to Microsoft Azure. Benefits include adaptable infrastructure, heightened security, management simplification, and potential savings of up to 50 percent through greater efficiency. This service is available only in Spanish.
Fabric Copilot: 1-Day Workshop: When using the Copilot capabilities of Microsoft Fabric, it’s essential to ensure that your semantic model follows best practices for modeling. In this workshop, iLink Systems will take one of your reports and review the AI features that are applicable to you. You’ll learn how to utilize the out-of-the-box AI visuals for Microsoft Power BI and how to update semantic models for optimal use with Copilot.
Fabric Accelerator: 4-Day Workshop: This hands-on workshop from HSO will give you a comprehensive understanding of how a data and analytics solution within Microsoft Fabric can benefit your organization. An action plan will ensure that all participants can effectively apply the workshop learnings. A high-level action plan for implementing Microsoft Fabric will also be drafted.
Federation Service: In this engagement, Avanade will implement flexible microservices that work with existing integration systems, such as Azure API Management, MuleSoft API Management, Microsoft Fabric, and Microsoft Azure Data Fabric.
Fortress Security Solution: Fortress-G from KAMIND IT is a comprehensive managed security offering in which KAMIND IT will set up your Microsoft environments and establish the correct security posture to defend your assets. This will include CMMC Level 2 and NIST-800-171 compliance, mobile device management, and more.
Nagarro’s XPerience360: Centralized Low-Code MDM Solution: Nagarro will implement its XPerience360 Platform so your company can consolidate data from various sources to create a unified customer profile using Microsoft Azure Synapse Analytics. The XPerience360 Platform enhances data quality, decision-making, and analytics through features like data deduplication, segmentation, and low-code development.
Planogram Assortment Optimization: 8-Week Implementation: Sigmoid will implement Microsoft tools, including Azure Data Factory, Azure Machine Learning, and Microsoft Power BI, to optimize store-specific assortments and planograms. This can increase sales, reduce inventory costs, and streamline your planning.
UST Insight for Microsoft Fabric: UST’s workshop will help businesses integrate Microsoft Fabric into their data strategy, enhancing efficiency and innovation. This four-hour session will include strategic insights, live demos, and tailored use cases. It’s intended for data-driven enterprises in finance, healthcare, retail, and manufacturing.
Contact our partners
CABIE: The Super Slick Customs Process
Click Armor Enterprise Security Awareness Training
Composable Architecture Platform (CAP)
Copilot Studio in a Day Workshop
Copilot User Empowerment Training
Cysana Malware Detector and Ransomware Blocker
Data Strategy: 2-Week Assessment
Devart ODBC Driver for Mailjet
Devart ODBC Driver for NexusDB
Devart ODBC Driver for QuestDB
Devart ODBC Driver for SendGrid
Devart ODBC Driver for ServiceNow
Devart ODBC Driver for ShipStation
Devart ODBC Driver for Shopify
Endpoint Privilege Manager miniOrange
Entra ID Connector for IntelliTime (Contact Me Offer)
Evolution CMS v3.1.27 on Ubuntu v20
Infosys Cobalt Cloud FinOps Assessment
KUARIO Personalized Payment for Self-Services
Lubyc for Employee Personal Business Profile
Lubyc for Employee Professional Profile
Managed Services for Microsoft Azure
Microsoft Azure Cloud Migration: 6-Week Assessment
Microsoft Entra ID Conditional Access Framework Review
MIM Migration to Entra ID Assessment
On Power BI Dashboard in a Day Workshop
Penetration Testing: 4-Week Assessment
Pico Manufacturing Process Error Proofing Platform
Power Platform Training: Automate Flow in a Day
ScriptString.AI – Utility Data Management
SFTP Gateway Enterprise Solution
Squid Proxy Server on AlmaLinux 8
Stacknexus for Microsoft Outlook
Syntho: AI-Generated Synthetic Data Platform
Tokiota Cloud Managed Service SSGG
Ubuntu 18.04 with Extended Lifecycle Support
Ubuntu 24.04 with Apache Subversion (SVN) Server
UNIFYSecure Managed Security Service for XDR MDR SOC
This content was generated by Microsoft Azure OpenAI and then revised by human editors.
How can I save my figure to eps AND keep white margins (from my defined figure and axes position)?
I am producing multiple figures which I then need to vertically align in LaTeX. I set my figure and axes positions like this:
% Set figure total dimension
set(gcf,'Units','centimeters')
set(gcf,'Position',[0 0 4.5 5.8])
% Set size and position of axes plotting area within figure dimensions. To
% keep vertical axes aligned for multiple figures keep the horizontal
% position consistent
set(gca,'Units','centimeters')
set(gca,'Position',[1.5 1.3 2.85 4])
Some of my figures have a ylabel and some don't, which, thanks to the set positions above, does not affect the format of the figure. However, when I save my figure to EPS using
saveas(gcf,'filename','epsc')
the EPS file is saved with the tightest fit, ignoring my set positions. How can I get it to save while preserving my formatting?
I’ve tried saving to .png but the quality is massively reduced (even when using the package export_fig). Is there a simple solution?
I am on macOS.
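One thing that may help (a sketch, not a verified fix) is to set the figure's PaperPosition to the same size as the figure and export with print's -loose option, which keeps the loose bounding box instead of cropping to the tightest fit:
set(gcf, 'PaperUnits', 'centimeters')
set(gcf, 'PaperPosition', [0 0 4.5 5.8])     % match the figure size set above
print(gcf, 'filename', '-depsc', '-loose')   % '-loose' keeps the loose bounding box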
License Manager Error -9 Your username does not match the username in the license file.
License checkout failed.
License Manager Error -9
Your username does not match the username in the license file.
To run on this computer, you must run the Activation client to reactivate your license.
Troubleshoot this issue by visiting:
https://www.mathworks.com/support/lme/R2019b/9
Diagnostic Information:
Feature: MATLAB
License path: /home/alex/.matlab/R2019b_licenses:/usr/local/MATLAB/R2019b/licenses/license.dat:/usr/local/MATLAB/R2019b/licenses/license_thinkpad-p73_40871338_R2019b.lic
Licensing error: -9,57.
I am a student and I want to install MATLAB under Ubuntu, but I always get this problem. Why is MATLAB so unfriendly to users?
How can I integrate an RTOS into STM32F407xx automatic code generation?
Hi, I'm trying to integrate real-time operating system capability for the STM32F407VG Discovery board target via automatic code generation from a Simulink model.
I found "ST Discovery Board Support from Embedded Coder" that is based on STM32 Standard Peripheral Libraries, but I use the Simulink+STM32CubeMx+Keil5 toolchain based on STM32 HAL libraries.
I know that STM32CubeMX supports FreeRTOS packages. How can I integrate FreeRTOS into my automatic code generation?
I found "ST Discovery Board Support from Embedded Coder" that is based on STM32 Standard Peripheral Libraries, but I use the Simulink+STM32CubeMx+Keil5 toolchain based on STM32 HAL libraries.
I know that STM32CubeMx supports freeRTOS packages. How can I integrate freeRTOS in my automatic code generation? Hi, I’m trying to integrate the Real Time Operating System capability for the Stm32f407vg discovery board Target by automatic generation code of a simulink model.
I found "ST Discovery Board Support from Embedded Coder" that is based on STM32 Standard Peripheral Libraries, but I use the Simulink+STM32CubeMx+Keil5 toolchain based on STM32 HAL libraries.
I know that STM32CubeMx supports freeRTOS packages. How can I integrate freeRTOS in my automatic code generation? st, stm32cubemx, keil MATLAB Answers — New Questions
Onboarding domain computers by GPO deployment: policies created in the Defender portal are not deployed
Hi
I onboarded computers using Group Policy Deployment and set additional GPO settings described in this document: Onboard Windows devices to Microsoft Defender for Endpoint via Group Policy – Microsoft Defender for Endpoint | Microsoft Learn
Then I created endpoint security policies in the Defender portal and assigned them to the All Users and All Computers groups. I see that these policies are not deployed to the computers. The "Policy sync" option in the computer menu is greyed out (disabled). I don't know why.
Perhaps if I set additional Defender settings by GPO, it means that I cannot use endpoint security policies in the Defender portal? We don't use Intune or MDM. We only have a Defender for Endpoint P1 licence and synchronize domain user and computer accounts with Microsoft Entra.
Thank you for help
Tomasz
Combo box: return blank for 1st option
Hi,
I'm creating a non-VBA combo box with the first option being "Select", as I need an option to have a null value. I'm referencing the combo box in another formula, so I can't have "Select" return 1. Is there any way, if the option is set to "Select", for it to return blank or no value?
Thanks in advance.
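One possible workaround (a sketch; the linked cell D1 is an assumption, not from the question): keep the combo box's linked cell as it is and point the other formulas at a helper cell that maps index 1, i.e. "Select", to blank:
=IF($D$1=1, "", $D$1)
Downstream formulas can then reference the helper cell instead of the linked cell, so choosing "Select" behaves like no value.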
Implementing Data Vault 2.0 on Fabric Data Warehouse
This article is authored by Michael Olschimke, co-founder and CEO at Scalefree International GmbH, and co-authored with @Trung_Ta, Senior BI Consultant at Scalefree.
The technical review was done by Ian Clarke and Naveed Hussain, GBBs (Cloud Scale Analytics) for EMEA at Microsoft.
Introduction
In the previous articles of this series, we discussed how to model Data Vault on Microsoft Fabric. Our initial focus was on the basic entity types, including hubs, links, and satellites, followed by advanced entity types such as non-historized links and multi-active satellites; the third article built a more complete model, including a typical modeling process, for Microsoft Dynamics CRM data.
But the model only serves a purpose: our goal for the entire blog series is to build a data platform using Microsoft Fabric on the Azure cloud. And for that, we also have to load the source data into the target model. This is the topic of this article: how to load the Data Vault entities. For that reason, we continue our discussion of the basic and advanced Data Vault entities. The Microsoft Dynamics CRM article should be considered an excursion to demonstrate what a more comprehensive model looks like and, just as important, how we get there.
Data Vault 2.0 Design Principles
Data Vault was developed by Dan Linstedt, the inventor of Data Vault and co-founder of Scalefree. It was designed to meet the challenging requirements of its first users in the U.S. government. These requirements led to certain design principles that are the basis for the features and characteristics of Data Vault today.
One requirement is to perform insert-only operations on the target database. The data platform team might be required to delete records for legal reasons (e.g., GDPR or HIPAA), but records are never updated. The advantage of this approach is twofold: inserts are much faster than deletes or updates, and inserting records is more in line with the task at hand: the data platform should capture the source data and the changes to it. Updating records means losing the old version of a record.
Another feature of the loading patterns in Data Vault is the ability to load the data incrementally. There is no need at all to load the same record twice into the Raw Data Vault layer. But also in subsequent layers, such as the Business Vault, there is no need to touch the same record another time, or run a calculation on it. Yes, it is not true for every case, but most cases. And it requires diligent implementation practices. But it is possible, and we have built many systems this way: in the best case, we touch data only if we haven’t touched it before.
The insert-only, incremental approach leads to full restartability: if only records which have not been processed yet are loaded in the next execution of the load procedure, why not partition the data in independent sub-sets, and then load partition by partition? If something goes wrong, just restart the process: it will continue loading what’s not in the target yet.
And if the partitions are based on independent sub-sets, the next step is to parallelize the loads on multiple distributed nodes. This requires independent processes, which means that there should be no dependencies between the individual loading procedures of the Raw Data Vault (and all the other layers). That leads to high-performant, scalable distributed systems in the cloud to process any data volume at any speed.
This design, and the modeling style of Data Vault, leads to many parallel jobs being executed. On the other hand, due to the standardization of the model (all hubs, links, and satellites are based on simple, repeating patterns), it is possible to standardize the loading patterns as well. Essentially, every hub is not only modelled in a similar way, using a generation template, but its loading process is also based on a template. The same is true for links, satellites, and all special entity types.
The standardization, and the sheer number of loading procedures to be produced, then leads to the need (and the possibility) for automation. There is no need to invent these tools: tools such as Vaultspeed are readily available for project teams to use and to speed up their development. These tools also help the team maintain the generated processes at scale.
It should also not matter which technology is used to load the Raw Data Vault: some customers prefer SQL statements, while others prefer Python scripts or ETL tools such as SQL Server Integration Services (SSIS). The Data Vault concepts for loading the entities are tool-agnostic and can even be performed in real-time scenarios, for example with Python or C# code.
These requirements concern the implementation of the Data Vault loading procedures. Additional, more business-focused requirements were discussed in the introductory article of this blog series.
Identifying Records
Another design choice is made by the data modeller: how records should be identified in the Data Vault model. There are three options that are commonly used: sequences, hash keys or business keys.
Sequences were used back in Data Vault 1.0 and remain an option today, although not a desired one: all of our clients who use sequences want to get rid of them by migrating to Data Vault 2.0 with hash keys or business keys to identify records. There are many issues with the use of sequences in Data Vault models; one is the required loading order. Hubs must be loaded first because the sequences are used in links and hub satellites to refer to the hub's business key. Only after the links are completely loaded can the link satellites be loaded. This loading order is particularly an issue in real-time systems, where it leads to unnecessary waiting states.
However, the bigger issue is the implied requirement for lookups. In order to load links, the business keys’ sequence must be figured out first by performing a lookup with the business key from the source against the hub, where the sequence is generated for each new business key loaded. This puts a lot of I/O performance on the database disk level.
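To make the lookup overhead tangible, here is a minimal sketch of a sequence-based link load; all table and column names are hypothetical and only serve to illustrate the point:
-- Hypothetical sequence-based link load: every business key arriving from the source
-- must first be looked up in the hubs to resolve it into the hub's sequence number.
INSERT INTO dv.store_employee_lnk (store_seq, employee_seq, load_datetime, record_source)
SELECT
hs.store_seq, -- looked up from the store hub
he.employee_seq, -- looked up from the employee hub
src.load_datetime,
src.record_source
FROM stg.store_employee AS src
JOIN dv.store_hub AS hs ON hs.store_id = src.store_id
JOIN dv.employee_hub AS he ON he.employee_id = src.employee_id
;
The two joins against the hubs are exactly the lookups described above; with hash keys, the link hash key can be computed directly from the staged business keys, and these lookups disappear.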
Business keys are another option, if the database engine supports joining on business keys (and especially multi-part business keys) efficiently. This is true for many distributed databases but not true for more traditional, relational database engines, where the third (and default) option is used: hash keys. Since it is possible to mix different database technologies (e.g., using a distributed database, such as Microsoft Fabric, in the Azure cloud and a Microsoft SQL Server on premise), one might end up in an unnecessarily complex environment where parts of the overall solution are identified by business keys and other parts are identified by hash keys.
Therefore, most clients opt for hash keys in all databases, regardless of the actual capabilities. Hash keys offer consistent query performance across different databases and join conditions that are easy to formulate and don't span many columns. This is the reason why we also decided to use hash keys in a previous article and throughout this blog series.
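As a rough illustration (table and column names are hypothetical, and the first variant assumes a model where the satellite carries the business key columns), a join over a multi-part business key spans several columns, while the hash-key variant always joins on a single fixed-length column:
-- Join over a multi-part business key: every key column appears in the join condition
SELECT h.store_id, h.branch_no, s.address_street
FROM dv.store_hub AS h
JOIN dv.store_address_sat AS s
ON s.store_id = h.store_id
AND s.branch_no = h.branch_no
;
-- Join over a hash key: a single fixed-length CHAR(32) column, identical on every platform
SELECT h.store_id, s.address_street
FROM dv.store_hub AS h
JOIN dv.store_address_sat AS s
ON s.hk_store_hub = h.hk_store_hub
;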
When using hash keys (or business keys), all Data Vault entities can be loaded in parallel, without any dependencies. Eventual consistency is reached as soon as the data of one batch (or real-time message) has been fully processed and loaded into the Raw Data Vault. However, immediate consistency is no longer possible when the entities are loaded in parallel, which is the recommendation. It is certainly possible (but beyond the scope of this article) to guarantee the consistency of query results, which is sufficient in analytical (and most operational) use cases.
It is also recommended to derive a hash difference from the satellite payload to streamline delta detection in the loading procedures for Data Vault satellites. Satellites are delta-driven: only entirely new records, and records in which at least one attribute has changed, are loaded. To perform this delta detection, incoming records must be compared with the previous ones. Without a hash diff in place, this has to be done by comparing records column by column, which can hurt the performance of the loading processes. It is therefore highly recommended to perform change detection using the fixed-length hash difference.
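As a simplified sketch of the idea (all object names here are hypothetical; the complete, LAG-based satellite loading pattern follows later in this article), the delta check then compares one fixed-length column instead of every payload attribute:
-- Hypothetical delta check: latest_satellite_rows is assumed to hold the most recent
-- satellite row per hash key; a single comparison replaces a column-by-column check
SELECT stg.*
FROM stage_view AS stg
LEFT JOIN latest_satellite_rows AS cur
ON cur.hk_store_hub = stg.hk_store_hub
WHERE cur.hk_store_hub IS NULL -- entirely new business key
OR cur.hash_diff <> stg.hash_diff -- at least one descriptive attribute changed
;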
Loading Raw Data Vault Entities
The next sections discuss how to load the Raw Data Vault entities, namely hubs, links, and satellites. We will keep the main focus on standard entities.
The following figure sketches a Data Vault model derived from a few data tables of the CRM source system of a retail business.
The following sections will present the loading patterns for objects that are marked in the above figure.
Data Vault stage
Before talking about loading the actual Data Vault entities, we must first explore the Data Vault stage objects, where incoming data payloads are prepared for the following loading processes.
In Fabric, it is recommended to create Data Vault stages as views. This is to leverage caching in Fabric Data Warehouse. While the same stage view can be used to populate different target objects (hubs, links, satellites,…), the query behind it will be executed only once and its results will be automatically stored in the database’s cache, ready for access by subsequent loading procedures from the same stage. This technique practically eliminates the need for materializing stage objects, as per the traditional approach.
Data Vault stages also calculate hash key and hash difference values respectively from business keys and descriptive attributes. It is important to note that within a stage object, there may be more than one hash key, as well as more than one hash diff, in case the stage object should feed data to multiple target Hubs, Links and Satellites, which is common practice.
Moreover, Data Vault stages prepare the insertion of so-called ghost records. These are artificially generated records added to Data Vault objects, containing default/dummy values. To read more about ghost records and their usage, please visit: Implementing Ghost Records.
The example code script below creates a Data Vault stage view from the source table store_address. In the next step, this view will be used to load the hub Store and its satellite Store Address.
CREATE VIEW dbo.stage_store_address_crm AS
WITH src_data AS (
-- source payload plus the system attributes (load date timestamp and record source)
SELECT
CURRENT_TIMESTAMP AS load_datetime,
'CRM.store_address' AS record_source,
store_id,
address_street,
postal_code,
country
FROM dbo.store_address
),
hash AS (
-- hash key for the hub and hash difference for the satellite
SELECT
load_datetime,
record_source,
store_id,
address_street,
postal_code,
country,
CASE
WHEN store_id IS NULL THEN '00000000000000000000000000000000'
ELSE CONVERT(CHAR(32), HASHBYTES('md5',
COALESCE(CAST(store_id AS VARCHAR), '')
), 2)
END AS hk_store_hub,
CONVERT(CHAR(32), HASHBYTES('md5',
COALESCE(CAST(address_street AS VARCHAR), '') + '|' +
COALESCE(CAST(postal_code AS VARCHAR), '') + '|' +
COALESCE(CAST(country AS VARCHAR), '')
), 2) AS hd_store_address_crm_lroc_sat
FROM src_data
),
ghost_records AS (
-- zero key record (unknown member) and error record, populated with default values
SELECT
CONVERT(DATETIME, '1900-01-01T00:00:00', 126) AS load_datetime,
'SYSTEM' AS record_source,
'??????' AS store_id,
'(unknown)' AS address_street,
'(unknown)' AS postal_code,
'??' AS country,
'00000000000000000000000000000000' AS hk_store_hub,
'00000000000000000000000000000000' AS hd_store_address_crm_lroc_sat
UNION
SELECT
CONVERT(DATETIME, '1900-01-01T00:00:00', 126) AS load_datetime,
'SYSTEM' AS record_source,
'XXXXXX' AS store_id,
'(error)' AS address_street,
'(error)' AS postal_code,
'XX' AS country,
'ffffffffffffffffffffffffffffffff' AS hk_store_hub,
'ffffffffffffffffffffffffffffffff' AS hd_store_address_crm_lroc_sat
),
final_select AS (
-- staged data combined with the ghost records
SELECT
load_datetime,
record_source,
store_id,
address_street,
postal_code,
country,
hk_store_hub,
hd_store_address_crm_lroc_sat
FROM hash
UNION ALL
SELECT
load_datetime,
record_source,
store_id,
address_street,
postal_code,
country,
hk_store_hub,
hd_store_address_crm_lroc_sat
FROM ghost_records
)
SELECT *
FROM final_select
;
This rather lengthy statement is preparing the data and adding the ghost records: the first CTE src_data selects the data from the staging table and adds the system attributes, such as the load date timestamp and record source. The next CTE hash then adds the hash keys and hash differences required for the target model. Yes, that implies that the target model is already known but we have seen in the previous article how we derive the target model from the staged data in a data-driven Data Vault design. Once that is done, the target model for the Raw Data Vault is known and we can add the required hash keys (for hubs and links) and hash diffs (for satellites) to the staged data. This is done only virtually in Fabric – on other platforms, it might be required to actually add the hash values to the staging tables.
Another CTE, ghost_records, generates two records to be used as zero keys in hubs and links and as ghost records in satellites. This CTE populates the two records with default values for the descriptive attributes and the business keys. It is recommended to use default descriptions that one would expect to see for the unknown member in a dimension, because these two records will later be turned into two members of the dimension: the unknown member and the erroneous member.
The CTE final_select then unions the two datasets: the staging data provided by the CTE hash and the zero key records provided by ghost_records. The loading processes for the Raw Data Vault then use this dataset as their input.
Loading Hubs
The first loading pattern that we want to examine is the hub loading pattern. Since a hub contains a distinct list of business keys, a deduplication logic must be carried out during the hub loading process to eliminate duplicates of the same business key from the Data Vault stage view. In the code script below, we aim to load only the very first copy of each incoming hash key/business key into the target hub entity. This is guaranteed by the ROW_NUMBER() window function and the WHERE condition rno (row number) = 1 at the end of the loading script.
In addition, we perform a forward-lookup check to verify that the incoming hash key does not already exist in the target; only then will it be inserted into the hub entity. This is done in the WHERE condition: [hash key from stage] NOT IN (SELECT DISTINCT [hash key] FROM [target hub]).
The example code script below loads the hub Store with the business key Store ID:
WITH dedupe AS (
SELECT
hk_store_hub,
load_datetime,
record_source,
store_id,
ROW_NUMBER() OVER (PARTITION BY hk_store_hub ORDER BY load_datetime ASC) AS rno
FROM dbo.stage_store_address_crm
)
INSERT INTO DV.STORE_HUB
(
hk_store_hub,
load_datetime,
record_source,
store_id
)
SELECT
hk_store_hub,
load_datetime,
record_source,
store_id
FROM dedupe
WHERE rno = 1
AND hk_store_hub NOT IN (SELECT hk_store_hub FROM DV.STORE_HUB)
;
In the above statement, the CTE dedupe selects all business keys and adds a row number to select the first occurrence of each business key, including its record source and load date timestamp. Duplicates can exist for two reasons: either multiple batches exist in the staging area, or the same business key appears multiple times within a single batch from the source – for example, when a customer has purchased multiple products across multiple transactions in a retail store.
The CTE is then input for the INSERT INTO statement into the hub entity. In the select from the CTE, a filter is applied to select only the first occurrence of the business key.
Loading Standard Links
The pattern is similar to hub loading: only the very first occurrence of each incoming link hash key is loaded into the target link entity. The link hash key is calculated from the combination of business keys of the hubs connected by the link entity.
The example code script below loads the link Store Employee with two Hub references from Hub Store and Hub Employee:
WITH dedupe AS (
SELECT
hk_store_employee_lnk,
hk_store_hub,
hk_employee_hub,
load_datetime,
record_source
FROM (
SELECT
hk_store_employee_lnk,
hk_store_hub,
hk_employee_hub,
load_datetime,
record_source,
ROW_NUMBER() OVER (PARTITION BY hk_store_employee_lnk ORDER BY load_datetime ASC) AS rno
FROM DV.stage_store_employee_crm
) s
WHERE rno = 1
)
INSERT INTO DV.store_employee_lnk
(
hk_store_employee_lnk,
hk_store_hub,
hk_employee_hub,
load_datetime,
record_source
)
SELECT
hk_store_employee_lnk,
hk_store_hub,
hk_employee_hub,
load_datetime,
record_source
FROM dedupe
WHERE hk_store_employee_lnk NOT IN (SELECT hk_store_employee_lnk FROM DV.store_employee_lnk)
;
Similar to the standard hub's loading pattern, the standard link's loading pattern starts with a deduplication process. Its goal is to insert only the very first occurrence of each combination of business keys referenced in the link relationship.
Then, the main INSERT INTO … SELECT statement also includes a forward look-up to the target Link entity, to only insert unknown Link hash keys into the Raw Vault.
Loading Non-Historized Links
In Data Vault 2.0, the recommended approach to model transactions, events, or non-changing data in general is to use non-historized link entities, also known as transactional links, discussed in our previous article on advanced Data Vault modeling.
The example code script below loads a Non-Historized Link that contains transactions made in retail stores – with two Hub references from Hub Store and Hub Customer:
WITH high_water_marking AS (
SELECT
hk_store_transaction_nlnk,
hk_store_hub,
hk_customer_hub,
load_datetime,
record_source,
transaction_id,
amount,
transaction_date
FROM DV.stage_store_transactions_crm
WHERE load_datetime > (
SELECT COALESCE(MAX(load_datetime), DATEADD(s, -1, CONVERT(DATETIME, '1900-01-01T00:00:00', 126)))
FROM DV.store_transaction_nlnk
)
),
dedupe AS (
SELECT
hk_store_transaction_nlnk,
hk_store_hub,
hk_customer_hub,
load_datetime,
record_source,
transaction_id,
amount,
transaction_date
FROM (
SELECT
hk_store_transaction_nlnk,
hk_store_hub,
hk_customer_hub,
load_datetime,
record_source,
transaction_id,
amount,
transaction_date,
ROW_NUMBER() OVER (PARTITION BY hk_store_transaction_nlnk ORDER BY load_datetime ASC) AS rno
FROM high_water_marking
) s
WHERE rno = 1
)
INSERT INTO DV.store_transaction_nlnk
(
hk_store_transaction_nlnk,
hk_store_hub,
hk_customer_hub,
load_datetime,
record_source,
transaction_id,
amount,
transaction_date
)
SELECT
hk_store_transaction_nlnk,
hk_store_hub,
hk_customer_hub,
load_datetime,
record_source,
transaction_id,
amount,
transaction_date
FROM dedupe
WHERE hk_store_transaction_nlnk NOT IN (SELECT hk_store_transaction_nlnk FROM DV.store_transaction_nlnk)
;
The main difference between the loading patterns for standard links and non-historized links lies in the so-called high-water marking logic in the first CTE of the same name. This logic only lets records through from the DV stage object whose technical load_datetime occurs after the latest one found in the target link entity. This allows us to skip data records that have already been processed by previous DV loads, effectively reducing the workload on the data warehouse side.
Loading Standard Satellites
Now to the more complicated loading patterns within a Data Vault 2.0 implementation: those for satellite entities. When querying from the stage, only records with a load date timestamp (LDTS) exceeding the latest load date in the target satellite are fetched for further processing.
Note that these queries are more elaborate than examples you may find elsewhere. The reason is that they are optimized for loading all data from the underlying data lake, even multiple batches at once, but in the right order. Especially for satellites this poses a challenge, as Data Vault satellites are typically delta-driven in order to save storage and improve performance.
A few principles from the above loading patterns for hubs and links also apply to satellites, such as the deduplication logic. This eliminates hard duplicates (i.e., records with identical data attribute values) and, combined with the aforementioned filtering of old load date timestamps, reduces the amount of incoming data records.
WITH stg AS (
SELECT
hk_store_hub,
load_datetime,
record_source,
hd_store_address_crm_lroc_sat,
address_street,
postal_code,
country
FROM dbo.stage_store_address_crm
WHERE load_datetime > (
SELECT COALESCE(MAX(load_datetime), DATEADD(s, -1, CONVERT(DATETIME, '1900-01-01T00:00:00', 126)))
FROM DV.store_address_crm_lroc0_sat
)
),
dedupe_hash_diff AS (
SELECT
hk_store_hub,
load_datetime,
record_source,
hd_store_address_crm_lroc_sat,
address_street,
postal_code,
country
FROM (
SELECT
hk_store_hub,
load_datetime,
record_source,
hd_store_address_crm_lroc_sat,
COALESCE(LAG(hd_store_address_crm_lroc_sat) OVER (PARTITION BY hk_store_hub ORDER BY load_datetime), '') AS prev_hd,
address_street,
postal_code,
country
FROM stg
) s
WHERE hd_store_address_crm_lroc_sat != prev_hd
),
dedupe_hard_duplicate AS (
SELECT
hk_store_hub,
load_datetime,
record_source,
hd_store_address_crm_lroc_sat,
address_street,
postal_code,
country
FROM (
SELECT
hk_store_hub,
load_datetime,
record_source,
hd_store_address_crm_lroc_sat,
address_street,
postal_code,
country,
-- one row per hash key and load date timestamp: removes hard duplicates without collapsing deltas from different batches
ROW_NUMBER() OVER(PARTITION BY hk_store_hub, load_datetime ORDER BY load_datetime) AS rno
FROM dedupe_hash_diff
) dhd
WHERE rno = 1
),
latest_delta_in_target AS (
SELECT
hk_store_hub,
hd_store_address_crm_lroc_sat
FROM (
SELECT
hk_store_hub,
hd_store_address_crm_lroc_sat,
ROW_NUMBER() OVER(PARTITION BY hk_store_hub ORDER BY load_datetime DESC) AS rno
FROM
DV.store_address_crm_lroc0_sat
) s
WHERE rno = 1
)
INSERT INTO DV.store_address_crm_lroc0_sat
(
hk_store_hub,
load_datetime,
record_source,
hd_store_address_crm_lroc_sat,
address_street,
postal_code,
country
)
SELECT
hk_store_hub,
load_datetime,
record_source,
hd_store_address_crm_lroc_sat,
address_street,
postal_code,
country
FROM dedupe_hard_duplicate
WHERE NOT EXISTS (
SELECT 1
FROM latest_delta_in_target
WHERE latest_delta_in_target.hk_store_hub = dedupe_hard_duplicate.hk_store_hub
AND latest_delta_in_target.hd_store_address_crm_lroc_sat = dedupe_hard_duplicate.hd_store_address_crm_lroc_sat
)
;
The first CTE stg selects the batches from the staging table where the load date timestamp is not yet in the target satellite. Batches that are already processed in the past are ignored this way.
After that, the CTE dedupe_hash_diff removes non-deltas from the data flow: records that have not changed from the previous batch (identified by the load date timestamp) are removed from the dataset.
Next, the CTE dedupe_hard_duplicate removes those records from the dataset where hard duplicates exist. This statement assumes that a standard satellite is loaded, not a multi-active satellite.
The CTE latest_delta_in_target retrieves the latest delta for the hash key from the target satellite to perform the delta-check against the target.
Finally, the INSERT INTO statement selects the changed or new records from the sequence of CTEs and inserts them into the target satellite.
Calculating the Satellite’s End-Date
Typically, a Data Vault satellite contains not only a load date, but also a load end date. However, the drawback of a physical load end date is that it requires an update on the satellite. This is done in the load end-dating process after loading more data into the satellite.
This update is not desired. Nowadays, the alternative approach is to calculate the load end date virtually in a view on top of the satellite’s table. This view provides the same structure (all the attributes) of the underlying table and, in addition, the load end date, which is calculated using a window function.
CREATE VIEW DV.store_address_crm_lroc_sat AS
WITH enddating AS (
SELECT
hk_store_hub,
load_datetime,
COALESCE(
LEAD(DATEADD(ms, -1, load_datetime)) OVER (PARTITION BY hk_store_hub ORDER BY load_datetime),
CONVERT(DATETIME, '9999-12-31T23:59:59', 126)
) AS load_end_datetime,
record_source,
hd_store_address_crm_lroc_sat,
address_street,
postal_code,
country
FROM DV.store_address_crm_lroc0_sat
)
SELECT
hk_store_hub,
load_datetime,
load_end_datetime,
record_source,
CASE WHEN load_end_datetime = CONVERT(DATETIME, '9999-12-31T23:59:59', 126)
THEN 1
ELSE 0
END AS is_current,
hd_store_address_crm_lroc_sat,
address_street,
postal_code,
country
FROM enddating
;
In the CTE enddating the load end date is calculated using the LEAD function. Other than that, the CTE selects all columns from the underlying table.
The view's SELECT statement then calculates an is_current flag based on the load end date, which often comes in handy.
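For example, downstream queries can then fetch the current version of each store address with a simple filter on that flag:
-- current address per store, read straight from the end-dated satellite view
SELECT
hk_store_hub,
address_street,
postal_code,
country,
load_datetime
FROM DV.store_address_crm_lroc_sat
WHERE is_current = 1
;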
Outlook and conclusion
This concludes our article on loading the standard entities of the Raw Data Vault. These patterns provide the foundation for the loading patterns of the advanced Data Vault entities, such as non-historized links or multi-active satellites, which require only minor modifications and which we typically discuss in our blog at https://www.scalefree.com/blog/ and in our training. In a subsequent article, we will demonstrate how to automate these patterns using Vaultspeed to improve the productivity (and therefore the agility) of your team. Before we get there, however, we will continue our journey downstream in the data platform architecture and discuss in our next article how to implement business rules in the Business Vault.
About the Authors
Michael Olschimke is co-founder and CEO at Scalefree International GmbH, a Big Data consulting firm in Europe. The firm empowers clients across all industries to take advantage of Data Vault 2.0 and similar Big Data solutions. Michael has trained thousands of data warehousing individuals from the industry, taught classes in academia, and published on these topics regularly.
Trung Ta is a senior BI consultant at Scalefree International GmbH. With over 7 years of experience in data warehousing and BI, he has been advising Scalefree’s clients in different industries (banking, insurance, government,…) and of various sizes in establishing and maintaining their data architectures. Trung’s expertise lies within Data Vault 2.0 architecture, modeling, and implementation, with a specific focus on data automation tools.
<<< Back to Blog Series Title Page
Microsoft Tech Community – Latest Blogs –Read More
how to solve XCP connection error
When I enter connect(xcpch), the error "Device not detected." appears.
Does anyone know the root cause and how to solve it?
Thank you in advance. xcp, matlab, simulink MATLAB Answers — New Questions
How can I find the HumanActivityData data set, which contains over 380000 observations of five different physical human activities? (from Brian Hu)
Load Raw Sensor Data
The HumanActivityData data set contains over 380000 observations of five different physical human activities captured at a frequency of 10 Hz. Each observation includes x, y, and z acceleration data measured by a smartphone accelerometer sensor. the humanactivitydata MATLAB Answers — New Questions