Month: June 2024
Using OBS virtual camera on Teams messes up aspect ratio especially on mobile – how to fix this?
Hi,
I hope someone here can guide me in the right direction… Our organization is trying to spice up our Teams webinars. We have a small streaming studio with four cameras, an ATEM Mini Extreme ISO, Rotem, a green screen and OBS Studio at hand. The idea is that our experts don't have to sit in front of a laptop: all they have to do is control the slides of their PowerPoint presentation with a remote controller while an assistant runs the stream through the ATEM or OBS.
Now here's the problem: for some reason, Teams messes up the aspect ratio of the output from both OBS and the ATEM. It's relatively easy to try to fix this in OBS by scaling the output. However, there doesn't seem to be a universal solution that works on Chrome, Firefox, Edge, Teams desktop and Teams mobile alike. I have to run several tests on different browsers and laptops to estimate how much I have to scale down for the output to still look good on the most common devices and browsers. It's really frustrating, not exact enough and time-consuming.
Any idea how I could guarantee solid quality regardless of the device or browser the participant is using? Teams mobile in particular gives me trouble. When a PowerPoint is presented the traditional way it works as it should, but when it's presented through OBS as webcam output the view is super tiny, as the webcam view on mobile isn't big to begin with. How can I guarantee that the output from OBS looks good on mobile as well? The solution can't be something that requires effort from the participants (like pinning the webcam or fitting it to the screen), as many of our participants are elderly and aren't very tech savvy.
Thank you so much for your help, this is driving me up the wall!
What to do when QuickBooks error 15102 appears after the latest update?
I'm encountering QuickBooks error 15102 when attempting to update. What are the troubleshooting steps to resolve this issue?
Validate CSV files before ingestion in Microsoft Fabric Data Factory Pipelines
A very common task for Microsoft Fabric, Azure Data Factory and Synapse Analytics pipelines is to receive unstructured files, land them in an Azure Data Lake (ADLS Gen2) account and load them into structured tables. This leads to a very common issue: "SOMETHING HAS CHANGED" and the incoming file no longer meets the defined table format. If these issues are not handled properly within the pipeline, the data workloads will fail and users will be asking "WHERE'S MY DATA???" You then need to contact the owner of the file, have them fix the issues, and rerun the pipeline. Along with unhappy users, rerunning failed pipelines adds cost. Validating files before they are processed allows your pipelines to continue ingesting the files that do have the correct format. For the pipelines that do fail, your code or process can pinpoint what caused the error, leading to faster resolution. In this blog, we'll walk through a Microsoft Fabric Data Factory pipeline that validates incoming CSV files for common errors before loading them to a Microsoft Fabric Lakehouse delta table.
Overview
The source files in this process are in an Azure Data Lake storage account, which has a shortcut in the Fabric Lakehouse. A data pipeline calls a Spark notebook to check each file for new or missing columns, values that are invalid for the expected data type, and duplicate key values. If the file has no errors, the pipeline loads the CSV data into a parquet file and then calls another Spark notebook to load the parquet file into a delta table in the Lakehouse. Otherwise, if there are errors in the file, the pipeline sends a notification email.
Source files and metadata files
In this solution, files land in an ADLS Gen2 container folder called scenario1-validatecsv, which has a shortcut in the Fabric Lakehouse. The files folder contains the files to process; the metadata folder contains a file describing the format each CSV file type should conform to.
This solution loads a table called customer, which has the columns number, name, city and state. In the format definition file, customer_meta, there's a row for each customer table column giving the column name, the column data type, and whether or not it is a key field. This metadata file is used later in a Spark notebook to validate that the incoming file conforms to this format.
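For illustration, a customer_meta file matching the columns the validation notebook reads (formatname, columname, datatype, iskeyfield) might look like the sketch below. The exact data type strings are an assumption; since the notebook compares them against pandas-inferred dtypes, values such as int64 and object are used here.

```csv
formatname,columname,datatype,iskeyfield
customer,number,int64,1
customer,name,object,0
customer,city,object,0
customer,state,object,0
```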
Orchestrator Pipeline
The orchestrator pipeline is very simple: since I am running my pipeline as a scheduled batch, it loops through the files folder and invokes a child pipeline for each file. Note the parameterization of the lakehouse path, the source folders, the destination folder and the file format. This allows the same process to be run for any lakehouse and for any file format/table to load.
For Each activity
When invoking the child pipeline from the For Each activity, the orchestrator passes in its parameter values plus the name of the current file being processed and the metadata file name, which is the file format name with '_meta' appended to it.
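As a sketch, the metadata file name could be derived with a pipeline expression along these lines (the parameter name fileformat is an assumption, not taken from the actual pipeline definition):

```
@concat(pipeline().parameters.fileformat, '_meta.csv')
```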
Child pipeline – Validate and load pipeline
The Validate and load pipeline validates the current CSV file and, if the file conforms to the format, loads it into a parquet file, then merges the parquet data into a delta table.
1 – Parameters
Parameters passed in from the orchestrator pipeline for the current CSV file to process
2 – Set variable activity – Set parquet file name
Removes the .csv extension to define the parquet file name
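A minimal sketch of the Set variable expression, assuming the current file name arrives in a pipeline parameter named filename (the parameter name is an assumption):

```
@replace(pipeline().parameters.filename, '.csv', '')
```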
3 – Notebook activity – Validate CSV and load file to parquet
Calls the notebook, passing in parameter and variable values
Below is the PySpark code for the notebook. It gets the column names and inferred data types from the CSV file, as well as the column names, data types and key field names from the metadata file. It checks whether the column names match, whether the data types match, whether there are keys defined for the file, and finally whether there are any duplicate key values in the incoming file. If there are duplicate keys, it writes the duplicate key values to a file. If the names and data types match and there are no duplicate key values, it writes the file to parquet and passes back the key field names from the metadata file; otherwise, it returns the appropriate error message.
# Files/landingzone/files parameters
lakehousepath = 'abfss://xxxxx@msit-onelake.dfs.fabric.microsoft.com/xxxxx'
filename = 'customer_good.csv'
outputfilename = 'customer_good'
metadatafilename = 'customer_meta.csv'
filefolder = 'scenario1-validatecsv/landingzone/files'
metadatafolder = 'scenario1-validatecsv/landingzone/metadata'
outputfolder = 'scenario1-validatecsv/bronze'
fileformat = 'customer'

# Import pandas and pyarrow
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

# Set path variables
inputfilepath = f'{lakehousepath}/Files/{filefolder}/'
metadatapath = f'{lakehousepath}/Files/{metadatafolder}/'
outputpath = f'{lakehousepath}/Files/{outputfolder}/'

# Read the CSV file and the metadata file
print(f'{inputfilepath}{filename}')
data = pd.read_csv(f'{inputfilepath}{filename}')
meta = pd.read_csv(f'{metadatapath}{metadatafilename}')

# Only keep the column metadata for the file format type that was input
meta = meta.loc[meta['formatname'] == fileformat]
print(data.dtypes)
print(list(meta['columname']))

# Get any key fields specified
keyfields = meta.loc[meta['iskeyfield'] == 1, 'columname'].tolist()
print(keyfields)

# Check for errors in the CSV
haserror = 0

# Check if the column names match
if list(data.columns) != list(meta['columname']):
    result = 'Error: Column names do not match.'
    haserror = 1
# Check if the data types match
elif list(data.dtypes) != list(meta['datatype']):
    result = 'Error: Datatypes do not match.'
    haserror = 1
# If the file has key fields, check for duplicate keys;
# if there are duplicates, also write the duplicate key values to a file
elif keyfields:
    checkdups = data.groupby(keyfields).size().reset_index(name='count')
    print(checkdups)
    if checkdups['count'].max() > 1:
        dups = checkdups[checkdups['count'] > 1]
        print(dups)
        haserror = 1
        dups.to_csv(f'{lakehousepath}/Files/processed/error_duplicate_key_values/duplicaterecords_{filename}',
                    mode='w', index=False)
        result = 'Error: Duplicate key values'

if haserror == 0:
    # Write the data to parquet if there were no errors
    df = spark.read.csv(f'{inputfilepath}{filename}', header=True, inferSchema=True)
    print(f'File is: {inputfilepath}{filename}')
    display(df)
    df.write.mode('overwrite').format('parquet').save(f'{outputpath}{outputfilename}')
    result = f'Data written to parquet successfully. Key fields are: {keyfields}'

mssparkutils.notebook.exit(str(result))
4 – Copy data activity – Move File to processed folder
This Copy Data activity moves the CSV file from the ADLS Gen2 files folder to a processed folder in the Fabric Lakehouse
The destination folder name is derived from the notebook exit value, which returns either a success message or the error message
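As a sketch, the destination folder could be chosen with an expression like the one below; the exact path to the notebook exit value in the activity output is an assumption and may differ in your pipeline:

```
@if(contains(activity('Validate CSV and load file to parquet').output.result.exitValue, 'Error'), 'error', 'processed')
```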
5 – If condition activity: If File Validated
Checks whether the CSV file was successfully validated and loaded to parquet
5a – File validated successfully
If there were no errors in the file, call the Spark notebook to merge the parquet file written by the previous PySpark notebook into the delta table.
Parameters for the lakehouse path, the parquet file path and name, the table name, and the key fields are passed in. As shown above, the key fields were derived in the previous PySpark notebook and are passed into the Create or Merge to Table notebook.
Below is the Spark notebook code. If the delta table already exists and there are key fields, it builds a string expression for the PySpark merge statement and then performs the merge on the delta table. If there are no key fields or the table does not exist, it writes or overwrites the delta table.
# Create or merge to delta
# Input parameters below
lakehousepath = 'abfss://xxxe@yyyy.dfs.fabric.microsoft.com/xxx'
inputfolder = 'scenario1-validatecsv/bronze'
filename = 'customergood'
tablename = 'customer'
keyfields = "['number']"

# Define paths
outputpath = f'{lakehousepath}/Tables/{tablename}'
inputpath = f'{lakehousepath}/Files/{inputfolder}/{filename}'

# Import delta table and SQL functions
from delta.tables import *
from pyspark.sql.functions import *

# Get the list of key fields
keylist = eval(keyfields)
print(keylist)

# Read the input parquet file
df2 = spark.read.parquet(inputpath)
# display(df2)

# If there are key fields defined for the table, build the merge key expression
mergeKeyExpr = None
if keyfields is not None:
    mergekey = ''
    for key in keylist:
        mergekey = mergekey + f't.{key} = s.{key} AND '
    mergeKeyExpr = mergekey.rstrip(' AND')
    print(mergeKeyExpr)

# If the table exists and should be upserted as indicated by the merge key, do an upsert
# and return how many rows were inserted and updated; if it does not exist or is a full
# load, overwrite the existing table and return how many rows were inserted
if DeltaTable.isDeltaTable(spark, outputpath) and mergeKeyExpr is not None:
    deltaTable = DeltaTable.forPath(spark, outputpath)
    deltaTable.alias('t').merge(
        df2.alias('s'),
        mergeKeyExpr
    ).whenMatchedUpdateAll().whenNotMatchedInsertAll().execute()
    history = deltaTable.history(1).select('operationMetrics')
    operationMetrics = history.collect()[0]['operationMetrics']
    numInserted = operationMetrics['numTargetRowsInserted']
    numUpdated = operationMetrics['numTargetRowsUpdated']
else:
    df2.write.format('delta').mode('overwrite').save(outputpath)
    numInserted = df2.count()
    numUpdated = 0

print(numInserted)
result = 'numInserted=' + str(numInserted) + '|numUpdated=' + str(numUpdated)
mssparkutils.notebook.exit(str(result))
5b – File validation failed
If there was an error in the CSV file, send an email notification
Summary
Building resilient and efficient data pipelines is critical no matter your ETL tool or data sources. Thinking ahead to what types of problems can, and inevitably will, occur, and incorporating data validation into your pipelines, will save you a lot of headaches when those pipelines are moved into production. The examples in this blog cover just a few of the most common errors with CSV files. Get ahead of those data issues and resolve them without last-minute fixes that disrupt other processes! You can easily enhance these methods by including other validations or by validating other unstructured file types like JSON. You can also change the pipeline to run as soon as the unstructured file lands in ADLS rather than in batch. Using techniques like this to reduce hard errors gives your pipelines (and yourself!) more credibility.
How to solve: Abnormal termination: Illegal instruction, Current Thread: '' id 23044
MATLAB Log File: C:UsersWYHAppDataLocalTempmatlab_crash_dump.7244-1
————————————————
MATLAB Log File
————————————————
——————————————————————————–
Illegal instruction detected at 2024-06-06 17:16:44 +0800
——————————————————————————–
Configuration:
Crash Decoding : Disabled – No sandbox or build area path
Crash Mode : continue (default)
Default Encoding : UTF-8
Deployed : false
Graphics Driver : Uninitialized hardware
Graphics card 1 : Advanced Micro Devices, Inc. ( 0x1002 ) AMD Radeon(TM) Graphics Version 31.0.24033.1003 (2024-5-8)
Graphics card 2 : NVIDIA ( 0x10de ) NVIDIA GeForce RTX 4070 SUPER Version 32.0.15.5599 (2024-6-1)
Java Version : Java 1.8.0_202-b08 with Oracle Corporation Java HotSpot(TM) 64-Bit Server VM mixed mode
MATLAB Architecture : win64
MATLAB Entitlement ID : 11761854
MATLAB Root : D:AppsForCodesMatlab
MATLAB Version : 9.13.0.2553342 (R2022b) Update 9
OpenGL : hardware
Operating System : Microsoft Windows 10 企业版
Process ID : 7244
Processor ID : x86 Family 25 Model 97 Stepping 2, AuthenticAMD
Session Key : bc7e146d-00cd-45fc-9784-95bb512b7245
Window System : Version 10.0 (Build 19045)
Fault Count: 1
Abnormal termination:
Illegal instruction
Current Thread: ” id 23044
Register State (from fault):
RAX = 000002b87d462000 RBX = 000002b87d463000
RCX = 000000d69e2ae0e0 RDX = 000002b88e0c04c0
RSP = 000000d69e2adef0 RBP = 000000d69e2ae858
RSI = 00007ffc9ebb05a8 RDI = 000000d69e2ae868
R8 = 0000000000000040 R9 = 000002b87d463000
R10 = 0000000000000180 R11 = 0000000000000080
R12 = 0000000000000002 R13 = 000000000000000c
R14 = 0000000000000004 R15 = 0000000000000002
RIP = 00007ffc9e0082d2 EFL = 00010212
CS = 0033 FS = 0053 GS = 002b
Stack Trace (from fault):
[ 0] 0x00007ffc9e0082d2 D:Desktopmfv_rk_freqpardiso_real_unsymmetry_driver_p.dll+12813010 pardiso_real_unsymmetry_driver_p+12800354
[ 1] 0x00007ffc9dc83b44 D:Desktopmfv_rk_freqpardiso_real_unsymmetry_driver_p.dll+09124676 pardiso_real_unsymmetry_driver_p+09112020
[ 2] 0x00007ffc9dc75738 D:Desktopmfv_rk_freqpardiso_real_unsymmetry_driver_p.dll+09066296 pardiso_real_unsymmetry_driver_p+09053640
[ 3] 0x00007ffc9dbeb825 D:Desktopmfv_rk_freqpardiso_real_unsymmetry_driver_p.dll+08501285 pardiso_real_unsymmetry_driver_p+08488629
[ 4] 0x00007ffc9dbdf4d1 D:Desktopmfv_rk_freqpardiso_real_unsymmetry_driver_p.dll+08451281 pardiso_real_unsymmetry_driver_p+08438625
[ 5] 0x00007ffc9d731314 D:Desktopmfv_rk_freqpardiso_real_unsymmetry_driver_p.dll+03543828 pardiso_real_unsymmetry_driver_p+03531172
[ 6] 0x00007ffc9d8844ea D:Desktopmfv_rk_freqpardiso_real_unsymmetry_driver_p.dll+04932842 pardiso_real_unsymmetry_driver_p+04920186
[ 7] 0x00007ffce31bfa63 D:AppsForCodesMatlabbinwin64libiomp5md.dll+01243747 _kmp_invoke_microtask+00000147
[ 8] 0x00007ffce31218c7 D:AppsForCodesMatlabbinwin64libiomp5md.dll+00596167 _kmp_acquire_nested_drdpa_lock+00038327
[ 9] 0x00007ffce312138f D:AppsForCodesMatlabbinwin64libiomp5md.dll+00594831 _kmp_acquire_nested_drdpa_lock+00036991
[ 10] 0x00007ffce3192af7 D:AppsForCodesMatlabbinwin64libiomp5md.dll+01059575 _kmp_launch_worker+00000343
[ 11] 0x00007ffd59697344 C:WindowsSystem32KERNEL32.DLL+00095044 BaseThreadInitThunk+00000020
[ 12] 0x00007ffd5a2026b1 C:WindowsSYSTEM32ntdll.dll+00337585 RtlUserThreadStart+00000033
Error using trainNetwork: Predictors must be a N-by-1 cell array of sequences
Hello MATLAB friends,
I’m working on a project involving time series prediction using a Gated Recurrent Unit (GRU) model. The goal is to predict whether a stock will be a ‘winning’ or ‘losing’ stock based on historical data.
However, I’m encountering an error when trying to train the network using the trainNetwork function. The error message is:
Error using trainNetwork
Invalid training data. Predictors must be a N-by-1 cell array of sequences, where N is the number of sequences. All sequences must have the same feature dimension and at least one time step.
Here’s the relevant portion of my code:
% Initialize cell arrays to store training and testing features and labels
training_features = {};
training_labels = {};
testing_features = {};
testing_labels = {};

% Process each CSV file in the combined_stocks table
for f = 1:height(combined_stocks)
    % Load data from CSV file using readtable
    data = readtable(fullfile(directory_path, combined_stocks.Stock(f)), 'VariableNamingRule', 'preserve');
    % Select the required features
    features = data{:, {'open', 'high', 'low', 'close', 'MA', 'MA_1'}};
    % Normalize the close prices
    normalized_close = (data.close - min(data.close)) / (max(data.close) - min(data.close));
    % Calculate logarithmic close prices
    log_close = log(data.close);
    % Combine all features into a matrix
    features = [features, normalized_close, log_close];
    % Determine the label for the current stock
    if combined_stocks.Maximum_Profit(f) > 0
        labels = ones(height(data), 1); % Winning class
    else
        labels = zeros(height(data), 1); % Losing class
    end
    % Convert labels to categorical
    labels = categorical(labels);
    % Generate a random permutation of indices
    indices = randperm(size(features, 1));
    % Split the data into 70% training and 30% testing
    split_point = round(size(features, 1) * 0.7);
    training_indices = indices(1:split_point);
    testing_indices = indices(split_point+1:end);
    % Append the training and testing features and labels to the respective arrays
    training_features = [training_features; {features(training_indices, :)}];
    training_labels = [training_labels; {labels(training_indices)}];
    testing_features = [testing_features; {features(testing_indices, :)}];
    testing_labels = [testing_labels; {labels(testing_indices)}];
end

% Define the GRU network architecture
layers = [ ...
    sequenceInputLayer(size(training_features{1}, 2))
    gruLayer(100, 'OutputMode', 'last')
    fullyConnectedLayer(2)
    softmaxLayer
    classificationLayer];

% Define the training options
options = trainingOptions('adam', ...
    'MaxEpochs', 100, ...
    'MiniBatchSize', 150, ...
    'InitialLearnRate', 0.01, ...
    'GradientThreshold', 1, ...
    'ExecutionEnvironment', 'auto', ...
    'Plots', 'training-progress', ...
    'Verbose', false);

% Train the GRU network
net = trainNetwork(training_features, training_labels, layers, options);
In my code, training_features and training_labels are N-by-1 cell arrays. Each cell in training_features contains a matrix of size M-by-8, where M is the number of time steps and 8 is the number of features. Each cell in training_labels contains a categorical vector of length M, where M is the same as in the corresponding cell of training_features.
Despite this, I'm still getting the error. Any help would be greatly appreciated!
I’m working on a project involving time series prediction using a Gated Recurrent Unit (GRU) model. The goal is to predict whether a stock will be a ‘winning’ or ‘losing’ stock based on historical data.
However, I’m encountering an error when trying to train the network using the trainNetwork function. The error message is:
Error using trainNetwork
Invalid training data. Predictors must be a N-by-1 cell array of sequences, where N is the number of sequences. All sequences must have the same feature dimension and at least one time step.
Here’s the relevant portion of my code:
% Initialize cell arrays to store training and testing features and labels
training_features = {};
training_labels = {};
testing_features = {};
testing_labels = {};
% Process each CSV file in the combined_stocks table
for f = 1:height(combined_stocks)
% Load data from CSV file using readtable
data = readtable(fullfile(directory_path, combined_stocks.Stock(f)), ‘VariableNamingRule’, ‘preserve’);
% Selecting the required features
features = data{:, {‘open’, ‘high’, ‘low’, ‘close’, ‘MA’, ‘MA_1’}};
% Normalizing the close prices
normalized_close = (data.close – min(data.close)) / (max(data.close) – min(data.close));
% Calculating logarithmic close prices
log_close = log(data.close);
% Combine all features into a matrix
features = [features, normalized_close, log_close];
% Determine the label for the current stock
if combined_stocks.Maximum_Profit(f) > 0
labels = ones(height(data), 1); % Winning class
else
labels = zeros(height(data), 1); % Losing class
end
% Convert labels to categorical
labels = categorical(labels);
% Generate a random permutation of indices
indices = randperm(size(features, 1));
% Split the data into 70% training and 30% testing
split_point = round(size(features, 1) * 0.7);
training_indices = indices(1:split_point);
testing_indices = indices(split_point+1:end);
% Append the training and testing features and labels to the respective arrays
training_features = [training_features; {features(training_indices, :)}];
training_labels = [training_labels; {labels(training_indices)}];
testing_features = [testing_features; {features(testing_indices, :)}];
testing_labels = [testing_labels; {labels(testing_indices)}];
end
% Define the GRU network architecture
layers = [ …
sequenceInputLayer(size(training_features{1}, 2))
gruLayer(100,’OutputMode’,’last’)
fullyConnectedLayer(2)
softmaxLayer
classificationLayer];
% Define the training options
options = trainingOptions(‘adam’, …
‘MaxEpochs’,100, …
‘MiniBatchSize’, 150, …
‘InitialLearnRate’, 0.01, …
‘GradientThreshold’, 1, …
‘ExecutionEnvironment’,’auto’,…
‘plots’,’training-progress’, …
‘Verbose’,false);
% Train the GRU network
net = trainNetwork(training_features, training_labels, layers, options);
In my code, training_features and training_labels are N-by-1 cell arrays. Each cell in training_features contains a matrix of size M-by-8, where M is the number of time steps and 8 is the number of features. Each cell in training_labels contains a categorical vector of length M, where M is the same as in the corresponding cell of training_features.
Despite this, I’m still getting the error. Any help would be greatly appreciated! Hello MATLAB friends,
I’m working on a project involving time series prediction using a Gated Recurrent Unit (GRU) model. The goal is to predict whether a stock will be a ‘winning’ or ‘losing’ stock based on historical data.
However, I’m encountering an error when trying to train the network using the trainNetwork function. The error message is:
Error using trainNetwork
Invalid training data. Predictors must be a N-by-1 cell array of sequences, where N is the number of sequences. All sequences must have the same feature dimension and at least one time step.
Here’s the relevant portion of my code:
% Initialize cell arrays to store training and testing features and labels
training_features = {};
training_labels = {};
testing_features = {};
testing_labels = {};
% Process each CSV file in the combined_stocks table
for f = 1:height(combined_stocks)
% Load data from CSV file using readtable
data = readtable(fullfile(directory_path, combined_stocks.Stock(f)), 'VariableNamingRule', 'preserve');
% Selecting the required features
features = data{:, {'open', 'high', 'low', 'close', 'MA', 'MA_1'}};
% Normalizing the close prices
normalized_close = (data.close - min(data.close)) / (max(data.close) - min(data.close));
% Calculating logarithmic close prices
log_close = log(data.close);
% Combine all features into a matrix
features = [features, normalized_close, log_close];
% Determine the label for the current stock
if combined_stocks.Maximum_Profit(f) > 0
labels = ones(height(data), 1); % Winning class
else
labels = zeros(height(data), 1); % Losing class
end
% Convert labels to categorical
labels = categorical(labels);
% Generate a random permutation of indices
indices = randperm(size(features, 1));
% Split the data into 70% training and 30% testing
split_point = round(size(features, 1) * 0.7);
training_indices = indices(1:split_point);
testing_indices = indices(split_point+1:end);
% Append the training and testing features and labels to the respective arrays
training_features = [training_features; {features(training_indices, :)}];
training_labels = [training_labels; {labels(training_indices)}];
testing_features = [testing_features; {features(testing_indices, :)}];
testing_labels = [testing_labels; {labels(testing_indices)}];
end
% Define the GRU network architecture
layers = [ ...
sequenceInputLayer(size(training_features{1}, 2))
gruLayer(100,'OutputMode','last')
fullyConnectedLayer(2)
softmaxLayer
classificationLayer];
% Define the training options
options = trainingOptions('adam', ...
'MaxEpochs',100, ...
'MiniBatchSize', 150, ...
'InitialLearnRate', 0.01, ...
'GradientThreshold', 1, ...
'ExecutionEnvironment','auto',...
'Plots','training-progress', ...
'Verbose',false);
% Train the GRU network
net = trainNetwork(training_features, training_labels, layers, options);
In my code, training_features and training_labels are N-by-1 cell arrays. Each cell in training_features contains a matrix of size M-by-8, where M is the number of time steps and 8 is the number of features. Each cell in training_labels contains a categorical vector of length M, where M is the same as in the corresponding cell of training_features.
Despite this, I'm still getting the error. Any help would be greatly appreciated!
matlab, deep learning toolbox, gru, trainnetwork MATLAB Answers — New Questions
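For reference, with gruLayer(...,'OutputMode','last') the network does sequence-to-label classification, so trainNetwork expects the responses as a plain N-by-1 categorical array (one label per sequence), not a cell array of per-time-step label vectors, and each predictor cell oriented numFeatures-by-numTimeSteps (features in rows). Note also that randperm over the rows destroys each stock's temporal order, so splitting whole stocks into train/test sets is usually preferable to splitting rows. A minimal sketch with toy data (not your CSVs) in the expected shapes:

```matlab
% Sequence-to-label classification: N-by-1 cell of feature-by-time
% matrices as predictors, one categorical label per sequence as response.
numObs = 20; numFeatures = 8;
X = cell(numObs, 1);
for i = 1:numObs
    M = randi([30 60]);              % time steps for this sequence
    X{i} = rand(numFeatures, M);     % features-by-time, NOT time-by-features
end
Y = categorical(randi([0 1], numObs, 1));   % one label per sequence

layers = [
    sequenceInputLayer(numFeatures)
    gruLayer(100, 'OutputMode', 'last')
    fullyConnectedLayer(2)
    softmaxLayer
    classificationLayer];
opts = trainingOptions('adam', 'MaxEpochs', 3, 'Verbose', false);
net  = trainNetwork(X, Y, layers, opts);
```

In your loop that would mean transposing each stored matrix ({features(training_indices, :)'}) and appending a single categorical label per stock instead of a label vector.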
left and right sides have a different number of elements.
Hi, I have this problem:
i is a logical 5995×1 vector; price and bubu have equal sizes.
load('matlab_3Variable.mat');
price(i) = bubu;
MATLAB Answers — New Questions
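For reference, logical indexing on the left-hand side assigns one element of the right-hand side per true value in the mask, so bubu must have nnz(i) elements (or be a scalar), not numel(price). A small sketch of the usual fix when bubu is the same size as price:

```matlab
% price(i) = bubu needs numel(bubu) == nnz(i); when bubu matches price in
% size, index the right-hand side with the same mask.
price  = zeros(10, 1);
i      = true(10, 1);
i(3:4) = false;          % 8 true positions -> 8 elements assigned
bubu   = (1:10)';        % same size as price

price(i) = bubu(i);      % OK: 8 elements on both sides
% price(i) = bubu;       % error: 10 elements on the right, 8 slots on the left
```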
How do I export a MATLAB app?
I'll be brief.
How do I export a MATLAB app so another person can edit it?
I've tried, but my colleague only gets a .mlapp.zip file, and once he extracts it he is not able to even open the GUI.
Thanks in advance.
matlab gui, app designer, export MATLAB Answers — New Questions
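For what it's worth, a .mlapp file is already a single, self-contained, editable artifact: if your colleague receives the raw .mlapp (not re-zipped or renamed by a mail gateway), he can open and edit it directly in App Designer — no export step is needed. A one-liner sketch with a placeholder filename:

```matlab
% Open a shared app for editing in App Designer ('MyApp.mlapp' is a
% placeholder). If the file arrives as .mlapp.zip, rename it back to
% .mlapp rather than extracting it.
appdesigner('MyApp.mlapp')
```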
Enabling Windows Hello for Business in Hybrid
Hello, we're in a hybrid environment and we wanted to deploy Windows Hello. I have deployed Cloud Kerberos trust for this requirement. I have done all the prerequisite steps, and I'm seeing the CloudTGT in dsregcmd. However, I'm still receiving the error below. Any suggestions on where to look?
Microsoft Shifts – enable time-off requests without notifications
Hi
Is it possible to enable time-off requests but not have the notifications go to the MS team owners? I have an MS team that contains all the staff in a department; the team has 8 owners, but I want any time-off requests to go to the resource's manager in AAD, not to all the team owners. While I have a Power Automate flow to handle the approvals, it does not prevent the mass notification of the team owners. It would be great if you could either:
a. Enable time off requests without notifications – and handle approvals via power automate
or better still
b. Enable time off requests and select whether notifications go to the team owners, the user's manager, or both
If this is possible already can someone let me know how?
Microsoft Forms “enter a date” option
Hello,
Forms has the function of adding an 'enter a date' option to a question. I would like the respondent to be able to enter two dates. However, it is not possible for me to assign the 'enter a date' option twice to one question; a new question is generated instead.
Does anyone know how to solve this?
Many thanks
Plotting Eddy Kinetic Energy
Hi, I have a problem plotting eddy kinetic energy (EKE).
I have sea surface height data (variable name: adt) in a matrix [n m k], where n is latitude, m is longitude, and k is time.
I want to plot EKE with the equation below.
Here is my initial code:
regionName = 'SouthIndian' % change as you need to ..
eval(['load ' regionName]); % to load the data
[n, m, k] = size(b.adt); % n = lat, m = lon, k = time
adt_mean = squeeze(mean(shiftdim(b.adt,2)));
g=(9.8).^2;
f=2.*(coriolisf(b.lat)).^2;
dhx=(diff(adt_mean)./diff(b.lat)).^2;
dhy=(diff(shiftdim(adt_mean,1))./diff(b.lon)).^2;
eke=g/f*(dhx+dhy);
But it doesn't work. Please help, or any suggestions would be appreciated.
ssh, derivative, spatial MATLAB Answers — New Questions
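Part of the trouble is that diff shortens each dimension by one (so dhx and dhy end up with different sizes and cannot be added) and that the grid spacing is in degrees, not metres. A hedged sketch for one SSH-anomaly snapshot eta (lat-by-lon), using gradient so the output sizes match; coriolisf is assumed from your code, the degree-to-metre conversion is approximate, and the geostrophic factor (g/f)^2 is applied once rather than squaring g and f separately:

```matlab
% Geostrophic EKE = 0.5*(g/f)^2 * ((d eta/dx)^2 + (d eta/dy)^2)
% for one SSH-anomaly snapshot eta, size n_lat-by-n_lon.
g  = 9.81;                                          % m/s^2
Re = 6371e3;                                        % Earth radius, m
dy = deg2rad(mean(diff(lat))) * Re;                 % metres per lat step
dx = deg2rad(mean(diff(lon))) * Re .* cosd(lat(:)); % metres per lon step, per row

[deta_dx, deta_dy] = gradient(eta);   % same size as eta, unlike diff
deta_dx = deta_dx ./ dx;              % one dx per latitude row (broadcast)
deta_dy = deta_dy ./ dy;

f   = coriolisf(lat(:));              % 1/s, column vector
eke = 0.5 * (g ./ f).^2 .* (deta_dx.^2 + deta_dy.^2);

pcolor(lon, lat, eke); shading interp; colorbar
```

For a time-mean EKE you would compute the anomaly eta = adt - mean(adt,3) at each time step, apply the same gradient step per snapshot, and average the results.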
How to interpolate data as stairs
I have a struct (let's call it Turbine State) with both time and value data. The issue is there aren't enough data points captured within the range I want, so MATLAB interpolates the missing data linearly between points.
i.e. shortly before 14:10 the 'state' is 12, and it should remain 12 until it isn't anymore: the state should stay 12 until just after 14:30, when it drops to -1. How do I make this a 'stairs' representation and not a linear interpolation?
Thanks :)
matlab, plot, plotting, interpolation MATLAB Answers — New Questions
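Two stock tools give the hold-last-value behaviour directly: interp1 with the 'previous' method for resampling, and stairs for plotting. A small sketch with made-up times and states:

```matlab
% Hold each state until the next recorded change, instead of ramping
% linearly between samples (times and values below are made up).
t  = datetime(2024,6,1,14,[0 10 30 31],0)';   % sample times
v  = [12 12 12 -1]';                          % turbine state at each time
tq = (t(1):minutes(1):t(end))';               % dense query grid

vq = interp1(t, v, tq, 'previous');           % zero-order hold, no ramps
stairs(tq, vq)                                % or simply: stairs(t, v)
```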
How to block a drawnow from executing callbacks?
Mathworks has recklessly put a drawnow inside of its asynciolib for hardware reads and writes:
toolbox\shared\asynciolib\+matlabshared\+asyncio\+internal\Stream.m, line 184: drawnow('limitrate');
This will then execute callbacks in the middle of data transfer! I could remove those lines of code for every Matlab install, but if an update restores the code then I have a dangerous bug. Is there a way I can temporarily pause the event queue from executing callbacks?
The following function CallbackTest has a 10 second main loop which should not be interrupted with the press of a button. However if TCPIP commands are executed in the loop then a hidden drawnow is invoked and the button callback will be processed.
function CallbackTest(Do_TCPIP)
% Test to see if TCPIP reads evaluate callbacks.
% There is a 10s main loop which the button callback should not interrupt.
% The main loop will not be interrupted when Do_TCPIP is 0.
% However when Do_TCPIP is 1 then it will be interrupted.
%Initialize figure and button
hFigure=uifigure();
uibutton(hFigure,ButtonPushedFcn=@ACallBack);
%Initialize TCPIP
echotcpip("on",4000);
t = tcpclient("localhost",4000);
configureTerminator(t,"CR");
waitfor(hFigure,'FigureViewReady')
%10 second main loop which should not be interrupted because there is no drawnow, figure, pause, or waitfor command
n=0;
while n < 20
if Do_TCPIP
writeline(t,"loop");
readline(t);
end
n=n+1;
java.lang.Thread.sleep(500);
end
% Wrap up
echotcpip("off");
delete(hFigure)
fprintf('Done!\n');
end
function ACallBack(~,~)
fprintf('Main loop interrupted.\n');
end
drawnow, tcpip, hardware, callbacks MATLAB Answers — New Questions
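One workaround that does not involve editing shipped toolbox files, sketched below: detach (and grey out) the button's callback for the duration of the transfer, so a hidden drawnow has nothing to execute. Presses made while the control is disabled are simply discarded, which may or may not be acceptable for your UI:

```matlab
% Temporarily neutralize a button so hidden drawnow calls cannot fire its
% callback mid-transfer; onCleanup restores it even if the loop errors.
btn = uibutton(uifigure(), 'ButtonPushedFcn', @(~,~) fprintf('pressed\n'));

savedCb = btn.ButtonPushedFcn;   % stash the real callback
btn.ButtonPushedFcn = '';        % nothing for drawnow to execute
btn.Enable = 'off';              % grey out so users see it is busy
cleanup = onCleanup(@() set(btn, 'ButtonPushedFcn', savedCb, 'Enable', 'on'));

% ... run the TCP/IP loop here; any hidden drawnow fires no callback ...

clear cleanup                    % restore the callback and Enable state
```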
How can I batch Convert WEBP to JPG on my Windows 11?
I’m looking for a way to batch convert WEBP images to JPG format on my Windows 11 PC. I have a large number of WEBP files that I need to convert into JPGs for easier compatibility with various applications and services that don’t support the WEBP format. I’m seeking a solution that can handle the conversion of multiple files at once to save time, rather than having to manually convert each image one by one. Hopefully there is no loss in image quality! Any recommendations would be greatly appreciated.
My device: HP laptop, Windows 11 Home.
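If you're comfortable with a command line, one common approach is ImageMagick, runnable on Windows 11 from Git Bash or WSL (the loop below assumes ImageMagick 7's magick command is installed; filenames are placeholders). Strictly speaking JPEG is lossy, so "no loss" isn't possible, but a high -quality setting keeps the difference invisible in practice:

```shell
#!/usr/bin/env bash
# Batch-convert every .webp in the current folder to .jpg with ImageMagick.
shopt -s nullglob          # make the loop a no-op when no .webp files exist

to_jpg_name() {            # photo.webp -> photo.jpg
    printf '%s.jpg' "${1%.webp}"
}

for f in *.webp; do
    magick "$f" -quality 95 "$(to_jpg_name "$f")"
done
```

Bulk-selecting the files in Explorer and using Paint's "Save as" works too, but only one image at a time, which is exactly what the loop avoids.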
LTSC Outlook Mac not having New Outlook
Hi All,
We are a hybrid Exchange environment, and one region's mailboxes are not migrated to the cloud. When we deployed the Microsoft 365 version of Office to these users, while configuring Outlook, for some strange reason they were pointed to Office 365 instead of Microsoft Exchange. Upon investigation we finally decided to deploy the LTSC version of Office, after which they were pointed to Exchange, and they have been using this version since.
The issue now is that in this Office version the users are not able to switch to the new version of Outlook on Mac. When I was googling I found that we can enable users to switch by setting preferences, as seen here:
https://learn.microsoft.com/en-us/deployoffice/mac/preferences-outlook#enable-new-outlook
The issue is, where do we run this command and how do we set this?
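The defaults command on that page is run in the macOS Terminal on each user's Mac, as the signed-in user (or pushed centrally via an MDM configuration profile). A sketch of the per-user route, using the EnableNewOutlook preference described on the linked page; verify the exact key name and value there before rolling it out:

```shell
# Run in Terminal as the signed-in user, then quit and restart Outlook.
# Preference name/value per the linked Microsoft doc; verify there first.
defaults write com.microsoft.Outlook EnableNewOutlook -int 1
```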
Unable to edit a people column in sharepoint lists
Hello,
I have a SharePoint list with 3 people/group columns. For each column I need to restrict the people that can be selected for this item. When I allow the column content to be any user I am able to fill out the list just fine but when I set the list of users to a specific list I am unable to add anyone to these columns or edit people in these columns. This issue has been persistent with both the default form for the list and a custom form for the list.
When I have the list settings as in the below image, I am unable to fill out the aforementioned columns. Any assistance would be greatly appreciated in this matter.
How to save Azure Data Factory work (objects)?
Hi,
I’m new to Azure Data Factory (ADF). I need to learn it in order to ingest external third-party data into our domain. I shall be using ADF Pipelines to retrieve the data and then load it into an Azure SQL Server database.
I currently develop Power BI reports and write SQL scripts to feed the Power BI reporting. These reports and scripts are saved in a backed-up drive – so if anything disappears, I can always use the back-ups to install the work.
The target SQL database scripts, the tables the ADF Pipelines will load to, will be backed-up following the same method.
How do I save the ADF Pipelines work and any other ADF objects that I may create (I don’t know what exactly will be created as I’m yet to develop anything in ADF)?
I've read about the CI/CD process, but I don't think it's applicable to me. We are not using multiple environments (i.e. Dev, Test, UAT, Prod); I am using a Production environment only. Each data source that needs to be imported will have its own pipeline, so breaking one pipeline should not affect the others, which is why I feel a single environment suffices. I am the only developer working within ADF, so I have no need to collaborate with peers or promote joint coding ventures.
Does ADF back up its data factories by default? If so, can I trust that, should our instance of ADF be deleted, I can retrieve the backed-up version to reinstall or roll back?
Is there a process/software which saves the ADF objects so I can reinstall them if I need to (by the way, I'm not sure how to reinstall them, so I'll have to learn that)?
Thanks.
Sync mail attribute from Entra ID to local Active Directory
Hello,
First question here and can’t seem to find the answer anywhere.
I have an existing sync with Entra Connect/Azure AD Connect; however, for local LDAP purposes I need to have the "mail" attribute in local Active Directory populated with the value of the user's email address in Entra ID. Is there any way that I can modify the connector so Entra ID syncs this value to local Active Directory?
Thanks in advance,
Kind regards,
Maik Brugman
windows ca sign certificate for linux
Hello, could you explain to me how to generate a certificate and then sign it via a Windows CA?
I've tried many times using OpenSSL, and then signing using the website:
MyCaAddress/certsrv
Can someone explain to me, step by step, how I can do it?
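At a high level the flow is: generate a private key and CSR with OpenSSL on the Linux box, paste the CSR into the CA's web enrollment page (MyCaAddress/certsrv → "Request a certificate" → "advanced certificate request"), download the issued certificate, and install it alongside the key. A hedged sketch of the OpenSSL side; the hostname is a placeholder for your server's FQDN:

```shell
# Generate a 2048-bit RSA key and a certificate signing request (CSR);
# replace the CN (and add SANs if needed) for your actual server.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout server.key -out server.csr \
    -subj "/CN=myhost.example.com"

openssl req -in server.csr -noout -verify   # sanity-check the CSR
```

The contents of server.csr (including the BEGIN/END lines) is what you paste into the certsrv request form; keep server.key private on the Linux machine.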
P-H diagram in Axes tool of App designer
Hi,
I want to reproduce the p-h diagram in App designer usin the ‘Axes’ component during the simulation of my simulink refrigeration model. My model has sensors elements from Simscape which provide the respective outputs of pressure and enthalpy necessary for the p-h diagram. However, I do not know how the code should be in UIAxes callback in order to take the live data from the sensors and then plot it in the ‘Axes’ component.
does anybody know how to do it?
Thanks in advance for your comments!
app designer, axes tool, p-h diagram MATLAB Answers — New Questions
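One approach, sketched below with placeholder model and block names, is to start the simulation programmatically, grab the sensor blocks' runtime objects, and append points to the app's UIAxes from a timer. Runtime objects are only available while the model is running, so treat this as a starting point rather than a drop-in solution:

```matlab
% Stream live sensor data onto app.UIAxes while the model runs.
% 'refrig_model' and the block paths below are placeholders.
mdl = 'refrig_model';
set_param(mdl, 'SimulationCommand', 'start');

rtoP = get_param([mdl '/Pressure Sensor'], 'RuntimeObject');
rtoH = get_param([mdl '/Enthalpy Sensor'], 'RuntimeObject');

trace = animatedline(app.UIAxes);     % p-h trajectory so far
tmr = timer('Period', 0.2, 'ExecutionMode', 'fixedRate', 'TimerFcn', ...
    @(~,~) addpoints(trace, rtoH.OutputPort(1).Data, ...
                            rtoP.OutputPort(1).Data));
start(tmr)   % call stop(tmr); delete(tmr) when the simulation ends
```

This would typically live in a "Start" button callback rather than a UIAxes callback, with the timer handle stored as an app property so it can be stopped cleanly.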