Isolated Databricks cluster call from Synapse or Azure Data Factory
How can I create a job in Databricks from Azure Synapse or Azure Data Factory with the cluster access mode set to isolated? I cannot find any option that allows me to pass this value as a parameter, and without it I have no access to my Unity Catalog in Databricks.
Example:

{
  "num_workers": 1,
  "cluster_name": "…",
  "spark_version": "14.0.x-scala2.12",
  "spark_conf": {
    "spark.hadoop.fs.azure.account.oauth2.client.endpoint": "…",
    "spark.hadoop.fs.azure.account.auth.type": "…",
    "spark.hadoop.fs.azure.account.oauth.provider.type": "…",
    "spark.hadoop.fs.azure.account.oauth2.client.id": "…",
    "spark.hadoop.fs.azure.account.oauth2.client.secret": "…"
  },
  "node_type_id": "…",
  "driver_node_type_id": "…",
  "ssh_public_keys": [],
  "spark_env_vars": {
    "cluster_type": "all-purpose"
  },
  "init_scripts": [],
  "enable_local_disk_encryption": false,
  "data_security_mode": "USER_ISOLATION",
  "cluster_id": "…"
}
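For reference, the Databricks Jobs 2.1 REST API does accept data_security_mode inside a new_cluster spec, so one possible workaround is to trigger the run from an ADF/Synapse Web activity instead of the native Databricks activity. Below is a minimal Python sketch of that call; the workspace URL, token, notebook path, and node type are placeholders I made up, not values from my setup.

import requests

DATABRICKS_HOST = "https://adb-xxxx.azuredatabricks.net"  # placeholder workspace URL
TOKEN = "<personal-access-token>"                          # placeholder PAT

payload = {
    "run_name": "adf-triggered-run",
    "tasks": [
        {
            "task_key": "main",
            "notebook_task": {"notebook_path": "/Shared/my_notebook"},  # hypothetical path
            "new_cluster": {
                "num_workers": 1,
                "spark_version": "14.0.x-scala2.12",
                "node_type_id": "Standard_DS3_v2",       # hypothetical node type
                "data_security_mode": "USER_ISOLATION",  # the setting ADF does not expose
            },
        }
    ],
}

# Submit a one-time run; the response JSON contains the run_id.
resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.1/jobs/runs/submit",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())

An ADF Web activity could send the same JSON body, but I would prefer a native option in the Databricks linked service if one exists.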