RAISE Summit Paris 2024 PRO Ticket
Hello everyone!
I just won a PRO ticket for the Paris RAISE Summit 2024.
The ticket price on the official website is €799, and I want to sell it for half price.
Can you please help me with where I can sell it, or maybe someone from here wants to buy it?
Announcing Azure Health Data Services DICOM service with Data Lake Storage
We are thrilled to announce the general availability of the Azure Health Data Services DICOM service with Data Lake Storage, a solution that enables teams to store, manage, and access their medical imaging data in the cloud. Whether you’re involved in clinical operations, research endeavors, AI/ML model development, or any other facet of healthcare that involves medical imaging, the DICOM service can expand the possibilities of your imaging data and enable new workflows.
The DICOM service is available for teams to start using today with production imaging data. To get started, visit the Azure Health Data Services docs and follow the steps to Deploy the DICOM service with Data Lake Storage.
Who Can Benefit?
The DICOM service with Data Lake Storage is designed for any team that requires a robust and scalable cloud storage solution for their medical imaging data. Whether you’re a healthcare institution migrating clinical and research data to the cloud, a development team in need of a scalable storage platform for imaging data, or an organization seeking to operationalize imaging data in AI/ML model development or secondary use scenarios, our DICOM service with Data Lake Storage is here to empower your endeavors.
Benefits of Azure Data Lake Storage
By integrating with Azure Data Lake Storage (ADLS Gen2), our DICOM service offers a myriad of benefits to healthcare teams:
Scalable Storage: Enjoy performant, massively scalable storage capabilities that can effortlessly accommodate your growing imaging data assets.
Data Governance: Take full control of your imaging data assets. Manage storage permissions, access controls, data replication strategies, backups, and more, ensuring compliance with global privacy standards.
Direct Data Access: Seamlessly access your DICOM data through Azure Storage APIs, enabling efficient retrieval and manipulation of your valuable medical imaging assets. The DICOM service continues to provide DICOMweb APIs for storing, querying for, and retrieving imaging data.
Ecosystem Integration: Leverage the entire ecosystem of tools surrounding ADLS, including AzCopy, Azure Storage Explorer, and Azure Storage Data Movement library, to help streamline your workflows and enhance productivity.
Unlock New Possibilities: Unlock new analytics and AI/ML scenarios by integrating with services like Azure Synapse, Azure Databricks, Azure Machine Learning, and Microsoft Fabric, enabling you to extract deeper insights and drive innovation in healthcare.
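The DICOMweb APIs mentioned above are plain HTTP, so queries can be composed with standard tooling. As a minimal sketch (the service URL and PatientID below are hypothetical placeholders, not real endpoints), here is how a QIDO-RS study search URL might be built:

```python
from urllib.parse import urlencode

def qido_study_search_url(service_url: str, **filters: str) -> str:
    """Compose a DICOMweb QIDO-RS URL that searches for studies
    matching the given attribute filters (e.g. PatientID)."""
    query = urlencode(filters)
    return f"{service_url.rstrip('/')}/studies?{query}"

# Hypothetical DICOM service endpoint -- replace with your own.
url = qido_study_search_url(
    "https://example-workspace-dicom.dicom.azurehealthcareapis.com/v1",
    PatientID="12345",
)
print(url)
```

The resulting URL can then be sent with any HTTP client, authenticated with an Azure access token.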
Integration with Microsoft Fabric
As called out above, a key benefit of Azure Data Lake Storage is that it connects to Microsoft Fabric. Microsoft Fabric is an end-to-end, unified analytics platform that brings together all the data and analytics tools that organizations need to unlock the potential of their data and lay the foundation for AI scenarios. By using Microsoft Fabric, you can use the rich ecosystem of Azure services to perform advanced analytics and AI/ML with medical imaging data, such as building and deploying machine learning models, creating cohorts for clinical trials, and generating insights for patient care and outcomes.
Get Started Today
The DICOM service with Data Lake Storage is available for teams to start using today with production imaging data – and customers can expect to receive the same level of support and adherence consistent with the healthcare privacy standards that Azure Health Data Services is known for. Whether you’re looking to enhance clinical operations, drive research breakthroughs, or unlock new AI-driven insights, the power of Azure Health Data Services can help you to achieve your goals.
To learn more about analytics with imaging data, see Get started using DICOM data in analytics workloads.
Pricing
With Azure Health Data Services, customers pay only for what they use. DICOM service customers incur storage costs for storage of the DICOM data and metadata used to operate the DICOM service as well as charges for API requests. The data lake storage model shifts most of the storage costs from Azure Health Data Services to Azure Data Lake Storage (where the .dcm files are stored).
For detailed pricing information, see Pricing – Azure Health Data Services and Azure Storage Data Lake Gen2 Pricing.
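As a rough illustration of how the cost split works under the data lake model, most of the per-GB charge lands on the Data Lake account (where the .dcm files live) rather than on the DICOM service. The rates below are invented for the example; consult the pricing pages linked above for real numbers:

```python
# Hypothetical rates for illustration only -- see the Azure pricing pages.
ADLS_PER_GB_MONTH = 0.02            # data lake storage of .dcm files
METADATA_PER_GB_MONTH = 0.10        # service-side metadata (small fraction of the data)
PER_10K_API_REQUESTS = 0.05

def monthly_cost(dcm_gb: float, metadata_gb: float, api_requests: int) -> float:
    """Estimate a monthly bill: storage split between ADLS and the
    DICOM service, plus a per-request API charge."""
    storage = dcm_gb * ADLS_PER_GB_MONTH + metadata_gb * METADATA_PER_GB_MONTH
    requests = api_requests / 10_000 * PER_10K_API_REQUESTS
    return round(storage + requests, 2)

# 1 TB of images, ~5 GB of metadata, 2 million API calls
print(monthly_cost(1024, 5, 2_000_000))
```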
Microsoft Tech Community – Latest Blogs
Simplifying Azure Kubernetes Service Authentication Part 3
Welcome to the third installment of this series on simplifying Azure Kubernetes Service authentication. You can find the previous post here: Part 2. In this post we'll continue from where we left off: set up cert-manager, create a CA issuer, update our ingress routes, register our app, and create the secrets and cookie used for authentication. For some of the steps you can also refer to the official documentation: TLS with an ingress controller.
Install cert-manager for Let's Encrypt
In the previous post we uploaded the cert-manager images to our ACR. Now let's install cert-manager by running the following:
# Set variable for ACR location to use for pulling images
$AcrUrl = (Get-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $RegistryName).LoginServer
# Label the ingress-basic namespace to disable resource validation
kubectl label namespace ingress-basic cert-manager.io/disable-validation=true
# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io
# Update your local Helm chart repository cache
helm repo update
# Install the cert-manager Helm chart
helm install cert-manager jetstack/cert-manager --namespace ingress-basic --version $CertManagerTag --set installCRDs=true --set nodeSelector."kubernetes.io/os"=linux --set image.repository="${AcrUrl}/${CertManagerImageController}" --set image.tag=$CertManagerTag --set webhook.image.repository="${AcrUrl}/${CertManagerImageWebhook}" --set webhook.image.tag=$CertManagerTag --set cainjector.image.repository="${AcrUrl}/${CertManagerImageCaInjector}" --set cainjector.image.tag=$CertManagerTag
Helm prints a summary when the release is deployed. Verify the cert-manager pods are up by running kubectl get pods --namespace ingress-basic and confirming each cert-manager pod has a STATUS of Running.
Create a CA Issuer
A certificate authority (CA) validates the identities of entities (such as websites, email addresses, companies, or individual persons) and binds them to cryptographic keys through the issuance of digital certificates. We are using the Let's Encrypt CA. We can configure it by applying a ClusterIssuer resource, which is cluster-scoped and can issue certificates for our ingress-basic namespace. Create the following cluster-issuer.yaml file:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: MY_EMAIL_ADDRESS
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx
          podTemplate:
            spec:
              nodeSelector:
                "kubernetes.io/os": linux
Now apply this YAML file by running the following kubectl command:
kubectl apply -f cluster-issuer.yaml --namespace ingress-basic
Update your ingress route
In the previous part of this series we created an FQDN, which lets us reach our apps in the browser via a URL. We now need to update our ingress routes to request a TLS certificate for that host. Update hello-world-ingress.yaml as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - hello-world-ingress.MY_CUSTOM_DOMAIN
    secretName: tls-secret
  rules:
  - host: hello-world-ingress.MY_CUSTOM_DOMAIN
    http:
      paths:
      - path: /hello-world-one(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
      - path: /hello-world-two(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-two
            port:
              number: 80
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress-static
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /static/$2
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - hello-world-ingress.MY_CUSTOM_DOMAIN
    secretName: tls-secret
  rules:
  - host: hello-world-ingress.MY_CUSTOM_DOMAIN
    http:
      paths:
      - path: /static(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
Then apply the update:
kubectl apply -f hello-world-ingress.yaml --namespace ingress-basic
kubectl reports the ingress resources as created or configured, and cert-manager then requests a certificate. Verify it was issued by running kubectl get certificate --namespace ingress-basic and confirming the READY column shows True.
Register your app in Entra ID and create a client secret
Azure Active Directory (AAD) is now called Microsoft Entra ID. An app registered in Entra ID can interact with Azure services and authenticate users, and we can create a client secret for it to use during authentication. Perform the following actions to register an app and create a client secret:
In the Azure portal, search for Microsoft Entra ID
Click App registrations in the left-side navigation
Click the New registration button
Add a name and enter your redirect URL (Web): https://FQDN/oauth2/callback
Click Register and take note of your Application (client) ID
Click Certificates & secrets, click New client secret, and take note of the secret Value
Create a cookie secret and set Kubernetes secrets
Now store the client ID, client secret, and a cookie secret as Kubernetes secrets. Remember this series is for educational purposes and may not meet all security requirements; if you need to keep your secrets in a more secure location, refer to the Key Vault documentation. Run the following commands in PowerShell:
$cookie_secret="$(openssl rand -hex 16)"
# or with python
python -c 'import os,base64; print(base64.urlsafe_b64encode(os.urandom(32)).decode())'
kubectl create secret generic client-id --from-literal=oauth2_proxy_client_id=<APPID> -n ingress-basic
kubectl create secret generic client-secret --from-literal=oauth2_proxy_client_secret=<SECRETVALUE> -n ingress-basic
kubectl create secret generic cookie-secret --from-literal=oauth2_proxy_cookie_secret=<COOKIESECRET> -n ingress-basic
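oauth2-proxy requires the cookie secret to decode to 16, 24, or 32 bytes (valid AES key sizes), which is why the one-liners above generate exactly 32 random bytes. A small sketch mirroring the python one-liner, with the length check made explicit (the helper name is ours, not part of any tool):

```python
import os
import base64

def make_cookie_secret(num_bytes: int = 32) -> str:
    """Generate a URL-safe base64 cookie secret whose decoded
    length is a valid AES key size (16, 24, or 32 bytes)."""
    if num_bytes not in (16, 24, 32):
        raise ValueError("cookie secret must decode to 16, 24, or 32 bytes")
    return base64.urlsafe_b64encode(os.urandom(num_bytes)).decode()

secret = make_cookie_secret()
# Sanity-check: the decoded secret is 32 bytes, as oauth2-proxy expects.
assert len(base64.urlsafe_b64decode(secret)) == 32
print(secret)
```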
Create a Redis Password
Entra ID issues large tokens during OAuth2 authentication, which produce session cookies too large to store comfortably in the browser, so it is recommended to set up Redis as a session store. For now we will create a Redis password and store it as a Kubernetes secret; in the next post we will install and configure Redis. Run the following commands in PowerShell:
$REDIS_PASSWORD="<YOUR_PASSWORD>"
kubectl create secret generic redis-password --from-literal=redis-password=$REDIS_PASSWORD -n ingress-basic
This ends the third post in our series. Look out for the fourth and final post.