Month: September 2024
IMAGE function in Excel not available
I can no longer see or use the IMAGE function in Excel. Does anyone know whether it was deprecated, or whether there is an alternative that works?
Harnessing Generative AI with Weaviate on Azure Kubernetes Service and Azure NetApp Files
Table of Contents
Approximate Nearest Neighbor (ANN) Benchmarks
ANN Benchmarks – Glove 100 Angular
ANN Benchmarks – Sift 128 Euclidean
Introduction
In this era of generative AI, the ability to process and analyze large datasets with precision and speed is not just advantageous—it’s essential. Vector databases, such as Weaviate, play a pivotal role in the infrastructure that powers generative AI applications, from natural language processing to image generation. These databases efficiently handle the similarity search operations at the core of generative models, enabling them to parse vast datasets and identify patterns that drive the creation of new, synthetic content.
By leveraging Azure Kubernetes Service (AKS) and using high-performance Azure NetApp Files (ANF) as the back-end storage, deploying Weaviate creates a scalable foundation that effectively meets the demanding requirements of generative AI models. This blog post guides you through setting up Weaviate on AKS, backed by the robust storage solution of Azure NetApp Files. We then benchmark our setup with ANN-Benchmarks—the established framework for testing approximate nearest neighbor search algorithms with vector databases—to quantitatively measure Weaviate’s performance in a controlled environment.
Follow along as we streamline the deployment process and benchmarking steps, providing a clear view of Weaviate’s performance in a cloud environment. By the end of our journey, you’ll have a comprehensive understanding of how to deploy a scalable vector search solution and what to expect from its performance on Azure’s robust infrastructure.
Co-authors: Michael Haigh, Senior Technical Marketing Engineer, Kyle Radder, Technical Marketing Engineer (NetApp)
Prerequisites
If you’ll be following along step by step, be sure to have the following resources at your disposal:
An Azure Kubernetes Service cluster with at least one 64-vCPU node (as called out in the ANN-Benchmarks readme), such as Standard_D64_v4
An Azure NetApp Files capacity pool of service level Ultra with at least 30TiB available (A 30TiB volume provides 30Gbps throughput, roughly equivalent to the expected bandwidth of the Standard_D64_v4 node.)
NetApp Astra Trident™ installed on the AKS cluster, with a back-end configuration and storage class referencing the Azure NetApp Files capacity pool
An Azure Linux VM in the same virtual network as the AKS cluster, with helm and kubectl installed and configured to access your AKS cluster
Install Weaviate
We use the Kubernetes Helm chart to install Weaviate on the AKS cluster. First, SSH to the Linux VM that’s deployed in the same virtual network as your AKS cluster, then add the Weaviate repository:
helm repo add weaviate https://weaviate.github.io/weaviate-helm
helm repo update
To view the possible configuration values for the Weaviate Helm chart, run the following command:
helm show values weaviate/weaviate
Depending on your generative AI application, you may want to configure additional Weaviate replica pods or enable local machine learning (ML) models. For our performance benchmarking, we leave all the defaults except for the following settings:
cat <<EOF > values.yaml
storage:
  size: 30Ti
service:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
grpcService:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
EOF
As mentioned in the prerequisites, a 30TiB Azure NetApp Files Ultra volume provides 30Gbps of throughput, which is roughly equivalent to the 30,000Mbps of bandwidth provided by the Standard_D64_v4 AKS node. If you’re using a smaller AKS node, you can reduce your volume size to result in an equivalent throughput (each TiB of an Ultra volume provides 1Gbps of throughput).
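As a quick sanity check, the sizing rule above can be expressed in a few lines (a back-of-envelope sketch, not part of the deployment; the function name is ours):

```python
# Rule of thumb from the text: each TiB of an Ultra volume provides ~1 Gbps of
# throughput, so size the volume to match the node's expected network bandwidth.
def ultra_volume_tib(node_bandwidth_mbps: int) -> int:
    """Whole-TiB Ultra volume size whose throughput matches the node bandwidth."""
    return node_bandwidth_mbps // 1000

print(ultra_volume_tib(30000))  # Standard_D64_v4 (~30,000 Mbps) -> 30
```

For a smaller node, plug in its bandwidth: a 16,000 Mbps node would call for a 16 TiB volume.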
The other two Helm settings are to use internal IP addresses for the HTTP and GRPC Weaviate services, so network traffic stays confined to our internal virtual network.
To deploy Weaviate with these values, run the following command:
helm install weaviate -n weaviate --create-namespace weaviate/weaviate -f values.yaml
To check on the status of the deployment, run the following command:
kubectl -n weaviate get all,pvc
It takes less than a minute to get the external IPs populated, and about 5 to 10 minutes for the volume to go into a Bound state:
$ kubectl -n weaviate get all,pvc
NAME READY STATUS RESTARTS AGE
pod/weaviate-0 1/1 Running 0 8m21s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/weaviate LoadBalancer 172.16.213.188 10.20.0.8 80:31961/TCP 8m21s
service/weaviate-grpc LoadBalancer 172.16.23.238 10.20.0.9 50051:30943/TCP 8m21s
service/weaviate-headless ClusterIP None <none> 80/TCP 8m21s
NAME READY AGE
statefulset.apps/weaviate 1/1 8m21s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/weaviate-data-weaviate-0 Bound pvc-4c354c0d-29fe-4af8-a611-35e993c9ecab 30Ti RWO azure-netapp-files-ultra 8m21s
Depending on your virtual network settings, your external IPs will probably be different, but verify that they’re RFC 1918 internal IP addresses to ensure that network traffic stays on the internal virtual network. Take note of these IPs for use in the next section. Once the volume is bound and the weaviate-0 pod is in a Running state, we’re ready to start performance testing.
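If you’d like to verify programmatically that the IPs are private, a short Python check works (illustrative only; any way of confirming the ranges is fine):

```python
import ipaddress

# The three RFC 1918 private address ranges
RFC1918 = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(ip: str) -> bool:
    """True if the address sits in a private (RFC 1918) range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in RFC1918)

print(is_rfc1918("10.20.0.8"))  # True: traffic stays on the virtual network
print(is_rfc1918("52.1.2.3"))   # False: a public Azure address
```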
Approximate Nearest Neighbor (ANN) Benchmarks
ANN-Benchmarks is a benchmarking environment for approximate nearest neighbor (ANN) algorithms. ANN algorithms find the nearest neighbors to a point in a dataset, where “approximate” means that the algorithm is allowed to return points that are close to the nearest neighbors rather than the exact ones. This trade-off enables significantly faster processing times, which is especially useful when dealing with very large datasets.
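For intuition, exact nearest-neighbor search is a brute-force scan over every vector; this is what ANN algorithms approximate to gain speed. A minimal sketch:

```python
# Exact nearest-neighbor search by brute force. ANN algorithms approximate this
# result so they don't have to compare the query against every vector.
def nearest(query, points):
    """Return the point with the smallest squared Euclidean distance to query."""
    return min(points, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, query)))

print(nearest((0.0, 0.0), [(3.0, 4.0), (1.0, 1.0), (5.0, 0.0)]))  # (1.0, 1.0)
```

Brute force is O(n) per query; on million-vector datasets that cost is what motivates approximate indexes.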
ANN Benchmarks Setup
ANN algorithms are an effective tool for testing vector databases due to their efficiency with these large datasets, which are typical in real-world applications such as recommendation systems and natural language processing. By simulating practical use cases, ANN benchmarks allow the evaluation of a vector database’s ability to balance accuracy and speed, a critical aspect of user experience. These tests also offer insights into the scalability and resource efficiency of the databases, revealing how performance evolves with growing data volumes and complexity. ANN testing can also inform about the impact of the underlying infrastructure on the database’s performance, which is vital for optimizing deployments.
The Weaviate test module in ANN-Benchmarks uses the v3 Weaviate client and an embedded Weaviate instance. Because the v3 client is deprecated, and because we’re using an external Weaviate instance (running on AKS) rather than an embedded one, the test module must be modified. A fork of the ANN-Benchmarks repository has been created with these modifications. If you’re curious about the specific changes, see this diff.
From your workstation VM, run the following commands to clone the forked ANN-Benchmarks repository and change into the created directory:
git clone https://github.com/MichaelHaigh/ann-benchmarks.git
cd ann-benchmarks
We now install Python 3.10, which is the validated Python version for ANN-Benchmarks:
sudo apt install -y software-properties-common
sudo add-apt-repository -y ppa:deadsnakes/ppa
sudo apt update
sudo apt install -y python3.10 python3.10-distutils python3.10-venv
(This example is for Ubuntu; if you’re running a different flavor of Linux, the commands will be different.)
Next, we create our Python virtual environment and install the necessary packages:
python3.10 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
pip install weaviate-client
Finally, open the weaviate module.py file with your favorite text editor:
vim ann_benchmarks/algorithms/weaviate/module.py
Take note of lines 14-21, and especially lines 15 and 18:
14    self.client = weaviate.connect_to_custom(
15        http_host="10.20.0.8",
16        http_port="80",
17        http_secure=False,
18        grpc_host="10.20.0.9",
19        grpc_port="50051",
20        grpc_secure=False,
21    )
Lines 15 and 18 must be updated with the external IPs of the weaviate and weaviate-grpc services, respectively, from the previous section. When complete, save the file and exit the text editor.
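If you’d rather script the edit than open a text editor, a small helper along these lines works (hypothetical helper; the regex assumes straight-quoted values as shown above):

```python
import re

def patch_hosts(source: str, http_ip: str, grpc_ip: str) -> str:
    """Swap the hard-coded http_host/grpc_host values for your service IPs."""
    source = re.sub(r'http_host="[^"]*"', f'http_host="{http_ip}"', source)
    return re.sub(r'grpc_host="[^"]*"', f'grpc_host="{grpc_ip}"', source)

# Sample stand-in for the relevant lines of module.py
sample = 'http_host="127.0.0.1",\ngrpc_host="127.0.0.1",'
patched = patch_hosts(sample, "10.20.0.8", "10.20.0.9")
print(patched)
```

In practice you would read module.py, apply `patch_hosts`, and write it back.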
ANN Benchmarks – Glove 100 Angular
We’re now ready to start our performance benchmarking with the following command:
python run.py --algorithm weaviate --local
(i) Note
This task will take 1 to 2 days to complete, depending on your setup; it took 30 hours with the configuration just described.
The --algorithm argument instructs ANN-Benchmarks to run the Weaviate tests, and the --local argument instructs it to run the tests “locally” rather than via the default Docker method. Because we’ve modified the test module to connect to our external Weaviate instance running on AKS, it’s not truly a “local” test.
The first action of the benchmark is to download the GloVe 100 Angular dataset. (This can be changed with the --dataset argument, as shown in the next section.) It then prints a large list showing the order of the tests that will be run:
$ python run.py --algorithm weaviate --local
downloading https://ann-benchmarks.com/glove-100-angular.hdf5 -> data/glove-100-angular.hdf5...
2024-08-19 19:52:11,777 - annb - INFO - running only weaviate
2024-08-19 19:52:12,622 - annb - INFO - Order: [Definition(algorithm='weaviate', constructor='Weaviate', module='ann_benchmarks.algorithms.weaviate', docker_tag='ann-benchmarks-weaviate', arguments=['angular', 64, 128], query_argument_groups=[[16], [32], [48], [64], [96], [128], [256], [512], [768]], disabled=False), Definition(algorithm='weaviate', constructor='Weaviate',
…
In this example (order varies because the tests are randomized), the first test is:
GloVe. Global vectors for word representation, where the vector representations of words are learned in such a way that the geometric relationships between the vectors capture semantic meaning.
100. The number of dimensions of the vectors in the dataset (100-dimensional).
Angular. The distance metric used to measure the similarity between vectors when searching for nearest neighbors. When the angular distance is used, the focus is on the orientation of the vectors, not their length, which is particularly useful for comparing word embeddings where the direction of the vector is more meaningful than its magnitude.
Arguments (64 and 128). The number of groups into which the dataset’s vectors are categorized during indexing. A higher value implies a more fine-grained partitioning of the data, which could lead to a longer preprocessing stage, because the algorithm must process and organize the vectors into more groups.
Query arguments (16, 32, 48, 64, 96, 128, 256, 512, and 768). The number of groups that are considered when searching for the nearest neighbors to a query vector. A higher value means that the algorithm checks more groups, which can increase the computational effort during the query phase but may also improve the likelihood of finding the true nearest neighbors, thus increasing recall.
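To make the “angular” metric from the list above concrete, here is a minimal cosine-based angular distance in Python (illustrative; Weaviate’s internal implementation may differ):

```python
import math

# Angular (cosine-based) distance: sensitive to vector orientation, not length.
def angular_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

# Scaling a vector leaves the distance unchanged; rotating it does not.
print(angular_distance((1.0, 0.0), (2.0, 0.0)))  # 0.0 (same direction)
print(angular_distance((1.0, 0.0), (0.0, 1.0)))  # 1.0 (orthogonal)
```

This is why angular distance suits word embeddings: a vector and a scaled copy of it are treated as identical.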
These tests are controlled by the config.yml file located in the algorithm directory, so feel free to modify that file to reduce the number of tests, if desired.
After the tests have been running for a few hours, you can view the PVC overview page of the Azure portal to view the volume’s metrics. Make sure that the “throughput limit reached” chart stays at 0; otherwise your volume has been sized too small in relation to the bandwidth of your selected node.
After 1 to 2 days, the GloVe 100 Angular benchmarking will be complete, and we can move on to our next dataset.
ANN Benchmarks – Sift 128 Euclidean
Whereas the GloVe 100 Angular dataset is geared toward word vectors, our next dataset, Sift 128 Euclidean, is geared toward image vectors:
Sift. Scale Invariant Feature Transform vectors capture local features of images and are widely used in computer vision tasks. Each vector in the dataset represents a distinct feature extracted from an image.
128. The number of dimensions of the vectors in the dataset (128-dimensional).
Euclidean. This term specifies the distance metric used to measure the similarity between vectors in the dataset; the smaller the distance the more similar the vectors. The Euclidean distance, also known as L2 norm or L2 distance, is the “ordinary” straight-line distance between two points in Euclidean space.
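The Euclidean metric from the list above, as a minimal Python sketch:

```python
import math

def l2(u, v):
    """Euclidean (L2) distance: the straight-line distance between two points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

print(l2((0.0, 0.0), (3.0, 4.0)))  # 5.0 (the classic 3-4-5 triangle)
```

Unlike angular distance, L2 is sensitive to vector magnitude, which matters for image feature vectors such as SIFT.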
This time when we execute the benchmark, we’ll use the --dataset argument to specify this dataset:
python run.py --algorithm weaviate --local --dataset sift-128-euclidean
Again, this command can take 1 to 2 days to complete; in our testing it took roughly 24 hours. When complete, you can continue to run additional benchmarks with more datasets, if desired. However, we’ll now move on to analysis.
ANN Benchmarks Analysis
There are a handful of ways to analyze the results of our benchmark testing:
Run python plot.py, which creates a single image of the vector database(s) and dataset(s). This image can be heavily customized to specify the X and Y axis units and scales, in addition to several other options. (Run python plot.py --help to view all available configuration options.)
Run python create_website.py, which creates an HTML page with about a dozen graphs.
Run python data_export.py --out res.csv, which exports all results to a CSV file; this is useful when additional post-processing is needed.
We’ll go with option 2 here, because a single command yields many interesting images. However, feel free to play around with options 1 and 3 in your environment. On your workstation, run the following command:
python create_website.py
If your workstation has a desktop environment, open the weaviate.html file that was generated. Otherwise, run the following command to copy the file to your physical machine:
scp <user>@<ip>:/home/<user>/ann-benchmarks/weaviate.html weaviate.html
Once opened, scroll through the page to view the results. The entire page of results is included in the results section of this blog, but let’s dig into just two of the images.
The axes of the above chart represent:
Recall. The fraction of true nearest neighbors that are returned by the approximate nearest neighbor search. For example, if the ANN search is supposed to return the 10 nearest neighbors to a query point, but only 7 of those are among the true 10 nearest neighbors, the recall would be 0.7.
Queries per second. The number of queries that can be processed per second, indicating the performance or efficiency of the vector database. In general, spending more time per query (fewer queries per second) should yield a higher recall value.
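The recall definition above can be sketched in a few lines of Python (illustrative only):

```python
def recall(returned, true_neighbors):
    """Fraction of the true nearest neighbors present in the ANN result."""
    return len(set(returned) & set(true_neighbors)) / len(true_neighbors)

# 7 of the true 10 nearest neighbors were returned -> recall 0.7
print(recall([1, 2, 3, 4, 5, 6, 7, 11, 12, 13], list(range(1, 11))))  # 0.7
```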
Up and to the right is better, meaning that Weaviate performed better with the Sift 128 Euclidean dataset than with the GloVe 100 Angular dataset. This suggests that, in this configuration, Weaviate is better suited to computer vision tasks than to natural language processing. However, we recommend testing against additional datasets and vector databases to find the best match for your specific application.
Let’s investigate one more chart:
While the previous chart focused purely on the query phase, this chart focuses on the trade-off between the quality of the search results and the memory footprint of the index. The X axis (Recall) is the same; however, the Y axis represents the amount of memory used by the vector database to store the data structure that facilitates the neighbor search. As we can see, at some recall levels Weaviate has a lower memory footprint for the GloVe 100 Angular dataset, while at other levels the Sift 128 Euclidean dataset’s footprint is lower.
(i) Note
For more information about the tooltip, see this page of the Weaviate documentation.
Depending on your generative AI application, you may value memory footprint over query speed—for example, a computer vision application in embedded systems. Other applications, like a chatbot or coding assistant, may value query speed over memory footprint. Performing benchmarks against potential vector databases with relevant datasets can help determine the ideal configuration for your generative AI applications.
Results
The remaining results of the ANN-Benchmarks testing are shown here.
Conclusion
The deployment and benchmarking of Weaviate on Azure Kubernetes Service with Azure NetApp Files demonstrates the platform’s robust capabilities in handling generative AI workloads. The detailed walk-through in this blog simplifies the setup process, and it also equips users with the necessary insights to make informed decisions about their vector database deployments.
The results from the ANN-Benchmarks reveal valuable performance metrics that are essential for optimizing AI applications. Weaviate’s impressive handling of the Sift 128 Euclidean dataset suggests a strong suit in computer vision tasks, and its performance with the GloVe 100 Angular dataset opens avenues for natural language processing applications. However, users must consider the specific requirements of their applications, because trade-offs can significantly impact the user experience and operational costs.
By leveraging Azure’s scalable infrastructure and Weaviate’s vector search capabilities, developers and organizations can confidently scale their AI solutions, knowing that they have a reliable and efficient system in place. The benchmarks are a testament to the potential of Weaviate on AKS and Azure NetApp Files, providing a solid foundation for future generative AI endeavors. Whether your focus is on maximizing recall, query throughput, or maintaining a minimal memory footprint, this setup means that you can achieve your goals with efficiency and precision.
Additional Information
Quick Bytes: What is Azure NetApp Files
Azure NetApp Files
What is Azure NetApp Files?
Azure Kubernetes Services and Kubernetes
Azure Kubernetes Services (AKS)
IMCLIPBOARD in R2025a
Hi everyone,
I really like IMCLIPBOARD. I think I must have downloaded it some years ago from File Exchange. However, IMCLIPBOARD uses Java classes which will no longer be available in R2025a. What can we do?
Thanks
Kevin

imclipboard MATLAB Answers — New Questions
Import data with filters
% How can I import some data in a table (import with filters)
Name | Age
Hugo|30
Paco|40
Luis |50
Gus|60
% I need to import into a table only the persons with age >= 50

data import MATLAB Answers — New Questions
Can I run MATLAB on a removable drive?
When I run MATLAB, it indicates error 5201.

removable drive MATLAB Answers — New Questions
Securing Containerized Applications with SSH Tunneling
As cloud engineers and architects embrace containerization, ensuring secure communication becomes paramount. Data transmission and access control are critical aspects of security that need to be considered. SSH tunneling is a technique that can help achieve secure communication between different components of an application or solution. SSH tunneling creates an encrypted channel over an existing SSH connection, allowing secure data transmission between a local machine (SSH client) and a remote server (SSH server).
In this article, we will show how to set up SSH tunneling between containers running in the cloud that need to communicate with downstream resources via an SSH server hosted in a cloud VM.
SSH Tunneling
Before diving into the implementation, let’s have a quick refresher about SSH tunneling. Also known as SSH port forwarding, SSH tunneling allows secure communication between two endpoints by creating an encrypted tunnel over an existing SSH connection. It enables data to be transmitted securely between a local machine (SSH client) and a remote server (SSH server) through an intermediary channel. Here is an overview of different scenarios where SSH tunneling can be used:
1. Secure Remote Access to Internal Services: An organization has internal services (e.g., databases, internal web applications) that are not exposed to the public internet for security reasons. Using SSH tunneling, employees can securely connect to these internal services from remote locations without exposing the services to the internet.
2. Bypassing Firewall Restrictions: Developers need to access specific resources that are behind a corporate firewall, but the firewall restricts direct access. By setting up an SSH tunnel, developers can securely forward traffic through the firewall, allowing them to access the restricted resources.
3. Protecting Sensitive Data in Transit: An application needs to send sensitive data between different components or services, and there’s a risk of data interception. SSH tunneling can be used to encrypt the data as it travels between the components, ensuring that it remains secure in transit.
4. Accessing a Remote Database Securely: A developer needs to access a remote database server for maintenance or development purposes, but direct access is not permitted due to security policies. The developer can set up an SSH tunnel to securely connect to the remote database server without exposing it to the public internet.
5. Securely Using Insecure Protocols: An application uses an insecure protocol (e.g., FTP, HTTP) to communicate between different services. By wrapping the insecure protocol within an SSH tunnel, the communication can be secured, protecting the data from being intercepted.
6. Remote Debugging: A developer needs to debug an application running on a remote server, but direct access to the debugging port is restricted. SSH tunneling can be used to forward the debugging port from the remote server to the local machine, allowing the developer to securely debug the application.
7. Protecting IoT Device Communication: IoT devices need to communicate with a central server, but the communication is vulnerable to interception or tampering. By establishing an SSH tunnel between the IoT devices and the central server, the communication can be encrypted and secured, protecting the data in transit.
8. Secure File Transfer: Files need to be transferred securely between different systems or locations. SSH tunneling can be used to securely transfer files over the encrypted tunnel, ensuring that the data remains confidential and integrity is maintained.
9. Accessing Remote Services: A user needs to access services or resources hosted on a remote server securely. By setting up an SSH tunnel, the user can securely access the remote services as if they were running locally, protecting the data in transit.
10. Protecting Web Traffic: Web traffic needs to be secured when accessing websites or web applications over untrusted networks. SSH tunneling can be used to create a secure connection to a remote server, encrypting the web traffic and protecting it from eavesdropping or interception.
Scenario
For this article, we will implement the following scenario:
Architecture Components
myInfraVNet: Virtual network where the downstream resources are deployed.
nginxVM: A virtual machine running Nginx, a web server or reverse proxy, within myInfraVNet. It is assigned a private IP address, so that it is not directly accessible from the internet.
nginxVM/NSG: Network Security Group associated with the nginxVM, controlling inbound and outbound traffic.
myAppVNet: Virtual network where the container apps are deployed.
Container Apps Environment: This environment hosts two containerized applications:
mycontainerapp: A simple containerized Python application that fetches content from the NGINX server running on the VM and renders this content along with other content.
sshclientcontainerapp: Another containerized application, used to establish secure SSH tunnels to other resources.
Container Registry: Stores container images that can be deployed to the container apps.
VNet Peering: Allows resources in myAppVNet and myInfraVNet to communicate with each other. It essentially bridges the two VNets, enabling low-latency, high-bandwidth interconnectivity.
SSH Tunnel: The sshclientcontainerapp in the myAppVNet establishes an SSH tunnel to the nginxVM in the myInfraVNet to enable secure communication between the containerized app and the VM.
Network Security Group (NSG): The nginxVM/NSG ensures that only allowed traffic can reach the nginxVM. It’s crucial to configure this NSG correctly to allow SSH traffic from the sshclientcontainerapp and restrict unwanted access.
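The SSH tunnel listed above is an SSH local port forward (ssh -L). Its forwarding specification has the shape bind_address:port:host:hostport; the tiny parser below (a hypothetical helper, not part of the deployment) makes the four fields explicit:

```python
def parse_local_forward(spec: str):
    """Split an ssh -L spec 'bind_address:port:host:hostport' (illustrative)."""
    bind_address, port, host, hostport = spec.split(":")
    return {"bind": bind_address, "listen_port": int(port),
            "dest_host": host, "dest_port": int(hostport)}

# The tunnel this scenario establishes: listen on all container interfaces,
# port 80, and forward to port 80 on the VM (localhost from the VM's view).
forward = parse_local_forward("0.0.0.0:80:localhost:80")
print(forward)
```

Note that `localhost` is resolved on the SSH server side, so the destination here is the NGINX service running on nginxVM itself.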
Scripting the Scenario
Based on the scenario described above, we will now script the implementation of the architecture. The script will create the necessary resources, configure the SSH tunnel, and deploy the containerized applications.
Prerequisites
Before running the script, ensure that you have the following prerequisites:
Azure CLI installed on your local machine.
Docker installed on your local machine.
A valid Azure subscription.
A basic understanding of Azure Container Apps, Azure Container Registry, and Azure Virtual Networks.
Parameters
Let’s start by defining the parameters used in the script: the resource group name and location; virtual network and subnet names; VM settings (name, image, size, SSH key, admin username, and password); the container apps environment and container registry names; container image names; and the SSH and NGINX ports. A random string is generated and appended to the resource group, container apps environment, and container registry names to ensure uniqueness.
random=$(echo $RANDOM | tr '[0-9]' '[a-z]')
echo "Random:" $random
export RESOURCE_GROUP=rg-ssh-$(echo $random)
echo "RESOURCE_GROUP:" $RESOURCE_GROUP
export LOCATION="australiaeast"
export INFRA_VNET_NAME="myInfraVNet"
export APP_VNET_NAME="myAppVNet"
export INFRA_SUBNET_NAME="mySubnet"
export APP_SUBNET_NAME="acaSubnet"
export VM_NAME="nginxVM"
export VM_IMAGE="Ubuntu2204"
export VM_SIZE="Standard_DS1_v2"
export VM_KEY=mykey$(echo $random)
export ADMIN_USERNAME="azureuser"
export ADMIN_PASSWORD="Password123$" # Replace with your actual password
export CONTAINER_APPS_ENV=sshacae$(echo $random)
export REGISTRY_NAME=sshacr$(echo $random)
export REGISTRY_SKU="Basic"
export CONTAINER_APP_IMAGE="mycontainerapp:latest"
export SSH_CLIENT_CONTAINER_IMAGE="sshclientcontainer:latest"
export CONTAINER_APP_NAME="mycontainerapp"
export SSH_CLIENT_CONTAINER_APP_NAME="sshclientcontainerapp"
export SSH_PORT=22
export NGINX_PORT=80
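The tr trick above maps the digits of $RANDOM onto lowercase letters so that resource names are letters-only. For reference, an equivalent helper in Python (illustrative only; the script itself stays in Bash):

```python
import random
import string

def random_suffix(length: int = 5) -> str:
    """Lowercase-letters-only suffix for globally unique Azure resource names."""
    return "".join(random.choice(string.ascii_lowercase) for _ in range(length))

s = random_suffix()
print(len(s), s.isalpha() and s.islower())  # 5 True
```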
Create Resource Group
Create a resource group using the az group create command. The resource group name and location are passed as parameters.
az group create --name $RESOURCE_GROUP --location $LOCATION
Create Virtual Networks and Subnets
Create two virtual networks, myInfraVNet and myAppVNet, using the az network vnet create command, specifying the address prefix and subnet prefix for each. The az network vnet subnet update command then delegates the myAppVNet subnet to Microsoft.App/environments.
az network vnet create --resource-group $RESOURCE_GROUP --name $INFRA_VNET_NAME --address-prefix 10.0.0.0/16 --subnet-name $INFRA_SUBNET_NAME --subnet-prefix 10.0.0.0/24
az network vnet create --resource-group $RESOURCE_GROUP --name $APP_VNET_NAME --address-prefix 10.1.0.0/16 --subnet-name $APP_SUBNET_NAME --subnet-prefix 10.1.0.0/24
az network vnet subnet update --resource-group $RESOURCE_GROUP --vnet-name $APP_VNET_NAME --name $APP_SUBNET_NAME --delegations Microsoft.App/environments
Create VNET Peering
Create a VNET peering between myInfraVNet and myAppVNet using the az network vnet peering create command. Two peering connections are created, one from myInfraVNet to myAppVNet and the other from myAppVNet to myInfraVNet.
az network vnet peering create --name VNet1ToVNet2 --resource-group $RESOURCE_GROUP --vnet-name $INFRA_VNET_NAME --remote-vnet $APP_VNET_NAME --allow-vnet-access
az network vnet peering create --name VNet2ToVNet1 --resource-group $RESOURCE_GROUP --vnet-name $APP_VNET_NAME --remote-vnet $INFRA_VNET_NAME --allow-vnet-access
Create Network Security Group and Rules
Create a network security group (NSG) for the nginxVM using the az network nsg create command. Two NSG rules are created to allow SSH traffic on port 22 and HTTP traffic on port 80.
az network nsg create --resource-group $RESOURCE_GROUP --name ${VM_NAME}NSG
az network nsg rule create --resource-group $RESOURCE_GROUP --nsg-name ${VM_NAME}NSG --name AllowSSH --protocol Tcp --direction Inbound --priority 1000 --source-address-prefixes '*' --source-port-ranges '*' --destination-address-prefixes '*' --destination-port-ranges $SSH_PORT --access Allow
az network nsg rule create --resource-group $RESOURCE_GROUP --nsg-name ${VM_NAME}NSG --name AllowHTTP --protocol Tcp --direction Inbound --priority 1001 --source-address-prefixes '*' --source-port-ranges '*' --destination-address-prefixes '*' --destination-port-ranges $NGINX_PORT --access Allow
Create Network Interface
Create a network interface for the nginxVM using the az network nic create command. The NIC is associated with the myInfraVNet and mySubnet and the NSG created earlier.
az network nic create --resource-group $RESOURCE_GROUP --name ${VM_NAME}NIC --vnet-name $INFRA_VNET_NAME --subnet $INFRA_SUBNET_NAME --network-security-group ${VM_NAME}NSG
Create VM
Create a virtual machine using the az vm create command. The VM is created with the specified image, size, admin username, and password. The NIC created earlier is associated with the VM. Ensure that you have provided a value for the password in the ADMIN_PASSWORD variable.
az vm create --resource-group $RESOURCE_GROUP --name $VM_NAME --image $VM_IMAGE --size $VM_SIZE --admin-username $ADMIN_USERNAME --admin-password $ADMIN_PASSWORD --nics ${VM_NAME}NIC
export VM_PRIVATE_IP=$(az vm show -d -g $RESOURCE_GROUP -n $VM_NAME --query privateIps -o tsv)
echo “VM Private IP: $VM_PRIVATE_IP”
Generate SSH Key Pair and Add the Public Key to the VM
Generate an SSH key pair using the ssh-keygen command. The public key is added to the VM using the az vm user update command.
# Generate an SSH key pair
ssh-keygen -t rsa -b 4096 -f $VM_KEY -N ""
# Add the public key to the VM
az vm user update --resource-group $RESOURCE_GROUP --name $VM_NAME --username $ADMIN_USERNAME --ssh-key-value "$(cat $VM_KEY.pub)"
# Print success message
echo “SSH key pair generated and public key added to VM $VM_NAME”
Install NGINX and SSH Server on the VM
Install NGINX and SSH server on the VM using the az vm run-command invoke command. This command runs a shell script on the VM to update the package repository, install NGINX, start the NGINX service, install the SSH server, and start the SSH service.
az vm run-command invoke --command-id RunShellScript --name $VM_NAME --resource-group $RESOURCE_GROUP --scripts "sudo apt-get update && sudo apt-get install -y nginx && sudo systemctl start nginx && sudo apt-get install -y openssh-server && sudo systemctl start ssh"
Create Azure Container Registry
Create an Azure Container Registry using the az acr create command to store the container images that will be deployed to the container apps.
az acr create --resource-group $RESOURCE_GROUP --name $REGISTRY_NAME --sku $REGISTRY_SKU --location $LOCATION --admin-enabled true
Login to Azure Container Registry
Log in to the Azure Container Registry using the az acr login command.
az acr login --name $REGISTRY_NAME
Create Dockerfile for mycontainerapp
Create a Dockerfile for mycontainerapp. The Dockerfile specifies the base image, sets the working directory, copies files, installs packages, exposes a port, defines an environment variable, and runs the application.
cat <<'EOF' > Dockerfile.mycontainerapp
# Use an official Python runtime as a parent image
FROM python:3.8-slim
# Set the working directory in the container
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "app.py"]
EOF
Create Dockerfile for sshclientcontainer
Create a Dockerfile for sshclientcontainer. The Dockerfile specifies the base image, installs the SSH client, copies the SSH key, sets the working directory, copies files, exposes a port, and runs the SSH client.
cat <<EOF > Dockerfile.sshclientcontainer
# Use an official Ubuntu as a parent image
FROM ubuntu:20.04
# Install SSH client
RUN apt-get update && apt-get install -y openssh-client && apt-get install -y curl
# Copy SSH key
COPY ${VM_KEY} /root/.ssh/${VM_KEY}
RUN chmod 600 /root/.ssh/${VM_KEY}
# Set the working directory in the container
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Make port 80 available to the world outside this container
EXPOSE 80
# Run the SSH client when the container launches
CMD ["bash", "-c", "ssh -i /root/.ssh/${VM_KEY} -o StrictHostKeyChecking=no -L 0.0.0.0:80:localhost:80 ${ADMIN_USERNAME}@${VM_PRIVATE_IP} -N"]
EOF
Create an App for mycontainerapp
Create a simple app that can be hosted on mycontainerapp. The app.py file contains a simple Flask application that fetches content from the NGINX server running on the VM and renders it along with other content.
cat <<'EOF' > app.py
import requests
from flask import Flask, render_template_string

app = Flask(__name__)

@app.route('/')
def hello_world():
    response = requests.get('http://sshclientcontainerapp:80')
    html_content = """
    <!DOCTYPE html>
    <html lang="en">
    <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Response Page</title>
    </head>
    <body>
    <header>
    <h1>Welcome to My Container App</h1>
    </header>
    <main>
    <div>Response Content - The following response has been received from the NGINX server running on the VM via SSH tunnel</div>
    <hr/>
    <div>{}</div>
    </main>
    <footer>
    <hr/>
    <p>© 2024 My Flask App</p>
    </footer>
    </body>
    </html>
    """.format(response.text)
    return render_template_string(html_content)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80)
EOF
echo "
Flask==2.0.0
Werkzeug==2.2.2
requests==2.25.1
" > requirements.txt
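The inline app above calls requests.get with no timeout or error handling, so a slow or broken tunnel will hang the page or raise an unhandled exception. Below is a minimal hardened sketch of the same fetch-and-render flow using only the standard library; the upstream hostname comes from the tutorial, while the helper names, timeout, and fallback text are our own assumptions.

```python
# Sketch: fetch the tunneled NGINX content defensively and wrap it in HTML.
import urllib.request
import urllib.error

PAGE_TEMPLATE = """<!DOCTYPE html>
<html><body>
<h1>Welcome to My Container App</h1>
<div>{body}</div>
</body></html>"""

def render_page(body):
    """Build the HTML page around whatever the upstream returned."""
    return PAGE_TEMPLATE.format(body=body)

def fetch_upstream(url, timeout=5.0):
    """Fetch the tunneled content, falling back to an error note on failure."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except (urllib.error.URLError, OSError) as exc:
        return "upstream unavailable: {}".format(exc)

if __name__ == "__main__":
    print(render_page(fetch_upstream("http://sshclientcontainerapp:80")))
```

In the tutorial's app.py, the same idea amounts to passing a timeout to requests.get and wrapping the call in try/except so the page degrades gracefully when the tunnel is down.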
Build and Push Docker Images
Build the Docker images for sshclientcontainer and mycontainerapp using the docker build command. The images are tagged with the Azure Container Registry name and pushed to the registry using the docker push command.
# Build the Docker image for sshclientcontainer
docker build -t $REGISTRY_NAME.azurecr.io/$SSH_CLIENT_CONTAINER_IMAGE -f Dockerfile.sshclientcontainer .
# Push the Docker image for sshclientcontainer to Azure Container Registry
docker push $REGISTRY_NAME.azurecr.io/$SSH_CLIENT_CONTAINER_IMAGE
# Build the Docker image for mycontainerapp
docker build -t $REGISTRY_NAME.azurecr.io/$CONTAINER_APP_IMAGE -f Dockerfile.mycontainerapp .
# Push the Docker image for mycontainerapp to Azure Container Registry
docker push $REGISTRY_NAME.azurecr.io/$CONTAINER_APP_IMAGE
Create Azure Container Apps Environment
Create an Azure Container Apps environment using the az containerapp env create command. The environment is associated with the virtual network and subnet created earlier.
# Get the subnet ID for the infrastructure subnet
export INFRA_SUBNET_ID=$(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $APP_VNET_NAME --name $APP_SUBNET_NAME --query id --output tsv)
echo $INFRA_SUBNET_ID
# Create the Azure Container Apps environment
az containerapp env create --name $CONTAINER_APPS_ENV --resource-group $RESOURCE_GROUP --location $LOCATION --infrastructure-subnet-resource-id $INFRA_SUBNET_ID
Deploy Container Apps
Deploy the container apps to the Azure Container Apps environment using the az containerapp create command. The container images are pulled from the Azure Container Registry, and the container apps are configured to use the SSH tunnel for secure communication.
Deploy sshclientcontainerapp
# Log in to the registry, then deploy sshclientcontainerapp
az acr login --name $REGISTRY_NAME
az containerapp create --name $SSH_CLIENT_CONTAINER_APP_NAME --resource-group $RESOURCE_GROUP --environment $CONTAINER_APPS_ENV --image $REGISTRY_NAME.azurecr.io/$SSH_CLIENT_CONTAINER_IMAGE --target-port 80 --ingress 'external' --registry-server $REGISTRY_NAME.azurecr.io
Deploy mycontainerapp
az acr login --name $REGISTRY_NAME
az containerapp create --name $CONTAINER_APP_NAME --resource-group $RESOURCE_GROUP --environment $CONTAINER_APPS_ENV --image $REGISTRY_NAME.azurecr.io/$CONTAINER_APP_IMAGE --target-port 80 --ingress 'external' --registry-server $REGISTRY_NAME.azurecr.io
Testing the Deployment
After deploying the container apps, you can test the deployment by accessing the public URL of the mycontainerapp. The app should fetch content from the NGINX server running on the VM through the SSH tunnel and render it along with other content.
Retrieve the public URL of the mycontainerapp:
MY_CONTAINER_APP_URL=$(az containerapp show --name $CONTAINER_APP_NAME --resource-group $RESOURCE_GROUP --query 'properties.configuration.ingress.fqdn' -o tsv)
echo "mycontainerapp URL: http://$MY_CONTAINER_APP_URL"
Open the URL in your web browser by copying and pasting the URL printed in the previous step.
You should see a webpage that includes the response content from the NGINX server running on the VM via the SSH tunnel.
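Before opening a browser, the new endpoint can be smoke-tested from a terminal. A small standard-library poller is sketched below; the attempt count, delay, and placeholder URL are arbitrary choices, not part of the tutorial.

```python
# Poll a URL until it answers with HTTP 200, for smoke-testing the newly
# deployed container app (the URL used under __main__ is a placeholder).
import time
import urllib.request
import urllib.error

def wait_until_up(url, attempts=5, delay=2.0):
    """Return True as soon as the URL answers 200, else False after all attempts."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # not up yet; retry after a short pause
        time.sleep(delay)
    return False

if __name__ == "__main__":
    print(wait_until_up("http://example.invalid", attempts=1, delay=0.0))
```

Pointing this at the FQDN printed by the previous step confirms the ingress is serving before any manual testing.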
Clean Up
After testing the deployment, you can clean up the resources by deleting the resource group. This will remove all the resources created in the script.
az group delete --name $RESOURCE_GROUP --yes --no-wait
Conclusion
In this article, we demonstrated how to secure containerized applications using SSH tunneling. We covered the steps to set up the necessary infrastructure, create and deploy containerized applications, and establish an SSH tunnel for secure communication between the container apps and a VM hosting an NGINX server.
By following these steps, you can ensure that your containerized applications communicate securely with downstream resources, enhancing the overall security of your cloud-native architecture.
For more information on securing containerized applications, refer to the Azure Container Apps documentation.
If you have any questions or need further assistance, feel free to consult the Azure documentation or reach out to Microsoft support.
References
Azure Container Apps Documentation
Azure Virtual Network Documentation
Azure Container Registry Documentation
Azure CLI Documentation
SSH Tunneling Documentation
Microsoft Tech Community – Latest Blogs
Join the monthly Copilot Train the Trainer sessions!
These workshops include an overview of Copilot for Microsoft 365, hands-on prompt training, demos across roles in HR, Marketing, Sales, and other disciplines, real-time support, and Q&A.
Find out more
Landing Page – News Webpart
Hello! We are creating a new SharePoint landing page and would like to include a news section similar to the one below that we saw in a site showing various SharePoint page examples. It looks like it combines current news plus an option to search via category. Any suggestions on how to do this? We’ve read articles and watched videos but have had no success. All ideas are welcome. Thank you!
How to remove border from MATLAB figure
I’m trying to compare spectrogram images in a MATLAB image analyzer, but I think the white border is causing them to be overly similar. Because of the number of images I need to process, I’d really like to have it automatically generate and save the image. Here is my current code that I’m using to make and save the spectrogram.
base=filename %The code saves multiple images with the label being the filename and a specific addition to each image
figure(1003)
spectrogram(Xacc,windowx,noverlap,nfft,fs,'yaxis')
ylim([0 5])
colormap(gray(256));
caxis([-160 40])
% title('Spectrogram of X')
s1=base + "SPEC_Acc_X GS";
saveas(gcf,s1,'jpg')
When I run it I get an image like this.
What I want is an image like this, but in order to get it I had to adjust every setting manually in the image editor. Alternately, is there a way to automatically crop saved images? That could also be a solution.
Thanks so much for the help!
image editing, border removal MATLAB Answers — New Questions
How to confirm my MATLAB license can run without internet?
We have a matlab script that runs inside an autonomous vessel collecting oceanography data.
I want to confirm that the license will allow matlab to work correctly while the vessel is offshore without internet access.
How can I make sure?
license, no-internet MATLAB Answers — New Questions
how to resolve fprintf error when dealing with whole numbers?
G'day,
I have a matrix containing the following values
a = [16.0541, 17];
I am trying to write these types of data, along with many other variables, to a JSON file. However, I have encountered a problem with the second value, which produces an empty cell. Here's a simple test
for i = 1:2
fprintf('\n%s','"values":{');
fprintf('\n%s','"min":');
fprintf('%s',a(i));
fprintf('%s',',');
fprintf('\n%s','"max":');
fprintf('%s',a(i));
fprintf('\n%s','},');
end
I am assuming it has something to do with the second value being a whole number. How do I resolve this issue?
Thanks in advance.
Jon
fprintf MATLAB Answers — New Questions
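For anyone hitting the same issue: MATLAB's fprintf appears to treat a numeric argument given to %s as a character code, so fprintf('%s', 17) emits the invisible control character char(17), which is why the whole-number cell looks empty, while 16.0541 is not a valid integer character code and falls back to numeric display. The usual fix is a numeric conversion such as %g (or building the whole string with jsonencode). The same printf-style pitfall can be illustrated in Python, used here only because the formatting behavior is analogous:

```python
# Demonstrating the %s-vs-numeric formatting pitfall. The values come from
# the question; the fix shown (a numeric conversion such as %g) is the
# analogue of replacing MATLAB's fprintf('%s', a(i)) with fprintf('%g', a(i)).
values = [16.0541, 17]

# %g renders both the fractional and the whole number as expected:
for v in values:
    print('"min": %g,' % v)

# Interpreting 17 as a character code yields an invisible control character,
# which is what makes the whole-number cell look empty:
print(repr(chr(17)))
```

In the original loop, replacing fprintf('%s', a(i)) with fprintf('%g', a(i)) (and restoring the \n escapes) should produce the expected min/max values.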
Remove MSP Admin Access to Tenant
Hi All,
I am taking over IT responsibilities for a mid-size company. Currently, we are dealing with a rogue MSP who has admin credentials for everything, including our M365 tenant. No internal employees of the company, including the owner, have been given admin rights.
I should add that this MSP is a one-man shop, I seriously doubt he has any formal partner relationship with Microsoft, but I may be wrong about that.
My feeling is that he has not registered the tenant the correct way and ownership shows his name rather than the company, but I can’t prove that.
Due to conflicts, there is an imminent possibility that he will begin deleting accounts, removing licenses, or otherwise interrupting business.
Is anyone aware of a method for contacting Microsoft to deal with these sorts of disputes? Without an admin account, I don't even have the option to raise a support case with them right now.
We may be in a tough situation given that we pay him directly for services, so the invoice and payments will probably be in his name.
any thoughts or suggestions are appreciated!
two sccm to one tenant intune
I have a number of devices configured in SCCM "A" co-management with an Intune tenant "A".
I have a number of devices configured in SCCM "B" co-management with an Intune tenant "B".
Now I need to undo the SCCM "A" co-management and set up new co-management with Intune tenant "B".
What are the risks, and what is the process to do this?
Windows 11 Insider Preview 10.0.26120.1542 (ge_release_svc_betaflt_upr) nvidia geforce rtx 3080 err
Hello microsoft,
Yesterday I faced issues downloading the update, as it always stopped at 8%. Today I was able to update to Windows 11 Insider Preview 10.0.26120.1542 (ge_release_svc_betaflt_upr), but as of today my NVIDIA GeForce RTX 3080 is no longer visible or detectable.
I have an Acer Predator Triton 300 with an NVIDIA GeForce RTX 3080.
I tried:
Win+X -> Device Manager (show hidden devices) -> scan for hardware changes, but the GPU is not shown
checked for BIOS updates
tried to install the latest drivers from NVIDIA, but I keep getting the "not detected" error
I am out of options; the issue seems related to the latest Insider update, as it started after the 20th of August (I came back from holiday, updated to the latest build, and the issues started)
I am also not able to detect my Dell external monitor via HDMI (I even bought a new cable; the error is still there, but not when other PCs are connected)
Please help me fix this, as I am nowhere without my RTX.
Organize posts in teams
Hello!
Is there a way to put posts in a folder? Not files that you send via teams. Posts. Maybe it’s an announcement or transcript that is made via a post. I’m not able to see the option where I save the transcripts to folder. Is this an option in teams?
How to change the image of .bim file
Dear All,
I have one file as attached.
Then I wrote the code like below, and the image like below.
clc
clear all
close all
fid = fopen('test1.bim', 'r', 'ieee-le'); %result1.bim is your 2D planar
data = fread(fid, inf, ‘*float’);
fclose(fid);
data = reshape(data,128,128);
figure, imagesc(data)
But actually, my image supposedly to be like below:
Anyone can help me?
digital image processing, image processing, image segmentation, image analysis MATLAB Answers — New Questions
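A common cause of a scrambled image with this kind of raw read is orientation: MATLAB's reshape fills column-major, so reshape(data,128,128) may need a transpose (reshape(data,128,128)'), and it is worth confirming that the file really holds exactly 128*128 float32 values with no header. The sketch below replays the same read in standard-library Python with both assumptions made explicit; the 128x128 little-endian float32 layout is taken from the question, while the function name and error handling are our own.

```python
# Read a raw little-endian float32 image (assumed: width*height values,
# no header), returning rows in the orientation that matches MATLAB's
# reshape(data,128,128)' (i.e. the file stores columns contiguously).
import struct

def read_bim(path, width=128, height=128):
    with open(path, "rb") as f:
        raw = f.read()
    count = len(raw) // 4
    if count != width * height:
        raise ValueError("expected %d values, found %d" % (width * height, count))
    values = struct.unpack("<%df" % count, raw[:count * 4])
    # Column-major file order: pixel (row, col) lives at index col*height + row.
    return [[values[col * height + row] for col in range(width)]
            for row in range(height)]
```

If the rows produced here match the expected picture while imagesc does not, adding the transpose (and, if needed, axis xy to flip the vertical axis) on the MATLAB side is the likely fix.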
Thingspeak – no reading data error code 0
Hi,
I've just tried to upload the example sketch ReadField with an Arduino R4, updating the library to use WiFiS3.h rather than WiFi.h. The network connection is OK.
Unfortunately, I get an error code 0 in the Serial Monitor:
"Problem reading channel. HTTP error code 0"
I don't know what is wrong here or how to fix it. I've read it could be related to the update rate, but the delay in the example seems to be sufficient (15 s).
thingspeak, error code 0, arduino MATLAB Answers — New Questions
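For what it's worth, an HTTP error code of 0 from the ThingSpeak Arduino library typically appears to mean the TCP connection never completed, so no HTTP status was received at all; checking the WiFi client object and any firewall is usually more productive than increasing the 15 s delay. As an independent cross-check, the channel can be read over ThingSpeak's documented REST endpoint, GET https://api.thingspeak.com/channels/&lt;id&gt;/fields/&lt;field&gt;.json. The Python sketch below just builds that URL; the channel ID and key are placeholders, and the helper name is our own.

```python
# Build a ThingSpeak REST read URL of the documented form
#   https://api.thingspeak.com/channels/<id>/fields/<field>.json
# Channel 12345 and the api key used in the test are placeholders.
import urllib.parse

BASE = "https://api.thingspeak.com"

def read_field_url(channel_id, field, results=1, read_api_key=""):
    params = {"results": results}
    if read_api_key:
        params["api_key"] = read_api_key
    return "%s/channels/%d/fields/%d.json?%s" % (
        BASE, channel_id, field, urllib.parse.urlencode(params))

if __name__ == "__main__":
    print(read_field_url(12345, 1))
```

Fetching this URL from a desktop on the same network (for example with urllib.request.urlopen) separates channel or permission problems from board-side connectivity problems.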
Plotting 2 different color maps on one world map
Is it possible to plot 2 different color maps on the worldmap figure? I have a .tif file and a .nc file that are both color scales and I want to overlay them. Currently the code I have will plot them both, but using the same color scale:
figure
hold on
worldmap([69,79], [-167,-117])
colormap('bone');
geoshow([.tif file] A2,R2,DisplayType="surface")
colorbar
colormap('winter')
geoshow([.nc file],,'DisplayType','surface', 'FaceAlpha', 0.2)
colorbar
Is there any way to have 2 different colormaps? Thank you!
geoshow, colormap, worldmap MATLAB Answers — New Questions
AVD Truly Non-Persistent
We have an environment where there are machines available for public use (a public library). Users should be able to create documents and the like on the machine, and save them to locally attached storage, or e-mail/cloud storage, etc.
When a user logs out, the machine should completely reset; all changes made, documents created, history, and the like should be erased and the machine should automatically reset to the base image.
Everything in the MS documentation I can find that mentions non-persistent machines just means pooled machines, which doesn’t do this. If I want to set up an AVD pool that does this, how would I accomplish that?
Rearrange columns with partial matching numbers
I'll be working with three columns: column A, which has the full number for an event; column B (Ecode), which will have a partial number (the last five digits of column A); and column C, which will have a score related to column B. The difficult thing I'm trying to figure out is how to rearrange column B (and, by extension, C) so they match up with the full numbers in column A without moving the values of column A.
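In Excel this is usually solved with a lookup in helper columns rather than physically moving cells, for example something like =XLOOKUP(VALUE(RIGHT(A2,5)), B:B, C:C) (the exact coercion depends on whether the Ecodes are stored as text or numbers). The alignment logic itself is just a keyed lookup on the last five digits, sketched here in Python with made-up data:

```python
# Align partial event codes (last five digits) with full event numbers,
# mirroring the Excel lookup idea. All data below is invented for illustration.
def align_scores(full_numbers, ecodes, scores):
    """Return scores reordered so the i-th score matches full_numbers[i]."""
    by_code = {code: score for code, score in zip(ecodes, scores)}
    # Key each full number by its last five digits and look up the score.
    return [by_code.get(str(n)[-5:]) for n in full_numbers]

full = [2024100011, 2024100025]   # column A: full event numbers
codes = ["00025", "00011"]        # column B: Ecodes (last five digits)
scores = [88, 95]                 # column C: scores tied to column B
print(align_scores(full, codes, scores))  # [95, 88]
```

If the Ecode column stores numbers instead of text, the lookup key would be n % 100000 rather than a string slice.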