Month: July 2024
Copy cells onto another sheet
If I type a number into cell A1 on sheet one so that cell A1 on sheet two copies it, is there a formula so that the next day, when I change the number in cell A1 on sheet one, cell A1 on sheet two stays the same as the previous day while cell A2 on sheet two updates to the new figure in cell A1 on sheet one?
Thanks
Conditional Access Grant Access options
Scenario:
In Conditional Access Policies, under the grant controls section, we select 2 options:
1. Require multifactor authentication
2. Require approved client app
and then, for Multiple controls, we select the "Require one of the selected controls" option.
Now, assuming all the conditions defined in the previous steps are satisfied, which of the above 2 options would be evaluated? Is there a criterion? I tried checking the documentation, but didn't find the answer there.
Also, does this mean if I am coming from an approved app, I don’t have to do MFA?
Lastly, if this is the main MFA policy, then this configuration is not correct, right?
How to display geographic gridlines or tickmarks on the current figure and underlay it by a terrain basemap?
I have plotted this map using the following STAMPS command
"ps_plot('hgt')", which plots a map of elevation values for a study area whose boundaries were cropped/predetermined by numerous processing steps. I would eventually like to display gridlines or tick marks at intersections over this map, in order to export it as a PNG or TIFF image so I can manually georeference the map in other software.
Additionally, if MATLAB can plot already georeferenced/geocoded maps, please let me know how to do it!
"ps_plot(‘hgt’)" which plots a map over elevation values for a study area whose boundaries were cropped/predetrmined by numerous processing steps. I eventually would like to display gridlines or tick on intersections over this map in order to be able to export it as png or tiff image so I can manually georeference this map on other softwares.
Additionally, if matlab can plot already georeferenced/geocoded maps, please let me know how to do it! I have plottted this map using the following STAMPS command
"ps_plot(‘hgt’)" which plots a map over elevation values for a study area whose boundaries were cropped/predetrmined by numerous processing steps. I eventually would like to display gridlines or tick on intersections over this map in order to be able to export it as png or tiff image so I can manually georeference this map on other softwares.
Additionally, if matlab can plot already georeferenced/geocoded maps, please let me know how to do it! image processing, digital image processing, colormap, machine learning MATLAB Answers — New Questions
How can I execute the empirical mode decomposition (emd) syntax in MATLAB R2016b?
Hello, I want to inform you that I am currently using MATLAB version R2016b. Please guide me through executing the empirical mode decomposition (emd) syntax.
Best regards,
Navid
empirical mode decomposition (emd) MATLAB Answers — New Questions
Link a table from MS Fabric
Is it possible to link a table stored in MS Fabric Dataflow Gen2 to an Access database? The data set is roughly 500k rows of data.
FYI I’m not trying to link Fabric FROM an Access database. Rather I would like to work with the data in MS Access.
Any help would be appreciated.
Creating a SLURM Cluster for Scheduling NVIDIA MIG-Based GPU Accelerated workloads
Today, researchers and developers often use a dedicated GPU for their workloads, even when only a fraction of the GPU’s compute power is needed. The NVIDIA A100, A30, and H100 Tensor Core GPUs introduce a revolutionary feature called Multi-Instance GPU (MIG). MIG partitions the GPU into up to seven instances, each with its own dedicated compute, memory, and bandwidth. This enables multiple users to run their workloads on the same GPU, maximizing per-GPU utilization and boosting user productivity.
In this blog, we will guide you through the process of creating a SLURM cluster and integrating NVIDIA’s Multi-Instance GPU (MIG) feature to efficiently schedule GPU-accelerated jobs. We will cover the installation and configuration of SLURM, as well as the setup of MIG on NVIDIA GPUs.
Overview:
SLURM (Simple Linux Utility for Resource Management) is an open-source job scheduler used by many of the world’s supercomputers and HPC (High-Performance Computing) clusters. It facilitates the allocation of resources such as CPUs, memory, and GPUs to users and their jobs, ensuring efficient use of available hardware. SLURM provides robust workload management capabilities, including job queuing, prioritization, scheduling, and monitoring.
MIG (Multi-Instance GPU) is a feature introduced by NVIDIA for its A100, A30, and H100 Tensor Core GPUs, allowing a single physical GPU to be partitioned into multiple independent GPU instances. Each MIG instance operates with dedicated memory, cache, and compute cores, enabling multiple users or applications to share a single GPU securely and efficiently. This capability enhances resource utilization and provides a level of flexibility and isolation not previously possible with traditional GPUs.
Advantages of Using NVIDIA MIG (Multi-Instance GPU):
Improved Resource Utilization
– Maximizes GPU Usage: MIG allows you to run multiple smaller workloads on a single GPU, ensuring that the GPU’s resources are fully utilized. This is especially useful for applications that do not need the full capacity of a GPU.
– Cost Efficiency: By enabling multiple instances on a single GPU, organizations can achieve better cost-efficiency, reducing the need to purchase additional GPUs.
Workload Isolation
– Security and Stability: Each GPU instance is fully isolated, ensuring that workloads do not interfere with each other. This is critical for multi-tenant environments where different users or applications might run on the same physical hardware.
– Predictable Performance: Isolation ensures consistent and predictable performance for each instance, avoiding resource contention issues.
Scalability and Flexibility
– Adaptability: MIG allows dynamic partitioning of GPU resources, making it easy to scale workloads up or down based on demand. You can allocate just the right amount of resources needed for different tasks.
– Multi-Tenant Support: Ideal for cloud service providers and data centers that host services for multiple customers, each requiring different levels of GPU resources.
Simplified Management
– Administrative Control: Administrators can use NVIDIA tools to easily configure, manage, and monitor the GPU instances. This includes allocating specific memory and compute resources to each instance.
– Automated Management: Tools and software can automate the allocation and management of GPU resources, reducing the administrative overhead.
Enhanced Performance for Diverse Workloads
– Support for Various Applications: MIG supports a wide range of applications, from AI inference and training to data analytics and virtual desktops. This makes it versatile for different types of computational workloads.
– Optimized Performance: By running multiple instances optimized for specific tasks, you can achieve better overall performance compared to running all tasks on a single monolithic GPU.
Better Utilization in Shared Environments
– Educational and Research Institutions: In environments where GPUs are shared among students or researchers, MIG allows multiple users to access GPU resources simultaneously without impacting each other’s work.
– Development and Testing: Developers can use MIG to test and develop applications in an environment that simulates multi-GPU setups without requiring multiple physical GPUs.
By leveraging the power of NVIDIA’s MIG feature within a SLURM-managed cluster, you can significantly enhance the efficiency and productivity of your GPU-accelerated workloads. Join us as we delve into the steps for setting up this powerful combination and unlock the full potential of your computational resources.
Prerequisites
Scheduler:
Size: Standard D4s v5 (4 vCPUs, 16 GiB memory)
Image: Ubuntu-HPC 2204 – Gen2 (Ubuntu 22.04)
Scheduling software: Slurm 23.02.7-1
Execute VM:
Size: Standard NC40ads H100 v5 (40 vCPUs, 320 GiB memory)
Image: Ubuntu-HPC 2204 – Gen2 (Ubuntu 22.04) – Image contains Nvidia GPU driver.
It is recommended to install the latest NVIDIA GPU driver. The minimum versions are provided below:
If using H100, then CUDA 12 and NVIDIA driver R525 ( >= 525.53) or later
If using A100/A30, then CUDA 11 and NVIDIA driver R450 ( >= 450.80.02) or later
Scheduling software: Slurm 23.02.7-1
Slurm Scheduler setup:
Step 1: First, create users for Munge and SLURM services to manage their operations securely.
groupadd -g 11101 munge
useradd -u 11101 -g 11101 -s /bin/false -M munge
groupadd -g 11100 slurm
useradd -u 11100 -g 11100 -s /bin/false -M slurm
Step 2: Setup NFS Server on Scheduler
NFS will be used to share configuration files across the cluster.
apt install nfs-kernel-server -y
mkdir -p /sched /shared/home
echo "/sched *(rw,sync,no_root_squash)" >> /etc/exports
echo "/shared *(rw,sync,no_root_squash)" >> /etc/exports
systemctl restart nfs-server
systemctl enable nfs-server.service
showmount -e
Step 3: Install and Configure Munge
Munge is used for authentication across the SLURM cluster.
apt install -y munge
dd if=/dev/urandom bs=1 count=1024 > /etc/munge/munge.key
cp /etc/munge/munge.key /sched/
chown munge:munge /sched/munge.key
chmod 400 /sched/munge.key
systemctl restart munge
systemctl enable munge
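As an optional sanity check, you can generate and validate a Munge credential locally; if the key and daemon are set up correctly, unmunge should report STATUS: Success (0).
munge -n | unmunge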
Step 4: Install and Configure SLURM on Scheduler
Installing Slurm Scheduler daemon and setting up the directories for slurm.
apt install slurm-slurmctld -y
mkdir -p /etc/slurm /var/spool/slurmctld /var/log/slurmctld
chown slurm:slurm /etc/slurm /var/spool/slurmctld /var/log/slurmctld
Creating the `slurm.conf` file. Alternatively, you can generate the file using the Slurm configurator tool.
cat <<EOF > /sched/slurm.conf
MpiDefault=none
ProctrackType=proctrack/cgroup
ReturnToService=2
PropagateResourceLimits=ALL
SlurmctldPidFile=/var/run/slurmctld.pid
SlurmdPidFile=/var/run/slurmd.pid
SlurmdSpoolDir=/var/spool/slurmd
SlurmUser=slurm
StateSaveLocation=/var/spool/slurmctld
SwitchType=switch/none
TaskPlugin=task/affinity,task/cgroup
SchedulerType=sched/backfill
SelectType=select/cons_tres
SelectTypeParameters=CR_Core
GresTypes=gpu
ClusterName=mycluster
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=debug
SlurmctldLogFile=/var/log/slurmctld/slurmctld.log
SlurmctldParameters=idle_on_node_suspend
SlurmdDebug=debug
SlurmdLogFile=/var/log/slurmd/slurmd.log
PrivateData=cloud
TreeWidth=65533
ResumeTimeout=1800
SuspendTimeout=600
SuspendTime=300
SchedulerParameters=max_switch_wait=24:00:00
Include accounting.conf
Include partitions.conf
EOF
echo "SlurmctldHost=$(hostname -s)" >> /sched/slurm.conf
Creating cgroup.conf for Slurm:
This command creates a configuration file named cgroup.conf in the /sched directory with specific settings for Slurm’s cgroup resource management.
cat <<EOF > /sched/cgroup.conf
CgroupAutomount=no
ConstrainCores=yes
ConstrainRamSpace=yes
ConstrainDevices=yes
EOF
Configuring Accounting Storage Type for Slurm:
echo "AccountingStorageType=accounting_storage/none" >> /sched/accounting.conf
Changing Ownership of Configuration Files:
chown slurm:slurm /sched/*.conf
Creating Symbolic Links for Configuration Files:
ln -s /sched/slurm.conf /etc/slurm/slurm.conf
ln -s /sched/cgroup.conf /etc/slurm/cgroup.conf
ln -s /sched/accounting.conf /etc/slurm/accounting.conf
Configure the Execute VM
1. Check and enable the NVIDIA GPU driver and MIG mode. More details on NVIDIA MIG can be found in the NVIDIA MIG documentation.
Ensure the GPU driver is installed. The Ubuntu-HPC 2204 image includes the NVIDIA GPU driver. If you don't have the GPU driver, make sure to install it. Here are the commands to enable NVIDIA GPU MIG mode:
root@h100vm:~# nvidia-smi -pm 1
Enabled persistence mode for GPU 00000001:00:00.0.
All done.
root@h100vm:~# nvidia-smi -mig 1
Enabled MIG Mode for GPU 00000001:00:00.0
All done.
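Before partitioning, you can optionally confirm that MIG mode is active by querying it directly; the command below should print Enabled (on some systems a GPU reset or reboot is required before the mode change takes effect):
nvidia-smi --query-gpu=mig.mode.current --format=csv,noheader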
2. Check supported profiles and create MIG partitions.
The following command lists the MIG profiles supported by the NVIDIA H100 GPU.
root@h100vm:~# nvidia-smi mig -lgip
+-----------------------------------------------------------------------------+
| GPU instance profiles:                                                      |
| GPU   Name             ID    Instances   Memory     P2P    SM    DEC   ENC  |
|                              Free/Total     GiB            CE   JPEG   OFA  |
|=============================================================================|
|   0  MIG 1g.12gb       19     7/7         10.75     No     16     1     0   |
|                                                             1     1     0   |
+-----------------------------------------------------------------------------+
|   0  MIG 1g.12gb+me    20     1/1         10.75     No     16     1     0   |
|                                                             1     1     1   |
+-----------------------------------------------------------------------------+
|   0  MIG 1g.24gb       15     4/4         21.62     No     26     1     0   |
|                                                             1     1     0   |
+-----------------------------------------------------------------------------+
|   0  MIG 2g.24gb       14     3/3         21.62     No     32     2     0   |
|                                                             2     2     0   |
+-----------------------------------------------------------------------------+
|   0  MIG 3g.47gb        9     2/2         46.38     No     60     3     0   |
|                                                             3     3     0   |
+-----------------------------------------------------------------------------+
|   0  MIG 4g.47gb        5     1/1         46.38     No     64     4     0   |
|                                                             4     4     0   |
+-----------------------------------------------------------------------------+
|   0  MIG 7g.94gb        0     1/1         93.12     No    132     7     0   |
|                                                             8     7     1   |
+-----------------------------------------------------------------------------+
Create the MIG partitions using the following command. In this example, we are creating 4 MIG partitions using the 1g.24gb profile.
root@h100vm:~# nvidia-smi mig -cgi 15,15,15,15 -C
Successfully created GPU instance ID 6 on GPU 0 using profile MIG 1g.24gb (ID 15)
Successfully created compute instance ID 0 on GPU 0 GPU instance ID 6 using profile MIG 1g.24gb (ID 7)
Successfully created GPU instance ID 5 on GPU 0 using profile MIG 1g.24gb (ID 15)
Successfully created compute instance ID 0 on GPU 0 GPU instance ID 5 using profile MIG 1g.24gb (ID 7)
Successfully created GPU instance ID 3 on GPU 0 using profile MIG 1g.24gb (ID 15)
Successfully created compute instance ID 0 on GPU 0 GPU instance ID 3 using profile MIG 1g.24gb (ID 7)
Successfully created GPU instance ID 4 on GPU 0 using profile MIG 1g.24gb (ID 15)
Successfully created compute instance ID 0 on GPU 0 GPU instance ID 4 using profile MIG 1g.24gb (ID 7)
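As a quick check, nvidia-smi -L should now list the parent GPU along with the four MIG devices and their UUIDs, which is useful later if you need to pin a process to a specific instance:
nvidia-smi -L
The full nvidia-smi output below likewise shows the four MIG devices: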
root@h100vm:~# nvidia-smi
Fri Jul 5 06:32:39 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.08             Driver Version: 535.161.08   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA H100 NVL               On   | 00000001:00:00.0 Off |                   On |
| N/A   38C    P0             61W /  400W |     51MiB / 95830MiB |      N/A     Default |
|                                         |                      |              Enabled |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| MIG devices:                                                                          |
+------------------+--------------------------------+-----------+-----------------------+
| GPU  GI  CI  MIG |                   Memory-Usage |        Vol|      Shared           |
|      ID  ID  Dev |                     BAR1-Usage | SM     Unc| CE ENC DEC OFA JPG    |
|                  |                                |        ECC|                       |
|==================+================================+===========+=======================|
|  0    3   0   0  |              12MiB / 22144MiB  | 26      0 |  1   0    1    0    1 |
|                  |               0MiB / 32767MiB  |           |                       |
+------------------+--------------------------------+-----------+-----------------------+
|  0    4   0   1  |              12MiB / 22144MiB  | 26      0 |  1   0    1    0    1 |
|                  |               0MiB / 32767MiB  |           |                       |
+------------------+--------------------------------+-----------+-----------------------+
|  0    5   0   2  |              12MiB / 22144MiB  | 26      0 |  1   0    1    0    1 |
|                  |               0MiB / 32767MiB  |           |                       |
+------------------+--------------------------------+-----------+-----------------------+
|  0    6   0   3  |              12MiB / 22144MiB  | 26      0 |  1   0    1    0    1 |
|                  |               0MiB / 32767MiB  |           |                       |
+------------------+--------------------------------+-----------+-----------------------+
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+
3. Create Munge and SLURM users on the execute VM
groupadd -g 11101 munge
useradd -u 11101 -g 11101 -s /bin/false -M munge
groupadd -g 11100 slurm
useradd -u 11100 -g 11100 -s /bin/false -M slurm
4. Mount NFS Shares from Scheduler (Use Scheduler IP address)
mkdir /shared /sched
mount <scheduler ip>:/sched /sched
mount <scheduler ip>:/shared /shared
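Optionally, to make these mounts persist across reboots, you can add entries like the following to /etc/fstab on the execute VM (a minimal sketch; replace <scheduler ip> with your scheduler's address, as above):
<scheduler ip>:/sched /sched nfs defaults 0 0
<scheduler ip>:/shared /shared nfs defaults 0 0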
5. Install and Configure Munge
apt install munge -y
cp /sched/munge.key /etc/munge/
chown munge:munge /etc/munge/munge.key
chmod 400 /etc/munge/munge.key
systemctl restart munge.service
6. Install and Configure SLURM on execute VM
apt install slurm-slurmd -y
mkdir -p /etc/slurm /var/spool/slurmd /var/log/slurmd
chown slurm:slurm /etc/slurm /var/spool/slurmd /var/log/slurmd
chown slurm:slurm /etc/slurm/
ln -s /sched/slurm.conf /etc/slurm/slurm.conf
ln -s /sched/cgroup.conf /etc/slurm/cgroup.conf
ln -s /sched/accounting.conf /etc/slurm/accounting.conf
7. Create the GRES configuration for MIG. The following steps show how to use the MIG discovery program, using a single H100 system as an example.
git clone https://gitlab.com/nvidia/hpc/slurm-mig-discovery.git
cd slurm-mig-discovery
gcc -g -o mig -I/usr/local/cuda/include -I/usr/cuda/include mig.c -lnvidia-ml
./mig
8. Check the generated GRES configuration file.
root@h100vm:~/slurm-mig-discovery# cat gres.conf
# GPU 0 MIG 0 /proc/driver/nvidia/capabilities/gpu0/mig/gi3/access
Name=gpu Type=1g.22gb File=/dev/nvidia-caps/nvidia-cap30
# GPU 0 MIG 1 /proc/driver/nvidia/capabilities/gpu0/mig/gi4/access
Name=gpu Type=1g.22gb File=/dev/nvidia-caps/nvidia-cap39
# GPU 0 MIG 2 /proc/driver/nvidia/capabilities/gpu0/mig/gi5/access
Name=gpu Type=1g.22gb File=/dev/nvidia-caps/nvidia-cap48
# GPU 0 MIG 3 /proc/driver/nvidia/capabilities/gpu0/mig/gi6/access
Name=gpu Type=1g.22gb File=/dev/nvidia-caps/nvidia-cap57
9. Copy the generated configuration files to the central location.
cp gres.conf cgroup_allowed_devices_file.conf /sched/
chown slurm:slurm /sched/cgroup_allowed_devices_file.conf
chown slurm:slurm /sched/gres.conf
10. Create symlinks in the Slurm configuration directory.
ln -s /sched/cgroup_allowed_devices_file.conf /etc/slurm/cgroup_allowed_devices_file.conf
ln -s /sched/gres.conf /etc/slurm/gres.conf
11. Create the Slurm partitions file. This command creates a configuration file named `partitions.conf` in the `/sched` directory. It defines:
– A GPU partition named `gpu` on node `h100vm` with default settings.
– The node `h100vm` has 40 CPUs, 1 board, 1 socket per board, 40 cores per socket, and 1 thread per core.
– It has a real memory of 322243 MB.
– GPU resources are specified as 4 MIG devices of type `gpu:1g.22gb`.
cat << 'EOF' > /sched/partitions.conf
PartitionName=gpu Nodes=h100vm Default=YES MaxTime=INFINITE State=UP
NodeName=h100vm CPUs=40 Boards=1 SocketsPerBoard=1 CoresPerSocket=40 ThreadsPerCore=1 RealMemory=322243 Gres=gpu:1g.22gb:4
EOF
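If you are unsure of the correct hardware values for the NodeName line, running slurmd -C on the execute VM prints the node configuration (CPUs, sockets, cores, threads, and RealMemory) exactly as slurmd detects it, which you can paste into partitions.conf:
slurmd -C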
12. Set the permissions for `partitions.conf` and create a symlink in the Slurm configuration directory.
chown slurm:slurm /sched/partitions.conf
ln -s /sched/partitions.conf /etc/slurm/partitions.conf
Finalize and Start the SLURM Services
On Scheduler:
ln -s /sched/partitions.conf /etc/slurm/partitions.conf
ln -s /sched/cgroup_allowed_devices_file.conf /etc/slurm/cgroup_allowed_devices_file.conf
ln -s /sched/gres.conf /etc/slurm/gres.conf
systemctl restart slurmctld
systemctl enable slurmctld
On Execute VM:
systemctl restart slurmd
systemctl enable slurmd
Run the `sinfo` command on the scheduler VM to verify the Slurm configuration.
root@scheduler:~# sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
gpu* up infinite 1 idle h100vm
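You can also confirm that the MIG devices were registered as generic resources; the Gres= field of the node record should show gpu:1g.22gb:4:
scontrol show node h100vm | grep -i gres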
Testing the job and functionality
1. To submit the job, first create a test user. In this example, we’ll create a test user named `vinil` for testing purposes. Start by creating the user on the scheduler and then on the execute VM. We have set up an NFS server to share the `/shared` directory, which will serve as the centralized home directory for the user.
# On Scheduler VM
useradd -m -d /shared/home/vinil -u 20001 vinil
# Execute VM
useradd -d /shared/home/vinil -u 20001 vinil
On Scheduler VM:
2. I am using the CIFAR-10 training model to run tests on the 4 MIG instances we created. I will set up an Anaconda environment to run the CIFAR-10 job. This involves installing the TensorFlow GPU machine learning libraries and running 4 jobs simultaneously on a single node using Slurm to demonstrate the capabilities of MIG partitions and GPU workload scheduling on MIG partitions.
# Download and install Anaconda software.
curl -O https://repo.anaconda.com/archive/Anaconda3-2024.06-1-Linux-x86_64.sh
chmod +x Anaconda3-2024.06-1-Linux-x86_64.sh
sh Anaconda3-2024.06-1-Linux-x86_64.sh -b
3. Create a Conda environment named `mlprog` and install the TensorFlow GPU libraries.
# Set the PATH and create a Conda environment called mlprog.
export PATH=$PATH:/shared/home/vinil/anaconda3/bin
/shared/home/vinil/anaconda3/bin/conda init
source ~/.bashrc
/shared/home/vinil/anaconda3/bin/conda create -n mlprog tensorflow-gpu -y
4. The following code will download the `cifar10.py` script, which contains the CIFAR-10 image classification machine learning code written using TensorFlow.
#Download the CIFAR10 code.
wget https://raw.githubusercontent.com/vinil-v/slurm-mig-setup/main/test_job_setup/cifar10.py
5. Create a job submission script named `mljob.sh` to run the job on a GPU using the Slurm scheduler. This script is designed to submit a job named `MLjob` to the GPU partition (`--partition=gpu`) of the Slurm scheduler. It allocates 10 tasks (`--ntasks=10`) and specifies GPU resources (`--gres=gpu:1g.22gb:1`). The script sets up the environment by adding Conda to the PATH and activating the `mlprog` Conda environment before executing the `cifar10.py` script to perform CIFAR-10 image classification using TensorFlow.
#!/bin/sh
#SBATCH --job-name=MLjob
#SBATCH --partition=gpu
#SBATCH --ntasks=10
#SBATCH --gres=gpu:1g.22gb:1
export PATH=$PATH:/shared/home/vinil/anaconda3/bin
source /shared/home/vinil/anaconda3/bin/activate mlprog
python cifar10.py
6. Submit the job using the `sbatch` command and execute 4 instances of the job using the same `mljob.sh` script. This method will fully utilize all 4 MIG partitions available on the node. After submission, use the `squeue` command to check the status. You will observe all 4 jobs in the Running state.
(mlprog) vinil@scheduler:~$ sbatch mljob.sh
Submitted batch job 7
(mlprog) vinil@scheduler:~$ sbatch mljob.sh
Submitted batch job 8
(mlprog) vinil@scheduler:~$ sbatch mljob.sh
Submitted batch job 9
(mlprog) vinil@scheduler:~$ sbatch mljob.sh
Submitted batch job 10
(mlprog) vinil@scheduler:~$ squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
7 gpu MLjob vinil R 0:05 1 h100vm
8 gpu MLjob vinil R 0:01 1 h100vm
9 gpu MLjob vinil R 0:01 1 h100vm
10 gpu MLjob vinil R 0:01 1 h100vm
7. Log in to the execution VM and execute the `nvidia-smi` command. You will observe that all 4 MIG GPU partitions are allocated to the jobs and are currently running.
azureuser@h100vm:~$ nvidia-smi
Fri Jul 5 07:32:50 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.08             Driver Version: 535.161.08   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA H100 NVL               On   | 00000001:00:00.0 Off |                   On |
| N/A   43C    P0             90W /  400W |  83393MiB / 95830MiB |      N/A     Default |
|                                         |                      |              Enabled |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| MIG devices:                                                                          |
+------------------+--------------------------------+-----------+-----------------------+
| GPU  GI  CI  MIG |                   Memory-Usage |        Vol|      Shared           |
|      ID  ID  Dev |                     BAR1-Usage | SM     Unc| CE ENC DEC OFA JPG    |
|                  |                                |        ECC|                       |
|==================+================================+===========+=======================|
|  0    3   0   0  |           20846MiB / 22144MiB  | 26      0 |  1   0    1    0    1 |
|                  |               2MiB / 32767MiB  |           |                       |
+------------------+--------------------------------+-----------+-----------------------+
|  0    4   0   1  |           20846MiB / 22144MiB  | 26      0 |  1   0    1    0    1 |
|                  |               2MiB / 32767MiB  |           |                       |
+------------------+--------------------------------+-----------+-----------------------+
|  0    5   0   2  |           20850MiB / 22144MiB  | 26      0 |  1   0    1    0    1 |
|                  |               2MiB / 32767MiB  |           |                       |
+------------------+--------------------------------+-----------+-----------------------+
|  0    6   0   3  |           20850MiB / 22144MiB  | 26      0 |  1   0    1    0    1 |
|                  |               2MiB / 32767MiB  |           |                       |
+------------------+--------------------------------+-----------+-----------------------+
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0    3    0      11813      C   python                                    20826MiB |
|    0    4    0      11836      C   python                                    20826MiB |
|    0    5    0      11838      C   python                                    20830MiB |
|    0    6    0      11834      C   python                                    20830MiB |
+---------------------------------------------------------------------------------------+
azureuser@h100vm:~$
Conclusion:
You have now successfully set up a SLURM cluster with NVIDIA MIG integration. This setup allows you to efficiently schedule and manage GPU jobs, ensuring optimal utilization of resources. With SLURM and MIG, you can achieve high performance and scalability for your computational tasks. Happy computing!
Microsoft Tech Community – Latest Blogs
Field ii ultrasound simulation: Unable to run C compiled binary files (mexw64 extension)
In the Field II ultrasound simulation library, a particular function, field_init, needs to be run initially. Here is the code for the m-file that calls the compiled binary function Mat_field, which MATLAB does not seem to recognize:
function res = field_init (suppress)
% Call the C-part of the program to initialize it
if (nargin==1)
Mat_field (5001,suppress);
else
Mat_field (5001,1);
end
Here is what I get when I invoke this function. MATLAB does not seem to recognize the Mat_field function even though the file Mat_field.mexw64 exists:
field_init
Unrecognized function or variable 'Mat_field'.
Error in field_init (line 25)
Mat_field (5001,1);
c compiled files mex file MATLAB Answers — New Questions
how convert arraycell to use with writecell
{'31/07/2024'} {[ 0]} {[ 0]} {[ 0]} {[ 0]} {[2486]}
{'01/08/2024'} {[ 0]} {[ 0]} {[ 0]} {[ 0]} {[2496]}
{'02/08/2024'} {[ 0]} {[ 0]} {[ 1]} {[ 0]} {[2405]}
{'03/08/2024'} {[ 0]} {[ 1]} {[ 0]} {[ 0]} {[2486]}
I want to write a .txt file and convert it to:
31/07/2024 0 0 0 0 2486
01/08/2024 0 0 0 0 2496
02/08/2024 0 0 2 0 2405
03/08/2024 0 1 0 0 2486
I tried to use writecell(AA,'C:TitancancmyTextFile.txt');
but I see this:
0,0,0,0,0…
how convert arraycell to use with writecell MATLAB Answers — New Questions
How to use the class of Interleaved ADC.m?
Why should the input signal "analog" be a scalar? Could MATLAB give an example of m-file scripts to demonstrate its usage?
interleaved adc MATLAB Answers — New Questions
The frequency response between the components obtained from wavelet decomposition and the original signal
Why does the FFT of the component corresponding to the last approximation coefficient obtained after wavelet decomposition not match the FFT of the original signal in the low-frequency region, resulting in an unsmooth frequency response between the two that varies with the original signal? Is using the FFT to compute the frequency response unreliable?
clearvars;close all;clc;
fs=10;
dt=1/fs;
t=dt:dt:200;
N=length(t);
signal=(0.2)*randn(1,N);
Max_level=wmaxlev(length(signal),'db10');
[C,L]=wavedec(signal,Max_level,'fk8');
level=6;
xL_DWT = wrcoef('a',C,L,'fk8',level);
xH_DWT=signal-xL_DWT;
%fft
Nfft=length(t);
f_DWT = (1:Nfft/2)*fs/(Nfft);
xL_DWT_fft=fftshift(fft(xL_DWT,Nfft));
xH_DWT_fft=fftshift(fft(xH_DWT,Nfft));
signal_fft=fftshift(fft(signal,Nfft));
wn_low_DWT=xL_DWT_fft./signal_fft;
wn_high_DWT=1-wn_low_DWT;
figure
plot(f_DWT,wn_low_DWT(Nfft/2+1:end));
hold on
plot(f_DWT,wn_high_DWT(Nfft/2+1:end));
ylim([-0.5 2]);
yticks([-0.5:0.5:2])
xlim([0 fs/2^(level-1)]);
xticks([0 fs/2^(level+1) fs/2^(level) fs/2^(level-1)])
xticklabels({'0','fs/2^{i+1}','fs/2^{i}','fs/2^{i-1}'})
fft, wavelet, frequency response, decomposition MATLAB Answers — New Questions
Two OneDrive folders showing with Explorer
Explorer shows a OneDrive – Personal folder, which I assume is the one I should use, and also a OneDrive folder under C:. Can I delete the C: folder after saving any unique files from it?
How can I manipulate the 2D data to make it smoother
I have the following data represented by the surface plot:
clear; clc;
load('data.mat')
figure;
surf(nXX,nYY,nZZ,'linestyle','none','facecolor','interp')
hold on
plot3([0, 0.5], [0, 0.145], [2, 2],'Color','white','LineStyle','--','LineWidth',2);
plot3([0, 0.5], [2*0.145, 0.145], [2, 2],'Color','black','LineStyle','--','LineWidth',2);
annotation('ellipse',[0.08 0.58 0.18 0.34],'LineWidth',2);
xlabel('C_a')
ylabel('C_d')
zlabel('C_z')
view(2)
colorbar
set(gca,'FontSize',13)
I want to remove the blue part inside the ellipse and extend the yellow area smoothly till C_a=0 and above the black dashed line. Although the yellow region appears constant, it actually changes very slowly. I aim to extend this yellow part while preserving its gradual variation.
interpolation, fitting MATLAB Answers — New Questions
Find the Turing pattern for the following equations
The equations are:
dx/dt = {r/(1-ky) - r0 - r1x - [alpha*(1-beta*y)*x]/[a+(1-beta*y)*x]}x
dy/dt = {new - cy/[a+(1-beta*y)*x]}*y
turing MATLAB Answers — New Questions
Unable to open Embedded excel document in excel
hi.
I have an Excel document from work which has embedded Word, Excel, and PDF documents in it. I cannot seem to open one of the Excel documents and one of the PDF documents, but other people are able to. I get the same message when trying to open either of them: "Cannot start the source application for this object."
Any ideas?
Thanks
Older versions of Teams are still appearing in the registry for other user profiles and are being flagged
Hello,
I wanted to update you on the issues we are facing after cleaning up Classic Teams. Older versions of Teams are still appearing in the registry for other user profiles and are being flagged as vulnerable in 365 Defender, specifically in the HKEY_USERS registry path for other users.
For example, as evidence from the Defender portal, here are some entries indicating software issues:
– Endpoint Name: TestPC
– Computer\HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Uninstall\Teams
– HKEY_USERS\user1\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\Teams
– HKEY_USERS\user2\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\Teams
– HKEY_USERS\user3\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\Teams
We attempted to remove the registry entries from other user profiles to clean up the Classic Teams presence by using the following commands:
powershell
reg load "HKU\$user" "C:\Users\$user\NTUSER.DAT"
Test-Path -Path "Registry::HKEY_USERS\$hiveName\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\Teams"
For checking the registry presence, we used the detection and remediation method in Intune for cleaning Classic Teams. I ran the detection script on only three PCs for testing.
Surprisingly, we received a warning from Sentinel about “User and group membership reconnaissance (SAMR) on one endpoint,” indicating a potential security incident involving suspicious SAMR (Security Account Manager Remote) queries. This was detected for admin accounts, DC, and also for an account belonging to someone who left the organization five years ago (ABC Admin).
I would appreciate your guidance on the best practices for detecting and removing Classic Teams leftovers in the registry for other user profiles.
Best Practice:
– How to detect and remove Classic Teams registry entries for other user profiles in the system.
– Best method? Loading another user's profile hive into the registry and removing the Classic Teams registry entries.
Reference Links:
– [Older versions of Teams showing in user profiles](https://answers.microsoft.com/en-us/msteams/forum/all/older-versions-of-teams-showing-in-user-profiles/2bc7563c-ccc9-4afc-b522-337acff9d20e?page=1)
– [Remove old user profiles on Microsoft Teams (Reddit)](https://www.reddit.com/r/PowerShell/comments/1bvjner/remove_old_user_profiles_on_microsoft_teams/)
I have been working on 3D imaging in the form of pixels with spheres in a box and need help in finding out the contact points in between the spheres in the produced pixel data
I have tried finding the exact point by first extracting the 2 objects from a data set of 161*161*105 uint8 in volumeSegmenter and then using the edge command to observe only the boundary. I want to know how I can extract the exact contact point between 2 spheres by searching through the 105 slices, without using the voxel command to get the coordinates.
Here is my code:
%% joining bd1,bd2,bd3,bd4
A=cat(3,Bd1, Bd2,Bd3,Bd4);
B=cat(3,L1,L2,L3,L4);
%% for extracting 2 spheres
T=table(A,B);
P=(T.B=="sphere8");
Q=(T.B=="sphere7");
R=P+Q;
%% visualization of 2 joined spheres
volumeSegmenter(R)
%% making the only edge visible for inspection
SS=edge3(R,"approxcanny",0.5);
SS1=im2double(SS);
volumeSegmenter(SS1)
matlab, image processing, 3d image processing, contact points MATLAB Answers — New Questions
tomographic wifi sensing using wifi adapters
Hello, I would like to ask how to modify the following programs to use WiFi adapters rather than an SDR to perform tomographic WiFi sensing. Thanks very much.
https://www.mathworks.com/help/wlan/ug/detect-human-presence-using-wireless-sensing-with-deep-learning.html
https://www.mathworks.com/matlabcentral/fileexchange/43008-2-d-tomographic-reconstruction-demo
tomographic, wifi, sensing MATLAB Answers — New Questions
Install apps from microsoft store (New)
I try to install apps from the Microsoft Store (new). Some of the apps show this: (Required and available install), but when I check the Microsoft Store I see the apps and still need to install them manually.
Can I know why?
Is there any policy I need to add for that? Actually, all apps give me the same error.
See the attachment for more information.
Empty spherical plot – strange error
I find it strange that I get an empty plot with the given command, and get the given error:
Error using matlab.graphics.chart.primitive.Surface
Value must be a vector or 2D array of numeric type.
Error in surf (line 145)
hh = matlab.graphics.chart.primitive.Surface(allargs{:});
Error in polar_coord_soln_Manz (line 59)
surf(X, Y, Z, Psi);
% Constants
hbar = 1.0545718e-34;
m = 9.10938356e-31;
E_ion = 5.139 * 1.60218e-19;
k_f = 2 * m * E_ion / hbar^2;
% Define alpha (renamed to avoid conflict with MATLAB function)
alpha_val = sqrt(k_f);
% Radial wave function
function R = radial_wavefunction(r, n, l, alpha)
L = laguerreL(n-l-1, 2*l+1, alpha * r.^2);
R = sqrt((2 * alpha)^(l+1) / factorial(n-l-1)) .* exp(-alpha * r.^2) .* (alpha * r).^l .* L;
end
% Spherical harmonic (assuming it’s defined elsewhere)
function Y = spherical_harmonic(theta, phi, l, m)
Y = legendre(l, cos(theta)) .* exp(1i * m * phi);
end
% Total wave function in spherical coordinates
function psi = spherical_wavefunction(r, theta, phi, n, l, m, alpha)
R = radial_wavefunction(r, n, l, alpha);
Y = spherical_harmonic(theta, phi, l, m);
psi = R .* Y;
end
% Define grid
r = linspace(0, 10, 50); % Radial coordinate r
theta = linspace(0, pi, 50); % Polar angle theta
phi = linspace(0, 2*pi, 50); % Azimuthal angle phi
% Create grid for 3D plotting
[R, Theta, Phi] = meshgrid(r, theta, phi);
n = 1;
l = 0;
m = 0;
Psi = zeros(size(R));
for i = 1:numel(R)
Psi(i) = abs(spherical_wavefunction(R(i), Theta(i), Phi(i), n, l, m, alpha_val))^2; % Taking absolute value and squaring
end
% Reshape Psi to be 2D
Psi = reshape(Psi, size(R));
% Spherical to Cartesian Conversion
X = R .* sin(Theta) .* cos(Phi);
Y = R .* sin(Theta) .* sin(Phi);
Z = R .* cos(Theta);
% Plotting 3D surface
figure;
surf(X, Y, Z, Psi);
xlabel('x');
ylabel('y');
zlabel('z');
title(['|psi_{', num2str(n), ',', num2str(l), ',', num2str(m), '}(r, theta, phi)|^2 for Sodium']);
colorbar;
axis equal;
3d plots, spherical MATLAB Answers — New Questions
What is Convolutional Neural Network — CNN (Deep Learning)
Convolutional Neural Networks (CNNs) are a type of deep learning neural network architecture that is particularly well suited to image classification and object recognition tasks. A CNN works by transforming an input image into a feature map, which is then processed through multiple convolutional and pooling layers to produce a predicted output.
Convolutional Neural Network — CNN architecture
In this blog post, we will explore the basics of CNNs, including how they work, their architecture, and how they can be used for a wide range of computer vision tasks. We will also provide examples of some real-world applications of CNNs, and outline some of the benefits and limitations of this deep-learning architecture.
Working of Convolutional Neural Network:
A convolutional neural network starts by taking an input image, which is then transformed into a feature map through a series of convolutional and pooling layers. The convolutional layer applies a set of filters to the input image, each filter producing a feature map that highlights a specific aspect of the input image. The pooling layer then downsamples the feature map to reduce its size, while retaining the most important information.
The feature map produced by the convolutional layer is then passed through multiple additional convolutional and pooling layers, each layer learning increasingly complex features of the input image. The final output of the network is a predicted class label or probability score for each class, depending on the task.
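To make this pipeline concrete, here is a minimal NumPy sketch of a single convolution, ReLU activation, and max-pooling pass. The 8x8 image and the vertical-edge kernel are illustrative assumptions, not taken from any particular network:

import numpy as np

def conv2d(image, kernel):
    # "Valid" convolution (really cross-correlation, as in most CNN libraries)
    h, w = kernel.shape
    out_h, out_w = image.shape[0] - h + 1, image.shape[1] - w + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i+h, j:j+w] * kernel)
    return out

def max_pool2d(fmap, size=2):
    # Non-overlapping max pooling; halves each spatial dimension for size=2
    out_h, out_w = fmap.shape[0] // size, fmap.shape[1] // size
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = fmap[i*size:(i+1)*size, j*size:(j+1)*size].max()
    return out

image = np.random.rand(8, 8)                        # toy grayscale "image"
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])             # hypothetical vertical-edge filter
fmap = np.maximum(conv2d(image, edge_kernel), 0.0)  # convolution + ReLU activation
pooled = max_pool2d(fmap)                           # downsample the feature map
print(fmap.shape, pooled.shape)                     # (6, 6) (3, 3)

Real networks learn the kernel values during training rather than hand-coding them; this sketch only illustrates the shape transformations described above.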
The architecture of Convolutional Neural Network:
A typical CNN architecture is made up of three main components: the input layer, the hidden layers, and the output layer. The input layer receives the input image and passes it to the hidden layers, which are made up of multiple convolutional and pooling layers. The output layer provides the predicted class label or probability scores for each class.
The hidden layers are the most important part of a CNN, and the number of hidden layers and the number of filters in each layer can be adjusted to optimize the network’s performance. A common architecture for a CNN is to have multiple convolutional layers, followed by one or more pooling layers, and then a fully connected layer that provides the final output.
Applications of Convolutional Neural Network:
CNNs have a wide range of applications in computer vision, including image classification, object detection, semantic segmentation, and style transfer.
Image classification: Image classification is the task of assigning a class label to an input image. CNNs can be trained on large datasets of labeled images to learn the relationships between the image pixels and the class labels, and then applied to new, unseen images to make a prediction.
Object detection: Object detection is the task of identifying objects of a specific class in an input image and marking their locations. This can be useful for applications such as security and surveillance, where it is important to detect and track objects in real time.
Semantic segmentation: Semantic segmentation is the task of assigning a class label to each pixel in an input image, producing a segmented image that can be used for further analysis. This can be useful for applications such as medical image analysis, where it is important to segment specific structures in an image for further analysis.
Style transfer: Style transfer is the task of transferring the style of one image to another image while preserving the content of the target image. This can be useful for applications such as art and design, where it is desired to create an image that combines the content of one image with the style of another.
Layers of Convolutional neural network:
The layers of a Convolutional Neural Network (CNN) can be broadly classified into the following categories:
Convolutional Layer: The convolutional layer is responsible for extracting features from the input image. It performs a convolution operation on the input image, where a filter or kernel is applied to the image to identify and extract specific features.
Convolutional Layer
Pooling Layer: The pooling layer is responsible for reducing the spatial dimensions of the feature maps produced by the convolutional layer. It performs a down-sampling operation to reduce the size of the feature maps and reduce computational complexity.
Activation Layer: The activation layer applies a non-linear activation function, such as ReLU, to the output of a convolutional or pooling layer. This non-linearity lets the model learn more complex representations of the input data than a purely linear stack of layers could.
Fully Connected Layer: The fully connected layer is a traditional neural network layer that connects all the neurons in the previous layer to all the neurons in the next layer. This layer is responsible for combining the features learned by the convolutional and pooling layers to make a prediction.
Normalization Layer: The normalization layer applies operations such as batch normalization or layer normalization to keep the activations of each layer well-scaled, which stabilizes training and can also act as a mild regularizer.
Dropout Layer: The dropout layer helps prevent overfitting by randomly dropping out neurons during training. This ensures that the model does not simply memorize the training data but instead generalizes to new, unseen data.
Dense Layer: After the convolutional and pooling layers have extracted features from the input image, a dense layer combines those features to make the final prediction. In a CNN, the dense layer is usually the last layer and produces the output predictions: the activations from the previous layers are flattened and passed to the dense layer, which computes a weighted sum of its inputs and applies an activation function to produce the final output.
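To see how these layer categories fit together, here is a short Keras sketch that applies one layer of each kind to a dummy batch and tracks the tensor shape; all sizes are illustrative assumptions:

import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal((1, 28, 28, 1))           # dummy batch: one 28x28 grayscale image (assumed)

x = layers.Conv2D(8, 3)(x)                     # convolutional layer: 8 filters -> (1, 26, 26, 8)
x = layers.MaxPooling2D(2)(x)                  # pooling layer: halves spatial size -> (1, 13, 13, 8)
x = layers.Activation("relu")(x)               # activation layer: elementwise non-linearity
x = layers.BatchNormalization()(x)             # normalization layer: rescales activations
x = layers.Dropout(0.5)(x, training=True)      # dropout layer: randomly zeroes units in training
x = layers.Flatten()(x)                        # flatten before the dense/fully connected layer
x = layers.Dense(10, activation="softmax")(x)  # dense layer: final per-class scores
print(x.shape)                                 # (1, 10)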
Benefits of Convolutional Neural Networks:
Feature extraction: CNNs automatically extract relevant features from an input image, reducing the need for manual feature engineering.
Spatial invariance: CNNs can recognize objects regardless of where they appear in an image, and with pooling and data augmentation can tolerate some variation in size and orientation, making them well-suited to object recognition tasks.
Robust to noise: CNNs can often handle noisy or cluttered images, making them useful for real-world applications where image quality may be variable.
Transfer learning: CNNs can leverage pre-trained models, reducing the amount of data and computational resources required to train a new model (see the sketch after this list).
Performance: CNNs have demonstrated state-of-the-art performance on a range of computer vision tasks, including image classification, object detection, and semantic segmentation.
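The transfer-learning benefit is easy to sketch: freeze a pre-trained backbone and train only a small new classification head. The example below assumes Keras; the MobileNetV2 base and the 5-class head are illustrative choices, not a prescribed recipe:

from tensorflow import keras
from tensorflow.keras import layers

base = keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                        # freeze the pre-trained convolutional features

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),          # collapse feature maps to one vector per image
    layers.Dense(5, activation="softmax"),    # new head for a hypothetical 5-class task
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, epochs=5)               # train_ds would be your own labeled dataset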
Limitations of Convolutional Neural Networks:
Computational cost: Training a deep CNN can be computationally expensive, requiring large amounts of data and significant computational resources.
Overfitting: Deep CNNs are prone to overfitting, especially when trained on small datasets; the model may memorize the training data rather than generalize to new, unseen data.
Lack of interpretability: CNNs are often considered "black box" models, making it difficult to understand why a particular prediction was made.
Limited to grid-like data: CNNs operate on regular grids such as pixel arrays and do not natively handle irregular data structures such as graphs or point clouds.
Conclusion:
In conclusion, Convolutional Neural Networks (CNNs) are a powerful deep learning architecture well-suited to image classification and object recognition tasks. With their ability to automatically extract relevant features, handle noisy images, and leverage pre-trained models, CNNs have demonstrated state-of-the-art performance on a range of computer vision tasks. However, they also have their limitations, including a high computational cost, overfitting, a lack of interpretability, and a limited ability to handle irregular data structures. Nevertheless, CNNs remain a popular choice for many computer vision tasks and are likely to continue to be a key area of research and development in the coming years.