Month: September 2024
Simscape Multibody reinforcement learning: unable to run examples
I was trying to run some examples of training on Simscape using reinforcement learning, but I’m not able to run them. They look to be far too demanding for my machine. Does anybody share my issues? simscape, reinforcement learning MATLAB Answers — New Questions
MATLAB stalls after running script
I have a pretty large script that analyzes a somewhat large data file (~5 GB). My script works fine when I first run it, but when I go to run a code block, or even just try to get the output from a single simple variable in the Command Window AFTER I have loaded everything into my workspace, MATLAB will stall for a minute or even more before starting to run whatever command I gave it. I’ve been monitoring my PC resources and it doesn’t seem like I am running out of RAM or anything (working with 64 GB). I even went through and cleared many larger variables that were not needed for later parts of the script, and the problem persists. I do not receive any errors; it is just very slow to do simple things. Once it starts executing the command, it runs at the expected speed (I’ve verified with some manual progress bars I coded in).
The data that I load is from a single .MAT file which has a structure in it with all of my data. I’ve also run this script on 3 other PCs and had the same issue. structures MATLAB Answers — New Questions
How to change units in Bode Diagram?
I want to change the units of frequency and magnitude in the Bode Diagram, how can I do this? change, bode, diagram, units, properties, programmatically, command, line, figure, setoptions MATLAB Answers — New Questions
How to Change Line Color on Mouse Click Without Disabling Pan and Zoom in MATLAB?
I’m working on a MATLAB plot where I want to achieve the following functionality:
Change the color of a line when it’s clicked.
Enable panning by dragging while clicking off the line.
Enable zooming using the scroll wheel.
Here’s the simple code I’m using:
% Plotting a simple line
h = plot([0,1],[0,1],'LineWidth',3);
% Setting the ButtonDownFcn to change line color (placeholder code)
h.ButtonDownFcn = @(~,~) disp(h);
In this example, clicking on the line displays its handle h in the Command Window, which is a placeholder for my actual code to change the line color.
The Problem:
Assigning a ButtonDownFcn to the line object seems to override MATLAB’s built-in pan and zoom functionalities. After setting the ButtonDownFcn, I’m unable to pan by clicking and dragging off the line, and the scroll wheel zoom no longer works. It appears that the custom callback interferes with the default interactive behaviors, even when I’m not interacting directly with the line.
My Questions:
Why does setting the ButtonDownFcn on the line object disable panning and zooming, even when interacting off the line?
Is there a way to have both the custom click behavior (changing the line color when clicked) and retain the default pan and zoom functionalities when interacting elsewhere on the plot? callback, plot MATLAB Answers — New Questions
It’s Nearly 2025 and Meeting Channel invites still don’t work properly
With channel meetings, all members get a meeting invite regardless of whether the organizer invites them or not.
Why even offer the ability to invite individuals in the first place?
It’s been so long, I’m starting to think this was by design and MS has no intention of fixing it.
Read More
Hidden Symbol in Word; Cannot Find and Replace
When copying and pasting from web pages or from Google email, I often encounter this weird symbol, embedded in the document and only visible via the Show/Hide function. It creates an extra space in documents. The problem is, I cannot do a “Find/Replace” to remove it from Word docs. Can someone tell me what this character is called in Word and how to “Find/Replace” it? Thanks.
Hidden Symbol? Read More
Update a sharepoint Excel file with the contents of multiple Excels in another folder
For context, we receive a monthly Excel report that is automatically uploaded to our SharePoint.
At the moment we have someone manually copy and paste the content from these newly uploaded files into a “master worksheet” that contains all the reports’ data in a single file. I want to know if there is a way that we can automate the process of updating this Excel file?
The tabs and columns in all the Excel files are exactly the same.
Read More
Re: Notes
How do I recover notes that were on my iPhone prior to today? I deleted my account to register it again today, and my notes were gone when I added my email back to my phone.
Read More
Update: Cost-effective genomics analysis with Sentieon on Azure
This blog was co-authored by Don Freed, Sr. Bioinformatics Scientist, and Brendan Gallagher, Head of Business Development, at Sentieon, Inc.
In our previous blog, we discussed benchmarking the performance of Sentieon’s DNAseq and DNAscope pipelines on Azure instances using v202112.05 of the software. Since the publication of those results, there have been significant updates to the Sentieon software. As a result, we have updated the benchmarking to use Sentieon version 202308.01. We break down the runtime and cost of the pipelines on a wide range of currently available instances. These benchmarks use publicly available datasets, and the pipeline is available on GitHub.
Additionally, we have worked with Sentieon to develop a Terraform template for deployment of the license server.
Running Sentieon on Azure
The pipelines and scripts needed for setup used in this benchmarking are provided on GitHub.
Instance Setup
The script at misc/instance_setup.sh performs initial setup of the instance and download/installation of software packages used in the benchmark.
Input datasets
In these benchmarks, as we stated before, we use the GIAB HG002 sample sequenced on multiple sequencing platforms. Input datasets for the benchmark are recorded in config/config.yaml. The one exception is the Element dataset, which you will have to download on your own.
We recommend downloading all the files and placing them in Azure Blob Storage. You can use AzCopy to transfer the required files to your own Storage account using a shared access signature with “Write” access. Then we recommend updating the configs to use a shared access signature for each file. The pipeline will automatically download input files.
Input FASTQ were obtained as previously outlined; we have added the new ONT dataset below:
ONT HPRC
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_1_Dorado_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_2_Dorado_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_3_Dorado_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_4_Dorado_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_5_Dorado_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_6_Dorado_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_7_Dorado_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_8_Dorado_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_9_Dorado_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_10_Dordo_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
https://human-pangenomics.s3.amazonaws.com/submissions/0CB931D5-AE0C-4187-8BD8-B3A9C9BFDADE–UCSC_HG002_R1041_Duplex_Dorado/Dorado_v0.1.1/stereo_duplex/11_15_22_R1041_Duplex_HG002_11_Dorado_v0.1.1_400bps_sup_stereo_duplex_pass.fastq.gz
The input files vary in their coverage, so the datasets with FASTQ input were down-sampled to approximately 93 billion bases (~30x coverage) prior to processing with the Sentieon secondary analysis pipelines. The Ultima CRAM file was not down-sampled and is at 40x coverage as recommended by Ultima Genomics. The ONT duplex sample was not down-sampled and is at approximately 30x coverage.
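As a rough sanity check, ~93 billion bases does work out to ~30x mean coverage; a minimal sketch, assuming a human genome length of roughly 3.1 Gb (our approximation, not a figure from the post):

```python
# Mean sequencing depth = total sequenced bases / genome length.
# The 3.1e9 genome length is an illustrative assumption.

def mean_coverage(total_bases, genome_length=3.1e9):
    """Return approximate mean coverage for a given base count."""
    return total_bases / genome_length

# ~93 billion bases over a ~3.1 Gb genome gives ~30x coverage.
print(round(mean_coverage(93e9)))  # 30
```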
The data were processed using the hg38 reference genome. The reference genome at https://giab.s3.amazonaws.com/release/references/GRCh38/GCA_000001405.15_GRCh38_no_alt_analysis_set.fasta.gz was used for files with input in the FASTQ format. The reference genome at https://broad-references.s3.amazonaws.com/hg38/v0/Homo_sapiens_assembly38.fasta was used with the Ultima data in CRAM format, as this dataset was already aligned to this reference genome.
Running benchmarks on Azure
The script at misc/run_benchmarks.sh was used to run the benchmarks. It orchestrates the localization of the input datasets, references, and model files, and the execution of Snakemake workflows on the machine. The workflow will down-sample the input data for consistency across the Sentieon analysis workflows and will calculate variant calling accuracy against the Genome in a Bottle (GIAB) v4.2.1 truth set. For the ARM benchmarking we didn’t run ONT and PacBio data, as minimap2 is not supported by Sentieon on that architecture in version 202308.01. Support for minimap2 on ARM was added in version 202308.03 of the Sentieon software.
Improved Benchmarking with HBv3
To test the improvement of the software, we wanted to retest on the HBv3 series of machines, which we previously recommended. These machines are optimized for applications driven by memory bandwidth, such as fluid dynamics, finite element analysis, and reservoir simulation, and are a good fit for Sentieon’s analysis pipelines. Figure 1 presents the runtime and Spot compute cost of running Sentieon’s analysis pipelines for germline variant calling across multiple sequencing technologies on a Standard_HB120rs_v3 instance in US East at the time of publication.
Figure 1: Runtime and Spot compute cost of Sentieon DNAseq and DNAscope pipelines on Standard_HB120rs_v3.
Using the Standard_HB120rs_v3, we analyzed 30x Illumina NovaSeq and HiSeq X samples from FASTQ to VCF using the DNAseq and DNAscope pipelines. The DNAseq pipeline took around 28 minutes at a cost of $0.17. Sentieon’s DNAscope pipeline has been sped up and now finishes about 10 minutes faster, in around 18 minutes, at a cost of $0.11, roughly 6 cents less; see Table 1.
The Ultima UG100 dataset is already aligned to the reference genome, so the pipeline performed variant calling without alignment. The DNAscope pipeline finished in 18 minutes for a Spot cost of $0.10.
Sentieon’s DNAscope LongRead pipeline for PacBio HiFi data is more computationally intensive, as it includes multiple passes of variant calling along with read-backed phasing. The DNAscope LongRead pipeline finished in 41 minutes with a Spot cost of $0.25. We added ONT data in this round of tests; similar to the PacBio data, the ONT pipeline is more computationally involved. The DNAscope LongRead pipeline finished in 88 minutes with a Spot cost of $0.53 on the ONT long reads.
The Element Biosciences AVITI system is supported by a customized Sentieon DNAscope pipeline. Sentieon’s DNAscope pipeline for Element Biosciences finished in 21 minutes with a Spot cost of $0.13.
All run times and costs can be found in Table 1.
Sample | Pipeline | Alignment (min) | Preprocessing (min) | Variant Calling (min) | Total Runtime (min) | On Demand ($)¹ | Spot ($)¹
Element Aviti | DNAscope | 11.05 | 2.30 | 7.39 | 20.74 | 1.24 | 0.12
Illumina HiSeq X | DNAseq | 21.09 | 2.97 | 4.11 | 28.18 | 1.69 | 0.17
Illumina HiSeq X | DNAscope | 9.47 | 1.40 | 7.71 | 18.57 | 1.11 | 0.11
Illumina NovaSeq | DNAseq | 21.53 | 2.63 | 4.43 | 28.59 | 1.72 | 0.17
Illumina NovaSeq | DNAscope | 9.74 | 1.39 | 7.78 | 18.92 | 1.14 | 0.11
ONT Duplex | DNAscope | 32.91 | N/A | 55.37 | 88.28 | 5.30 | 0.53
PacBio HiFi | DNAscope | 11.49 | N/A | 29.75 | 41.24 | 2.47 | 0.25
Ultima UG100 | DNAscope | N/A | N/A | 17.87 | 17.87 | 1.07 | 0.11
Table 1: Runtime and On Demand and Spot compute cost of Sentieon DNAseq and DNAscope pipelines on Standard_HB120rs_v3. Alignment includes alignment with Sentieon BWA-MEM for short-read data and alignment with Sentieon minimap2 for PacBio HiFi and ONT Duplex data. Preprocessing includes duplicate marking, base-quality score recalibration, and merging of multiple aligned files into a single file. Variant calling includes variant calling or variant candidate identification along with variant genotyping and filtering. Variant calling for PacBio HiFi data is implemented as a multi-stage pipeline. All runs were in the eastus region. ¹ Pricing is accurate at the time of publication.
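As a quick consistency check, the per-stage runtimes in Table 1 should sum (up to rounding) to the reported totals; a minimal sketch over a few rows of the table:

```python
# Per-stage runtimes (alignment, preprocessing, variant calling) in minutes,
# taken from Table 1. N/A stages are represented as 0.0.
stage_minutes = {
    "Element Aviti / DNAscope": (11.05, 2.30, 7.39),   # total 20.74
    "ONT Duplex / DNAscope": (32.91, 0.0, 55.37),      # total 88.28
    "PacBio HiFi / DNAscope": (11.49, 0.0, 29.75),     # total 41.24
}

for run, stages in stage_minutes.items():
    print(f"{run}: {sum(stages):.2f} min")
```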
Let’s compare the improvements between v202112.05 and v202308.01 of the software results based on the provided information:
1. DNAseq Pipeline Performance:
– v202112.05: Took around 30 minutes with a Spot cost of $0.18.
– v202308.01: Took around 28 minutes with a Spot cost of $0.17.
– Improvement: In v202308.01, the runtime decreased by 2 minutes and the cost decreased by $0.01.
2. DNAscope Pipeline Performance:
– v202112.05: Took around 32 minutes with a cost of $0.19.
– v202308.01: Improved to 19 minutes with a cost of $0.11.
– Improvement: In v202308.01, the runtime decreased significantly to 19 minutes, and the cost decreased by $0.08.
3. DNAscope LongRead Pipeline Performance (PacBio HiFi Data):
– v202112.05: Finished in 72 minutes with a Spot cost of $0.42.
– v202308.01: Improved to 41 minutes with a Spot cost of $0.25.
– Improvement: In v202308.01, the runtime decreased significantly to 41 minutes, and the cost decreased by $0.17.
4. Element Biosciences AVITI System Performance:
– v202112.05: Finished in 31 minutes with a Spot cost of $0.18.
– v202308.01: Improved to 20 minutes with a Spot cost of $0.12.
– Improvement: In v202308.01, the runtime decreased slightly to 20 minutes, and the cost decreased by $0.06.
Overall, in v202308.01, significant improvements were observed in the runtime and cost efficiency of the DNAscope pipeline, whereas minor fluctuations were noted in other pipeline performances. It’s also important to note that v202308.01 introduced support for ONT data in the DNAscope LongRead pipeline.
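The version-to-version deltas above can be recomputed directly from the quoted figures; a minimal sketch using only the runtimes (minutes) and Spot costs ($) stated in this post:

```python
# (runtime_min, spot_cost_usd) for v202112.05 vs v202308.01, per pipeline,
# using the approximate figures quoted in the comparison above.
runs = {
    "DNAseq":             ((30, 0.18), (28, 0.17)),
    "DNAscope":           ((32, 0.19), (19, 0.11)),
    "DNAscope LongRead":  ((72, 0.42), (41, 0.25)),
    "DNAscope (Element)": ((31, 0.18), (20, 0.12)),
}

for name, ((t0, c0), (t1, c1)) in runs.items():
    print(f"{name}: {t0 - t1} min faster, ${c0 - c1:.2f} cheaper "
          f"({100 * (t0 - t1) / t0:.0f}% runtime reduction)")
```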
Sentieon benchmark across multiple instance families and architectures
The Sentieon pipelines and software can scale to smaller or larger instances depending on the data as well as instance availability. To provide an accurate representation of performance across various architectures, we again benchmarked the Sentieon DNAseq and DNAscope pipelines with the Illumina NovaSeq dataset on ARM and x86 architectures. The runtime and the On Demand and Spot compute costs are shown in Figures 2 and 3, respectively. With On Demand VMs you pay for compute capacity by the second, with no commitments or upfront payments, while with Spot VMs you pay for unused compute capacity at a discount.
Figure 2: Runtime and Dedicated and Spot compute cost of Sentieon DNAseq pipeline across various Azure machine types using Illumina NovaSeq dataset sorted by overall runtime. Larger instances provide lower runtime, while cost is generally consistent within a family but does differ between architectures.
Figure 3: Runtime and Dedicated and Spot compute cost of Sentieon DNAscope pipeline across various Azure machine types using Illumina NovaSeq dataset sorted by overall runtime. Larger instances provide lower runtime, while cost is generally consistent within a family but does differ between architectures.
For the fastest turnaround, the Sentieon DNAseq pipeline can process the Illumina 30x NovaSeq dataset in 28 minutes on a Standard_HB120rs_v3, with a Dedicated cost of $1.72 or a Spot cost of $0.11; see Figure 2. As another cost-effective option, DNAseq can be run on the Standard_D96ads_v5 instance with an On-Demand cost of $3.38, a Spot cost of $0.34, and a turnaround time of under 40 minutes; see Figure 2. The DNAscope pipeline on the Standard_D96ads_v5 instance has an On-Demand cost of $2.55, a Spot cost of $0.26, and a turnaround time of 31 minutes; see Figure 3. Note that for the Standard_F48s_v2, an additional external disk was used to accommodate all the test data for the analysis but wasn’t included in the overall cost.
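The per-run costs quoted throughout are simply (hourly VM rate) × (runtime in hours); a minimal sketch, where the $3.60/hr rate below is an illustrative assumption rather than an Azure price from this post:

```python
# Per-run compute cost = hourly VM price * runtime in hours.

def run_cost(hourly_rate_usd, runtime_min):
    """Return the compute cost of a run given an hourly rate and runtime."""
    return hourly_rate_usd * runtime_min / 60.0

# Hypothetical example: a $3.60/hr instance running for 28 minutes.
print(f"${run_cost(3.60, 28):.2f}")  # $1.68
```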
Let’s compare the performance and cost efficiency between version v202308.01 and v202112.05:
1. DNAseq Pipeline Performance:
– v202112.05: Processed Illumina 30x NovaSeq dataset in 30 minutes on a Standard_HB120rs_v3 with a Spot cost of $0.18.
– v202308.01: Processes the dataset in 28 minutes on a Standard_HB120rs_v3 with a Spot cost of $0.11. Alternatively, it can be processed on a Standard_D96ads_v5 instance in under 40 minutes with a Spot cost of $0.34.
– Improvement: The turnaround time for the Standard_HB120rs_v3 decreased slightly to 28 minutes, with a decrease in Spot cost of $0.07. Additionally, a new option is available on the Standard_D96ads_v5 instance with a slightly longer turnaround time of under 40 minutes but at a higher Spot cost of $0.34 compared to $0.11.
2. DNAscope Pipeline Performance:
– v202112.05: Turnaround time of under 50 minutes with a Spot cost of $0.39.
– v202308.01: Turnaround time of 31 minutes on a Standard_D96ads_v5 instance with an On-Demand cost of $2.55 and a Spot cost of $0.26.
– Improvement: In v202308.01, the turnaround time decreased to 31 minutes, with a Spot cost of $0.26, offering improved performance and cost efficiency compared to the previous version.
3. Comparison Against ARM CPUs:
– v202112.05: ARM runtime was within 10-20 minutes of X86 equivalent for Intel and AMD. Spot price of $0.33 for DNAscope and $0.30 for DNAseq pipeline.
– v202308.01: ARM runtime was within 10-20 minutes of X86 equivalent for Intel and AMD. No significant difference in cost between architectures.
– Improvement: No significant difference in cost between the architectures is noted in v202308.01, whereas in v202112.05, there was a significant difference in cost for AMD architecture compared to Intel.
Overall, in v202308.01, the DNAseq pipeline on the Standard_HB120rs_v3 shows a slight decrease in turnaround time and cost, and the DNAscope pipeline on the Standard_D96ads_v5 instance demonstrates improved performance and cost efficiency compared to the previous version. Additionally, there is no significant difference in cost between ARM and x86 architectures in v202308.01, unlike in v202112.05. We would also like to note that the ordering of the machine types is slightly different, but without significant changes.
We were also able to run a comparison against ARM CPUs. For a direct comparison we used the equivalent 32 vCPU machines; the largest ARM size available is 64 vCPU, compared to 96 vCPU for x86 (Figures 2 and 3). In Table 2, we can see that ARM runtime was within 10-20 minutes of the x86 equivalents for Intel and AMD. Additionally, On Demand cost was comparable for the DNAscope and DNAseq pipelines across the board. However, this time there was no significant difference in cost between the architectures.
VM Size | Architecture | Pipeline | Total Runtime (min) | On Demand ($)¹ | Spot ($)¹
D32ds_v5 | x86 (Intel) | DNAscope | 64.51 | 1.94 | 0.19
D32ads_v5 | x86 (AMD) | DNAscope | 76.95 | 2.11 | 0.21
D32pds_v5 | ARM | DNAscope | 82.00 | 1.98 | 0.20
D32ds_v5 | x86 (Intel) | DNAseq | 121.51 | 3.66 | 0.37
D32ads_v5 | x86 (AMD) | DNAseq | 115.12 | 3.16 | 0.32
D32pds_v5 | ARM | DNAseq | 123.72 | 2.98 | 0.30
Table 2: Runtime and On Demand and Spot compute cost of Sentieon DNAseq and DNAscope pipelines across 32 vCPU architectures. All runs were in the eastus region. ¹ Pricing is accurate at the time of publication.
These results highlight the ability of the Sentieon software to scale up to large instances for faster turnaround and down to smaller instances as needed. We only included a subset of potential compute options, based on optimized compute-to-price ratios. However, the Sentieon tools can also be used with other machine families, based on availability in a given region.
Conclusion
Sentieon’s updated DNAseq and DNAscope pipelines are highly scalable and can be used on a variety of machine types. The software can scale up to the 120 vCPU Standard_HB120rs_v3 instances for turnaround times of 28 minutes, or down to Standard_D32pds_v5 instances for better Spot pricing of $0.30 per run.
If you can get a Standard_HB120rs_v3 in your preferred region, it is the cheapest per run. If it is not available, the other Spot pricing options are still good value, with Standard_D32ds_v5 and Standard_D96ds_v5 offering the best cost advantage. If you are looking for the fastest turnaround time, we recommend any of the 96 vCPU options. Sentieon’s FASTQ to VCF pipelines can process Illumina 30x whole genomes for less than $3.60 on On Demand machines, or $0.33 on Spot machines, in under 120 minutes. The Standard_D32ds_v5 runs the DNAseq pipeline for $3.66 On Demand, or $0.37 on Spot, in about 121 minutes. Across the variety of machine types we tested, Sentieon DNAseq can process 30x genomes from FASTQ to VCF for a Spot machine cost of less than $1.50.
Overall, the new version of the software has decreased cost and, in some cases, decreased turnaround time, with increased performance and range of datasets it can analyze.
Readers should note that all costs represent hardware costs and don’t represent software licensing costs.
To get started with the Sentieon software on Azure, please reach out to info@sentieon.com or visit the Sentieon website at www.sentieon.com.
Microsoft Tech Community – Latest Blogs –Read More
Key Architectural Differences Between AWS and Azure Explained
Introduction
In today’s fast-moving digital world, cloud platforms are the foundation of everything from small startups to global enterprises. Choosing the right one can make all the difference when it comes to scalability, security, and driving innovation. With over 94% of companies relying on cloud services, expanding from AWS to Microsoft Azure unlocks a host of new possibilities.
Azure not only provides robust tools and services to optimize your infrastructure, but it also puts you at the forefront of AI advancements. From integrated AI services like Azure OpenAI to sophisticated machine learning models, Azure empowers businesses to transform how they build, deploy, and scale intelligent applications.
This guide explores the key differences between AWS and Azure—covering network architecture, availability zones, security, and more—helping you make informed decisions to future-proof your cloud strategy and stay ahead in an AI-driven world.
1. Network Architecture: AWS VPC vs. Azure VNET
AWS Virtual Private Cloud (VPC)
In AWS, the Virtual Private Cloud (VPC) is the backbone of your network architecture. It lets you build isolated environments where you control every aspect of your networking. The subnets in a VPC must be clearly designated as either public or private, ensuring a firm boundary between internet-facing resources and internal systems. Here’s how AWS VPC handles traffic and segmentation:
AWS VPC Network Segmentation
Key Components:
Public Subnet: Hosts internet-facing resources, such as web servers, which handle incoming HTTP traffic through an Internet Gateway (IGW).
Private Subnet: Hosts internal resources like databases that don’t have direct internet access.
Internet Gateway (IGW): The bridge that provides internet access for public subnets.
VPC Endpoint Gateway: Allows secure, private access to AWS services like S3 and DynamoDB without needing an internet connection.
NAT Gateway: Enables outbound internet traffic from private subnets.
Security Groups and Network ACLs: Provide both stateful and stateless traffic filtering to control inbound and outbound traffic.
Architectural Characteristics:
Explicit Segmentation: Subnets are clearly marked as public or private, making it easy to manage resource placement.
Manual Configuration: Setting up Internet Gateway (IGW), NAT Gateway, and route tables requires hands-on configuration.
Availability Zones (AZs): Resources are often spread across multiple AZs to ensure high availability and fault tolerance.
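The public/private subnet segmentation described above can be sketched with Python’s standard ipaddress module; the CIDR ranges here are hypothetical examples for illustration, not values from AWS or this article:

```python
# Sketch of VPC-style address carving: a /16 address space split into /24
# subnets, with one designated public and one private. CIDRs are hypothetical.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")    # the VPC's full address space
subnets = list(vpc.subnets(new_prefix=24))   # carve it into /24 subnets

public_subnet = subnets[0]   # e.g. web servers, routed via an Internet Gateway
private_subnet = subnets[1]  # e.g. databases, outbound-only via a NAT Gateway

print(public_subnet, private_subnet)  # 10.0.0.0/24 10.0.1.0/24

# Both subnets must fall inside the VPC's range and must not overlap.
assert public_subnet.subnet_of(vpc) and private_subnet.subnet_of(vpc)
assert not public_subnet.overlaps(private_subnet)
```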
Azure Virtual Network (VNet)
Azure Virtual Network (VNet) provides similar network isolation as AWS, but with a stronger focus on managed services and simplifying network segmentation. It’s designed to reduce the complexity of manual configuration and make networking more efficient.
Azure VNET Network Segmentation
Key Components:
Public Subnet: Hosts resources that have direct internet access through assigned public IP addresses.
Private Subnet: Holds internal resources and securely connects to Azure services using Private Endpoints through Private Link.
Network Security Groups (NSGs): Control traffic to and from both public and private subnets, ensuring your resources are properly shielded.
Azure NAT Gateway: Offers outbound internet connectivity for resources that don’t have public IPs.
Service Endpoints and Private Links: Enable secure, private access to Azure services without needing to expose your resources to the internet.
Architectural Characteristics:
Streamlined Internet Access: Public IP addresses can be assigned directly to resources, so no Internet Gateway (IGW) is needed for inbound internet access.
Azure NAT Gateway: Offers outbound connectivity for private subnets without public IPs. The setup is simpler than AWS’s NAT Gateway, reducing the need for intricate routing configurations.
Integrated Services: Azure emphasizes managed services like Private Link, which simplify complex networking tasks, reducing the need for hands-on management.
Abstraction: Less manual configuration of routing and network appliances, making it easier for organizations to manage.
Key Architectural Differences:
Internet Connectivity:
AWS: Requires an Internet Gateway (IGW) for public subnet internet access.
Azure: Public IPs are directly assigned; no IGW equivalent is needed, and Azure NAT Gateway abstracts much of the internet connectivity configuration.
Subnet Designation:
AWS: Subnets must be explicitly marked as public or private.
Azure: Subnets are neutral; traffic control is handled by NSGs and public IP assignment.
Network Segmentation:
AWS: Provides granular control using Security Groups and NACLs.
Azure: Simplifies this with NSGs and Application Security Groups (ASGs), offering easier management of security rules.
2. Availability Zones and Redundancy
AWS Availability Zones
In AWS, regions are divided into multiple Availability Zones (AZs) to ensure high availability and fault tolerance. Resources can be deployed across these AZs, but it’s not automatic—you need to explicitly distribute them for redundancy, which often involves manual setup.
Multi-AZ architecture ensures redundancy and fault tolerance.
Architectural Approach:
Manual Distribution: Resources must be manually deployed across AZs to achieve redundancy.
Load Balancing: AWS uses Elastic Load Balancers to distribute traffic across multiple AZs for high availability.
High Availability Configurations: For services like RDS, configuring multi-AZ deployments requires additional setup to ensure proper redundancy and failover.
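Because AWS does not spread resources across zones for you, deployment tooling typically does the spreading itself. A minimal sketch of round-robin placement (instance and AZ names here are made up for illustration):

```python
from itertools import cycle

def spread_across_azs(instances, azs):
    """Assign each instance to an AZ in round-robin order."""
    az_cycle = cycle(azs)
    return {name: next(az_cycle) for name in instances}

# Illustrative names only: six web servers over three zones.
azs = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
placement = spread_across_azs([f"web-{i}" for i in range(6)], azs)
# Each zone ends up hosting two of the six instances.
```

Losing any single zone then takes out at most a third of the fleet, which is the redundancy goal multi-AZ deployment aims for.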
Azure Availability Zones
Azure also provides Availability Zones but takes a different approach by offering automatic zone-redundancy for many services. This abstraction reduces the complexity of managing high availability, especially for managed services. However, it’s important to remember that certain IaaS services, like Azure VMs, still require explicit configuration for redundancy across AZs. Additionally, geo-redundancy (multi-region failover) isn’t automatic for every service and must be configured for mission-critical workloads.
Azure abstracts zone management for many services; they are zone-redundant by default, without manual configuration.
Architectural Approach:
Automatic Redundancy: Many managed services, like Azure SQL Database, come with built-in zone redundancy by default, saving you the hassle of manual configuration.
Managed Services: Azure abstracts most of the complexity by automatically handling replication and failover for services like Azure SQL Database.
Zone-Aware Services: Not all services in Azure require explicit AZ configurations, making it easier to achieve high availability without manual effort.
Key Architectural Differences:
Resource Deployment:
AWS: Requires manual placement across AZs for redundancy.
Azure: Many managed services are inherently zone-redundant, though IaaS resources such as VMs still require explicit configuration.
Operational Overhead:
AWS: Achieving high availability often requires more manual configuration.
Azure: Reduces complexity with built-in redundancy for managed services, such as Azure SQL Database, allowing for easier scaling and high availability without additional setup.
3. Security Models: AWS vs. Azure Controls
AWS Security Controls
In AWS, security is managed with a combination of Security Groups (SGs) and Network ACLs (NACLs). Security Groups operate at the instance level, while NACLs control traffic at the subnet level, offering multiple layers of security.
AWS uses SGs for instance-level security and NACLs for subnet-level control.
Key Points:
Security Groups: Manage inbound and outbound traffic by attaching to instances. Since they are stateful, they automatically allow return traffic without the need for additional rules.
Network ACLs: Control traffic at the subnet level and are stateless, meaning both inbound and outbound rules must be defined.
Architectural Implications:
Layered Security: By combining SGs for instance-level control and NACLs for subnet-level control, AWS provides a granular approach to managing traffic.
Complexity: The trade-off is complexity, as you need to manage both SGs and NACLs separately, which can add overhead when configuring security across large deployments.
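The stateful/stateless distinction is easy to miss in prose, so here is a toy packet-filter model (purely illustrative, not an AWS API): the stateful filter tracks outbound connections and admits the return traffic automatically, while the stateless filter judges every packet against explicit rules.

```python
class StatefulFilter:
    """Toy Security Group: return traffic for tracked connections is allowed."""
    def __init__(self, allowed_inbound_ports):
        self.allowed = set(allowed_inbound_ports)
        self.tracked = set()  # connections initiated from inside

    def outbound(self, remote, port):
        self.tracked.add((remote, port))  # remember the connection
        return True

    def inbound(self, remote, port):
        return port in self.allowed or (remote, port) in self.tracked


class StatelessFilter:
    """Toy NACL: every packet must match an explicit rule."""
    def __init__(self, inbound_ports, outbound_ports):
        self.inbound_ports = set(inbound_ports)
        self.outbound_ports = set(outbound_ports)

    def inbound(self, remote, port):
        return port in self.inbound_ports

    def outbound(self, remote, port):
        return port in self.outbound_ports


sg = StatefulFilter(allowed_inbound_ports={443})
sg.outbound("10.0.1.5", 5432)             # e.g. an app opens a DB connection
assert sg.inbound("10.0.1.5", 5432)       # reply admitted automatically
assert not sg.inbound("10.0.9.9", 5432)   # unrelated inbound still blocked

nacl = StatelessFilter(inbound_ports={443}, outbound_ports={443})
assert not nacl.inbound("10.0.1.5", 5432)  # needs its own inbound rule
```

This is why NACL configurations must include explicit rules for ephemeral return ports, while Security Groups do not.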
Azure Security Controls
Azure takes a more streamlined approach to security with Network Security Groups (NSGs) and Application Security Groups (ASGs), making it easier to manage security policies across your infrastructure. Unlike AWS, Azure simplifies the process by combining functionality, reducing the need to manage multiple layers.
Azure simplifies security management through NSGs and ASGs, which integrate directly with VMs or network interfaces.
Key Points:
NSGs: Control inbound and outbound traffic at both the VM and subnet levels, similar to AWS SGs. Like AWS SGs, NSGs are stateful and automatically allow return traffic.
Flexible Application: NSGs can be applied to subnets, individual VMs, or network interfaces.
ASGs: Offer centralized security rules for logical groupings of VMs, making it easier to manage policies for specific sets of resources.
Dynamic Security Policies: Security rules can reference ASGs, reducing the need to manually update IP addresses whenever new instances are added.
Architectural Implications:
Simplified Management: With NSGs handling both instance-level and subnet-level security, Azure eliminates the need for a separate layer like NACLs, streamlining your security setup.
Efficient Policy Application: ASGs make it easier to apply consistent security policies across groups of VMs without needing to reconfigure individual resources.
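The ASG idea, rules that reference a logical group rather than IP addresses, can be sketched in a few lines (a toy model with made-up IPs, not the Azure API):

```python
# Rules reference a group name, so membership changes never touch the rules.
groups = {"web-servers": {"10.0.1.4", "10.0.1.5"}}
rules = [{"allow_port": 443, "to_group": "web-servers"}]

def is_allowed(dest_ip, port):
    """Check whether any rule permits traffic to dest_ip on this port."""
    return any(
        rule["allow_port"] == port and dest_ip in groups[rule["to_group"]]
        for rule in rules
    )

assert is_allowed("10.0.1.4", 443)
assert not is_allowed("10.0.1.9", 443)

# A new VM joins the group; the existing rule covers it with no rule edits.
groups["web-servers"].add("10.0.1.9")
assert is_allowed("10.0.1.9", 443)
```

The point of the design: security intent lives in one rule, and scaling the group never requires updating IP lists.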
Key Architectural Differences:
Security Layers:
AWS: Uses both SGs (stateful) and NACLs (stateless) for security, which can lead to more granular control but requires more effort.
Azure: Primarily uses NSGs (stateful), simplifying the model by not needing an additional layer like NACLs.
Resource Grouping:
AWS: Lacks a direct equivalent to ASGs, though you can use EC2 tagging for dynamic grouping in some cases.
Azure: ASGs allow for more efficient security management by applying centralized policies to logical groupings of VMs.
4. Managed Services: Levels of Automation
AWS Managed Services
AWS offers powerful managed services, but achieving high availability and scaling often requires manual setup. For example, if you want to configure RDS Multi-AZ deployments, you’ll need to manually set up replication across Availability Zones to ensure redundancy.
AWS services provide a high level of control but require more configuration for high availability.
Key Services:
RDS Multi-AZ: Requires manual configuration to enable replication across AZs for high availability.
EC2 Auto Scaling: Involves setting up scaling rules to automatically adjust resources based on demand.
Elastic Load Balancer (ELB): Distributes incoming traffic across AZs but requires additional setup.
Architectural Characteristics:
Customization: AWS gives you full control over configurations, allowing you to tailor setups to your needs.
Operational Responsibility: With more control comes more responsibility—there’s a greater need for hands-on management to ensure high availability and scaling.
Azure Managed Services
Azure takes a different approach by emphasizing automation and built-in redundancy in its managed services. Services like Azure SQL Database and Cosmos DB come with high availability baked in, so you spend less time configuring infrastructure and more time focusing on your core business. However, even though Azure automates much of the infrastructure management, careful planning for failover is still essential, particularly for mission-critical workloads.
Azure services are more abstracted, automating key operational tasks like scaling and availability across zones.
Key Services:
Azure SQL Database: Automatically manages replication, backups, zone redundancy, and scaling without manual intervention.
Azure App Service: Provides a fully managed PaaS solution for web applications, with built-in autoscaling and minimal configuration required.
Azure Cosmos DB: Delivers global replication with automatic scaling, making it easy to build globally distributed applications.
Architectural Characteristics:
Built-In High Availability: Services are designed with resilience in mind, ensuring high availability without additional configuration.
Reduced Operational Overhead: By automating critical tasks like redundancy and scaling, Azure reduces the need for manual maintenance, allowing you to focus on innovation instead of infrastructure management.
Key Architectural Differences:
Control vs. Convenience:
AWS: Offers more control but requires manual configurations to achieve redundancy and scaling, especially across AZs.
Azure: Automates much of the redundancy and scaling, particularly for managed services, with minimal user intervention required.
5. Storage Resiliency and Data Replication
AWS Storage Options
AWS offers a range of storage tiers, each designed for different durability and cost requirements. For instance, S3 Standard replicates data across multiple facilities in a region, providing high durability by default, while S3 One Zone-IA offers a more cost-effective option by storing data in a single Availability Zone (AZ), though this comes with lower durability.
Key Characteristics:
S3 Standard: Automatically replicates data across multiple facilities within a region for high durability.
S3 One Zone-IA: Stores data in a single AZ, reducing cost but sacrificing some resiliency.
Architectural Characteristics:
Automatic Replication: By default, S3 provides high durability across multiple AZs, ensuring data redundancy.
Choice of Redundancy: AWS offers a range of storage classes to allow flexibility in cost and durability, letting users balance redundancy with budget.
Azure Storage Options
Azure gives users more granular control over data replication, offering several replication strategies depending on your needs. Whether you require local, zonal, or geo-redundancy, Azure provides storage options that ensure data availability and resilience.
Key Characteristics:
Locally Redundant Storage (LRS): Keeps three copies of your data within a single data center, ensuring protection against local hardware failures.
Zone-Redundant Storage (ZRS): Replicates data synchronously across three AZs for higher availability.
Geo-Redundant Storage (GRS): Replicates data asynchronously to a secondary region, providing protection against regional failures.
Geo-Zone-Redundant Storage (GZRS): Combines ZRS and GRS for maximum resilience by replicating both within and across regions.
Architectural Characteristics:
Customization: Azure provides multiple levels of control over data replication, letting you choose the redundancy model that best suits your business needs.
Disaster Recovery: Azure includes built-in options for cross-regional replication, giving you out-of-the-box disaster recovery capabilities.
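The four tiers above can be summarised as a small decision table; the copy counts and redundancy flags below are a simplification of Azure’s documented guarantees, so consult the official docs before relying on them:

```python
# Simplified view of Azure storage redundancy options.
replication = {
    "LRS":  {"copies": 3, "zone_redundant": False, "geo_redundant": False},
    "ZRS":  {"copies": 3, "zone_redundant": True,  "geo_redundant": False},
    "GRS":  {"copies": 6, "zone_redundant": False, "geo_redundant": True},
    "GZRS": {"copies": 6, "zone_redundant": True,  "geo_redundant": True},
}

def options_with(zone_redundant=False, geo_redundant=False):
    """Return the options that meet the requested redundancy requirements."""
    return sorted(
        name for name, props in replication.items()
        if (not zone_redundant or props["zone_redundant"])
        and (not geo_redundant or props["geo_redundant"])
    )
```

For example, `options_with(zone_redundant=True)` yields ZRS and GZRS, matching the descriptions above.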
Key Architectural Differences:
Replication Control:
AWS: Automatic multi-AZ replication with fewer options for customization.
Azure: Offers a wider range of replication strategies, including local, zonal, and geo-redundancy, for greater flexibility.
Disaster Recovery Planning:
AWS: Cross-region replication requires additional services and setup.
Azure: Provides built-in geo-redundancy options for simpler disaster recovery planning.
6. Private Connectivity to Cloud Services
AWS VPC Endpoints
In AWS, VPC Endpoints allow you to connect privately to AWS services without exposing your resources to the internet. However, setting up these endpoints requires manual configuration for each service, making it a more hands-on process.
Types:
Gateway Endpoints: Used for services like S3 and DynamoDB.
Interface Endpoints: Powered by AWS PrivateLink to connect to other AWS services.
Architectural Characteristics:
Manual Setup: Each service you want to connect privately to requires its own endpoint, meaning more manual work.
Service-Specific Endpoints: The type of endpoint you need depends on the service, with different setups for gateway versus interface endpoints.
Azure Private Link and Endpoints
Azure streamlines private connectivity with Private Link and Private Endpoints, offering a more unified approach to accessing both Azure services and your own services securely. This reduces the complexity compared to AWS and makes managing private connections more efficient.
Features:
Private Endpoints: These are network interfaces that allow you to privately and securely connect to a service through Azure Private Link.
Service Integration: Works seamlessly with Azure services and can also be used for your own custom applications, creating a more versatile connection model.
Architectural Characteristics:
Simplified Configuration: With a more unified setup, it’s easier to manage and configure private connections in Azure.
Unified Approach: Azure uses the same method—Private Link—to connect to various services, making the process much more consistent and straightforward compared to AWS.
Key Architectural Differences:
Configuration Complexity:
AWS: Requires different setups depending on the type of service, with separate configurations for gateway and interface endpoints.
Azure: Simplifies this with Private Link, providing a unified approach for connecting to multiple services.
Service Accessibility:
AWS: Each service requires a specific endpoint type, which can lead to more management overhead.
Azure: Private Link offers broader access with fewer configurations, making it more user-friendly.
Conclusion
Understanding the key architectural differences between AWS and Azure is crucial for organizations looking to optimize their cloud strategy. While both platforms provide robust services, their approaches to network architecture, availability zones, security models, managed services, and storage resiliency vary significantly. By understanding these distinctions, businesses can fully leverage Azure’s capabilities while complementing their existing AWS expertise, creating a powerful multi-cloud strategy that boosts operational efficiency.
Key Takeaways:
Network Architecture: AWS offers granular control over network segmentation, but Azure simplifies it with integrated managed services, reducing manual configuration.
Availability Zones: Azure’s managed services come with built-in zone redundancy, while AWS often requires more manual intervention to achieve multi-AZ redundancy.
Public Internet Access: AWS uses an Internet Gateway for public internet access, whereas Azure simplifies this by directly assigning public IPs to resources.
Private Subnet Outbound Traffic: Both platforms use NAT Gateways for outbound traffic, but Azure abstracts the configuration more, making it easier to manage.
Security Models: Azure streamlines security with NSGs and ASGs, offering simpler and more flexible traffic control than AWS’s combination of Security Groups and NACLs.
Managed Services: Azure automates critical tasks like redundancy and scaling, while AWS often requires manual configuration for high availability.
Storage Resiliency: Azure provides more granular replication options, while AWS relies on predefined storage tiers.
Private Endpoints: Azure’s Private Link and Endpoints offer a more seamless and integrated approach to private connectivity compared to AWS’s VPC Endpoints, which require more manual setup.
By adapting to these architectural differences, your organization can unlock Azure’s full potential, complementing your AWS expertise and creating a multi-cloud strategy that enhances availability, operational efficiency, and cost management.
Additional resources:
Azure Architecture Guide for AWS Professionals: For a detailed comparison and further reading on transitioning from AWS to Azure.
Mapping AWS IAM concepts to similar ones in Azure: For a direct mapping of AWS IAM concepts to Azure’s security solutions, read this detailed discussion.
calculate angles for walking robot
I am studying the walking robot model from the project https://www.mathworks.com/matlabcentral/fileexchange/64227-matlab-and-simulink-robotics-arena-walking-robot. If I increase the length and width of the legs and body, and also add arms and a head block, the robot takes only one step and falls. As I understand it, the joint angles need to be recalculated for the changed parameters. How can I do this?
How did the authors of the project calculate the variables "jAngsL", "jAngR", "siminL", and "siminR"?
roboticsarena, walkingrobot, bipedalrobot, inversekinematics, simulink MATLAB Answers — New Questions
App of chatbot restarts if you leave chatbot and go to normal chat options
Hi All,
I have a chatbot deployed as an app in the MS Teams environment, called IT Support. When we are using the chatbot and a colleague pings you, and you respond to the colleague and then go back to the chatbot, the chatbot has restarted, even if you come back within a matter of 10 seconds.
What settings can we adjust?
Inconsistency with health state and drain mode status in host pool
When viewing in the list, health state is not updating, see 2nd VM in this list, shows deallocated, correct, but health state Available
2nd VM in the list when you click into the session host, shows correctly shut down
Also 2nd VM in the list above, shows drain mode off, but when clicked into the session host, drain mode shows on.
Using PowerShell (Get-AzWvdSessionHost) seems to match the portal list: it shows the host Status as Active and AllowNewSession as True.
Update: After several minutes, these statuses do seem to catch up, but it is at least 10 minutes delayed, maybe more (I didn’t have a clock on it).
Update 2: the user session count returned by the portal and powershell are also incorrect.
What the latest Copilot enhancements mean for Small and Medium-sized Businesses
Hey Everyone! Brenna Robinson, GM for Microsoft 365 small and medium-sized businesses, discusses what these latest announcements on the next wave of Copilot could mean for you, highlighting some of the most impactful enhancements.
No matter your business size, I highly recommend the read here!
Would love to hear your thoughts!
Error in Teams Sorry something went wrong. Please try again or share your feedback.
Hello!
I’m curious if anyone has run into this or may have some insight on how to troubleshoot. My coworkers and I have licenses for Microsoft 365 Copilot. We have been using it for several months, but in the last week we have suddenly started seeing an issue where it seems to be completely disconnected and throws “Sorry, something went wrong. Please try again or share your feedback.” within the Meeting Recap (though it will show “AI Notes”) and during a meeting that is being transcribed. Strangely, just the “work” side of the Windows 11 desktop application gives the error “Sorry, looks like something went wrong.”
We see the same behavior in Teams desktop and web versions.
However, when using Copilot in the regular Teams chat, in the web, or in Microsoft applications it is connected and working as expected.
What could be the disconnect here?
Thanks in advance for any insight!!
‘Open-ended’ sequential numbering of Rows in Excel…
I have been looking at the SEQUENCE function but, if I understand it correctly, you need to enter an end point for how many sequential numbers you want Excel to autofill (e.g. 1 to 1000)?
Is there a way to get Excel to autofill a sequential number when new data is entered in a Row?
For example, Column A is sequential numbers starting with ‘1’ (without the ‘ ‘ inverted commas) in A1, then 2 in A2 etc and every time data is entered in a new Row in Column B (for example), a new sequential number is entered automatically in the same Row in Column A? TIA
NEW: Wave 2 updates to Adoption.microsoft.com/Copilot
The work we do as Service Adoption and User Enablement Specialists has never been more important. During this moment, where AI experiences are on the rise, the human connection is essential to helping people overcome their AI anxiety. Our tools, updated this morning to support Wave 2 of our Microsoft 365 Copilot experiences, are designed to support you in this journey.
This post will give you all the details of what’s new and available. We’d love to hear your feedback here. How can we assist you further? And don’t forget to take our User Enablement for Copilot course on Microsoft Learn to add the badge to your LinkedIn profile. There are never enough of us that are dedicated to the empowerment of people with technology and your role is paramount in businesses getting the value of the investment they make in these services!
Copilot Hub on adoption.microsoft.com
Sensitivity Labels not working as expected
Hi experts,
I’ve been playing with sensitivity labels recently and I’m in the testing phase, currently having a few people test it for me before I officially deploy it to all. However, it looks like a few things do not work as expected and I’m not sure why. I hope I can find some help here.
Here is what I have configured and what the experience was during our testing:
1. Email should inherit the sensitivity label from the attachment
- I have the label for documents set as required, and email is set to no default label with “inherit label from attachment” selected.
- I have a “ConfidentialView Only” label that allows only “View rights / Reply / Reply all” permissions.
- Testing experience: when I attach a document with this label assigned, there is no restriction at all and I can forward, download, etc. It looks like inheritance of the label from attachment to email is not working at all. When I download the attachment, I see that the document has restricted permissions (can’t print, save, etc.), so it does seem to work at the document level.
2. “ConfidentialInternal” label should be blocked
- I can share with external users via SharePoint, and they can even open it as an external user with no issues at all. Neither label access control nor DLP prevents this!!! Is there something I’m missing here? Not sure if it matters: I have “MS Entra for SharePoint” enabled.
- DLP is configured to check SharePoint, Emails, and OneDrive for “content shared outside the organization” with “sensitivity label ConfidentialInternal” and BLOCK it.
- DLP works fine for emails with attachments labelled with this label; they are blocked as expected.
3. “ConfidentialInternal” is blocked in Outlook when trying to send email
- When I send an attachment with a ConfidentialInternal document in Outlook (New Outlook), I see a note about external users that need to be removed. When I try to send anyway, it is blocked and I get the message below, which is great.
- However, another two testers do not get this experience; their email is blocked by the DLP policy (mentioned above) only. That is nice, but the experience I get is much better because users can correct recipients instantly. (FYI: I am using the NEW Outlook; I need to check later this week whether the testers are on the old or new one.)
When I go through New Email > Options > Sensitivity – I can see the labels I configured
It’s a bit of text, and I apologize; I wanted to describe it as best as I can and hopefully help anyone else facing the same. I would be grateful for your help. The testing is super time-consuming because, after any change I make to a sensitivity label or policy, I prefer to wait the recommended 24 hours to see whether it had any effect.
Update: I forgot to ask, why do I see some “default” labels when creating emails? When I go to “More Options” in a new email, I can see the below:
MS Teams Visibility Context
Hi there,
I am developing a Microsoft Teams bot application and am encountering an issue where the bot’s visibility and functionality are not being restricted as specified in the app manifest. Here are the details of the problem.
Issue Description:
Our bot is intended to be used only in direct, personal interactions once sideloaded. We have set the bot’s scope in the manifest to “personal” only. Despite this setting, the bot remains visible and functional in 1:1 chats, group chats and team channels.
Steps Taken:
Updated the manifest.json file to include only “personal” in the bot’s scopes.
Expected Behaviour:
The bot should only be visible and functional in direct interactions with the bot. Users should not be able to add or interact with the bot in 1:1 chats, group chats and team channels.
Questions:
Are there additional steps or configurations required to restrict a bot’s visibility and functionality to direct bot contexts only? Is there a known issue with the manifest scope settings not being enforced for bots?
Here is the manifest:
{
  "$schema": "https://developer.microsoft.com/en-us/json-schemas/teams/v1.16/MicrosoftTeams.schema.json",
  "manifestVersion": "1.16",
  "version": "1.2.0",
  "id": "{{.AppID}}",
  "localizationInfo": {
    "defaultLanguageTag": "en-gb",
    "additionalLanguages": []
  },
  "developer": {
    "name": "REDACTED",
    "websiteUrl": "REDACTED",
    "privacyUrl": "REDACTED",
    "termsOfUseUrl": "REDACTED"
  },
  "icons": {
    "color": "color.png",
    "outline": "outline.png"
  },
  "name": {
    "short": "{{.AppName}}",
    "full": "{{.AppName}}"
  },
  "description": {
    "short": "REDACTED",
    "full": "REDACTED"
  },
  "accentColor": "#00bd00",
  "configurableTabs": [],
  "staticTabs": [],
  "bots": [
    {
      "botId": "{{.AppID}}",
      "scopes": ["personal"],
      "needsChannelSelector": false,
      "isNotificationOnly": false,
      "supportsFiles": false,
      "supportsCalling": false,
      "supportsVideo": false,
      "commandLists": [
        {
          "scopes": ["personal"],
          "commands": []
        }
      ]
    }
  ],
  "composeExtensions": [
    {
      "botId": "{{.AppID}}",
      "commands": [
        {
          "id": "REDACTED",
          "context": ["commandBox", "compose", "message"],
          "description": "REDACTED",
          "title": "REDACTED",
          "type": "action",
          "fetchTask": true
        }
      ]
    }
  ],
  "permissions": ["identity", "messageTeamMembers"],
  "devicePermissions": [],
  "validDomains": ["REDACTED", "REDACTED"],
  "showLoadingIndicator": false,
  "isFullScreen": false,
  "activities": {},
  "defaultInstallScope": "personal"
}
Thanks.
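One detail worth checking in the manifest above: the app also declares a composeExtensions entry whose commands use the "compose" and "message" contexts. Messaging extensions surface in group chats and channels independently of the bot's "personal" scope, which could explain why the app still appears there. A minimal sketch of the relevant fragment, assuming the messaging extension can be dropped (this is a hypothetical illustration, not a confirmed fix):

```json
{
  "bots": [
    {
      "botId": "{{.AppID}}",
      "scopes": ["personal"]
    }
  ],
  "composeExtensions": [],
  "defaultInstallScope": "personal"
}
```

If the messaging extension is needed, an alternative to test would be narrowing each command's "context" array (for example to "commandBox" only) and re-sideloading, since bot "scopes" alone do not govern where compose-extension commands are offered.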