Month: August 2024
Difference between Pre-Training and SFT
Pre-training and SFT differ in their goals, the datasets they use, and the number of GPUs they require. But if we explain the difference from the essence of deep learning training, it is this:
Pre-training involves randomly initializing model parameters, constructing the model, and then training it on a large amount of unlabeled data to learn general features of the corpus; whereas fine-tuning loads parameters from the pre-trained model, retains the general features learned during pre-training, and trains the model on a small amount of high-quality labeled data to enhance the model’s capability and performance on specific tasks.
The parameters mentioned above include: weights, biases, Word Embeddings, Positional Encoding, attention mechanism parameters, etc.
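To see these parameter groups concretely, you can list them from a Hugging Face GPT-2 model. This is a minimal sketch, assuming only that the transformers library is installed; it simply prints parameter names and shapes.
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
for name, param in model.named_parameters():
    print(name, tuple(param.shape))
# transformer.wte.weight  -> word embeddings
# transformer.wpe.weight  -> positional encodings
# transformer.h.*.attn.*  -> attention weights and biases
# lm_head.weight          -> output projection (tied to the word embeddings in GPT-2)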
More detailed explanation
Pre-Training
Pre-Training aims to learn the fundamental structure and semantic features of a language using large-scale unsupervised datasets (such as text corpora). Pre-training typically involves the following steps:
Random Initialization of Weights: The model’s parameters, such as weights and biases, are randomly initialized at the start of pre-training.
Large-Scale Dataset: Training is conducted using a vast amount of unsupervised data.
Learning General Features: The model learns the general features of the language by optimizing a loss function (e.g., the cross-entropy loss of a language model).
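As a minimal sketch of this loss for a causal language model (next-token prediction), the logits are compared against the input ids shifted by one position; the tensor shapes below are dummy values chosen only for illustration.
import torch
import torch.nn.functional as F

logits = torch.randn(2, 16, 50257)            # (batch, sequence, vocab) - dummy model output
input_ids = torch.randint(0, 50257, (2, 16))  # dummy token ids

shift_logits = logits[:, :-1, :]              # predict token t+1 from positions up to t
shift_labels = input_ids[:, 1:]

loss = F.cross_entropy(
    shift_logits.reshape(-1, shift_logits.size(-1)),
    shift_labels.reshape(-1),
)
print(loss.item())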
Key Points of Pre-Training
Random Initialization: All model parameters (weights, biases, etc.) are random at the beginning of pre-training.
Large-Scale Data: Training is done using a large-scale unsupervised dataset.
General Features: The model learns the basic structure and semantic features of the language, providing a good starting point for subsequent tasks.
Fine-Tuning
Fine-Tuning aims to optimize the model’s performance on a specific task using a task-specific dataset. Fine-tuning typically involves the following steps:
Loading Pre-Trained Weights: The model’s weights and biases are loaded from the pre-trained model.
Task-Specific Data: Training is conducted using a dataset specific to the task.
Optimizing Task Performance: The model adjusts its parameters by optimizing a loss function to improve performance on the specific task.
Key Points of Fine-Tuning
Loading Pre-Trained Weights: The model’s parameters are loaded from the pre-trained model, retaining the general features learned during pre-training.
Task-Specific Data: Training is done using a dataset specific to the task.
Task Optimization: The model’s parameters are further adjusted to optimize performance on the specific task.
Summary
Training Efficiency: Pre-training usually requires substantial computational resources and time because it involves training all model parameters on a large-scale dataset. Fine-tuning is relatively efficient as it builds on the pre-trained model and only requires further optimization on task-specific data.
Model Performance: The pre-trained model has already learned general language features, allowing fine-tuning to converge faster and perform better on specific tasks. Training a task-specific model from random initialization typically requires more data and time, and its performance may not match that of the pre-training + fine-tuning approach.
Application Scenarios: Pre-trained models can serve as general-purpose base models suitable for various downstream tasks. Fine-tuning allows for quick adaptation to different task requirements without the need to train a model from scratch.
Pre-training Code Demonstration
Taking GPT-2 as an Example
https://huggingface.co/docs/transformers/v4.44.0/en/model_doc/gpt2#transformers.GPT2LMHeadModel
To pre-train GPT-2, we need to use the classes GPT2LMHeadModel and GPT2Config.
import torch
from datasets import load_dataset
from transformers import (
    GPT2Config,
    GPT2LMHeadModel,
    GPT2Tokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Randomly initialized GPT-2: pre-training starts from scratch
config = GPT2Config()
model = GPT2LMHeadModel(config)

# Reuse the GPT-2 tokenizer; GPT-2 has no pad token, so use EOS for padding
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

# Small unsupervised corpus for demonstration
dataset = load_dataset("wikitext", "wikitext-2-raw-v1")

def tokenize_function(examples):
    return tokenizer(examples["text"], truncation=True, padding="max_length",
                     max_length=512, return_special_tokens_mask=True)

tokenized_datasets = dataset.map(tokenize_function, batched=True, remove_columns=["text"])
print("Train dataset size:", len(tokenized_datasets["train"]))
print("Validation dataset size:", len(tokenized_datasets["validation"]))

# Causal language modeling (mlm=False)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

training_args = TrainingArguments(
    output_dir="./results",
    overwrite_output_dir=True,
    num_train_epochs=5,
    per_device_train_batch_size=64,
    save_steps=10_000,
    save_total_limit=2,
    remove_unused_columns=False,
    report_to=[],
    learning_rate=5e-4,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
)

if torch.cuda.is_available():
    model.cuda()

trainer.train()
Since the model is small, pre-training can be done with a single H100 GPU.
The training results are as follows:
Step      Training Loss
500       6.505700
1000      5.657100
1500      5.269900
2000      4.972000
2500      4.725000
The trained model can be used for inference validation.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the checkpoint produced by the pre-training run above
model = GPT2LMHeadModel.from_pretrained("./results/checkpoint-2870")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.eval()

input_text = "Once upon a time"
inputs = tokenizer(input_text, return_tensors="pt", padding=True).to(device)

with torch.no_grad():
    outputs = model.generate(
        inputs.input_ids,
        attention_mask=inputs.attention_mask,
        max_length=100,
        num_return_sequences=1,
        no_repeat_ngram_size=2,
        early_stopping=True,
        temperature=0.7,
        top_p=0.9,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )

generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
The inference result is as follows:
Once upon a time of the earthquake, the local community of local and a new new government, a military government who had begun with the ” the most prominent “.
Fine-tuning Code Demonstration
When we fine-tune a model, it usually refers to Supervised Fine-Tuning (SFT). SFT can be divided into Parameter-Efficient Fine-Tuning (PEFT) and Full Fine-Tuning. Among PEFT implementations, methods like LoRA, QLoRA, and GA-LoRA are quite popular.
Let's first look at how to load a model for Full Fine-Tuning. We use AutoModelForCausalLM.from_pretrained, which loads the pre-trained model's parameters.
from transformers import AutoModelForCausalLM

# model_name and attn_implementation are defined elsewhere in the original script
model = AutoModelForCausalLM.from_pretrained(
    model_name, attn_implementation=attn_implementation, device_map={"": 0}
)
model.gradient_checkpointing_enable(gradient_checkpointing_kwargs={'use_reentrant': True})
For the complete Full fine tuning code, refer to the repository:
https://github.com/davidsajare/david-share/tree/master/Deep-Learning/SmolLM-Full-Fine-Tuning
Next, let's look at the differences in code implementation for full fine-tuning, LoRA, and QLoRA. In terms of model loading and training parameters, Full Fine-Tuning, LoRA, and QLoRA differ as follows:
Difference in Loading Models
Full Fine-Tuning
Directly load the complete model for training.
Use AutoModelForCausalLM.from_pretrained to load the model.
LoRA
Load the model and then use LoRA configuration for parameter-efficient fine-tuning.
Use LoraConfig from the peft library to configure LoRA parameters.
Target modules are usually specific projection layers, such as k_proj, q_proj, etc.
QLoRA
Based on LoRA, it combines quantization techniques (e.g., 4-bit quantization) to reduce memory usage.
Use BitsAndBytesConfig for quantization configuration.
Call prepare_model_for_kbit_training to prepare the model.
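To make these differences concrete, here is a minimal QLoRA-style sketch. It assumes the transformers, peft, and bitsandbytes libraries are installed; model_name is a placeholder for the base checkpoint, and the rank, alpha, and target modules are illustrative values rather than recommendations.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit quantization (the "Q" in QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)  # prepare the quantized model for training

lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update matrices
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # only the LoRA matrices are trainable
For plain LoRA, the BitsAndBytesConfig and prepare_model_for_kbit_training steps are simply omitted.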
Difference in Training Parameters
Full Fine-Tuning
Train all model parameters.
Typically requires more memory and computational resources.
Use standard optimizers like adamw_torch.
LoRA
Only train the low-rank matrices inserted by LoRA, keeping other parameters unchanged.
Faster training speed and less memory usage.
Use optimizers like paged_adamw_8bit.
QLoRA
Combine LoRA and quantization techniques to further reduce memory usage.
Suitable for fine-tuning large models in resource-constrained environments.
Also use the paged_adamw_8bit optimizer.
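As a small sketch of the optimizer difference only (assuming the transformers library, with bitsandbytes installed for the 8-bit optimizer; the remaining arguments are arbitrary):
from transformers import TrainingArguments

full_ft_args = TrainingArguments(
    output_dir="./full-ft",
    optim="adamw_torch",           # full fine-tuning: standard AdamW
    per_device_train_batch_size=8,
)

peft_args = TrainingArguments(
    output_dir="./lora-ft",
    optim="paged_adamw_8bit",      # LoRA / QLoRA: paged 8-bit AdamW
    per_device_train_batch_size=8,
)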
It should be noted that when performing LoRA or QLoRA fine-tuning, we can specify the modules to be trained, such as:
from unsloth import FastLanguageModel   # assumes the unsloth library is available

model = FastLanguageModel.get_peft_model(
    model,
    r = 128,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",
                      "embed_tokens", "lm_head",],  # add embed_tokens/lm_head for continual pretraining
    lora_alpha = 32,
    lora_dropout = 0,    # supports any value, but 0 is optimized
    bias = "none",       # supports any value, but "none" is optimized
    use_gradient_checkpointing = "unsloth",  # True or "unsloth" for very long context
    random_state = 3407,
    use_rslora = True,
)
For detailed information, refer to:
https://github.com/davidsajare/david-share/tree/master/Deep-Learning/Continue-Pre-training
Distributed Implementation of Training
There is no doubt that pre-training large language models requires multi-node, multi-GPU setups, which means distributed training. At the lowest level, distributed pre-training is built on NCCL for inter-GPU communication. On top of that, higher-level tools such as Megatron, DeepSpeed, and HF's accelerate library (which currently supports FSDP) can be used. These tools implement data, pipeline, and tensor parallelism (DP/PP/TP).
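For a sense of what sits underneath these tools, here is a minimal torch.distributed sketch using the NCCL backend. It assumes the script is launched with torchrun, which sets the RANK, LOCAL_RANK, and WORLD_SIZE environment variables.
import os
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")          # NCCL handles inter-GPU communication
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

t = torch.ones(1, device="cuda")
dist.all_reduce(t, op=dist.ReduceOp.SUM)         # sums the tensor across all ranks
print(f"rank {dist.get_rank()} of {dist.get_world_size()}: {t.item()}")
dist.destroy_process_group()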
Megatron-DeepSpeed
For detailed information on pre-training using Megatron combined with DeepSpeed, refer to:
DeepSpeed
For an example of SFT implementation using DeepSpeed, refer to:
Axolotl
Currently, some open-source fine-tuning tools like Axolotl can also directly interface with DeepSpeed. For an example, refer to:
https://github.com/davidsajare/david-share/tree/master/Deep-Learning/Fine-tuning-with-Axolotl
Accelerate
When using FSDP with accelerate, other parallel strategies can be combined to achieve more efficient training.
Data Parallelism (DP)
FSDP itself is a data parallel strategy, achieved by sharding model parameters.
Pipeline Parallelism (PP)
The model can be divided into multiple stages, with each stage running on different devices. This requires manual partitioning of the model and managing the data flow.
Tensor Parallelism (TP)
The computation of a single layer is distributed across multiple devices. This requires modifications to the model’s computation graph.
Combining these strategies usually requires significant customization and adjustments to the model and training scripts. accelerate provides some tools to simplify these processes, but specific implementations may require combining other PyTorch libraries (such as torch.distributed) and custom code.
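As a minimal sketch of the accelerate workflow (the FSDP settings themselves are chosen via accelerate config, so the training loop stays unchanged; MyModel and build_dataloader are placeholders, not real APIs):
import torch
from accelerate import Accelerator

accelerator = Accelerator()

model = MyModel()                      # placeholder for your torch.nn.Module
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
dataloader = build_dataloader()        # placeholder for your DataLoader

# prepare() wraps the model (with FSDP when configured), the optimizer, and the dataloader
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for batch in dataloader:
    loss = model(**batch).loss
    accelerator.backward(loss)         # used instead of loss.backward()
    optimizer.step()
    optimizer.zero_grad()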
For an example of FSDP with accelerate, refer to:
https://github.com/davidsajare/david-share/tree/master/Deep-Learning/Llama-3.1-70B-FSDP-Fine-Tuning
Microsoft Tech Community – Latest Blogs – Read More
Simulink model -Code generation
Hello
I am currently trying to generate C++ code from a Simulink model (code only, for the grt.tlc system target file). The model includes some S-functions in which the mdlStart() functions wait for an outside signal (TCP connection).
I can't send those signals during code generation. However, the code generation phase includes build and initialization steps, so initialization triggers the mdlStart functions in the S-function blocks.
Is there a way to avoid this initialization step during code generation (options in the Configuration Parameters window)?
Thank you
simulink, code generation, s-function MATLAB Answers — New Questions
load mex library error on Linux
Hi,
I am getting this error while trying to load a mex library on Ubuntu 20.04
>> loadlibrary('ice', @iceproto)
Error using message/getString
In 'MATLAB:loadlibrary:ErrorRunningFromCommandLine', parameter {0} must be a real scalar.
Error in loadlibrary
I have built the mex, proto, and thunk files
ice.mexa64 iceproto.m icethunk.so
Cheers,
Jose
mex MATLAB Answers — New Questions
Error using montage function
I have a script that, given 16 images, creates eight montages.
I have read in the documentation that it is possible to get a handle from the montage call, and I have done that because I want to show every single montage using imshow so that I can give each one a title.
Running the script causes errors to occur.
Here is the script
clear all;
close all;
clc;
%Importare le immagini
%Posizioni centroidi
PosClust1=imread("Cent_col_cluster1.png");
PosClust2=imread("Cent_col_cluster2.png");
PosClust3=imread("Cent_col_cluster3.png");
PosClust4=imread("Cent_col_cluster4.png");
PosClust5=imread("Cent_col_cluster5.png");
PosClust6=imread("Cent_col_cluster6.png");
PosClust7=imread("Cent_col_cluster7.png");
PosClust8=imread("Cent_col_cluster8.png");
PosClust9=imread("Cent_col_cluster9.png");
%Colori centroidi
Cent_col_Clust1=imread("Cent_pos_cluster1.png");
Cent_col_Clust2=imread("Cent_pos_cluster2.png");
Cent_col_Clust3=imread("Cent_pos_cluster3.png");
Cent_col_Clust4=imread("Cent_pos_cluster4.png");
Cent_col_Clust5=imread("Cent_pos_cluster5.png");
Cent_col_Clust6=imread("Cent_pos_cluster6.png");
Cent_col_Clust7=imread("Cent_pos_cluster7.png");
Cent_col_Clust8=imread("Cent_pos_cluster8.png");
Cent_col_Clust9=imread("Cent_pos_cluster9.png");
%————————————————————————-
%Trasformazione in imagetipe double
%Posizioni centroidi
PosClust1Double=im2double(PosClust1);
PosClust2Double=im2double(PosClust2);
PosClust3Double=im2double(PosClust3);
PosClust4Double=im2double(PosClust4);
PosClust5Double=im2double(PosClust5);
PosClust6Double=im2double(PosClust6);
PosClust7Double=im2double(PosClust7);
PosClust8Double=im2double(PosClust8);
PosClust9Double=im2double(PosClust9);
%Colori centroidi
Cent_col_Clust1Double=im2double(Cent_col_Clust1);
Cent_col_Clust2Double=im2double(Cent_col_Clust2);
Cent_col_Clust3Double=im2double(Cent_col_Clust3);
Cent_col_Clust4Double=im2double(Cent_col_Clust4);
Cent_col_Clust5Double=im2double(Cent_col_Clust5);
Cent_col_Clust6Double=im2double(Cent_col_Clust6);
Cent_col_Clust7Double=im2double(Cent_col_Clust7);
Cent_col_Clust8Double=im2double(Cent_col_Clust8);
Cent_col_Clust9Double=im2double(Cent_col_Clust9);
%Creazione montaggi
Clust1=montage(Cent_col_Clust1Double,PosClust1Double);
Clust2=montage(Cent_col_Clust2Double,PosClust2Double);
Clust3=montage(Cent_col_Clust3Double,PosClust3Double);
Clust4=montage(Cent_col_Clust4Double,PosClust4Double);
Clust5=montage(Cent_col_Clust5Double,PosClust5Double);
Clust6=montage(Cent_col_Clust6Double,PosClust6Double);
Clust7=montage(Cent_col_Clust7Double,PosClust7Double);
Clust8=montage(Cent_col_Clust8Double,PosClust8Double);
Clust9=montage(Cent_col_Clust9Double,PosClust9Double);
%Plot
%Cluster1
imshow(Clust1)
title('Cluster1')
%Cluster2
figure
imshow(Clust2)
title('Cluster2')
%Cluster3
figure
imshow(Clust3)
title('Cluster3')
%Cluster4
figure
imshow(Clust4)
title('Cluster4')
%Cluster5
figure
imshow(Clust5)
title('Cluster5')
%Cluster6
figure
imshow(Clust6)
title('Cluster6')
%Cluster7
figure
imshow(Clust7)
title('Cluster7')
%Cluster8
figure
imshow(Clust8)
title('Cluster8')
%Cluster9
figure
imshow(Clust9)
title('Cluster9')
These errors appear
Error using images.internal.imageDisplayParsePVPairs
Invalid input arguments.
Error in images.internal.imageDisplayParseInputs (line 70)
[common_args,specific_args] = images.internal.imageDisplayParsePVPairs(varargin{:});
Error in imshow (line 253)
images.internal.imageDisplayParseInputs({'Parent','Border','Reduce'},preparsed_varargin{:});
Error in montage (line 231)
hh = imshow(bigImage,cmap,parentArgs{:},interpolationArgs{:});
Error in Montaggi_kmeans (line 52)
Clust1=montage(Cent_col_Clust1Double,PosClust1Double);
All Cent_col_Clust images are 420x560x3 uint8
All PosClust images are 595x842x3 uint8
All Cent_col_ClustDouble images are 420x560x3 double
All PosClustDouble images are 595x842x3 double
I have no clue what the problem is. Any ideas?
I cannot post all images, daily attachments limit is 10.
montage, image processing, matlab function MATLAB Answers — New Questions
Simulink Embedded Coder Zero Initialization of Local Variables Not Working
Hello, I am using Embedded Coder on a Simulink model (MATLAB 2022b). I want local variables in the generated code to be initialized to zero. I have already unticked 'Remove root level I/O zero initialization' and 'Remove internal data zero initialization', so the code should not optimize away initializations. These two settings initialize global variables to zero, but they do not initialize local variables to zero. Below is an example of the generated code and how I want it to be. How can I do it? Thanks for your help.
How it is:
void step_function(uint16_t in1, uint16_t in2)
{
uint16_t Divide;
uint16_t UnitDelay1;
…
How it should be:
void step_function(uint16_t in1, uint16_t in2)
{
uint16_t Divide = 0;
uint16_t UnitDelay1 = 0;
…
embedded coder, code generation, simulink, zero initialization MATLAB Answers — New Questions
Two issues with AutoRuns 14.11
Hi,
For AutoRuns 14.11:
1.
At the folder C:\Users\username\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup I have an .LNK file that links to the path of Windows Task Manager, C:\Windows\System32\Taskmgr.exe, but it is not shown in AutoRuns at all, not even in the Logon tab, and it is not found when you search for it, not even when running AutoRuns as admin. Looks like a bug.
2.
I also have an object there that is an .LNK file, but it is not linked to an executable; it is a definition to launch the WhatsApp application, and the link is disabled/blocked from editing. It looks something like "5319275A.WhatsAppDesktop_2.2342.7.0_x64__cv1g1gvanyjgm". AutoRuns naturally marks its line with a yellow background and the "Image Path" column states "File not found: .exe", which is technically true, but it would be wiser to consider not showing a warning, since the object works fine and does launch the app, so it would be better to mark it as a valid Windows Store app object.
Thank you.
Read More
Scrolling snaps to grid automatically
I was working with some files in Excel; for some reason, whenever I release the scroll bar, the page always snaps to the grid.
So, whenever I leave the scroller at this.
It snaps it back to this.
Is there a way to turn this feature off, it’s really irritating.
Read More
Authenticator issues
Hello team,
I am struggling to log in to the Microsoft admin account, as I lost the mobile phone I used to authenticate.
I tried a password reset, which was successful (using my email and mobile number), but when logging in to admin, it still asks for authentication.
What is the way to log in?
Please help at the earliest.
Read More
Rufus Not Working on my Windows 11 PC
I was trying to upgrade my Windows 10 PC to Windows 11 but it tells me The PC must support TPM 2.0. I don’t know what TPM is and it is a quite old PC I bought five years ago.
After that, I did some searching, and a lot of people recommend the Rufus app to install Windows 11 on an unsupported PC that lacks TPM 2.0 and Secure Boot. However, the USB created by Rufus is not seen as bootable media. I did set the BIOS to boot from USB first.
I burned the Windows 11 ISO two more times with different boot schemes provided by Rufus. Unfortunately, it does not work either. How can I fix this if Rufus is not working for Windows 11? Does anyone know what the problem is in my case?
Thanks.
Read More
jsonencode not encoding entire large structure data
Hi
I have a large table that I convert to a structure and then encode with jsonencode.
However, I realized that not all of the data was converted to JSON, since the output did not end with "]" or "}".
Is there a fail-safe way to convert a large table to JSON?
Thanks,
matlab MATLAB Answers — New Questions
Is the oscillation in the picture a boundary condition problem?
Hi,
I have an equation which is as follows
When I solve this equation using pdepe, it shows an oscillation pattern at the edge, as shown here (indicated by the arrow).
I have changed my parameters a lot; the oscillation sometimes decreases, but it never goes to zero. My boundary conditions are as follows:
pl = 0;
ql = 1;
pr = 0;
qr = 1;
Can you tell me how to reduce these oscillations?
Also, in the above equation, the minimum value of the diffusion coefficient (sigma) for which pdepe converges is 1; below that, the solution diverges, which should not actually happen.
with regards
pdepe MATLAB Answers — New Questions
Code to hide lines where the MergeField is Zero, using IF
Hi.
I use software called WorkflowMax, which uses MS Word format for custom invoice templates. There is a type of invoice which shows several columns of data for each item on the invoice, including the original quoted price, what was previously claimed, what is being claimed on the current invoice, and what remains outstanding.
I have managed to work out the code to not show "0.00" for any particular field when the value of the field is zero, so that part is all good. As below, in the third line, where there is no "Claimed Incl This Invoice" for the task "Consulting RMA Ecology", nothing prints.
However, I need conditional code so that I can hide all four columns of a specific line if just one of those columns is zero, i.e. if nothing is being claimed on part of the quote in an invoice, then I don't want to print the other columns for that line, which do still have values (Task Name / Quoted / Previously Claimed / Claimed Incl This Invoice / Balance Remaining). So, as an example, to not show the Quoted column, it will need to look and see if there is a zero value in the "This Invoice" column. I have really had a go at it but just can't make it work!
Read More
HYPER-V on ARM-based Surface Pro 9 / Registry Changes to enable HYPER-V
According to a Microsoft support agent, the HYPER-V functionality can be enabled/activated by changing and adding some registry entries on my Surface Pro 9 SD3 3.00 GHz (64-bit operating system, ARM-based processor).
Unfortunately, he was not authorized to tell me what needs to be changed to enable HYPER-V on my system.
Does anyone have experience with changing the registry to enable HYPER-V and is willing to help me and tell me what to do and how?
I appreciate any advice, help, and instructions.
See you
Greg
Read More
Send adaptive card via Graph Mail
Dear community
A third-party monitoring application creates static adaptive cards when an alert is triggered.
This application calls a PowerShell script and provides a couple of parameters, including the adaptive card JSON.
I have now tried to pack this adaptive card into an HTML email and send it via the Graph API.
Sadly, I cannot get the adaptive card to render, for example in Outlook.
HTML message
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<script type="application/adaptivecard+json">
{"type":"AdaptiveCard","version":"1.4","hideOriginalBody":true,"body":[{…}]}
</script>
</head>
<body> </body>
</html>
Graph Mail
$emailRecipients = @(
    'email address removed for privacy reasons'
)
[array]$toRecipients = ConvertTo-IMicrosoftGraphRecipient -SmtpAddresses $emailRecipients
$emailSender = 'email address removed for privacy reasons'
$emailSubject = "Sample Email AdaptiveCard"
$emailBody = @{
    ContentType = 'html'
    Content = Get-Content -Path 'C:…adaptivecard.html'
}
$body += @{subject = $emailSubject}
$body += @{toRecipients = $toRecipients}
$body += @{body = $emailBody}
$bodyParameter += @{'message' = $body}
$bodyParameter += @{'saveToSentItems' = $false}
Send-MgUserMail -UserId $emailSender -BodyParameter $bodyParameter
I am grateful for any advice or help with this problem.
Many thanks
Simon
Read More
Tasks no longer repeating
Hi, I know this has been mentioned a few times before but it’s still a problem:
Several months (or even a few years) ago I set up repeating tasks (e.g pay bills) and suddenly these are no longer repeating. That is, I clicked completed on a recurring task two weeks ago and it didn’t create a new task.
This is a major problem for me and anyone else who either has memory problems, or is using ToDo to remind them to take medication.
I’m using a MacBookPro with everything up to date.
Hope you can help. @microsoft developers, I'm happy to work with you on this (share my screen or whatever).
J
Read More
Microsoft Copilot to Get Enterprise Data Protection
The August 15 announcement that Microsoft Copilot (the version that doesn't use the Graph) will benefit from enterprise data protection from September is good news. However, Microsoft said nothing about the security issues around Copilot for Microsoft 365 reported at the recent Black Hat USA 2024 conference. In other news, tenants can pin Microsoft Copilot to app navigation bars using a new control in the Microsoft 365 admin center.
https://office365itpros.com/2024/08/16/microsoft-copilot-edp/
Read More
How can I convert pcm to wav on a Windows PC?
I have some raw PCM audio files copied from my CDs. Now, I need to convert to WAV format on my Windows PC. I’m relatively new to working with audio files, so I’m not sure of the best way to handle this conversion. I want to make sure the process is straightforward and that the resulting WAV files are of high quality.
I’ve done some research, but I’m still unclear about which tools or software would be best suited for this task. Could you please suggest a working solution to help me convert PCM to WAV on Windows? And any specific settings I should be aware of during the conversion process to maintain audio quality?
Read More
Office 365 excel pivot sorting bug
Hi
I'm using Excel (Office 365). There seems to be a sorting error in a pivot table: the alphabet (a, b, c, d, e, f) is sorted as (a, d, b, c, e, f).
Row Labels: A, D, B, C, E, F, G, H, Grand Total
Read More
Unable to select line style and borders in timeline in excel
Hi,
I'm facing a strange issue. I've created a timeline for a dataset and it's working fine.
However, when I try to edit the timeline style, it doesn't allow me to choose the border of the 'Header' element except for the presets 'None' and 'Outline'; everything else is shaded out. The same thing happens when trying to select the line style for it, and also for the 'Whole Timeline' element.
Also, the underline option is grayed out when setting the font formatting.
Does anyone has any idea as to why this is happening and how to resolve it.
Read More
Using Simulink to read/send MAVlink packets to/from a PX4
Hello,
I am getting an "Arrays have incompatible sizes for this operation" error when running the 'jetsoncpu_pixhawk_interface' public Simulink model. I am hooked up to the PX4 Cube Orange Plus flight controller. Regardless of whether I use the serial send/receive blocks or the MAVLink blocks, I run into this issue.
Please let me know if anyone has any suggestions for this issue.
Send and Receive MAVLink Packets on Jetson Boards – MATLAB & Simulink (mathworks.com)
transferred MATLAB Answers — New Questions