Month: August 2024
undefined function or variable: docopt
Undefined function or variable ‘docopt’.
Error in install_SplitLab (line 384)
[doccmd,options,docpath] = docopt
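For context, docopt is a MATLAB-internal helper that appears to have been removed in recent MATLAB releases, which is why install_SplitLab fails on this line. A minimal, hedged workaround sketch (not an official SplitLab fix; the fallback values are placeholders) is to guard the call so the installer can continue:
% Hedged sketch: guard the removed docopt call in install_SplitLab.m (around line 384)
if exist('docopt', 'file')
    [doccmd, options, docpath] = docopt;
else
    doccmd = '';    % placeholder fallbacks; adjust to whatever the installer expects
    options = '';
    docpath = '';
end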
splitlab1.2.1 MATLAB Answers — New Questions
Methods of Detecting and Removing Protrusions in Image
Is there any way to remove only the red shaded area of an image like the one below?
The data is a binary image (already binarized).
The image we are recognizing is basically a figure like the one on the left, so we can use bwareafilt to extract the maximum structure.
However, sometimes we get images like the one on the right; the two objects are not attached every time.
It would be best if we could set a threshold (if they are too close together, we recognize them as one), since the degree of attachment of the two objects varies.
We would appreciate it if you could let us know.
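One hedged way to illustrate the thresholded-separation idea (the parameters below are assumptions to tune, not taken from the question) is to break thin attachments with a morphological opening before keeping the largest blob; the structuring-element radius acts as the "how attached is too attached" threshold:
% bw is the binary image; r controls how thick a bridge must be to still count as "attached"
r = 5;                                   % assumed radius, tune for your images
bwOpen  = imopen(bw, strel('disk', r));  % removes necks thinner than roughly 2*r pixels
bwMain  = bwareafilt(bwOpen, 1);         % keep the largest remaining object
bwFinal = imreconstruct(bwMain, bw);     % restore the original outline of the kept object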
image analysis, image segmentation MATLAB Answers — New Questions
How to create dynamic options in system object block mask parameters
I want to make the dropdown content of one system object parameter based on the value of another parameter. In other words, Timer 1 may support options A, B and C, while Timer 2 would only support options A and B. I can do this in a standard subsystem block mask by modifying the option parameter dropdown content on the callback for the timer parameter. MATLAB system objects only seem to support defining dropdown content for their parameters statically. Is this possible?
matlab system objects MATLAB Answers — New Questions
Error when inviting the bot to a meeting: Server Internal Error. DiagCode: 500#7117.@
We have created a solution that invites the Microsoft Teams media bot to an online teams meeting. Unfortunately the bot does not always seem to join the call.
In these cases:
We only receive the “Establishing…” callback, but the call never actually transitions to “Established”. After waiting for about 60 seconds, we receive the “Terminated” signal, without the call ever reaching the “Established” state.
Along with this terminated callback, we also receive the following error code (for which we cannot find any documentation):
Server Internal Error. DiagCode: 500#7117.@
The steps and conditions of inviting the bot are always the same.
1. Create New meeting
2. Invite bot to meeting
3. 50% (sometimes lower, sometimes higher) chance bot will join
A workaround I found was inviting the bot multiple times. This sometimes works, but the bot is then kicked when the subsequent “Terminated” callback is sent our way.
await this.Client.Calls().AddAsync(joinParams).ConfigureAwait(false);
This is the code to trigger the invitation of the bot, as per the Microsoft sample (docs).
Can someone please shed some light on the meaning of this error code? I have not found any patterns in its occurrence and our calls to the service do not change. The only difference likely being a different meetingId.
Server Internal Error. DiagCode: 500#7117.@
Read More
Why doesn’t Copilot offer the options for GPT-4 and GPT-4 turbo anymore?
Why doesn’t Copilot offer the options for GPT-4 and GPT-4 turbo anymore? Now that Copilot no longer provides the choice of GPT-4, it’s completely not worth paying for!
I can’t chat with the old creative model anymore. It was the reason I subscribed in the first place.
If you no longer offer the GPT-4 model, and instead are using a garbage distilled model GPT-4 turbo to scam money, then I will not continue to pay for and use your Copilot Pro!
Read More
table references for drop down
I have created Table1 on the tab Empl By Dept with headers for Employee, Supervisor and Dept. I am using a drop-down list on another tab of the same workbook to select data from this table. Per Excel help, when choosing my source for this drop-down, it should put in the table name reference when I click on the cell below the header named Employee; instead it puts in the cell reference names, e.g. A2. I have tried several variations shown in help, trying to get the syntax right to pull this data. I want the drop-down choices to automatically change if the table info changes. Using table references is supposed to do this. Help…
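For what it's worth, the Data Validation Source box generally will not accept a structured reference typed in directly; a common workaround (hedged, adjust the table and column names to yours) is to wrap the reference in INDIRECT so the list still grows and shrinks with the table:
=INDIRECT("Table1[Employee]")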
Read More
Single Power Automate Flow that supports Multiple Booking Page
In Microsoft Bookings with Power Automate can we create a single flow that supports multiple booking pages? It seems like a trigger flow will only support a single booking page. Is this possible and how?
Also, I asked ChatGPT the question and it thinks this is possible using a “Single Flow with Conditional Logic.” I am not familiar with that, so any suggestion would help.
Thank you
Ralph
Read More
Table from JSON object
I get the following JSON response from an API call
[
  [
    {
      "field": "ID",
      "value": 29
    },
    {
      "field": "Created at",
      "value": "06/08/2024 15:18"
    },
    {
      "field": "Created by",
      "value": "Amanda"
    },
    {
      "field": "Job Card Status",
      "value": "Final"
    },
    {
      "field": "Sales Amount",
      "value": "2500"
    }
  ],
  [
    {
      "field": "ID",
      "value": 28
    },
    {
      "field": "Created at",
      "value": "06/08/2024 15:16"
    },
    {
      "field": "Created by",
      "value": "Amanda"
    },
    {
      "field": "Job Card Status",
      "value": "Final"
    },
    {
      "field": "Sales Amount",
      "value": "15400"
    }
  ]
]
and I need to create a table in Power Query for further processing. The table should have a Date column (from “Created at”) and a ‘Sales amount’ column (from “Sales Amount”). How do I approach this?
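A minimal Power Query sketch of one way to approach this, assuming the response is loaded from a file (or a Web.Contents call) and uses plain ASCII quotes; the file path and the date culture ("en-GB" for dd/MM/yyyy) are assumptions to adjust:
let
    Source = Json.Document(File.Contents("C:\data\response.json")),   // or Json.Document(Web.Contents(...))
    // turn each inner list of {field, value} records into one record per row
    ToRecord = (item as list) as record =>
        Record.FromList(List.Transform(item, each [value]), List.Transform(item, each [field])),
    AsTable = Table.FromRecords(List.Transform(Source, ToRecord)),
    Selected = Table.SelectColumns(AsTable, {"Created at", "Sales Amount"}),
    Renamed = Table.RenameColumns(Selected, {{"Created at", "Date"}, {"Sales Amount", "Sales amount"}}),
    Typed = Table.TransformColumnTypes(Renamed, {{"Date", type datetime}, {"Sales amount", type number}}, "en-GB")
in
    Typed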
Read More
Pivot table with variable database
Hi All,
I am trying to generate a dynamic pivot table with a variable database using the OFFSET formula; however, after performing the entire procedure, I see that it still picks up three rows of cells without data. I have tried deleting everything, thinking that there might be some hidden data, but the problem continues.
Does anyone know what could be the cause?
The formula is: DESREF($B$5;0;0;CONTARA($B:$B);CONTARA($5:$5))
Thanks!
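A hedged note on one frequent cause: CONTARA/COUNTA over the entire column counts every non-empty cell, including any titles above B5 and cells whose formulas return “”, so the OFFSET/DESREF range gets stretched by those extra rows. One variant to try (ranges are placeholders, same Spanish-locale function names as above):
=DESREF($B$5;0;0;CONTARA($B$5:$B$10000);CONTARA($5:$5))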
Read More
Quick Steps – New email
Hello!
One thing that is driving me insane with this new Outlook is that it’s missing a lot of great functionality that the last version had. One example is Quick Steps doesn’t have the ‘new email’ option so that I can send to a group of contacts by adding all of their emails and saving. The Groups option only applies to contacts within your organization, which doesn’t help me because I need to send to outside contacts.
How do I do this without adding them as contacts?
Read More
Integrating Microsoft Project Desktop Client and Project for the web
Hello Mates,
In my corporation, we are shifting from using the desktop client for each project separately to using the web for all the projects, programs and portfolios we manage.
I am searching for a way to integrate the desktop client with the project for the web to avoid importing the .mpp file each time the Project managers make changes to plans in the desktop client and delete the previous project for the web plan.
In a nutshell,
1. I am searching for a way to integrate the desktop client and Project for the web.
2. The ability to import .mpp files into the same plan in Project for the web and track the changes automatically.
I really appreciate any help you can provide.
Read More
My macro is copying formulas instead of pasting values
Hi All,
I have an excel spreadsheet that I generate using data that pulls from a few different tabs within the spreadsheet.
I am trying to run a macro that copies everything in each tab as values and removes a few tabs to generate a ‘cleaned up’ version of my report.
When I run the macro, everything works as it should apart from the copy-values part. Instead of just copying the cell values, the formulas are being copied, and they are trying to pull from one of the tabs I have deleted in the cleaned-up version.
Please see the code I am using below:
For Each Ws In ActiveWorkbook.Worksheets
Range("A1").Select
Cells.Copy
Selection.PasteSpecial Paste:=xlPasteValues, Operation:=xlNone, SkipBlanks _
:=False, Transpose:=False
Any help would be greatly appreciated 🙂
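For reference, a hedged sketch of the same loop with the pieces that are usually missing in this situation: as posted, the loop never references Ws, so every pass copies whatever sheet happens to be active, and the values must be pasted before the source tabs are deleted. Names follow the question; adjust as needed:
For Each Ws In ActiveWorkbook.Worksheets
    Ws.UsedRange.Copy                          ' copy THIS sheet's cells, not whichever sheet is active
    Ws.UsedRange.PasteSpecial Paste:=xlPasteValues, Operation:=xlNone, _
        SkipBlanks:=False, Transpose:=False    ' overwrite the formulas in place with their current values
    Application.CutCopyMode = False
Next Ws
Also make sure this loop runs before the helper tabs are removed; otherwise the remaining formulas will already be pointing at #REF! by the time they are converted.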
Read More
Intune Feature Update without Windows Version upgrade
I’m trying to set up Feature Update policies in Intune to update my Windows 10 machines to the latest Windows 10, and Windows 11 machines to the latest Windows 11, but NOT to upgrade Windows 10 machines to Windows 11.
The Windows 10 to latest Windows 10 is dead easy. The problem is I cannot see how to setup a Feature Update policy to deploy the latest Windows 11 without that policy trying to upgrade Windows 10 machines to Windows 11.
Or am I missing something?
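One hedged way to get this behaviour (an assumption about your setup, not the only approach) is to assign the Windows 11 feature update policy with an Intune assignment filter that only includes devices already running Windows 11, so Windows 10 devices never evaluate it. Windows 11 OS builds start at 10.0.22000, so a filter rule along these lines should match only Windows 11 devices:
(device.osVersion -startsWith "10.0.2")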
Read More
Building high scale RAG applications with Microsoft Fabric Eventhouse
Introduction
In this article I will guide you on how to build a Generative AI application in Microsoft Fabric.
This guide will walk you through implementing a RAG (Retrieval Augmented Generation) system in Microsoft Fabric using Azure OpenAI and Microsoft Fabric Eventhouse as your vector store.
Why MS Fabric Eventhouse?
Fabric Eventhouse is built using the Kusto Engine that delivers top-notch performance for similarity search at high scale.
If you are looking to build a RAG application with a large number of embedding vectors, look no further: with MS Fabric you can leverage the platform’s processing power to build the vector store and the highly performant engine powering the Fabric Eventhouse DB.
If you want to know more about using Fabric Eventhouse as a vector store, here are some links:
Azure Data Explorer for Vector Similarity Search
Optimizing Vector Similarity Search on Azure Data Explorer – Performance Update
Optimizing Vector Similarity Searches at Scale
What is RAG – Retrieval Augmented Generation?
Large Language Models (LLMs) excel in creating text that resembles human writing.
Initially, LLMs are equipped with a broad spectrum of knowledge from extensive datasets used for their training. This grants them flexibility but may not provide the specialized focus or knowledge necessary in certain topics.
Retrieval Augmented Generation (RAG) is a technique that improves the pertinence and precision of LLMs by incorporating real-time, relevant information into their responses. With RAG, an LLM is boosted by a search system that sifts through unstructured text to find information, which then refines the LLM’s replies.
What is a Vector Database?
The Vector Database is a vital component in the retrieval process in RAG, facilitating the quick and effective identification of relevant text sections in response to a query, based on how closely they match the search terms.
Vector DBs are data stores optimized for storing and processing vector data. Vector data can refer to data types such as geometric shapes, spatial data, or more abstract high-dimensional data used in machine learning applications, such as embeddings.
These databases are designed to efficiently handle operations such as similarity search, nearest neighbour search, and other operations that are common when dealing with high-dimensional vector spaces.
For example, in machine learning, it’s common to convert text, images, or other complex data into high-dimensional vectors using models like word embeddings, image embeddings, etc. To efficiently search and compare these vectors, a vector database or vector store with specialized indexing and search algorithms would be used.
In our case we will use Azure OpenAI Ada Embeddings model to create embeddings, which are vector representations of the text we are indexing and storing in Microsoft Fabric Eventhouse DB.
The code
The code can be found here.
We will use the Moby Dick book from the Gutenberg project in PDF format as our knowledge base.
We will read the PDF file, cut the text into chunks of 1000 characters and calculate the embeddings for each chunk, then we will store the text and the embeddings in our Vector Database (Fabric Eventhouse)
We will then ask questions and get answers from our Vector DB and send the question and answers to Azure OpenAI GPT4 to get a response in natural language.
Processing the files and indexing the embeddings
We will do this once – only to create the embeddings and then save them into our Vector Database – Fabric Eventhouse
Read files from Fabric Lakehouse
Create embeddings from the text using Azure OpenAI ada Embeddings model
Save the text and embeddings in our Fabric Eventhouse DB
RAG – Getting answers
Every time we want to search for answers from our knowledge base, we will:
Create the embeddings for the question and search our Fabric Eventhouse for the answers, using Similarity search
Combining the question and the retrieved answers from our Vector Database, we will call the Azure OpenAI GPT-4 model to get a natural-language answer.
Prerequisites
To follow this guide, you will need to ensure that you have access to the following services and have the necessary credentials and keys set up.
Microsoft Fabric.
Azure OpenAI Studio to manage and deploy OpenAI models.
Setup
Create a Fabric Workspace
Create a Lakehouse
Upload the moby dick pdf file
Create an Eventhouse DB called “GenAI_eventhouse”
Click on the DB name and then “Explore your data” on the top-right side
Create the “bookEmbeddings” table
Paste the following command and run it
.create table bookEmbeddings (document_name:string, content:string, embedding:dynamic)
Import our notebook
Grab your Azure OpenAI endpoint and secret key and paste them into the notebook; replace your model deployment names if needed.
Get the Eventhouse URI and paste it as “KUSTO_URI” in the notebook
Connect the notebook to the Lakehouse
Let’s run our notebook
This will install all the python libraries we need
%pip install openai==1.12.0 azure-kusto-data langchain tenacity langchain-openai pypdf
Run cell 2 after configuring the environment variables for:
OPENAI_GPT4_DEPLOYMENT_NAME = "gpt-4"
OPENAI_DEPLOYMENT_ENDPOINT = "<your-azure openai endpoint>"
OPENAI_API_KEY = "<your-azure openai api key>"
OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME = "text-embedding-ada-002"
KUSTO_URI = "<your-eventhouse cluster-uri>"
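The later cells also reference KUSTO_DATABASE, KUSTO_TABLE, and accessToken, which are not shown above. A minimal sketch of how they could be defined, assuming the names match the setup steps and that the token is obtained through the Fabric notebook utilities (check your copy of the sample notebook for the exact mechanism it uses):
# Assumed values matching the setup steps above; adjust to your environment.
KUSTO_DATABASE = "GenAI_eventhouse"
KUSTO_TABLE = "bookEmbeddings"
# One common way to get a Kusto access token inside a Fabric notebook (an assumption, not taken from this article):
from notebookutils import mssparkutils
accessToken = mssparkutils.credentials.getToken(KUSTO_URI)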
Run cell 3
Here we create an Azure OpenAI client and define a function to calculate embeddings
client = AzureOpenAI(
    azure_endpoint=OPENAI_DEPLOYMENT_ENDPOINT,
    api_key=OPENAI_API_KEY,
    api_version="2023-09-01-preview"
)
# we use the tenacity library to create delays and retries when calling openAI embeddings to avoid hitting throttling limits
@retry(wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))
def generate_embeddings(text):
    # replace newlines, which can negatively affect performance.
    txt = text.replace("\n", " ")
    return client.embeddings.create(input=[txt], model=OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME).data[0].embedding
Run cell 4
Read the file and divide it into 1000-character chunks
# splitting into 1000 char long chunks with 30 char overlap
# split separators: ["\n\n", "\n", " ", ""]
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=30,
)
documentName = "moby dick book"
# Copy File API path
fileName = "/lakehouse/default/Files/moby dick.pdf"
loader = PyPDFLoader(fileName)
pages = loader.load_and_split(text_splitter=splitter)
print("Number of pages: ", len(pages))
Run cell 5
Save the text chunks to a pandas dataframe
# save all the pages into a pandas dataframe
import pandas as pd
df = pd.DataFrame(columns=['document_name', 'content', 'embedding'])
for page in pages:
    df.loc[len(df.index)] = [documentName, page.page_content, ""]
df.head()
Run cell 6
Calculate embeddings
# calculate the embeddings using openAI ada
df["embedding"] = df.content.apply(lambda x: generate_embeddings(x))
print(df.head(2))
Run cell 7
Write the data to MS Fabric Eventhouse
df_sp = spark.createDataFrame(df)
df_sp.write. \
    format("com.microsoft.kusto.spark.synapse.datasource"). \
    option("kustoCluster", KUSTO_URI). \
    option("kustoDatabase", KUSTO_DATABASE). \
    option("kustoTable", KUSTO_TABLE). \
    option("accessToken", accessToken). \
    mode("Append").save()
Let’s check the data was saved to our Vector Database
Go to the Eventhouse and run this query
bookEmbeddings
| take 10
Go back to the notebook and run the rest of the cells
Creates a function to call GPT4 for a NL answer
def call_openAI(text):
    response = client.chat.completions.create(
        model=OPENAI_GPT4_DEPLOYMENT_NAME,
        messages=text,
        temperature=0
    )
    return response.choices[0].message.content
Creates a function to retrieve answers using embeddings with similarity search
def get_answer_from_eventhouse(question, nr_of_answers=1):
    searchedEmbedding = generate_embeddings(question)
    kusto_query = KUSTO_TABLE + " | extend similarity = series_cosine_similarity(dynamic(" + str(searchedEmbedding) + "), embedding) | top " + str(nr_of_answers) + " by similarity desc "
    kustoDf = spark.read \
        .format("com.microsoft.kusto.spark.synapse.datasource") \
        .option("kustoCluster", KUSTO_URI) \
        .option("kustoDatabase", KUSTO_DATABASE) \
        .option("accessToken", accessToken) \
        .option("kustoQuery", kusto_query).load()
    return kustoDf
Retrieves 2 answers from Eventhouse
nr_of_answers = 2
question = "Why does the coffin prepared for Queequeg become Ishmael's life buoy once the Pequod sinks?"
answers_df = get_answer_from_eventhouse(question, nr_of_answers)
Concatenates the answers
answer = ""
for row in answers_df.rdd.toLocalIterator():
    answer = answer + " " + row['content']
Creates a prompt for GPT4 with the question and the 2 answers
prompt = 'Question: {}'.format(question) + '\n' + 'Information: {}'.format(answer)
# prepare prompt
messages = [{"role": "system", "content": "You are a HELPFUL assistant answering users questions. Answer the question using the provided information and do not add anything else."},
            {"role": "user", "content": prompt}]
result = call_openAI(messages)
display(result)
That’s it: you have built your very first RAG app using MS Fabric.
All the code can be found here.
Thanks
Denise
Microsoft Tech Community – Latest Blogs –Read More
Leveraging Azure native tooling to hunt Kubernetes security issues
Introduction
Container binary drift refers to the phenomenon where a running container deviates from its original image over time. This can happen due to various reasons, such as manual updates, automated processes, or security vulnerabilities. Essentially, the container starts to differ from the static snapshot it was created from, leading to potential inconsistencies and security risks.
When thinking of container image drifts, it is important to understand the following:
Security Risks: Image drift can introduce security risks, as the container may run software or processes that were not part of the original image. This can create a security blind spot, as traditional image scanning may not detect these changes
Detection: Detecting image drift involves monitoring the container for changes that deviate from the original image. This can be done using tools that compare the running container’s state with its original image.
Prevention: To prevent image drift, it is recommended to implement image immutability, regularly update base images, and use image scanning tools. Monitoring and alerting for image drift can also help in identifying and addressing any deviations.
In this 3-part series we will look at the following:
Part 1: The newest detection, “binary drift”, and how you can expand the capability using the Microsoft XDR portal (https://learn.microsoft.com/en-us/defender-xdr/microsoft-365-defender-portal). We will also look at what you get as a result of the native integration between Defender for Cloud and Microsoft XDR, and showcase why this integration is advantageous for your SOC teams.
Part 2: Further expanding on the integration capabilities, we will demonstrate how you can automate your hunts using custom detection rules (https://learn.microsoft.com/en-us/defender-xdr/custom-detection-rules), reducing operational burden and allowing you to proactively detect Kubernetes security issues. Wherever applicable, we will also suggest an alternative way to perform the detection.
Part 3: Bringing AI to your advantage, we will show how you can leverage Security Copilot both in Defender for Cloud and XDR portal for Kubernetes security use cases.
Note: To keep the discussion contained, we will assume that your container workloads are running on Azure Kubernetes Services (AKS) and that your AKS cluster leverages Azure’s RBAC (https://learn.microsoft.com/en-us/azure/aks/azure-ad-rbac). We also assume that you are using Azure Container Registry (ACR) for storing images (https://learn.microsoft.com/en-us/azure/container-registry/container-registry-concepts)
Capabilities needed to detect the drift
Let’s discuss what you will need to set up in your environment to detect and triage the drift. (Remember, not all drift is malicious; it might very well be a user or pipeline error.)
Setting up Defender for Containers
We assume that you have already enabled Defender for Containers; if not, please follow the directions listed here: https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-enable?tabs=aks-deploy-portal%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Caks-removeprofile-api&pivots=defender-for-container-aks
We will set up the Defender for Containers to detect binary drifts. The feature is available for the Azure (AKS), Amazon (EKS), and Google (GKE) clouds.
To detect the drift you will need to set up drift policies (https://learn.microsoft.com/en-us/azure/defender-for-cloud/binary-drift-detection#configure-drift-policies). These policies define what you do or do not want to alert on. You can create exclusions by setting higher-priority rules for specific scopes or clusters, images, pods, Kubernetes labels, or namespaces.
In the sample rule below
Fig. Binary drift detection rule
Scope description: Human understandable description of where you are trying to detect the binary drift
Cloud Scope: Refers to Azure, AWS, or GCP where the rule applies. If you expand a cloud provider, you can select specific subscription. If you don’t select the entire cloud provider, new subscriptions added to the cloud provider won’t be included in the rule.
Resource Scope: Here you can narrow the scope to a specific object – Container name, Image name, Namespace, Pod labels, Pod name, or Cluster name.
Allow list of processes: List of processes that will not trigger an alert on the given Resource Scope
Also note that each rule has a priority, and rules are evaluated in ascending order. There is a default rule that ignores binary drift detection.
Fig. Default rule
Pre-requisites for generating Alerts
Once you set up the rule it will be deployed on the Kubernetes nodes using Defender for Container’s enhanced sensor https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-introduction#sensor-based-capabilities. You can check the Defender for Container Settings https://portal.azure.com/#view/Microsoft_Azure_Security/DataCollectionBladeV2/subscriptionId/<SUB_ID_WITH _BINARY_DRIFT_DETECTION>/defendersPlan/Containers
Fig. Defender Sensor needs to be enabled
With Binary Drift detection rules set and the Defender Sensor enabled, you are all set to detect the binaries that are executing but did not originate from the original image.
Reviewing the alerts (a case study)
You can see the alerts in the “Security Alerts” pane, like so
Fig. Binary Drift alert
For example, the image that the container is running does not come with wget
An attacker probably got hold of this container and downloaded this utility to import some tools.
The alert gives you information about where the activity happened like the object namespace, image, cluster, etc.
This might or might not be enough information for you to act. Say you want to identify “how” this drift came to be, for example, whether a user logged on to the container and downloaded the said binary. To supplement the information provided by the alert, we can then use the Defender XDR portal (https://learn.microsoft.com/en-us/defender-xdr/microsoft-365-defender-portal).
Summary
This article showed you how to leverage binary drift detection and in the next article we will focus on how you can use XDR Portal to build more context around this alert and conduct hunts.
We will also share some queries that can serve as starters [Part 2].
Microsoft Tech Community – Latest Blogs –Read More
Using Defender XDR Portal to hunt for Kubernetes security issues
As we saw in the previous article, the binary drift alert gives you information about where the activity happened, like the object namespace, image, cluster, etc.
This might or might not be enough information for you to act. Say you want to identify “how” this drift came to be, for example, whether a user logged on to the container and downloaded the said binary. To supplement the information provided by the alert, we can then use the Defender XDR portal (https://learn.microsoft.com/en-us/defender-xdr/microsoft-365-defender-portal).
Harnessing the power of the Microsoft Ecosystem
If you are an E5 customer, your security teams are most likely very familiar with Advanced Hunting in the security.microsoft.com portal. Here we will extend that hunting capability to add context to your Kubernetes alerts. This is a huge time saver and cost advantage for you, as you don’t need to teach your Red Team or SOC analysts Level 400 Kubernetes concepts. To jump-start your Kubernetes hunting, you can leverage the developer knowledge of your platform teams to identify the most common Kubernetes actions, like exec (access to a container) and debug (access to a node). The hunting team can then leverage these in hunting queries, in a data structure and format they already know, using KQL.
Enhancing the hunt using XDR portal
Defender for Cloud sends incidents and alerts to the Defender portal (https://learn.microsoft.com/en-us/defender-xdr/microsoft-365-security-center-defender-cloud#investigation-experience-in-the-microsoft-defender-portal).
Similarly, Microsoft Sentinel also sends its data to the Defender portal (https://learn.microsoft.com/en-us/azure/sentinel/microsoft-365-defender-sentinel-integration?toc=%2Fdefender-xdr%2Ftoc.json&bc=%2Fdefender-xdr%2Fbreadcrumb%2Ftoc.json&tabs=defender-portal)
Since your SOC and Red Teams are already proficient in using XDR portal, Kubernetes hunts can now easily become part of their playbook.
By looking at the alerts and incidents in the XDR portal, you can see a bird’s-eye view of the what, who, and where. This will equip you to further narrow down your search in hunting queries.
Fig. Attack Story for Binary Drift
The Evidence and Response tab shows all the relevant evidence. Most likely, you will create your hunting queries using these fields.
Fig. Kubernetes objects related to the incident
This integration allows us to run Advanced Hunting queries using the CloudAuditEvents table, which contains Defender for Cloud data.
The query below looks for exec in a pod named ubuntu (the assumption here being that this pod is also running a container ubuntu where the drift happened and the alert was generated).
CloudAuditEvents
| where DataSource == "Azure Kubernetes Service"
| where OperationName == "create"
| where RawEventData.ObjectRef.resource == "pods" and RawEventData.ResponseStatus.code == 101
| where RawEventData.ObjectRef.subresource == "exec"
| where RawEventData.ResponseStatus.code == 101
| extend RequestURI = tostring(RawEventData.RequestURI)
| extend PodName = tostring(RawEventData.ObjectRef.name)
| extend PodNamespace = tostring(RawEventData.ObjectRef.namespace)
| extend Username = tostring(RawEventData.User.username)
| where PodName startswith "ubuntu"
| extend Commands = extract_all(@"command=([^&]*)", RequestURI)
| extend ParsedCommand = url_decode(strcat_array(Commands, " "))
| project Timestamp, AzureResourceId, OperationName, IPAddress, UserAgent, PodName, PodNamespace, Username, ParsedCommand
The query above will produce the following output; here you can see who executed the command, when it was executed, and many other fields that are helpful for context.
Fig. Query output
Another scenario you may want to look for is activity performed by a managed identity that might be related to an alert.
CloudAuditEvents
| where RawEventData.principaloid == <managed identity id>
This query will give you the actions that this managed identity performed. These might include enumerating key vaults, storage accounts, etc. As before, by reviewing the output (timestamp, geo, etc.) you can determine whether this action is malicious.
You can then look at the activities that were performed during this timeframe, like so,
CloudAuditEvents
| where AzureResourceId == <AKS Cluster ID>
| where TimeGenerated > datetime(<start time>) and TimeGenerated < datetime(<end time>)
| where OperationName == "create"
| where UserAgent has "kubectl"
If the attacker conducted any “exec” during this time frame, you will be able to see it under “ObjectRef”, and “RequestURI” will show you the exact command that was executed.
It is important to note that query results are presented in your local time zone per your settings; Kusto filters, however, work in UTC.
Automating the process and Extending the out of the box detections
Now that you have queries defined that provide you additional context behind the drifts, you can convert them into a custom detection rule that you can run at a defined frequency.
For example, say you want to get alerted on privileged pods:
These pods run with an elevated set of privileges required to do their job, but that could conceivably be used as a jumping off point to gain escalated privileges.
Note: You can also use Azure Policy built-in definitions (for example, “Kubernetes cluster should not allow privileged containers”) to prevent this.
CloudAuditEvents
| where Timestamp > ago(1d)
| where DataSource == "Azure Kubernetes Service"
| where OperationName == "create"
| where RawEventData.ObjectRef.resource == "pods" and isnull(RawEventData.ObjectRef.subresource)
| where RawEventData.ResponseStatus.code startswith "20"
| extend PodName = RawEventData.RequestObject.metadata.name
| extend PodNamespace = RawEventData.ObjectRef.namespace
| mv-expand Container = RawEventData.RequestObject.spec.containers
| extend ContainerName = Container.name
| where Container.securityContext.privileged == "true"
| extend Username = RawEventData.User.username
| extend DeviceId = RawEventData.AzureResourceId
| project Timestamp, ReportId, DeviceId , OperationName, IPAddress, UserAgent, PodName, PodNamespace, ContainerName, Username
Note that we will be using “Timestamp, ReportId, DeviceId” to turn the query into a custom detection rule. This again can be a starter pattern for your rules.
To create a custom detection/rule you will need to include the columns mentioned under https://learn.microsoft.com/en-us/defender-xdr/custom-detection-rules#required-columns-in-the-query-results
You can now easily convert this query into a custom detection rule
Fig. Creating custom detection rule
Complete the fields that will show up in Alerts
Fig. Privileged pod alert details
Select the impacted entity, in our case the DeviceId (aka AzureResourceId).
Fig. Identity impacted by the alert
Choose the action
Fig. Action to be taken upon alert
Lastly, create your alert
Fig. Alert created
Once created the rule will appear in your custom detection rules in XDR portal
Fig. Custom detection rules
Call to action
As you saw in this two-part series, unlike with many third-party solutions, you can maximize your existing investment in Microsoft’s security ecosystem to conduct deeper hunts using the tools that you already know.
Our suggestion is to:
Engage your platform engineering team to identify the riskiest Kubernetes actions that are relevant to your environment. For example, if you are using distroless images, exec won’t make much sense.
Provide your SOC and Red Team the XDR Portal and Defender for Cloud integration documentation so they can start leveraging the starter queries and customize to your environment
Review your third-party Kubernetes security solutions to see whether the value you are getting from features that are not available natively is worth the investment. In our discussions with dozens of customers, native proves to be the most direct, cost-efficient, and easiest to use and manage in the longer term.
Moreover, if you have not thought about going Native-First in your security strategy reevaluate the choice and talk to your Security Seller about the latest roadmap of Native Security Services
Remember, a lot of on-premises security challenges arose from reliance on “best of breed”. That strategy does not translate well to the cloud. Share the following Native-First security resources with your decision makers:
https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/native-first-cloud-security-approach/ba-p/4102367
https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/unleashing-the-power-of-microsoft-defender-for-cloud-unique/ba-p/4102392
Microsoft Tech Community – Latest Blogs –Read More
Azure Firewall and WAF integrations in Microsoft Copilot for Security
Azure Firewall and WAF are critical security services that many Microsoft Azure customers use to protect their network and applications from threats and attacks. Azure Firewall is a fully managed, cloud-native network security service that safeguards your Azure resources. It ensures high availability and scalability while filtering both inbound and outbound traffic, catching threats and only allowing legitimate traffic. Azure WAF is a cloud-native service that protects your web applications from common web-hacking techniques such as SQL injection and cross-site scripting. It offers centralized protection for web applications hosted behind Azure Application Gateway and Azure Front Door.
The Azure Firewall integration in Copilot for Security enables analysts to perform detailed investigations of malicious traffic intercepted by the IDPS [Intrusion Detection and Prevention System] feature of their firewalls across their entire fleet. Analysts can use natural language queries in the Copilot for Security standalone experience for threat investigation. With the Azure WAF integration, security and IT teams can operate more efficiently, focusing on high-value tasks. Copilot summarizes data and generates in-depth contextual insights into the WAF threat landscape. Both integrations simplify complex tasks, allowing analysts to ask questions in natural language instead of writing complex KQL queries.
In this blog, we will focus on setting up and leveraging the integration of Network Security services with Copilot for Security for hunting and troubleshooting malicious traffic.
Network Security Capabilities Available today in Copilot:
Azure Firewall:
Retrieve the top IDPS signature hits for an Azure Firewall
Get additional details to enrich the threat profile of an IDPS signature beyond log information
Look for a given IDPS signature across your tenant, subscription or resource group
Generate recommendations to secure your environment using Azure Firewall’s IDPS feature
Azure WAF:
Retrieve contextual details about WAF detections and the top rules triggered
Retrieve the top malicious IPs in the environment along with related WAF rules and patterns triggering the attack
Get information on SQL Injection attacks blocked by Azure WAF
Get information on XSS attacks blocked by Azure WAF
Prerequisites for enabling the integration:
In case you haven’t used Copilot for Security for other products, you need to onboard to Copilot for Security by following the process below:
Provision Capacity
This can be done through either signing in to Copilot for Security (https://securitycopilot.microsoft.com) or through the Azure Portal, as shown below:
More details about the detailed setup process for Copilot for Security can be found here.
The details around pricing for Copilot for Security can be found here.
Setup the default environment using the instructions mentioned here.
Enable Plugins:
For Firewall, only the plugin needs to be enabled as shown in the image below.
For WAF, along with enabling the plugin, we also need to ensure the WAF Log Analytics workspace name, Log Analytics resource group name and Log Analytics subscription ID are configured.
Once the Security Compute Units (SCUs) are provisioned as specified, the Azure WAF and Firewall logs are present in the Azure Log Analytics workspace, and the respective plugins are enabled, the capabilities will be ready for use.
Investigation of Threats in Azure Firewall using Copilot for Security:
Retrieving IDPS hits in Azure Firewall using Natural Language prompts:
Get additional details to enrich the threat profile of an IDPS signature beyond log information
Look for a given IDPS signature across your tenant, subscription or resource group
Investigation of Threats in Azure WAF using Copilot for Security:
Retrieve contextual details, top IP offenders and WAF rule matches using Natural Language prompts
Here, Regional WAF refers to App Gateway WAF and Global WAF refers to Front Door WAF.
Get information on SQL Injection attacks blocked by Azure WAF
Get information on XSS attacks blocked by Azure WAF
Recommendations for Network Security:
Copilot for Security also provides recommendations on using Azure Firewall’s capabilities to secure your environment as shown below:
For more details on all the available prompts that can be used with this integration, refer to the respective documentation here for Firewall and WAF.
Integrating Microsoft Azure’s robust network security services with Copilot for Security offers a powerful solution for enhancing your security posture. By leveraging Azure Firewall and Azure Web Application Firewall (WAF) within Copilot, security analysts can efficiently investigate and mitigate threats using natural language queries. This integration not only simplifies complex security tasks but also provides comprehensive protection for your applications and data, allowing your security and IT teams to focus on high-value activities.
Microsoft Tech Community – Latest Blogs –Read More
Impact of Gripper’s Roll Angle on Reachable Poses for UR5e Robot
When I change the roll angle of the gripper, as demonstrated in my example code, the number of reachable poses varies for each roll angle. I’ve tested this with the same number of reference bodies (bodyName). The results were 411, 540, 513, and 547 reachable poses for different roll angles. I understand that this variation arises because each roll angle results in a different final configuration for the robot, affecting the GIK (Generalized Inverse Kinematics) solution. However, for a UR5e robot, this variation should not occur in reality, right? In practical use, can the UR5e achieve all 547 (assuming it’s the maximum it’s capable of reaching in this case) reachable poses for each roll angle?
for orientationIdx = 1:size(orientationsToTest,1)
for rollIdx = 1:numRollAngles
orientationsToTest(:,3) = rollAngles(rollIdx);
currentOrientation = orientationsToTest(orientationIdx,:);
targetPose = constraintPoseTarget(gripper);
targetPose.ReferenceBody = bodyName; %reference body
targetPose.TargetTransform = trvec2tform([0 0 0]) * eul2tform(currentOrientation,"XYZ");
[qWaypoints(2,:),solutionInfo] = gik_Pick(q0,targetPose);
end
end
matlab MATLAB Answers — New Questions
Conv2d, fully connected layers, and regression – number of predictions and number of channels mismatch
Hello! I’m trying to get a CNN up and running and I think I’m almost there, but I’m still running into a few errors. What I would like is to have a series of 1D convolutions with a featureInputLayer, but those throw the following error:
Caused by:
Layer ‘conv1d1’: Input data must have one spatial dimension only, one temporal dimension only, or one of each. Instead, it
has 0 spatial dimensions and 0 temporal dimensions.
According to https://www.mathworks.com/matlabcentral/answers/1747170-error-on-convolutional-layer-s-input-data-has-0-spatial-dimensions-and-0-temporal-dimensions the workaround is to reformat the CNN to a conv2d using N x 1 "images." So, I’ve tried that and now I have a new and interesting problem:
Error using trainnet (line 46)
Number of channels in predictions (3) must match the number of channels in the targets (1).
Error in convNet_1_edits (line 97)
[trainedNet, trainInfo] = trainnet(masterTrain, net, 'mse', options);
This problem has been approached several times before (https://www.mathworks.com/matlabcentral/answers/2123216-error-in-deep-learning-classification-code/?s_tid=ans_lp_feed_leaf and others) but none of them that I’ve found have used fully connected layers. For reference, my CNN is the following:
layers = [
    imageInputLayer(nFeatures, "name", "input");
    convolution2dLayer(f1Size, numFilters1, 'padding', 'same', ...
        "name", "conv1")
    batchNormalizationLayer();
    reluLayer();
    convolution2dLayer(f1Size, numFilters2, "padding", "same", ...
        'numchannels', numFilters1, 'name', 'conv2')
    batchNormalizationLayer();
    reluLayer();
    maxPooling2dLayer([1, 3]);
    convolution2dLayer(f1Size, numFilters3, "padding", "same", ...
        'numchannels', numFilters2, 'name', 'conv3')
    batchNormalizationLayer();
    reluLayer();
    maxPooling2dLayer([1, 5]);
    fullyConnectedLayer(60, 'name', 'fc1')
    reluLayer()
    fullyConnectedLayer(30, 'name', 'fc2')
    reluLayer()
    fullyConnectedLayer(15, 'name', 'fc3')
    reluLayer()
    fullyConnectedLayer(3, 'name', 'fc4')
    % regressionLayer()
];
net = dlnetwork;
net = addLayers(net, layers);
And I am using trainnetwork and a datastore. The output of read(ds) produces the following:
read(masterTest)
ans =
1×4 cell array
{1×1341 double} {[0.6500]} {[6.8000e-07]} {[0.0250]}
Where I have a 1 x 1341 set of features being used to predict three outputs. I thought the three neurons in my final fully connected layer would be the regression outputs, but there seems to be a mismatch in the number of predictions and number of targets. How can I align the number of predictions and targets when using regression in FC layers?
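Not an official fix, but a hedged sketch of the usual way to make the shapes line up: combine the three scalar responses returned by the datastore into a single three-element response per observation, so the single 3-neuron output sees three channels in the targets (names follow the question; the exact response shape the input layer expects may still need tweaking):
% masterTrain reads a 1x4 cell {features, t1, t2, t3}; merge the last three cells into one response
masterTrainT = transform(masterTrain, @(d) {d{1}, [d{2}, d{3}, d{4}]});
[trainedNet, trainInfo] = trainnet(masterTrainT, net, "mse", options);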
convolution, cnn, regression, trainnet MATLAB Answers — New Questions
What happened to the figure toolbar? Why is it an axes toolbar? How can I put the buttons back?
From R2018b onwards, tools such as the zoom, pan, datatip, etc. are no longer in the toolbar at the top of the figure window. These buttons are now in an "axes" toolbar and only appear when you hover your mouse over the plot. How do I put the buttons back at the top of the figure window?
figure, toolbar, axes, missing MATLAB Answers — New Questions
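For reference, MATLAB ships a function for exactly this (available since the R2018b toolbar change); a minimal sketch:
% Restore the zoom/pan/datatip buttons on the figure toolbar for one figure
fig = figure;
plot(1:10)
addToolbarExplorationButtons(fig)

% Or restore them for every new figure by default
set(groot, 'defaultFigureCreateFcn', 'addToolbarExplorationButtons(gcf)')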