General Availability of Kubernetes Event-Driven Autoscaler (KEDA) in Azure Portal
Kubernetes Event-Driven Autoscaler (KEDA) is an open-source, lightweight component that autoscales container workloads based on events from external event sources. KEDA extends the native Kubernetes Horizontal Pod Autoscaler (HPA) with a wide variety of scalers and scale-to-zero capabilities, allowing applications to meet demand in a more sustainable and cost-efficient way.
Today, we are excited to announce that the Azure Portal now supports KEDA scaling with the memory, CPU, cron, and Azure Service Bus scalers. Users can now easily create ScaledObjects and monitor their ScaledObjects and ScaledJobs directly in the Portal, and for Azure Service Bus, the Portal handles the deployment and configuration of workload identity. This streamlines the creation and management of KEDA resources.
What is KEDA and why use it?
KEDA is a Kubernetes-based event-driven autoscaler that can scale applications in Kubernetes based on metrics from external scalers. KEDA also supports scaling workloads to zero when there are no events to process and scaling them back up when events occur. This way, users can optimize their resource utilization and reduce their costs.
KEDA works through custom resources in Kubernetes, such as ScaledObject, that define the workload to scale, the scaling logic, and the external scaler to use. KEDA then creates an HPA for each ScaledObject and monitors the metrics reported by the scaler to adjust the number of pods accordingly. KEDA also supports scaling on metrics from the Kubernetes metrics server, such as memory and CPU usage.
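To illustrate what such a resource looks like, here is a minimal ScaledObject manifest of the kind the Portal creates on the user's behalf. The Deployment name my-app, the namespace, and the 60% CPU target are placeholder values for this sketch, not Portal defaults.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-cpu-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: my-app              # placeholder: the Deployment to scale
  minReplicaCount: 1          # the CPU and memory scalers do not scale to zero
  maxReplicaCount: 10
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "60"           # target average CPU utilization (%)
```

KEDA translates this ScaledObject into an HPA that keeps average CPU utilization near the target by adding or removing replicas within the configured bounds.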
How to use KEDA in the Azure Portal
The Azure Portal, a web-based interface for managing Azure resources and services, now supports creating KEDA ScaledObjects and monitoring ScaledObjects and ScaledJobs in just a few clicks. Users can also view the status and current metrics of their KEDA resources, as well as all KEDA events and warnings.
To scale with KEDA through the Azure Portal, users need an Azure subscription and an AKS cluster with the KEDA add-on enabled. Users can then follow these steps:
1. Navigate to the Workloads tab, select a workload, and select “Scale” from the top menu, or select the “Application scaling” tab from the left menu.
2. Enable the AKS KEDA add-on if not already enabled.
3. Click the “Create” button to create a new ScaledObject.
4. Fill in the required fields, such as the name, namespace, workload type, workload name, and trigger details. Note that autoscaling Kubernetes system deployments is not recommended and is therefore disabled in the Portal experience.
5. Click the “Create” button to deploy the KEDA resource. For the Azure Service Bus scaler, the Portal will automatically create and assign a managed identity to the workload and grant it the necessary permissions to access the Service Bus namespace and queue or topic; the equivalent manifests are sketched after these steps.
6. Go back to the “Application scaling” tab and select the resource to view its details, such as the current and desired metric values and the current replica count.
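For reference, the following is a rough sketch of what the Portal-managed Azure Service Bus setup corresponds to in KEDA terms: a TriggerAuthentication that uses workload identity and a ScaledObject that references it. The queue name orders, the Service Bus namespace my-sb-namespace, and the workload name my-app are hypothetical placeholders; the resources the Portal actually creates may be named and configured differently.

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: servicebus-auth
  namespace: default
spec:
  podIdentity:
    provider: azure-workload          # authenticate via workload identity
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-servicebus-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: my-app                      # placeholder: the Deployment to scale
  minReplicaCount: 0                  # scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: orders             # placeholder queue name
        namespace: my-sb-namespace    # placeholder Service Bus namespace
        messageCount: "5"             # target unprocessed messages per replica
      authenticationRef:
        name: servicebus-auth
```

Because this trigger supports scale-to-zero, the workload can be removed entirely while the queue is empty and brought back as soon as messages arrive.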
Conclusion
KEDA in the Azure Portal makes it easy and convenient for users to create and monitor KEDA resources that scale container workloads in response to events. Users can now enjoy the benefits of KEDA and the Azure Portal to run their event-driven applications in a scalable and cost-effective way.