Scalable Container Orchestration Without Kubernetes

Organizations want to deploy and orchestrate container workloads without having to hire an entire team to manage them. Sure, engineers will always need to keep the clusters, environment, and code running as effectively as possible, but they shouldn't have to babysit a Kubernetes cluster.

In this blog post, I break down one method of container orchestration with Azure Container Apps.

💡
If you're into AWS and/or GCP, AWS Elastic Container Service (ECS) and GCP Cloud Run are similar services to Azure Container Apps.

Prerequisites

To follow along from a hands-on perspective, you'll need an Azure account. If you don't have an Azure account, that's totally fine! You can still follow along to understand how Azure Container Apps works.

Why Azure Container Apps

Azure Container Apps (ACA) is a great middle ground between App Services and Azure Kubernetes Service (AKS). Its goal is to give you a scalable, performant way to orchestrate containers without having to implement Kubernetes yourself.

💡
With AKS Automatic in the picture, it'll be interesting to see whether ACA sticks around.

ACA uses AKS under the hood, so it's still doing the "Kubernetes thing" without you having to manage the Kubernetes thing.

As with all Azure services, it integrates well with other parts of Azure you may want to incorporate, like Entra ID for authentication and authorization. As for deploying workloads to ACA, you can use the Azure portal, the Azure CLI, Azure Bicep, and Terraform, which means you have plenty of repeatable processes you can implement with the tools/platforms you're already using.

Aside from the implementation methods and abstraction highlighted above, the scale-to-zero option is something that really differentiates ACA. Scale to zero lets you scale replicas all the way down when they aren't being used, which is great for cost optimization, and that's very important to the enterprise.
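Scale to zero is just a replica setting under the hood. Here's a minimal sketch with the Azure CLI (covered in more detail later in this post); the app and resource group names are placeholders:

# Placeholder names; --min-replicas 0 lets the app scale down to zero when idle,
# and ACA scales it back out (up to --max-replicas) as requests come in.
az containerapp update --name my-app \
--resource-group my-rg \
--min-replicas 0 \
--max-replicas 5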

ACA is a great option if you want to move away from heavier orchestration platforms but still need comparable performance and scalability.

ACA Cons

As with any service, product, tool, or just about anything in tech, there are cons. No service or implementation is perfect, and it's important to understand where things may not work entirely for you.

  1. Lock-in: with ACA, you give up a fair amount of customization. For example, you can use ingress, but you can't choose the ingress controller behind it.
  2. Limited metrics: you don't get the breadth of metrics you would from a cluster you manage yourself.
  3. Some security features you may want, like Kubernetes SecurityContexts, aren't available.

You're giving up control for abstraction.

Deploying Container Apps From The Azure UI

Now that you know a bit about ACA and why you may want to use it, let's break down a few key methods for deploying workloads to and with ACA. You'll first see how to deploy a container and then you'll see how to deploy to ACA right from source code.

Container Deployment

  1. Log into the Azure portal and search for Container Apps.
  2. Click the + Create button and then choose the + Container App option.
  3. Input your subscription name and resource group along with an app name of your choosing. You'll also need to choose Container image for the Deployment source.
  4. Create a new ACA environment (it's where ACA runs; you don't have to manage it) and choose the region where you want the environment deployed.
  5. Input the information about your container, including the registry it's pulled from, the name, and the image/tag itself.

If you don't have a container image readily available, you can choose the Use quickstart image option.

  6. Within the Container resource allocation section, choose the amount of CPU and memory your application needs.
💡
There's a Serverless GPU option as well, but it hasn't been activated on my account yet. If you see that option, you can use a GPU within your container.
  7. You can also enable ingress to control how traffic reaches your application (ACA's managed ingress terminates TLS for you).
  8. Once ready, click the Create button.

In the next section, you'll learn how to deploy source code to ACA.

Code Deployment

Luckily, the code deployment option isn't much different in terms of what you'll select; the main change is the Deployment source.

  1. Within Deployment source, choose Source code or artifact.
  2. Specify your Git org, repo, and branch. You can also choose to upload an artifact, which is a preview feature in ACA. (A CLI alternative is sketched below.)
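For the terminal-inclined, az containerapp up can build and deploy straight from source as well. A minimal sketch, assuming your application source lives in the current directory and using placeholder names:

# Placeholder app/resource group names; --source points at a local folder to build and deploy.
az containerapp up --name pyweb \
--resource-group devrelasaservice \
--source .

There's also a --repo flag for pointing az containerapp up at a GitHub repository, which is the CLI counterpart of the portal's source code option.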

Congrats! You've successfully learned how to deploy to ACA manually. In the next section, you'll learn how to do the same, but programmatically.

Programmatically Deploy To ACA

In the previous section, you learned from a graphical perspective how to deploy to ACA. In this section, you'll learn how to deploy in a programmatic and repeatable fashion with the Azure CLI and Terraform.

💡
You can use Azure Bicep to deploy to ACA as well.

Azure CLI

  1. Add the containerapp extension.
az extension add --name containerapp --upgrade
  2. Register the Microsoft.App provider.
az provider register --namespace Microsoft.App
  3. Register the Microsoft.OperationalInsights provider.
az provider register --namespace Microsoft.OperationalInsights
  4. Create the container app with your specified metadata.
az containerapp up --name pyweb \
--resource-group devrelasaservice \
--location eastus \
--environment 'my-container-apps' \
--image pyweb:latest \
--target-port 80 \
--ingress external

Luckily, the AZ CLI method is quite quick!
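Once the command finishes, you can pull the app's public URL (FQDN) to confirm it's reachable; the name and resource group here assume the values from the command above:

az containerapp show --name pyweb \
--resource-group devrelasaservice \
--query properties.configuration.ingress.fqdn \
--output tsv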

Terraform Config

If you need another form of automation that may align more with how you're deploying workloads in production, you can use Terraform.

  1. Create a Log Analytics workspace.
resource "azurerm_log_analytics_workspace" "log-deploy" {
  name                = var.logAnalyticsName
  location            = var.region
  resource_group_name = var.rg
  sku                 = "PerGB2018"
  retention_in_days   = 30
}
  2. Create the ACA environment.
resource "azurerm_container_app_environment" "test-deploy" {
  name                       = var.envName
  location                   = var.region
  resource_group_name        = var.rg
  log_analytics_workspace_id = azurerm_log_analytics_workspace.log-deploy.id
}
  3. Create the container app itself. Ensure that you specify the right metadata (container image, name, etc.).
resource "azurerm_container_app" "test-deploy" {
  name                         = var.appName
  container_app_environment_id = azurerm_container_app_environment.test-deploy.id
  resource_group_name          = var.rg
  revision_mode                = "Single"

  template {
    container {
      name   = "nginxcontainerapp"
      image  = "nginx:latest"
      cpu    = 0.25
      memory = "0.5Gi"
    }
  }
}

To make the Terraform process as efficient as possible, declare these values in a variables.tf file. That way, you aren't hard-coding them into main.tf.
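As a rough sketch, assuming the var.* names referenced above are declared in variables.tf, you can then supply the values at plan/apply time (or drop them into a terraform.tfvars file); the values below are placeholders:

# Placeholder values; the variable names mirror the var.* references in the config above.
terraform init
terraform plan \
-var="rg=devrelasaservice" \
-var="region=eastus" \
-var="logAnalyticsName=aca-logs" \
-var="envName=my-container-apps" \
-var="appName=nginxcontainerapp" \
-out=aca.tfplan
terraform apply aca.tfplan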