4 steps to Application Modernisation with Microsoft Container Apps.

Containerising monolithic applications made easier with Microsoft Container Apps.

In a previous post, we discussed why Microsoft Container Apps is a game changer and the benefits it offers for running container-based workloads in the cloud. Now that Container Apps are generally available, we were quick to jump on board and migrate one of our existing App Service Web Apps, so we can share our tips and experience in getting an application into Container Apps.

Depiction: Containerising monolithic applications

Getting your application into Container Apps

There are three main pathways to running your application workloads in Container Apps.

Your application is:

  1. Greenfield, and you plan to use containers
  2. Brownfield, and you’re already using containers
  3. Brownfield, and the app is not containerised (hosted on a VM, on a physical server in a data centre, or possibly even in an App Service in Azure).

In all but the third case, adoption of Container Apps should be straightforward.

In each case you’ll need to ensure your application targets Linux-based containers, as Windows containers are not supported.

If your application currently runs on IIS or in a Windows container, you’ll need to consider migrating your application to .NET Core or some other Linux-friendly platform prior to containerisation.

Container Apps are not opinionated about your choice of language – you can use any language of your choosing, provided it runs in a Linux container, e.g., .NET Core, Node.js, Python, PHP, Ruby, Go, Rust, Scala, etc.

Once you have a containerised application:

  1. Create a Container App Environment in your subscription.
  2. Build your container image and push it to a registry (Azure Container Registry, Docker Hub, etc.)
  3. Deploy your application into the Container App Environment – via the CLI, ARM/Bicep, Terraform, etc.
  4. Enable external ingress so you can access your API endpoints.
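Assuming you have the Azure CLI with the Container Apps extension installed, the four steps above can be sketched roughly as follows. The resource group, environment, registry and image names are placeholders, not real resources:

```shell
# 1. Create a Container App Environment in your subscription
az containerapp env create \
  --name my-environment \
  --resource-group my-rg \
  --location australiaeast

# 2. Build the container image and push it to Azure Container Registry
#    (builds from the Dockerfile in the current directory)
az acr build \
  --registry myregistry \
  --image my-api:1.0.0 .

# 3. Deploy the application into the environment, and
# 4. enable external ingress so the API endpoints are reachable
az containerapp create \
  --name my-api \
  --resource-group my-rg \
  --environment my-environment \
  --image myregistry.azurecr.io/my-api:1.0.0 \
  --target-port 80 \
  --ingress external
```

The `create` command prints the generated fully qualified domain name, which you can hit immediately to verify the deployment.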

Of course, this is the simplest possible approach. You’ll need a domain and SSL certificate if you want to expose the application on a vanity URL. You’ll also need a Landing Zone.

Playtime Solutions - 4 Steps Container App Migration with Microsoft Container Apps

Deploy your application into a Landing Zone

Landing zones are fundamental to any cloud-based solution, and include:

  • Networking and security
  • Logging and monitoring
  • Shared services
  • Access controls
  • Build and deployment pipelines
  • Configuration and secrets management

Your organisation will require a landing zone to deploy your application environments. Depending on your organisational complexity, this can vary from a ‘ClickOps’ environment created manually in the Azure portal, to an enterprise-grade solution following best practice DevOps and DevSecOps principles and governance.

Landing zones are an in-depth topic, so we’ll cover the details in another article.

For now, just assume that your landing zone will have the resources you need to run your application – Container Registry, Key Vault, Container App Environment, etc.
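As an illustration only, a minimal set of those landing-zone resources could be created with the Azure CLI like this. Names and SKUs are placeholders; this is the 'ClickOps'-adjacent end of the spectrum, not an enterprise-grade landing zone:

```shell
# Shared landing-zone resources (illustrative names only)
az group create --name lz-rg --location australiaeast

# Container Registry for application images
az acr create --name lzregistry --resource-group lz-rg --sku Basic

# Key Vault for secrets management
az keyvault create --name lz-keyvault --resource-group lz-rg

# Log Analytics workspace for logging and monitoring
az monitor log-analytics workspace create \
  --workspace-name lz-logs --resource-group lz-rg

# Container App Environment to host the application workloads
az containerapp env create \
  --name lz-environment --resource-group lz-rg --location australiaeast
```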

Benefits of adopting Container Apps

Container orchestration

Kubernetes has long been the tool of choice for orchestrating containers at scale in microservice-based architectures. However, Kubernetes brings considerable overhead in terms of developer and operational capability. Container Apps are implemented on top of Kubernetes, and abstract away the most challenging aspects so you can focus on your business, not your cluster.


Networking

Your applications always run in a VNET and are isolated from other workloads by default; Microsoft provides this isolation automatically if you don't bring your own networking. If you already have a VNET, integration with your existing network security and service/private endpoints is straightforward.
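If you do bring your own networking, the environment can be created inside an existing subnet. A hedged sketch with the Azure CLI (resource names are placeholders, and the subnet must meet the size and delegation requirements in the Azure documentation):

```shell
# Look up the resource ID of an existing subnet reserved for Container Apps
# (VNET and subnet names are hypothetical)
SUBNET_ID=$(az network vnet subnet show \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name containerapps-subnet \
  --query id -o tsv)

# Create the Container App Environment inside that subnet
az containerapp env create \
  --name my-environment \
  --resource-group my-rg \
  --location australiaeast \
  --infrastructure-subnet-resource-id "$SUBNET_ID"
```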


Autoscaling

Kubernetes Event-driven Autoscaling (KEDA) is built into Container Apps and provides scaling rules out of the box. Using common metrics such as concurrent requests, CPU and memory utilisation, and queue lengths, your application can scale out dynamically, and even scale down to zero. Scaling to zero can drastically reduce your Azure consumption spend and is a compelling feature.
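As a sketch, a scale-to-zero rule based on concurrent HTTP requests might look like this with the Azure CLI. Flag names are per the containerapp extension at the time of writing, and the app name and thresholds are illustrative:

```shell
# Scale between 0 and 10 replicas, driven by concurrent HTTP requests.
# With --min-replicas 0 the app scales to zero when idle (no compute cost).
az containerapp update \
  --name my-api \
  --resource-group my-rg \
  --min-replicas 0 \
  --max-replicas 10 \
  --scale-rule-name http-concurrency \
  --scale-rule-type http \
  --scale-rule-http-concurrency 50
```

Note that a scaled-to-zero app incurs a cold-start delay on the first request, so zero is not the right minimum for every workload.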


Dapr integration

Dapr is an application runtime that provides a rich set of components common to microservice-style architectures, including service discovery, service mesh, mTLS, persistence, messaging, and observability. Dapr is built into the fabric of Container Apps, and you can opt in to using it. Along with handling many common cross-cutting concerns, Dapr provides a local development runtime, reducing developer friction and helping facilitate dev/remote parity.
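Opting in to Dapr happens at deployment time. A minimal Azure CLI sketch, assuming a hypothetical my-api container listening on port 80:

```shell
# Create the app with the Dapr sidecar enabled
# (app ID and port are placeholders for your own service)
az containerapp create \
  --name my-api \
  --resource-group my-rg \
  --environment my-environment \
  --image myregistry.azurecr.io/my-api:1.0.0 \
  --enable-dapr \
  --dapr-app-id my-api \
  --dapr-app-port 80
```

Other apps in the same environment can then reach this service through the Dapr service-invocation building block using the `my-api` app ID, rather than hard-coded addresses.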


Based on our hands-on experience running Container Apps in a live environment, we believe this technology is the future of running containerised workloads in Azure. That's a bold statement, but from what we can see, Container Apps cover all the bases for simple web apps that would typically have been deployed to an App Service, and supersede Container Instances. Additionally, this service vastly simplifies running and orchestrating complex distributed applications that would otherwise require Kubernetes.

The key takeaways are:

  • App Services are no longer our recommended hosting solution. From single-application deployments to complex microservice-based architectures, we'll be suggesting that our clients adopt this technology for both new and existing applications.
  • Kubernetes is not required for most containerised use cases, unless an organisation specifically requires it. Container Apps provide a rich platform for container orchestration on top of Kubernetes, while vastly reducing the operational overhead and complexity of cluster management.
  • Migrating already-containerised workloads requires minimal effort. Windows-based applications are the exception: you'll need to port legacy IIS-based applications to .NET Core and run them in a Linux container.
  • Externalise configuration such as app settings and secrets into environment-specific configuration files and key vaults. Inject configuration into each environment and use the same container image for all environments, from DEV through to PROD. This is general best practice, not specific to Container Apps.
  • Use Infrastructure as Code (IaC) and deployment automation from the start of your journey. We used ARM/Bicep to automate resource creation; if you prefer Terraform, Pulumi, etc., that's also fine.
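To illustrate the configuration-externalisation point: one image can be promoted through every environment while the settings and secrets are injected per environment. A hedged Azure CLI sketch, where the names, secret value and variable keys are all placeholders:

```shell
# Same container image in every environment; only injected settings differ.
# db-connection would typically be sourced from the environment's Key Vault.
az containerapp create \
  --name my-api \
  --resource-group my-rg-dev \
  --environment my-environment-dev \
  --image myregistry.azurecr.io/my-api:1.0.0 \
  --secrets db-connection="<connection-string-from-key-vault>" \
  --env-vars ASPNETCORE_ENVIRONMENT=Development \
             ConnectionStrings__Db=secretref:db-connection
```

Promoting to TEST or PROD then means re-running the same command against that environment's resource group with its own secret values, never rebuilding the image.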

In part 2 of the series, we'll go through a step-by-step example of deploying a .NET Core application into a Container App.

Playtime Solutions' AKS Best Practice Guide

A 23-page best-practice checklist, leveraging Playtime Solutions' hands-on experience in designing, developing and delivering enterprise-grade applications. This guide assists IT and DevOps professionals in creating an enterprise-grade Kubernetes environment in Microsoft AKS.
