Thursday, May 11, 2023

I just want to run a container...

 ...is a developer-centric point of view?

A generic data center image, to exemplify a place many of us have never set foot in.

After a few hours spent upgrading the toolchain that starts from Terraform, goes through a few AWS-maintained modules and reaches the Elastic Kubernetes Service APIs, my team was entertaining the thought of how difficult it is to set up infrastructure. Infrastructure that, for this use case, takes a container image of a web application that Docker generated and runs it somewhere for users to access.

Thinking this through, either Kubernetes is the new IBM (no one ever gets fired for choosing it), or there is more to a production web application than running a container. Running containers is, after all, what Kubernetes is often sold as: a tool to run containers in the cloud without having to set up specific virtual machines, treating them instead as anonymous, expendable nodes where the aforementioned containers can be tightly packed, sharing CPU, memory and other resources.

What exactly is the infrastructure doing for us here? It takes care of a number of operational concerns, the other side of that magic DevOps collaboration. For example:

  • the container image exposes an HTTP service on port 80. That is not enough for a modern browser and its padlock icon, so to achieve a secure connection a combination of DNS and Let's Encrypt generates and automatically renews certificates after verifying proof of ownership of our domain name (a sketch of this setup follows the list).
  • the container image produces logs in the form of JSON lines. Through part of the Grafana stack, these lines are annotated with a timestamp, the container that generated them and various labels, such as the environment or the name of a component of the application (think prod or staging for the environment, frontend for a component). After these lines are indexed, further software lets us query them, zooming in on problems or prioritizing errors during an investigation.
  • if we get too many of these log lines at the error level for a particular message, we'd like to receive an alert email that triggers us into action (see the alerting rule sketched after the list).
  • invariably, applications benefit from scheduled processes that run periodically and independently of the request/response lifecycle. It's such a useful architectural pattern that Kubernetes even named a resource after the cron daemon, introduced in 1975 (a CronJob sketch also follows the list).
  • it's also very useful for applications to maintain state and store new rows in a relational database. Provided turnkey by every cloud, a set of hostname, username and password allows access from any combination of programming language and library. No need to worry about rotating the logs of Postgres anymore, or about running out of space.
  • the data contained in the application and generated by users also lends itself to timely analysis, so that we know whether to kill a feature or to invest in it. This means somehow extracting recent updates (or whole tables) into a data pipeline that transforms them into something a data scientist can analyze, all with acceptable speed.
  • Those database credentials nevertheless need to be provided to the application itself, hopefully without accidentally disclosing them in chats, logs, or emails. No human should need to enter them into an interface (the Deployment sketch after the list shows one way they can reach the container).
  • Configuration is slightly easier to manage than credentials, since there are no sensitive values, but we'd still like the ability to generate some of these values, such as a canonical URL, or to pass in different numbers for different environments.
  • And of course, whenever we commit, we'd like to deploy the change and start the latest version of our application, check that it is responding correctly to requests, and only then stop the old, still-running one (the rolling update in the Deployment sketch below covers exactly this).
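
To make the first point concrete, here is a minimal sketch of how the TLS side can look on Kubernetes, assuming cert-manager is installed and a ClusterIssuer called letsencrypt-prod exists; the hostname, Service name and Secret name are made up for illustration:

    # Sketch: an Ingress that asks cert-manager to obtain and renew a
    # Let's Encrypt certificate for the (hypothetical) hostname below.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: webapp
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod  # assumes this ClusterIssuer exists
    spec:
      tls:
        - hosts:
            - app.example.com
          secretName: webapp-tls          # cert-manager stores the certificate here
      rules:
        - host: app.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: webapp          # the Service in front of the container on port 80
                    port:
                      number: 80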
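
For the logs and the alert email, assuming the Grafana-stack piece in question is Loki with its ruler enabled, a rule along these lines would notify us when error-level lines from a (hypothetical) frontend component spike; the labels and threshold are illustrative:

    # Sketch: a Loki ruler rule that alerts when error-level JSON log lines
    # from the frontend component in prod exceed a threshold.
    groups:
      - name: webapp-logs
        rules:
          - alert: FrontendErrorRate
            expr: 'sum(rate({app="frontend", env="prod"} | json | level="error" [5m])) > 1'
            for: 10m
            labels:
              severity: warning
            annotations:
              summary: "frontend in prod is logging errors faster than usual"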
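
The scheduled processes map directly onto the CronJob resource. A sketch, with a made-up cleanup task reusing the same application image:

    # Sketch: a Kubernetes CronJob that runs a (hypothetical) cleanup task
    # nightly, outside the request/response lifecycle.
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: nightly-cleanup
    spec:
      schedule: "15 3 * * *"              # 03:15 every day, classic cron syntax
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
                - name: cleanup
                  image: registry.example.com/webapp:latest  # hypothetical image reference
                  args: ["run-cleanup"]                      # hypothetical entrypoint argument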
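
Finally, credentials, configuration and the deployment itself can meet in a single Deployment manifest. The Secret and ConfigMap names below are hypothetical, and the rolling update plus readiness probe is what keeps the old version serving until the new one answers correctly:

    # Sketch: a Deployment that injects credentials from a Secret and plain
    # configuration from a ConfigMap, and only retires old pods once the new
    # version passes its readiness check.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: webapp
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: webapp
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0               # keep the old version serving until the new one is ready
      template:
        metadata:
          labels:
            app: webapp
        spec:
          containers:
            - name: webapp
              image: registry.example.com/webapp:latest  # replaced with the freshly built tag on each commit
              ports:
                - containerPort: 80
              envFrom:
                - secretRef:
                    name: webapp-db       # hypothetical Secret with hostname, username, password
                - configMapRef:
                    name: webapp-config   # hypothetical ConfigMap with the canonical URL and friends
              readinessProbe:
                httpGet:
                  path: /healthz          # hypothetical health endpoint
                  port: 80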

This is all in the context of a single team operating a single product. Part of it is achieved by running off-the-shelf software on top of Kubernetes; part by paid-for cloud services; part by outsourcing to a specialized software as a service.

However, when we think about software development we don't necessarily think about software operations. Many of us started our careers by writing code, testing it and releasing it, without a further look at what happened afterwards. DevOps as a philosophy meant bridging that gap; "You build it, you run it" is the easiest summary for me.

Yet what I see, and what I hear from experienced people, is that silos in infrastructure seriously slow teams down and create fears of insurmountable problems. Abandoned projects, "finished" from a development point of view, end up with unclear ownership while still running in production and supporting the main revenue stream of an organization.

So I don't want to just run a container. I'd rather deploy many incrementally improved versions of a container, monitor its traffic and user activity, and close the feedback loop that links usage to the next development decision.

In other words: I can always build simple code if it doesn't have to run in production.
