- implement Infrastructure as Code for these CDNs inside the GitHub repositories we already use for all other cloud resources (AWS and GCP)
- authorize HTTPS termination on the CDN side, since the CDN will effectively be impersonating your origin servers
- create instances of the same CDN services, first in testing and then in production environments, keeping them in parity with each other
- expand end-to-end testing (the tip of the pyramid) to also cover the CDNs rather than just the applications involved
- integrate logging in order to catch any problem happening between the user and the origin servers
- finally phase in the new CDNs with new geotagged DNS entries
Infrastructure as Code
Our AWS-based setup makes heavy use of CloudFormation, the native service for declaratively specifying resources such as servers, load balancers and disks. The simple setup has been augmented over the years with a code generation layer for the CloudFormation templates; this Python code reduces duplication between the various templates by starting from standard EC2/ELB/EBS resources that can be customized in size and other parameters.

If we start from a simple single-server setup for a microservice (this was before Docker containers got stable enough), we are looking at a template containing at least an EC2 instance and a DNS entry pointing to it. With multiple servers, we expand this with a load balancer that pulls in a TLS certificate provided to IAM by an administrator.
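As a minimal sketch of what this kind of code generation can look like (not the actual tool; resource names, the AMI and all defaults are placeholders), the single-server case could be rendered as plain dictionaries and serialized into a CloudFormation template:

```python
import json

def single_server_template(service, instance_type="t3.micro", hosted_zone="example.com."):
    """Minimal sketch: a CloudFormation template with one EC2 instance and a
    DNS entry pointing at it. Names, AMI and defaults are illustrative only."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppServer": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "ImageId": "ami-12345678",  # placeholder AMI
                    "InstanceType": instance_type,
                },
            },
            "AppDnsRecord": {
                "Type": "AWS::Route53::RecordSet",
                "Properties": {
                    "HostedZoneName": hosted_zone,
                    "Name": f"{service}.{hosted_zone}",
                    "Type": "A",
                    "TTL": "300",
                    "ResourceRecords": [{"Fn::GetAtt": ["AppServer", "PublicIp"]}],
                },
            },
        },
    }

if __name__ == "__main__":
    print(json.dumps(single_server_template("my-microservice"), indent=2))
```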
To configure CloudFront via CloudFormation, an additional resource for the CDN distribution is introduced. All the configuration you need will be visible in this resource, a JSON object (or YAML mapping) respecting a certain schema.
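Continuing the sketch above, the distribution resource could look roughly like this; only a small subset of the CloudFront properties is shown and all values are illustrative:

```python
# Sketch of the extra resource the generation layer would merge into the
# template's Resources; names and values are illustrative, not real config.
cloudfront_distribution = {
    "AppDistribution": {
        "Type": "AWS::CloudFront::Distribution",
        "Properties": {
            "DistributionConfig": {
                "Enabled": True,
                "Origins": [{
                    "Id": "app-origin",
                    "DomainName": "my-microservice.example.com",
                    "CustomOriginConfig": {"OriginProtocolPolicy": "https-only"},
                }],
                "DefaultCacheBehavior": {
                    "TargetOriginId": "app-origin",
                    "ViewerProtocolPolicy": "redirect-to-https",
                    "ForwardedValues": {"QueryString": True},
                },
            },
        },
    },
}
```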
Since CloudFormation can only manage AWS resources and nothing outside that tended garden, Fastly was the reason for introducing Terraform alongside it. Whereas almost anything AWS-specific still goes through CloudFormation, Terraform has opened up new possibilities such as Infrastructure as Code implementations for Google Cloud Platform (storage buckets and BigQuery tables).
Applying changes in this context is not trivial, as you may inadvertently reboot or destroy a server while believing you were only changing a minor setting. Yet Infrastructure as Code is about making the current state of the infrastructure and all changes to it visible, easy to review and safe to roll out across multiple environments. It is therefore imperative to maintain testing environments created with the same tooling as production, and to use them to integration test all changes.
The caveat of using multiple tools in lockstep for the same instance of a project (including servers, cloud resources and CDNs) is that they can't declare dependencies between resources managed by different tools. For example, since we manage DNS in CloudFormation and Fastly CDNs in Terraform, we can manage both at the same time but can't couple the existence of a DNS entry to the CDN it points to, nor impose a creation or update order different from the general order in which we run the tools.
The most glaring difference between the various options is the time it takes to roll out a CDN configuration change:
- no deployment time if you don't use a CDN (obviously)
- 10s of seconds for Fastly
- 10s of minutes (up to 1 hour was common) for CloudFront
Still, minutes of update and/or creation time make Fastly unsuitable for inclusion in the CI environments where the tests of a single service are run. You could in theory create a Fastly service on the fly when the build of the service runs, but this would add minutes to your build _and_ promote coupling to the CDN itself. Fast forward a bit and you'll find an application that can no longer be run locally for exploration because of the missing CDN layer. Therefore, like other cloud services, the CDN is treated as a long-lived resource, with its regression testing performed in a shared environment on every new application commit, but only after merge.
Logging
Within a web service, you usually have some kind of access log generated by nginx or Apache. These logs can sit on a single server or be shipped to some aggregation point, whether a local Logstash or an external platform that can index them.

Even load balancing doesn't change this picture very much, as the load balancer logs should be identical to those of the application servers if everything is working well. But a CDN introduces large-scale caching, so it's plausible that you will stop seeing a large percentage of your traffic directly. Statistics or monitoring based on access logs may get skewed; or worse, Japan may be cut off from your website for a while because the health checks from the CDN points of presence there have a timeout a few milliseconds too short to reach your servers in us-east-1 (of course this never happened).
Hence, to understand what's going on in those few hundred servers you have no access to, you need a way to stream their logs to some outsourced service; this can be storage as a service (S3 or GCS) or a log infrastructure provider. The latency with which logs land in the right place is a key metric of the feedback loop for changes.
Since we are striving for Infrastructure as Code, all the logging configuration should be kept under version control together with hostnames and caching policies. We settled on a standard logging format (JSON Lines with certain fields) and upload frequency, along with a GCS bucket where new entries are put, with bucket names following a convention. This was later expanded into BigQuery tables providing queries over the same data, once the Terraform Fastly provider started supporting this delivery mechanism.
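As a sketch of what ends up under version control (the fields, the bucket naming convention and the upload period here are all hypothetical), the generation layer can derive the Fastly-to-GCS logging configuration from a few conventions:

```python
import json

# Sketch: derive the GCS logging configuration for a service from naming
# conventions. Fields, bucket convention and format are illustrative only;
# the format directives follow the Apache-style syntax Fastly logging accepts.
LOG_FORMAT = json.dumps({  # JSON Lines: one object per request
    "timestamp": "%{%Y-%m-%dT%H:%M:%S}t",
    "client_ip": "%h",
    "method": "%m",
    "url": "%U",
    "status": "%>s",
    "cache": "%{X-Cache}o",
})

def gcs_logging_config(service, environment, period_seconds=300):
    return {
        "name": f"{service}-{environment}-gcs",
        "bucket_name": f"cdn-logs-{service}-{environment}",  # hypothetical convention
        "path": "fastly/",
        "period": period_seconds,  # how often a new log file is uploaded
        "format": LOG_FORMAT,
    }
```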
The main difficulty in the integration was credentials management: you aren't told much if credentials are incorrect or not authorized to perform certain actions, like writing to BigQuery. Moreover, you can't just commit a bunch of private keys for anyone to see, especially since Infrastructure as Code repositories tend to be made visible to as many people as possible.
We ended up putting GCP credentials and similar secrets in Vault, running on the same server as the Salt master (the equivalent of a Puppet master). The GCP Service Account itself and its permission to write to the bucket needed some special permissions to set up (it's turtles all the way down), so we couldn't put it directly into Infrastructure as Code and had an administrator create it manually instead. The ideal solution would be for Vault to generate credentials by itself, periodically rotating them; but then it would need to push these credentials somehow into the Fastly configuration, and I'm here to provide efficient delivery pipelines, not to make cloud giants wrestle.
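For completeness, reading such a secret back out of Vault could look roughly like this (a sketch assuming the hvac Python client and a hypothetical KV path and field name):

```python
import os

import hvac  # HashiCorp Vault client

# Sketch: fetch the GCP service account key from Vault so it can be injected
# into the Fastly logging configuration. Path and field name are hypothetical.
client = hvac.Client(
    url=os.environ["VAULT_ADDR"],    # e.g. https://vault.internal:8200
    token=os.environ["VAULT_TOKEN"],
)

secret = client.secrets.kv.v2.read_secret_version(path="fastly/gcs-logging")
gcp_key = secret["data"]["data"]["private_key"]  # hypothetical field name
```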
Flexibility
Your own application is usually highly customizable, with a certain cost associated: you have to write some code in your favorite programming language, possibly following some framework conventions and calling your classes Middleware or EventListener.

CDNs work on shared servers, so they place limits on what can be safely run in that sandboxed environment. Nevertheless, Fastly provides the possibility to customize the VCL that runs each service with your own snippets and macros.
This is very flexible, perhaps even too much: you can introduce headers with random values, write conditionals and implement loops by restarting requests. It feels similar to working in nginx configurations but with a more predictable language.
The main problem with this form of customization is that there is no way to run it or test it on your own. The best feedback loop we found is Fastly Fiddle (similar to JSFiddle), where you try out bits of code, hit a save button and see them propagated to servers around the world for you to test.
The fact that this even exists is impressive, but you can imagine how well it works for actual development. Once you get past experimenting, you can't integrate a Fiddle with your own Infrastructure as Code approach (e.g. Terraform templates), nor easily port code from one to the other besides copying and pasting. You can run integration-only tests in some other window, but the feedback loop can't be shorter than the deployment time; unit tests are not a thing. You can't even use your IDE, as much as you may love it. In the end, Fastly's Varnish diverged from the open source one 4 major versions ago; hence, this VCL is effectively a proprietary language, and writing it feels much like writing stored procedures in Oracle's PL/SQL.
I tend to see VCL and other intermediate declarative templates (such as Terraform .tf files) as a generation target for Infrastructure as Code tools to compile to. This lets you unit test that your tools generate a certain output for these templates: feed in dummy inputs and check the expected output. All of this will still need to be integration tested with the application itself in a real environment, but some of the responsibility can be developed in the tool itself and reused across many applications.
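A minimal sketch of such a unit test, assuming a hypothetical `render_fastly_service()` function in the generation tool and an illustrative Terraform JSON structure as its output:

```python
import unittest

# Hypothetical module and function of the generation tool, used for illustration.
from generator import render_fastly_service

class FastlyTemplateTest(unittest.TestCase):
    def test_backend_points_to_origin(self):
        # Dummy inputs in, expected dummy output out: no real CDN involved.
        rendered = render_fastly_service(
            name="my-microservice",
            origin="origin.example.com",
            environment="testing",
        )
        backend = rendered["resource"]["fastly_service_v1"]["my_microservice"]["backend"][0]
        self.assertEqual(backend["address"], "origin.example.com")
        self.assertEqual(backend["port"], 443)

if __name__ == "__main__":
    unittest.main()
```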
Integration testing
We have understood by now that to keep together the ensemble of servers, code, cloud services and CDNs, we need some automated integration testing in place that touches all the different pieces. We don't want many scenarios to be tested at this level, because it's slow and brittle to do so, but we need a tracer bullet that goes through everything, if only to verify that all configurations are correct.

In the general context of outsourcing responsibilities to a service or a library, you still own it as a dependency of your application and still need to verify the emergent behavior of custom code and borrowed architecture.
Therefore, I always put at least a staging environment in place, replicating production, where automated tests can run. This doubles as the place to try and roll out risky infrastructure updates (which ones are risky? If you have to ask, all of them; just roll out everything through staging).
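As a sketch of the tracer bullet that runs there (the hostname is hypothetical, and the header check assumes Fastly's default debug headers are present on responses):

```python
import requests

def test_homepage_is_served_through_the_cdn():
    # Hit the staging hostname that resolves to the CDN, not the origin directly.
    response = requests.get("https://staging.example.com/", timeout=10)
    assert response.status_code == 200
    # Fastly normally adds debug headers such as X-Served-By to responses;
    # their presence tells us the request really went through the CDN.
    assert "X-Served-By" in response.headers
```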
As we have seen, creating too many different, ad-hoc environments to test pull requests doesn't scale; that way lies death by feature branch, with all of your Jenkins nodes waiting for yet one more RDS node or CloudFront distribution to be created.
A common example of a coupled, integration-related feature to test is the forwarding of Host and other headers; these go through so many layers: a couple of CDN servers, a load balancer, an nginx daemon and finally the application. Some headers don't just have to be forwarded, but have to be rewritten or renamed or added (X-Forwarded-For). All of this can in theory be specified for every single layer but testing the whole architecture probably makes for easier long-term maintenance.
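A sketch of such a test, assuming a hypothetical `/debug/headers` endpoint in the application that echoes back the headers it received (hostname and endpoint are illustrative):

```python
import requests

def test_host_and_forwarded_headers_reach_the_application():
    # Goes through CDN, load balancer and nginx before hitting the application.
    response = requests.get(
        "https://staging.example.com/debug/headers",
        headers={"X-Request-Id": "integration-test"},
        timeout=10,
    )
    received = response.json()  # headers as seen by the application
    assert received["Host"] == "staging.example.com"
    assert "X-Forwarded-For" in received       # added along the way
    assert received["X-Request-Id"] == "integration-test"  # forwarded untouched
```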
Why?
In any project you always have to ask yourself why you are doing something (especially complex things) and what value you want to get out of it. CDNs are one of the go-to solutions for web performance, their killer feature being huge caches for slow-changing HTML and assets distributed across the world, so that even a casual Indian reader can load your homepage in one second. Moreover, if done right, the load on your origin servers will also be greatly reduced compared to not using caching layers.

On the other hand, you can see the complexity, observability and maintenance needs that every additional layer introduces. When asking whether a CDN or your application should do something, it's the same decision as for a database or a cloud service: how can you effectively store and update its configuration in multiple environments? Do you want to outsource that responsibility? How will you know when something's wrong? Do you feel comfortable writing stored procedures in a language you can't run on your laptop? All of these are architectural questions to go through when evaluating various CDNs, or no CDN at all.