Content Management Systems are Change Management Systems


Choosing a Content Management System is a very difficult decision. Even if you have already decided on whether to build or buy, there’s still a lot to think about. From integration with existing systems, to automated testing and gradual release, this article can help you understand the nature of changing code or content in a production system, the scope of evaluation for different options, and how to avoid common pitfalls.

If we didn’t have Content Management Systems, DevOps teams running sites with frequently changing content would be overwhelmed with content change work from content-authoring teams. In response, they would develop tooling to reduce the overhead of those changes; the natural conclusion of this is what we would call a CMS. The solution would likely be quite specific to the type of content changes the team had to deal with, and therefore less likely to be reusable. It may have been possible to use an off-the-shelf product or open-source tool instead, which could have saved the time spent building those tools. Either way, the physics of changing a production system are the same.

Apple’s dictionary defines “CMS” as:



content management system (a system designed to manage the content of a website or other electronic resource that is used collaboratively by a number of people): the choice of the proper CMS can have a big impact on the success of a website.

The term “CMS” is necessarily broad, since there is an infinite variety of content, all with different purposes. Drawn very generally, the kind of content management system use-cases we’re talking about look like this:

Content and code changes being authored into a system

Part of your website is content-managed, and content changes happen alongside code changes.

Content Management Systems are typically capable of handling text, images, video, files, and so on. These different data types can create significant engineering challenges, such as video storage and streaming, or image resizing. An out-of-the-box tool or product can relieve your team of some of these engineering problems. However, what the data represents will have as much influence on the design of the overall system, if not more. For example, a video containing national secrets is unlikely to be stored or released to consumers in the same way as a cat video.

Here are some things which could be loosely described as Content Management Systems:


You can create a website with different types of pages containing text and images, have pages in unpublished states, apply layout and style templates to make it beautiful.


You can manage a shop and upload the text and images which describe the things you’re selling, and there’s ways to control pricing and handle payments.


You keep a library of your photos, and it allows you to publish albums to websites and social media.


Developers upload code to a project; other developers can discover, use, and contribute to it. Project owners define release strategies for consumers of their code.

Superficially, these systems have lots in common. They allow users to upload text and images, they allow you to release content to consumers, they allow you to make changes to published content. Clearly though, these Content Management Systems are all very different, as a result of being developed for different purposes.

More specifically, they have different processes for authoring and distributing content. Where content on GitHub typically goes through a peer-review process before being more widely distributed, content submitted to eBay doesn’t. For the photo library, sharing content with others is a less prominent feature.

Content changes, like code changes, are changes to a production system. The process of releasing a change to a production system involves Authoring, Releasing and Serving. In a system under active development, engineers continuously change the production system. At each stage the aim is to improve the production system in a way which reduces the risk of unintended adverse effects. Next we’ll go through each stage in more detail, starting with Serving, since that’s the goal state.


Serving

Assume that you have creatively authored some hot new content. You need somewhere to host it, and some infrastructure to serve it. Depending on how hot it is, sufficient infrastructure could be anything from a static file server, to elastically scalable compute with a globally distributed CDN and load balancers. Then you discover that your hot new content has found a niche in commuters using underground mass-transit, where no network connectivity is available. Now you need users to be able to download all of the hot content the night before, so they can indulge the next morning while underground. To make sure everybody has the content ready, you need push-based distribution.

Serving content from a content management system is the same problem space as serving any other content from any other system. Infrastructure and tooling are rapidly improving, fuelled by an ecosystem including Google, Amazon and the open-source community. These new tools solve some of the engineering problems that a CMS might previously have been used for, and the ecosystem is evolving much faster than any single vendor can keep up with. It is therefore critical, when adopting a content management system, to consider how easy it will be to migrate all or part of the system to different infrastructure and toolchains.

As well as the interoperability of tools and infrastructure, there is a system design consideration in how the content in a content management system relates to data in existing systems of record. For example, consider a product listing page. The content is served from two different systems. A stocking system with the product codes and pricing, and a CMS which stores the pictures and product descriptions. You now need to manage the mapping of identifiers between these separate systems. When the product team create a new product, how can they be sure the images and descriptions for that product are available in the CMS?
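One way to make that question answerable is a reconciliation check across the two systems. The sketch below assumes hypothetical record shapes: a stocking system that exposes product codes, a CMS keyed by its own entry IDs, and an explicit mapping between the two. All names are illustrative, not from any particular product.

```python
def find_products_missing_content(product_codes, code_to_cms_id, cms_entries):
    """Return product codes that have no corresponding CMS entry.

    product_codes: iterable of codes from the stocking system
    code_to_cms_id: dict mapping product code -> CMS entry id
    cms_entries:    set of entry ids that actually exist in the CMS
    """
    missing = []
    for code in product_codes:
        cms_id = code_to_cms_id.get(code)
        # A product is "missing content" if it was never mapped to a
        # CMS entry, or the mapped entry no longer exists in the CMS.
        if cms_id is None or cms_id not in cms_entries:
            missing.append(code)
    return missing


# Example: product "B2" exists in the stocking system but was never
# given images or a description in the CMS.
print(find_products_missing_content(["A1", "B2"], {"A1": "entry-1"}, {"entry-1"}))
```

A check like this, run on a schedule or as part of the product-creation flow, turns a silent inconsistency between systems into an actionable report.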

This is a distributed systems problem, and the decision to partition the system as described should be a conscious one, since we now likely have to choose between consistency and availability (the trade-off described by the CAP theorem).


Releasing

Releasing content changes can expose an organisation to all of the same risks as code changes. The good news is that releasing code changes to consumers is well understood.

Left to right, a rough view of the stages code changes go through to reach production

DevOps teams rely on tooling and process to catch mistakes before they reach production. This can include anything from the design of the programming language they’re using, to static analysis, unit testing, integration testing, pair-programming and peer-review. Finally, after the change is released, monitoring, logging, alerting and analytics tools are used to observe the effects of the change on consumers. All of this tooling gives the team confidence that changes they introduce will not have adverse effects for consumers.

Historically, engineering teams would release monthly, quarterly or annually, and so content-authoring teams would need a way to release content changes between deployments. If the motivation for using a CMS is to be able to make copy changes quickly because deployments are too infrequent or too slow, then you may be solving the wrong problem. A DevOps team should be able to deploy at any time. Modern DevOps tooling allows teams to deploy code changes in minutes, allowing them to release to consumers multiple times per day. If you’re deciding on a CMS to avoid slow deployments, you now have two problems rather than one.

A common argument for using a CMS is to “let the business edit the content”. The DevOps team operating the service is “the business”, and so is the content-authoring and editorial team. They are both making changes to the same system. If those changes can happen simultaneously, there is an additional risk that a content change and a code change introduce a defect in combination. How big is this risk, and does the release process and tooling provide a way to mitigate it with, for example, preview environments and automated tests?

The utility of a CMS often comes down to how frequently content is changed. For example, a news organisation which publishes an article every hour can easily make the case for investing in content authoring and editing autonomously from the DevOps team operating the news site. However, they’ll be less motivated to invest time in making the annually-updated footer content-editable. In either case, the release of content to consumers could result in adverse effects for the consumers. The news organisation may release content which goes viral, resulting in a traffic spike, making the site slow for all consumers. This highlights that, even though the editorial and operational teams are loosely coupled, they’re not independent. Those footer changes could include terms and conditions which need peer review and have to be consistent with other clauses elsewhere on the site. The legal team could consider the inconsistency of those clauses to represent considerable risk; the engineering team could respond with automated tests as a safeguard.
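The kind of automated safeguard mentioned above can be surprisingly small. A minimal sketch, assuming a hypothetical site where pages are available as rendered text and the legal team has agreed on one canonical clause wording:

```python
# Canonical wording agreed with the legal team (illustrative text).
RETURNS_CLAUSE = "Returns are accepted within 30 days of purchase."


def clause_is_consistent(pages):
    """Check that every page mentioning returns carries the exact
    agreed wording.

    pages: dict of page name -> rendered page text.
    Returns a list of page names that mention the topic but drift
    from the canonical clause; an empty list means all consistent.
    """
    drifted = []
    for name, text in pages.items():
        if "returns" in text.lower() and RETURNS_CLAUSE not in text:
            drifted.append(name)
    return drifted
```

Run as part of the content release pipeline, a test like `assert clause_is_consistent(all_pages) == []` blocks a content change that would reintroduce inconsistent legal wording, in exactly the same way a unit test blocks a code regression.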

The later mistakes are found, the more expensive they are to fix. And the cost increases by an order of magnitude in three discrete stages.

Cost of correcting a mistake as changes go further along the process toward production increases by orders of magnitude

A badly encoded image can take down your service. A piece of content can cause your UI to display incorrectly, making the site unusable. Changes to copy could end up in litigation. The techniques which reduce and mitigate the risk of code changes can be applied to content changes too. These include automated testing and checks, gradual release, and rollback mechanisms. They may already be in use by the DevOps team operating the service which the CMS would author for, so it may be simpler and more manageable to release content changes through the existing deployment infrastructure.
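Gradual release translates directly from code to content. A minimal sketch of one common mechanism, deterministic percentage rollout: each user is hashed into a stable bucket, so a given user always sees the same version while the rollout percentage is dialled up (or back down, which doubles as a rollback). The function name and parameters are illustrative.

```python
import hashlib


def serve_new_version(user_id: str, rollout_percent: int) -> bool:
    """Decide whether this user should see the new content version.

    Hashes the user id into a stable bucket in [0, 100) and compares
    it against the current rollout percentage. Deterministic: the same
    user always lands in the same bucket, so their experience doesn't
    flicker between versions as the rollout proceeds.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent
```

Rolling back a bad content change then means setting `rollout_percent` back to 0, with no redeploy of the content itself.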

Although we might be able to use the same infrastructure that engineering teams use to release content changes as well as code changes, discovering mistakes at this stage in the process is still a bit late. The ideal time to detect a mistake is as soon as it is authored.


Authoring

We want content authors and editors to get fast feedback and correct mistakes early, as well as an authoring experience that, at the very least, isn’t frustrating. Goals for the content management system include making it very hard to author a mistake, and very easy to author a valid content change.

If the content being authored is destined for multiple different channels, such as a mobile application, web site, API for third party etc, it becomes less feasible to end-to-end test with authoring tools and consumers in a closed system.

You could end-to-end test this:


But probably not this:

High number of consumers with different applications, including some not in your organisation

In this case, you must be confident that content entering the system is valid for all of the consumer use-cases, which puts even more onus on the authoring tooling to create valid content.

The tooling a content management system provides for creating content-authoring UX needs to save the implementing team from building it from scratch, while still allowing them to extend the system to effectively meet the goals of this part of the system. This problem space is the same as any UI, and the tooling for creating UIs ranges from component libraries to TDD cycles which support the development of validation logic. Input validations declared in configuration are an example of something unlikely to be effective in all but the simplest of use-cases. When your requirements exceed what the CMS vendor expected, how simple is it to extend the system?
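The limitation of configuration-declared validation is that it typically handles one field at a time. A minimal sketch of a cross-field rule, the kind that usually needs real code; the content shape and field names here are hypothetical:

```python
def validate_article(article: dict) -> list:
    """Validate a content change at authoring time.

    Returns a list of human-readable errors, empty if the change is
    valid. Fast, specific feedback here is what makes it 'very easy
    to author a valid content change'.
    """
    errors = []

    # Simple per-field rule: config-declared validation handles this.
    if not article.get("title"):
        errors.append("Title is required.")

    # Cross-field rule: a future-dated article must be explicitly
    # marked as embargoed, so nothing goes live by accident. Rules
    # spanning multiple fields are where declarative config runs out.
    if article.get("publish_at_future") and not article.get("embargoed"):
        errors.append("Future-dated articles must be marked as embargoed.")

    return errors
```

Because the rules are plain functions, they can be unit-tested in a TDD cycle and reused by every authoring surface that feeds the same content store.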

It Depends

The editors, the engineers, content changes and code changes are all part of the same system. The level of dependency that content authors have on engineering teams depends on the capability of the tooling that authors use to make content changes. If the content authoring flows are core to your business, so too is the tooling you use to make them. Everyone understands that UX is important. Having safeguards is an essential enabler for those making changes to a production system, regardless of whether it is content or code. If you fear breaking something, you will be less motivated to make the change.

Given that the general problem space for authoring, releasing and serving content changes into a production system is the same as the general engineering problem space, it seems unlikely that a one-size-fits-all vendor CMS will meet all of your needs. It would need to be as extensible as code and cloud-infrastructure services, with all of the same tooling and process capability as the open-source community. Content Management Systems written for more specific purposes may integrate well into your system as a whole, or you may have to use an out-of-the-box solution because you don’t have the engineering capability. As with any technology choice, there isn’t one answer: it depends on the problem you’re solving, and these decisions are always a trade-off.

At one extreme there is an out-of-the-box CMS where we only get features the vendor creates. At the other, a completely bespoke system where we can have all the features we want, but we have to invest in building them. Where to position your choice on this spectrum comes down to how core the content and content-management process is to your organisation. If you choose an out-of-the-box solution, then the same solution is available to your competitors, too. If the content and the content-management process are core to your organisation, it’s more likely you will need to extend it in a unique way to give your organisation a competitive advantage.

Have you got any feedback or anything to discuss in this article? Tweet me directly here: @sjltaylor
