How do I deploy my Symfony API - Part 5 - Conclusion

This is the fifth and last article in a series of blog posts on how to deploy a Symfony PHP-based application to a Docker Swarm cluster hosted on Amazon EC2 instances. This concluding post contains a summary and some tips.

This is the fifth post in a series describing the whole deployment process of a Symfony API, from development to production.

This series of blog posts aimed to show a possible approach to building a continuous integration and continuous delivery pipeline.

The continuous integration and continuous delivery process reduces bugs and makes development simpler.

The series was divided into:

  • Part 1 - Development This step showed how to build the local development environment using Docker and Docker Compose. Taking care of some important details early (for example: stateless applications, share-nothing containers...) makes the work in the next steps much easier.

  • Part 2 - Build This step was about "building" and "pushing" the artifacts (the Docker images) to the Docker registry. Docker images are ready-to-run applications containing almost everything needed to run on a Docker engine. This is a fundamental step in the development flow, and it is also a great spot to introduce automated tests.

  • Part 3 - Infrastructure In order to run the application reliably, it is necessary to have a properly configured infrastructure (servers, the services running on them, Docker...).

  • Part 4 - Deploy When everything is ready, we can finally deploy the application. The application has to stay up and running. Always!

Improvements

As always happens in software development, no solution is perfect; there is always room for improvement. Here are a few examples of what could have been done better. This is of course not an exhaustive list of the possible improvements to the system.

Migrations

Running migrations (database schema changes, for example) between deployments is a common use case.

In an environment with multiple copies of the same application running in parallel (and probably having different versions), the application must be able to run without errors against different database versions. To achieve that, we need backward-compatible migrations.

Let's suppose we have application v1 running on database v1, and we have to run a migration that renames a database column from "old" to "new". We can do:

  1. Run migration 1
    1. Add a "new" column (with the new name)
    2. Copy values from "old" to "new" column
    3. (now the database version is v2)
  2. Deploy application v2 (this version must be able to work with both the "new" and the "old" columns)
    1. Wait until all the copies of application v1 are out of service
  3. Run migration 2
    1. Copy values from the "old" to the "new" column where the "new" values are still NULL (or something similar).
      This is necessary because, while application v2 was rolling out, application v1 was still running and writing only to the "old" column.
    2. (at the end of this migration the database version is still v2)
  4. Deploy application v3 (it knows only the "new" column)
    1. Wait until all the copies of application v2 are out of service
  5. Run migration 3
    1. Drop "old" column
    2. (now the database version is v3)

The "new" column can't be marked as "not null" in the first migration (this because the first application version does not know about it), only in the last migration can be set to eventually to "not null". If the "not null" constraint is a must, is necessary to specify a default.

In a real-world example, supposing we are using Doctrine Migrations and the DoctrineMigrationsBundle, we can run the migrations by executing:

docker-compose run --rm -u www-data --no-deps php bin/console doctrine:migrations:migrate
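
As an illustration, here is a minimal sketch of what "migration 1" could look like as a Doctrine migration class. The class name, table name and column types are hypothetical, and the base-class namespace may differ depending on the Doctrine Migrations version in use:

<?php
// Sketch of "migration 1": add the nullable "new" column and backfill it,
// while keeping the "old" column for the still-running application v1.

namespace DoctrineMigrations;

use Doctrine\DBAL\Schema\Schema;
use Doctrine\Migrations\AbstractMigration;

final class Version20180101000000 extends AbstractMigration
{
    public function up(Schema $schema): void
    {
        // Nullable on purpose: application v1 does not know about this column.
        $this->addSql('ALTER TABLE my_table ADD `new` VARCHAR(255) DEFAULT NULL');
        $this->addSql('UPDATE my_table SET `new` = `old`');
    }

    public function down(Schema $schema): void
    {
        $this->addSql('ALTER TABLE my_table DROP COLUMN `new`');
    }
}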

Health checks

As of the docker-compose 2.1 file format, it is possible to specify container health checks to ensure that our application is ready to accept traffic. The Docker Swarm routing mechanism will send traffic only to healthy containers.

# docker-compose.live.yml
version: '3.3'
services:
  php:
    # php service definition...
  www:
    # www service definition...
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/check.php"]
      interval: 30s
      timeout: 10s
      retries: 2

This will try to download http://localhost/check.php from inside the container; after two or more non-200 OK responses, the container will be considered unhealthy. If the container is part of a swarm cluster, traffic will no longer be sent to it. It is also possible to configure a restart_policy to decide what to do with an unhealthy container.
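
The check.php endpoint itself was not shown in this series; as a minimal sketch (the file name matches the hypothetical one used above), it could simply confirm that nginx can reach PHP-FPM:

<?php
// check.php (sketch): if this script responds, nginx reached PHP-FPM successfully.
// A more complete check could also verify dependencies (database, cache, ...)
// and return a non-200 status code when one of them is unavailable.
http_response_code(200);
echo 'OK';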

It is possible to configure health checks not only at deploy time via docker-compose but also at build time, via the HEALTHCHECK instruction in the Dockerfile.
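
For example, the same check as above could be baked into the image like this (a sketch, assuming curl is installed in the image):

# Dockerfile (sketch): same health check, defined at build time
HEALTHCHECK --interval=30s --timeout=10s --retries=2 \
    CMD curl -f http://localhost/check.php || exit 1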

Container placement

An interesting feature of Docker orchestrators is the ability to influence container placement on the nodes of a cluster. Using Docker Swarm and docker-compose, this is possible via the placement property.

Each node of a cluster has some "labels" (system- or user-defined); docker-compose services can require to be placed only on nodes having specific labels.

version: '3'
services:
  www:
    # www service definition...             
    deploy:
      placement:
        constraints:
          - node.role == worker
          - engine.labels.operatingsystem != ubuntu 14.04

In the snippet above, we ask to place the "www" service only on worker nodes whose operating system is not "ubuntu 14.04".

This can be very useful when nodes have a specific configuration (shared directories, log drivers or public IPs, for example) and we want containers to be distributed only on specific nodes.
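
User-defined labels can be attached to a node with the Docker CLI; here is a hedged example (the node name and the label are made up):

# attach a custom label to a node (hypothetical node name and label)
docker node update --label-add storage=ssd my-worker-node

# the label can then be used as a placement constraint in docker-compose:
#   constraints:
#     - node.labels.storage == ssd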

PHP and Nginx together

The application shown in this series of posts used two separate images, one for PHP and one for the web server (nginx). It would also be possible to create a single image containing both PHP and nginx, communicating over a Unix socket instead of a TCP port.
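
For illustration, in such a combined image nginx could talk to PHP-FPM through a Unix socket with a location block like the following (a minimal sketch; the socket path is an assumption and must match the "listen" setting of the PHP-FPM pool):

# nginx location block (sketch): pass PHP requests to PHP-FPM over a Unix socket
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # assumed socket path; must match the php-fpm pool configuration
    fastcgi_pass unix:/var/run/php-fpm.sock;
}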

This is mostly a philosophical preference with benefits and drawbacks; I will let the reader decide which one to prefer. Some advantages: a single image to build and ship for both components, and possibly better performance. Some disadvantages: we lose the single responsibility principle, and the configurations are sometimes more complex and interdependent.

Currently there is no official image combining PHP with nginx as the web server (an official apache + php image is available, for example), so it is necessary to use user-provided images such as richarvey/nginx-php-fpm.

Node draining

When we need to remove a node from a cluster, it is necessary to stop sending traffic to it and to remove (or move) its containers, to avoid service interruptions. Only when these operations are completed is it safe to remove the node.
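
With Docker Swarm, this process could look like the following sketch (the node name is a placeholder):

# stop scheduling tasks on the node and move its containers elsewhere
docker node update --availability drain my-worker-node

# once all its tasks have been rescheduled (verify with "docker node ps my-worker-node"),
# the node can leave the swarm (run on the node itself)...
docker swarm leave

# ...and finally be removed from the node list (run on a manager node)
docker node rm my-worker-node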

I highly suggest reading this article on possible ways to do it.

Kubernetes

Many of you have probably already heard of Kubernetes, a production-grade container orchestrator. It is similar to Docker Swarm, just way more powerful.

But with great power comes complexity, and it was overkill for the application I was building. I also already had experience with Docker Swarm, so it was a natural choice for me. From many points of view Kubernetes looks superior to Docker Swarm, and it could have been a valid alternative.

Conclusion

I wanted to share my experience. While writing this series of posts I learned many details and also had a chance to improve my application.

I hope you (the reader) enjoyed the articles and, as already said many times, feedback is welcome.

