How do I deploy my Symfony API - Part 2 - Build

This is the second article in a series of blog posts on how to deploy a Symfony PHP application to a Docker Swarm cluster hosted on Amazon EC2 instances. This post focuses on the build process, executed on a continuous integration server.

The series describes the whole deploy process, from development to production; the first article is available here.

Here is a short summary of the blog article series, quoted directly from the first post.

The application was a more-or-less basic API implemented using Symfony 3.3 and a few other things such as Doctrine ORM and FOSRestBundle. Obviously, the source code was stored in a Git repository.

When the project/application was completed, I used CircleCI (which offers one free container for private builds) to coordinate the "build and deploy" process.

The repository code was hosted on GitHub and CircleCI was configured to trigger a build on each commit.

As the development flow I used GitFlow: each commit to master triggered a deploy to live, each commit to develop triggered a deploy to staging, and each commit to a feature branch triggered a deploy to a test environment. Commits to release branches triggered a deploy to a pre-live environment. The deploy to live had a manual confirmation step.

In this part we will see how the build process was organized.

Build process

The build process was triggered on each push to the GitHub repository. The process was identical for all the branch "types" (master/develop/feature branches). This was fundamental to reduce environment differences: it is a common problem that something works on staging but does not work on production. The goal here was to keep the development, build, staging and production environments as close as possible.

The build was performed using CircleCI, and the whole flow was managed by a single .circleci/config.yml file that decides what to do in each step. It might be useful to have a quick look at the CircleCI config file reference.

The configuration file

# helper nodes
helpers: &helpers
  - &helpers_system_basic # basic system configuration helper
    run:
      name: System basic
      command: |
        sudo apt-get -qq update
        sudo apt-get -qq -y install \
        apt-transport-https \
        ca-certificates \
        curl wget \
        software-properties-common openvpn
        pip install -q awscli

  - &helpers_docker # basic docker installation helper
    run:
      name: Docker installation
      command: |
        curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
        sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
        sudo apt-get -qq update
        sudo apt-get install -y -qq docker-ce

  - &helpers_docker_compose # docker compose installation helper
    run:
      name: Install Docker Compose
      command: |
        sudo curl -L https://github.com/docker/compose/releases/download/1.15.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
        sudo chmod a+x /usr/local/bin/docker-compose


# build configurations
version: 2
executorType: machine
jobs:
  build:
    working_directory: ~/my_ap
    steps:
      - *helpers_system_basic # use basic system configuration helper
      - *helpers_docker # use basic docker installation helper
      - *helpers_docker_compose # docker compose installation helper
      - checkout # checkout the source code

      - restore_cache: # restore docker images cache
          keys:
            - myproject-{{ epoch }}
            - myproject

      - restore_cache: # restore composer cache
          keys:
            - myproject-composer-{{ checksum "composer.lock" }}

      - run: # load docker images cache into docker engine
          name: Restore docker images cache
          command: |
            mkdir -p ~/cache
            if [ -e ~/cache/docker-images-cache.tar.gz ]; then docker load -i ~/cache/docker-images-cache.tar.gz; fi

      - run:
          name: Build
          command: |
            docker-compose build

      - run:
          name: Install dependencies
          command: |
            mkdir -p ~/.composer && chmod a+rw -R vendor var ~/.composer
            chmod a+rw "$SSH_AUTH_SOCK"
            docker-compose run --rm -u www-data php composer install --no-dev --no-autoloader --no-progress --no-interaction --no-scripts --prefer-dist
            docker-compose build php

      - deploy:
          name: Push images to registry
          command: |
            docker login -u $DOCKER_HUB_USERNAME -p $DOCKER_HUB_PASS
            docker-compose push

      - save_cache: # save composer cache
          key: myproject-composer-{{ checksum "composer.lock" }}
          paths:
            - ~/.composer

      - run:  # save docker images into a TAR file
          name: Save images cache
          command: |
            docker save -o ~/cache/docker-images-cache.tar.gz $(docker images -q -a)

      - save_cache:  # cache the docker TAR file  
          key: myproject-{{ epoch }}
          paths:
            - ~/cache

The configuration file might look complicated, but let's analyze it step by step.

The first part of the file, under the helpers key, is just a set of helper nodes built using YAML anchors and references, which allow portions of YAML to be reused in multiple places.
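
As a standalone illustration (not part of the project configuration), an anchor (&name) defines a reusable node and an alias (*name) expands it wherever it appears:

# anchors-demo.yml, illustrative only
step_library: &step_library
  - &say_hello # anchor on a single list item
    run:
      name: Say hello
      command: echo "hello"

some_steps:
  - *say_hello # the alias expands to the whole mapping defined above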

Step by step

Let's analyze step-by-step the build process by looking in detail at the .circleci/config.yml file.

Preparation

- *helpers_system_basic # use basic system configuration helper
- *helpers_docker # use basic docker installation helper
- *helpers_docker_compose # docker compose installation helper
- checkout # checkout the source code

The first part of the file simply installs the tools required for our build: some basic packages such as curl, Docker and Docker Compose, plus some general system configuration (in this case Ubuntu 14.04 was the underlying Linux distribution).

Restoring caches

- restore_cache: # restore docker images cache
  keys:
    - myproject-{{ epoch }}
    - myproject

- restore_cache: # restore composer cache
  keys:
    - myproject-composer-{{ checksum "composer.lock" }}

- run: # load docker images cache into docker engine
  name: Restore docker images cache
  command: |
    mkdir -p ~/cache
    if [ -e ~/cache/docker-images-cache.tar.gz ]; then docker load -i ~/cache/docker-images-cache.tar.gz; fi

To speed up the build process I tried to cache as much as possible, including the Docker images and the Composer packages. The two caches are separate, which allows them to be refreshed independently, as they will most likely not change together.

The Composer cache changes only when composer.lock changes, while the Docker image cache will probably change on each build; that is why its key contains {{ epoch }} (each build saves a fresh entry) and the restore step effectively falls back to the myproject prefix, which matches the most recently saved cache. Saving the Docker images cache is not trivial: the images must be serialized into a TAR file with the docker save command, and restoring the cache requires the docker load command. Docker does not offer a folder that can simply be copied, as Composer does.
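
One detail to be aware of: saving images by ID, as in the configuration above, drops the repository tags, so after docker load the images show up as <none>. If you want the tags to survive the round-trip, you can save the images by name instead. A possible variant:

# save only named images, so that tags are preserved after `docker load`
docker save -o ~/cache/docker-images-cache.tar.gz \
  $(docker images --format '{{.Repository}}:{{.Tag}}' | grep -v '<none>')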

Build

- run:
  name: Build
  command: |
    docker-compose build

- run:
  name: Install dependencies
  command: |
    mkdir -p ~/.composer && chmod a+rw -R vendor var ~/.composer
    chmod a+rw "$SSH_AUTH_SOCK"
    docker-compose run --rm -u www-data php composer install --no-dev --no-autoloader --no-progress --no-interaction --no-scripts --prefer-dist
    docker-compose build php

To create the docker images we simply run docker-compose build, which builds all the images following the docker-compose.yml file shown in the previous article.
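
For reference, here is a minimal sketch of what such a file could look like (service names and image paths are illustrative, the real file is in the first article). Note that the image key also determines where docker-compose push will later send each image:

version: '3'
services:
  php:
    build: .
    image: myregistry/myproject-php:latest
    volumes:
      - .:/var/www/app
  nginx:
    build: ./docker/nginx
    image: myregistry/myproject-nginx:latest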

After the images are ready we can install the Composer dependencies. Note that the composer install command is executed with the post-install scripts and the autoloader creation disabled. When the dependencies are installed, the "php" image is rebuilt to include the just-created vendor folder. During that rebuild the autoloader is created and the post-install command is executed (it will warm up the Symfony cache).
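
To make this two-phase approach concrete, the end of the php image's Dockerfile might look roughly like this (an illustrative sketch, not the actual file from the project):

FROM php:7.1-fpm
# composer binary taken from the official image (illustrative choice)
COPY --from=composer:1 /usr/bin/composer /usr/bin/composer
WORKDIR /var/www/app
# on the second `docker-compose build php` the build context already
# contains the vendor/ folder created by the previous composer install
COPY . /var/www/app
# generate the autoloader and run the post-install scripts
# (Symfony cache warm-up) that were skipped by composer install
RUN composer dump-autoload --optimize --no-dev \
 && composer run-script post-install-cmd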

Note: this solution may not be optimal in some circumstances.
The described project did not yet have tests (early-stage start-up). I highly recommend having a good test suite and running it on each build. The tests can be executed right after the build process or in a separate CircleCI build (this will be explained in the next articles by introducing CircleCI Workflows).

TIP: Private repositories

If you have some private repositories in your project and rely on SSH keys to download them during the composer install step, you will have to enable SSH key forwarding from the build environment into the docker container, and add a valid SSH key to the build machine (CircleCI calls these "Checkout SSH keys", under the project settings menu).

To do so, it is necessary to edit the docker-compose.yml file by adding a volume reference to the SSH agent socket.

php: 
    volumes:
      - $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
    environment:
      - SSH_AUTH_SOCK

(this is the reason for the chmod a+rw "$SSH_AUTH_SOCK" line in the "Install dependencies" step)
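
A quick way to verify that the forwarding works (assuming openssh-client is installed in the php image) is to list the keys visible from inside the container:

# should list the key(s) loaded on the build machine, not print an error
docker-compose run --rm -u www-data php ssh-add -l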

Push

- deploy:
  name: Push images to registry
  command: |
    docker login -u $DOCKER_HUB_USERNAME -p $DOCKER_HUB_PASS
    docker-compose push

When the docker images are ready, we just push them to Docker Hub.

If you are on AWS, it is most probably more convenient (economically) to push them to Amazon ECR. In that case the "login" part will be slightly different, as ECR has a somewhat different authentication mechanism.

It will look something like this:

aws configure set aws_access_key_id $AWSKEY
aws configure set aws_secret_access_key $AWSSECRET
aws configure set region $AWS_REGION

eval $(aws ecr get-login --no-include-email)

docker-compose push

In this case it is necessary to install the aws-cli (the Amazon command line client). The environment variables contain the AWS region and the credentials for the registry push operation.
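
Keep in mind that docker-compose push sends each image to the registry encoded in its image name, so for ECR the services must be named accordingly (the account ID, region and repository below are placeholders) and each repository must already exist in ECR:

php:
  build: .
  image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/myproject/php:latest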

Save caches

- save_cache: # save composer cache
  key: myproject-composer-{{ checksum "composer.lock" }}
  paths:
    - ~/.composer

- run:  # save docker images into a TAR file
  name: Save images cache
  command: |
    docker save -o ~/cache/docker-images-cache.tar.gz $(docker images -q -a)

- save_cache:  # cache the docker TAR file  
  key: myproject-{{ epoch }}
  paths:
    - ~/cache

If everything went fine, we can save the caches that will speed up the next builds. Note that if the composer.lock file did not change between builds, the "save cache" step will simply be skipped, saving the time needed to upload the (unmodified) Composer cache directory.

Conclusion

In this article I explained how the build process was organized (including pushing the docker images to the registry). In the next articles the focus will move to deploying the just-pushed images to the various environments (live/staging...).

As you can see, I decided not to use any of the pre-installed tools/services/libraries offered by CircleCI. The reason is that I want to use exactly what the developers are using and what will be deployed to production. By doing so, in my opinion, the environment differences are smaller and the chance of bugs caused by those differences decreases.

As usual, I think many things can be improved and I'm always happy to hear constructive feedback. As has already happened, I will be happy to publish updates.

php, symfony, aws, deploy, api, docker, amazon, ec2, docker-compose, swarm, build, circle-ci

Do you need something similar for your company?