I’ve grown, I’ve learned some stuff, I’ve broken some other stuff. So I think it’s about time I did…

Zero Downtime Laravel Deployments with Envoy—Version 2!

My first attempt at zero downtime deployments was exactly what I needed a few years ago. But now, I have something better.

Warrick Bayman

--

A few years ago, I wrote an article on how to use Envoy to run zero-downtime deployments. That article has been a somewhat popular one. It still gets around 100 reads a week, which I don’t think is all that bad considering I’m definitely NOT a professional writer. That approach to deployment was great and it’s a method I’ve used for quite some time. It has, however, started to show its limitations—especially since it’s not “just me” anymore.

I’ve always been a bit of a lone programmer. I studied graphic design many years ago, but it just wasn’t my cup-o-tea. I left design for web development in my 20s and basically taught myself everything I know (thanks to a decent collection of well written books and some strategically placed people in my life), and I’ve never looked back. Like many who teach themselves, PHP and JavaScript quickly became my thing, and I’m now a massive fan of Laravel with a fair amount of experience.

A lot of the decisions I would make were based around the idea that no one else would ever need to work on the same projects. That all changed just a few years ago. My life’s story isn’t really the point of this article, so I’ll fast forward just a little… I now have the privilege of working with a great team. It’s awesome, but it meant that I needed to make some changes to how I worked.

A Quick Recap

Zero-downtime deployments are exactly what you would think they are. It means you deploy your application without ever needing to take the site offline. Sounds magical, but it’s really not all that hard:

  1. Get the code on the server (ftp, ssh, git, whatever floats your boat);
  2. Do all the bits that you would normally do to make the site work, like installing dependencies;
  3. Create a symbolic link to the location of the site;
  4. Get the web server to serve the symbolic link.

Ready to deploy a new version? Just repeat those same steps. The current running version isn’t touched until the last step, when you point the symlink at the new release. All the slow deployment tasks have already been done by then, so the web server simply continues to serve the same symlink… except now it’s serving your new version, with zero downtime.
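The cycle above can be sketched in a few lines of shell. Everything here is a throwaway placeholder (a temp directory and an echo stand in for the real clone and build), just to show the symlink swap doing its job:

```shell
#!/bin/sh
set -e
APP=$(mktemp -d)   # stands in for something like /opt/vendor/project

deploy() {
    release="$APP/releases/$1"
    mkdir -p "$release/public"
    echo "version $1" > "$release/public/index.html"   # stand-in for steps 1 and 2
    ln -nfs "$release/public" "$APP/live"              # steps 3 and 4: swap the symlink
}

deploy v1
cat "$APP/live/index.html"   # → version v1
deploy v2
cat "$APP/live/index.html"   # → version v2
```

The `-n` flag is the important bit: it makes ln replace the `live` symlink itself rather than creating a link inside the directory it points to, so the swap is a single, near-instant operation.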

If you don’t want to do this yourself, there are a few tools you can use instead. Take a look at Envoyer if you’d rather not be messing about on the server yourself. And if you’re a Laravel developer, Envoyer is a no-brainer.

But, if you like how this sounds and you’re not afraid to get your hands dirty, then you’ll be happy to know that making it work really isn’t difficult.

This article is not a continuation of my previous post on the same topic, so you don’t need to go back and read that one first. However, if you’re interested in how I got to this point, take a look at my first version here.

Since I wrote that first article I’ve spent a fair amount of time working with other programmers and we’ve used that approach, or a version of it, quite extensively. We’re not alone, either. If you take a look around the web, you’ll find a whole bunch of articles that describe similar solutions. It definitely works for more programmers than just me. However, you’ll eventually start to realise that there’s a bit of a problem with it.

The Compiled Assets Problem

Our websites and webapps contain a lot of stuff. And if you’re like me, then you’re compiling assets using a whole collection of different tools. We write a lot of stuff using things like SASS, and we often have plenty of Vue components. Those files need to go through a build step so that we end up with stock standard CSS and JavaScript files that the browser understands.

I’m a big fan of webpack and we use Laravel Mix for most build tasks, usually ending up with at least two files: app.css and app.js. We also use code splitting a lot through Babel’s dynamic imports, so we’ll often end up with a few extra JavaScript files. Regardless, the build process generally remains the same.

The resulting assets then need to exist on the server, but how do we get them there? Well, that’s easy, right? We commit them to the repo and clone the repo to the server. If you read my first article then you’ll know that’s exactly what we were doing. We’d git clone directly onto the server as part of our deployment and be done with it. We don’t need to install any fancy build tools on the server (which, to me, feels akin to taking the contents of your entire kitchen on a road trip) and we know we’ll have correctly compiled assets on the server.

So what’s the problem? Well, now that I’ve been working with a team of my own, it’s very common for multiple programmers to be working on the same project. Each programmer was compiling assets on their own machine and committing to the repo. Those compiled assets can get pretty large, which was causing chaos with pull requests. Our git history was a mess and diffs were impossible to deal with. It was clear that committing compiled assets to our repo wasn’t going to be the right thing to do anymore.

The Missing Piece

We’re getting to the solution, I promise. It just felt like I needed to write this fascinating back story first…

I started reading around the web and found a ton of people struggling with the very same issue. Some devs said you should compile on the server (yuk) and others said to keep committing assets to the repo. Others agreed that having compiled assets in the repo was a bad idea, but didn’t really have a nice solution for getting those assets onto the server (nobody wants to FTP, right?). I knew I still wanted to use git clone to get code onto the server, but didn’t want compiled assets in the repo at all.

Then it hit me. Why not just copy them there? Sounds so ridiculously simple. Compile the assets locally, and just upload them to the server using scp.

The Solution

After I put some thought into this, here’s what I eventually decided needed to happen. It’s not all that different from my old deployment process, but it has one magical extra step:

  1. Add all compiled JavaScript and CSS files to my .gitignore file.
  2. I use Laravel Mix, so I’ll also add the mix-manifest.json file to .gitignore as well.
  3. Run yarn prod to build production ready assets.
  4. git clone the production branch onto the server.
  5. Run composer install --no-dev on the server.
  6. Use scp to copy the compiled assets into the correct directories on the server.
  7. Create a symlink that points to the new public directory that’s been created.
  8. The web server serves the new symlink.

There is exactly one extra step there: step 6. Everything else is pretty much the same as before.

Making it work

With most projects we work on, we follow a slightly modified version of Gitflow. I’m not going to detail that here, but it’s worth noting that we generally end up with at least two branches in all our repos. We have a master branch and a develop branch. All work gets merged into develop during development and eventually merged into master which is always a reflection of what is on the production server. We also tag our releases, but that’s just for our own sanity and gives us something to reference when talking about different stages of the project.

Before we can write any deployment scripts, we need to do some set up on the server. We use Nginx as our web server and we run Ubuntu Server whenever we can. However, that’s really not important here, as long as your web server can serve symbolic links. If you’re an Apache user, you’ll probably need to add something like Options +FollowSymLinks to your web server config.

I’m also assuming that you’ve set up your keys correctly and you’re able to log into the server using your private SSH key. If you’re still logging in with a password, I strongly recommend you consider setting key pairs and disabling password logins on your server.

Step 1: Ignoring some stuff

We first need to ensure that we’re not committing all those compiled assets to our repo any longer. We need to ignore any .js files in /public/js and any .css files in /public/css and the mix-manifest.json file. Let’s first update the root .gitignore file to ignore the mix-manifest.json file. Simply add the line:

/public/mix-manifest.json

Cool. Now create a .gitignore file in both the /public/js and the /public/css directories and add the following:

*
!.gitignore

This will make sure that we ignore all the files in those directories except the .gitignore itself. That way at least the directory is still created with nothing in it.
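If you want to double-check the rules before committing, git check-ignore will tell you exactly which paths are ignored. This script builds a scratch repo with the same layout purely to demonstrate:

```shell
#!/bin/sh
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
mkdir -p public/js public/css

# Root .gitignore ignores the mix manifest.
echo '/public/mix-manifest.json' > .gitignore
# Per-directory ignore files: everything except the .gitignore itself.
printf '*\n!.gitignore\n' > public/js/.gitignore
printf '*\n!.gitignore\n' > public/css/.gitignore

touch public/js/app.js public/css/app.css public/mix-manifest.json
# Prints all three paths, confirming they are ignored.
git check-ignore public/js/app.js public/css/app.css public/mix-manifest.json
```

Running git check-ignore on one of the .gitignore files themselves exits non-zero, confirming they will still be committed.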

Remember to commit your changes.

Step 2: Server set up

We need to define how our environment looks on the server. I like to use the /opt directory, but it’s entirely up to you. I’ll often create a directory named after the vendor inside /opt and a directory for the project inside that. Then we need a directory to place our releases. For clarity I’ll name it releases. So here’s what we should have to start with:

/opt
|
+- /vendor
|
+- /project
|
+- /releases

Now we can clone into that releases directory. So, from within the releases directory, run:

git clone --depth 1 -b master git@gitlab.com:project.git initial

Obviously, you’ll replace that Git URL with the one for your project. That line will shallow clone the project and check out the master branch into a directory named initial. Now we have a copy of the project on the server.

Before you carry on, it’s important to know that you’re going to need Composer installed on the server. If you haven’t done so already, you can get it installed by running:

curl -sS https://getcomposer.org/installer | php
sudo cp composer.phar /usr/local/bin/composer

Now you can run composer from anywhere on the server. If you need more instructions on this, take a look at the Composer documentation.

Step 3: .env and storage

Next we’ll need to make a copy of our .env.example file. If you’re a Laravel user, you’ll already know what that file is for. Normally it sits at the root of your project; however, since we’ll be cloning a new copy of the code each time we deploy, we’ll need to keep a copy of the .env out of the way so it doesn’t get lost between deployments. That way we can reuse the same file each time we run our deployment script. A good place for it is in /opt/vendor/project since it will be relative to the project as a whole. So:

cp /opt/vendor/project/releases/initial/.env.example /opt/vendor/project/.env

We need to do the same thing with the storage directory. You’re probably using the storage directory already to store stuff for your project and you won’t want to lose the content in there each time you run a new deployment. Also, Laravel stores session data here and we don’t want to destroy users’ sessions each time we deploy. We can put the storage directory in the same location as the .env file we already copied.

cp -r /opt/vendor/project/releases/initial/storage /opt/vendor/project

The -r option will copy the directory recursively so we don’t leave anything behind. All this should result in the following layout:

/project
|
+- /releases
| |
| +- /initial
|
+- /.env
|
+- /storage

Now we can remove the copy of the /storage directory from within our project (since we’ll use the one we just created) and we can create symlinks to both the .env file and the storage directory we copied.

First, remove the storage directory:

rm -rf /opt/vendor/project/releases/initial/storage

Now create the links:

ln -nfs /opt/vendor/project/storage /opt/vendor/project/releases/initial/storage
ln -nfs /opt/vendor/project/.env /opt/vendor/project/releases/initial/.env

Let’s take a look at our layout again:

/project
|
+- /releases
| |
| +- /initial
| |
| +- /.env -> /opt/vendor/project/.env
| |
| +- /storage -> /opt/vendor/project/storage
|
+- /.env
|
+- /storage

Great! We’re getting somewhere. This is starting to look similar to how the old deployment looked. However, notice that the /releases/initial/public directory has an empty /js and an empty /css directory. Since we’re not including those compiled assets in the repo anymore we now need to copy them into the correct location. That’s our next step.

Step 4: Assets

It’s time to compile some assets. On your own computer and from within your project, run your build script and compile production ready assets. For me, this is often as simple as:

yarn prod

Once you’ve got a set of new assets, let’s get them onto the server:

scp -rq public/css/ user@server.com:/opt/vendor/project/releases/initial/public
scp -rq public/js/ user@server.com:/opt/vendor/project/releases/initial/public
scp -q public/mix-manifest.json user@server.com:/opt/vendor/project/releases/initial/public

Not overly complex. We’re using scp to copy the entire contents (including directories: that’s the -r option) of /public/css and /public/js from our local copy to the server. We’re also copying the mix-manifest.json file to the server.

Step 5: Composer and Laravel

Now that we have everything in place we can install Composer dependencies and get our Laravel app into a running state. First, we need a key:

cd /opt/vendor/project/releases/initial
php ./artisan key:generate

Remember that we have a symlink of the .env file in our releases/initial directory, which means the actual .env file, sitting in /opt/vendor/project, will now contain a new APP_KEY value.

Next, dependencies:

composer install --no-dev --prefer-dist

We don’t need development dependencies on the server, and the --prefer-dist flag will get Composer to leave out any development specific files from our dependencies that are not needed on the server.

Now’s a good time to complete any extra steps you need for the initial installation of your application. For example, maybe you need to create a new database and migrate the tables:

php ./artisan migrate

Maybe you need to update the .env file with any third-party API keys you’re using. Before putting your application live, you’ll want to complete those tasks first.

Step 6: Live!

Our journey is at an end! Well, sort of… If we’ve done everything correctly, we should have a working application. All we have to do is create a new symbolic link to the /public directory that the web server will serve:

ln -nfs /opt/vendor/project/releases/initial/public /opt/vendor/project/live

Update your web server config to point to /opt/vendor/project/live.
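What that config looks like depends on your stack. As a sketch, here’s a minimal Nginx server block for a Laravel app; the domain and the PHP-FPM socket path are placeholders you’ll need to adjust. Note the use of $realpath_root rather than $document_root in the FastCGI params: it resolves the symlink, so PHP (and OPcache) see the real release path instead of the live link:

```nginx
server {
    listen 80;
    server_name example.com;

    # Serve the `live` symlink, which points at the current release's public dir.
    root /opt/vendor/project/live;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php/php-fpm.sock;
        # Resolve the symlink so each deployment gets a fresh script path.
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    }
}
```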

And voila! Let’s take one last gander at our layout with the new symbolic link in place:

/project
|
+- /releases
| |
| +- /initial
| |
| +- /.env -> /opt/vendor/project/.env
| |
| +- /storage -> /opt/vendor/project/storage
|
+- /.env
|
+- /storage
|
+- /live -> /opt/vendor/project/releases/initial/public

As long as the web server is configured to serve the /opt/vendor/project/live symlink then each time you deploy, you can simply replace that symlink and your new deployment will automatically become the one your users see.

If you’ve got this far, then great success! But now we need to automate this whole thing. It’s really not feasible to follow all these steps every time you need to deploy a new version of your app. At some point you’ll mess it up and then you’ll have an app offline. We’re trying to avoid that situation, so let’s rather automate the steps using a simple Envoy script.

Step 7: Envoy

Envoy is one of the most useful tools (besides Laravel itself) to come out of the Laravel ecosystem. It’s a simple task runner based on Laravel Blade. It can be used to run tasks both locally and on a remote server. Envoy is dead simple and you can find a bunch of info in the Laravel documentation.

First, you’ll need to make sure you have Envoy installed. You can install it globally on your computer with Composer:

composer global require laravel/envoy

Now create a new file in your project root called Envoy.blade.php, and start with the following content:

@servers(['local' => '127.0.0.1', 'production' => 'user@server.com'])

@setup
// any setup will go here
@endsetup

@story('deploy')
build
git
install
assets
live
@endstory

@task('build', ['on' => 'local'])
@endtask

@task('git', ['on' => 'production'])
@endtask

@task('install', ['on' => 'production'])
@endtask

@task('assets', ['on' => 'local'])
@endtask

@task('live', ['on' => 'production'])
@endtask

Before we go any further, let’s take a look at what we have here. The servers bit at the top defines the servers you’ll be deploying to. You could define a staging, UAT or production server here. For now, I just have a production one set, and a local which is set to 127.0.0.1. Envoy will let us run tasks locally if we use that as the target server. Just a note: that first line might look like a PHP array, but there is one requirement… It must exist on a single line. Don’t break the array up over multiple lines, Envoy won’t know what to do with that.

We have five task blocks. Namely: build, git, install, assets and live. Each one is also added to a story block that will execute them in sequence so we don’t need to run each one separately.

If you’re coming from the old article, then that script should look somewhat familiar.

The @setup block lets us run some simple PHP before the tasks are executed. Let’s add some setup stuff that we can use throughout the script to make our jobs a little easier.

@setup
$repo = 'git@gitlab.com:project.git';

$branch = 'master';

date_default_timezone_set('Africa/Johannesburg');
$date = date('YmdHis');

$appDir = '/opt/vendor/project';

$buildsDir = $appDir . '/releases';

$deploymentDir = $buildsDir . '/' . $date;

$serve = $appDir . '/live';
$env = $appDir . '/.env';
$storage = $appDir . '/storage';

$productionPort = 22;
$productionHost = 'user@server.com';

@endsetup

That should be fairly self-explanatory, but let’s go through it anyway:

  • $repo is the repository that we’ll clone from;
  • The $branch variable is just the branch in the repo that will be cloned to the server;
  • The $date variable is used to create unique directories inside the releases directory;
  • The $appDir is the absolute location to the project;
  • the $buildsDir points to the /releases directory we created earlier;
  • The $deploymentDir is the one we’ll clone into. Note how I’ve used the $date variable here;
  • The $serve variable is where the live symlink is created;
  • The $env variable points to the actual .env file we copied;
  • The $storage variable points to the actual storage directory we copied;
  • And lastly, the $productionHost and $productionPort are used by the assets task when copying the compiled assets to the server.

It’s stock standard PHP and all we’re really doing is setting some variables. It shouldn’t be hard to see what’s going on here. Now we can flesh out the tasks. Let’s start with the build task.

@task('build', ['on' => 'local'])
yarn prod
@endtask

Yup. That’s it. All it’s doing is building the production ready assets locally. Now for the git task:

@task('git', ['on' => 'production'])
git clone --depth 1 -b {{ $branch }} "{{ $repo }}" {{ $deploymentDir }}
@endtask

Also simple. We’re creating a shallow clone of the master branch into a directory on the server. Now the install task:

@task('install', ['on' => 'production'])
cd {{ $deploymentDir }}

rm -rf {{ $deploymentDir }}/storage

ln -nfs {{ $env }} {{ $deploymentDir }}/.env
ln -nfs {{ $storage }} {{ $deploymentDir }}/storage

composer install --prefer-dist --no-dev

php ./artisan migrate --force

php ./artisan storage:link
@endtask

That’s our first complex task. There’s a few things happening here, but if you take a hard look, it’s all the tasks we’ve done before:

  • We change into the new deployment directory,
  • remove the storage directory (we have one we’ll use, remember),
  • Create a symlink to the .env file,
  • Create a symlink to the storage directory,
  • Run Composer to install dependencies,
  • Migrate any database changes,
  • Create a symlink to the storage/public directory (optional, if you use it),

We have our compiled assets waiting patiently, so now is the time to get those in place. Here’s the assets task:

@task('assets', ['on' => 'local'])
scp -P{{ $productionPort }} -rq ./public/js/ {{ $productionHost }}:{{ $deploymentDir }}/public
scp -P{{ $productionPort }} -rq ./public/css/ {{ $productionHost }}:{{ $deploymentDir }}/public
scp -P{{ $productionPort }} -q ./public/mix-manifest.json {{ $productionHost }}:{{ $deploymentDir }}/public
@endtask

And finally, the live task:

@task('live', ['on' => 'production'])
ln -nfs {{ $deploymentDir }}/public {{ $serve }}
@endtask

The last task simply creates a live symlink and points it to the new deployment. And we’re done! We’ve basically taken all the steps we did earlier and repeated them in the Envoy script. Now whenever you need to deploy a new copy of your application into production, simply navigate to your project directory and run:

envoy run deploy

and the whole thing will happen all over again. You still get the same old zero-downtime deployment as before, but now there are zero compiled assets in your repository. For teams, this is a godsend because you can now keep your commits cleaner and pull requests are no longer the nightmares they were.

Since we create a new directory every time we deploy, we have a copy of the old one still sitting on the server. This means that if something terrible goes wrong after you’ve deployed, you can simply replace the live symlink and point it to the previous deployment.
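As a sketch, rolling back is nothing more than re-pointing that symlink at the previous release directory. This hypothetical rollback helper (not part of the Envoy script above) relies on the date-stamped directory names from the setup block, which sort chronologically:

```shell
#!/bin/sh
# Re-point `live` at the release just before the newest one.
# Assumes at least two date-stamped release directories exist.
rollback() {
    app=$1
    previous=$(ls "$app/releases" | sort | tail -n 2 | head -n 1)
    ln -nfs "$app/releases/$previous/public" "$app/live"
    echo "live -> $previous"
}

# usage: rollback /opt/vendor/project
```

Since the old release is still fully installed (dependencies, assets and all), the rollback is as close to instant as the deployment itself.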

Here’s the whole Envoy script in all its glory:

@servers(['local' => '127.0.0.1', 'production' => 'user@server.com'])

@setup
$repo = 'git@gitlab.com:project.git';
$branch = 'master';

date_default_timezone_set('Africa/Johannesburg');
$date = date('YmdHis');

$appDir = '/opt/vendor/project';
$buildsDir = $appDir . '/releases';
$deploymentDir = $buildsDir . '/' . $date;

$serve = $appDir . '/live';
$env = $appDir . '/.env';
$storage = $appDir . '/storage';

$productionPort = 22;
$productionHost = 'user@server.com';
@endsetup

@story('deploy')
build
git
install
assets
live
@endstory

@task('build', ['on' => 'local'])
yarn prod
@endtask

@task('git', ['on' => 'production'])
git clone --depth 1 -b {{ $branch }} "{{ $repo }}" {{ $deploymentDir }}
@endtask

@task('install', ['on' => 'production'])
cd {{ $deploymentDir }}

rm -rf {{ $deploymentDir }}/storage

ln -nfs {{ $env }} {{ $deploymentDir }}/.env
ln -nfs {{ $storage }} {{ $deploymentDir }}/storage

composer install --prefer-dist --no-dev

php ./artisan migrate --force

php ./artisan storage:link
@endtask

@task('assets', ['on' => 'local'])
scp -P{{ $productionPort }} -rq ./public/js/ {{ $productionHost }}:{{ $deploymentDir }}/public
scp -P{{ $productionPort }} -rq ./public/css/ {{ $productionHost }}:{{ $deploymentDir }}/public
scp -P{{ $productionPort }} -q ./public/mix-manifest.json {{ $productionHost }}:{{ $deploymentDir }}/public
@endtask

@task('live', ['on' => 'production'])
ln -nfs {{ $deploymentDir }}/public {{ $serve }}
@endtask

But wait… There’s more!

This is pretty damn awesome, I think. It’s made my life and my team’s lives so much simpler. We use this approach for a lot of the web work we do (with a variation here or there). However, I’ve been working on much bigger projects these days and even this approach still isn’t perfect.

I’ve spoken a few times about the potential of using this approach in a CI/CD pipeline, but I’d never actually done it myself. Well, it was about time I did… so I did. That’s my next post. I’ll show you how to get a CI/CD pipeline to build, run your unit tests and run your deployment script for you. And I’ll write about how to set up multiple environments, so keep an eye out for that one. I’ll update this post with a link when it’s ready.

--
Warrick Bayman

Programmer, musician, cyclist (well... I own a bike), husband and father.