Configuring build automation
You may specify build automation routines using our YAML-based format. Commands will be executed in an isolated container.
We've got you covered for most use cases with a simple base image on top of `node:lts` that has `npm`, `yarn`, `php`, and `composer` installed by default.
You can install anything you might need through build commands (for example, install nvm into your build container on the fly if you need a different version of Node.js). The ability to configure a custom base image is on our roadmap; in the meantime, don't hesitate to reach out to us for help if you need guidance setting up your build automation.
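As a sketch, a hypothetical config that installs an extra CLI tool on the fly before using it might look like this (the `@angular/cli` package is purely illustrative):

```yaml
build:
  # Tools installed here remain available for the rest of the build
  - npm install --global @angular/cli
  - ng build --configuration production
```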
If you take a few minutes to read up on our build automation system, you'll learn several ways to gain significant performance improvements and make your deployment workflow faster and more pleasant.
Build
The most basic build automation config is simply a list of commands:
```yaml
build:
  - echo "hi"
  - sleep 1
  - uname
```
For more complex per-command settings, you may use the "advanced" format. Consider the following example, which would run `ls` within a subdirectory called `src`:
```yaml
build:
  - path: src
    cmd: ls
  - echo "hi"
```
As you’ll notice, the format does not have to be consistent across commands, i.e. you may use “simple” definitions intermixed with “advanced” ones.
Per-command caching
Often, `build` commands such as having a dependency manager resolve and download dependencies will 1) be painfully time-consuming and 2) generate the same output most of the time, so it makes no sense to execute them for every single build. Launchdeck therefore allows you to specify, for each command, one or more `input` and `output` paths (more specifically, glob patterns). Take for example:
```yaml
build:
  - path: web
    cmd: yarn install
    input:
      - yarn.lock
      - package.json
    output:
      - node_modules
```
As you can see, we have specified two files as the `input` paths to the command `yarn install`, as well as a directory as its `output` path. Note that these input and output paths are relative to the command's working directory (`path`).
Now, on your first build, Launchdeck will not only execute `yarn install` but also persist the resulting `node_modules` folder to a user-specific cache storage volume. Then, if you run another build and the contents of `yarn.lock` and `package.json` have remained unchanged, Launchdeck will restore the previous `node_modules` result from the cache. This will often cut your build time by 5-10x.
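The same pattern applies to other dependency managers. For instance, here is a sketch of what caching Composer dependencies could look like (the `--no-dev` flag is just an illustration):

```yaml
build:
  - cmd: composer install --no-dev
    # Re-run only when the manifest or lock file changes
    input:
      - composer.json
      - composer.lock
    # Cache the resolved dependencies between builds
    output:
      - vendor
```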
Special commands
During the build phase, we've made some special commands available within the container for you to use:
`use-node-version <version>` installs the specified Node version using nvm and updates the system `node`, `npm`, and `npx` binaries so that version will be used for the remainder of the build.
`pull-up <folder>` removes everything in the current working directory except the given path, then takes all the files and folders within that path and moves them to the current working directory. You can use this, for example, when you have a build script that generates some files in a subdirectory which you'd like to "hoist up" to the build root. Another way to think of this is to imagine you're `cd`-ing into a folder while keeping the working directory the same.
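To illustrate (file names hypothetical), suppose a build script emits its output into a `dist` subdirectory:

```bash
# Working directory before:  package.json  src/  dist/index.html  dist/app.js
pull-up dist
# Working directory after:   index.html  app.js
# (package.json and src/ were removed; dist's contents were moved up)
```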
Here is an example using both special commands:
```yaml
build:
  - use-node-version 15
  - npm install
  - npm run build
  - pull-up build
```
Purge
Besides `build` commands, you may specify a list of globs to be "purged" after the command runner stage has completed, before the build is transferred to your remote server. This might help you gain another performance improvement by not wasting precious time transferring unneeded files, as well as save storage space on your remote server.
Take the following example:
```yaml
purge:
  - "web/node_modules"
  - "web/assets"
```
In this case, Launchdeck would delete those two folders before sending the build off to your remote server.
Slow transfers? Be sure to purge `node_modules`!
If you're bundling assets, you're most likely running `npm install` or `yarn install` to grab the modules required for your bundling script. Unless those `node_modules` are going to be needed on your server, you're far better off getting rid of them before your build is transferred, as this directory often contains thousands of files, easily comprising tens if not hundreds of MBs.
Shared
In many instances there will be files or folders that need to be persisted across builds. Think of `wp-uploads`, or some asset/uploads folder. While this feature is technically suitable for configuration files such as `.env` as well, we encourage you to have a look at our config files feature, which is intended to help you set up your configuration files in a hassle-free way.
By specifying a `shared` section, an array of paths, you can tell Launchdeck to set up a separate "shared" directory and create symlinks from each new release to the corresponding files or folders within this "shared" directory.
Take a look at this example configuration for a Laravel application:
```yaml
shared:
  - storage/framework/sessions/
  - storage/app/
  - storage/logs/
```
For each of these paths, Launchdeck will create a symlink pointing from the release path to the shared path. It is important to distinguish between files and folders by denoting folders with a trailing `/`.
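For example, a shared file and a shared folder would be declared like this (the paths are illustrative):

```yaml
shared:
  - .env              # a single shared file: no trailing slash
  - web/wp/uploads/   # a shared folder: trailing slash
```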
Let's say that you have set up a remote with an installation path of `/var/www/my-app`, and configured your build automation to specify `web/wp/uploads` as a shared path. Once a new build is transferred, Launchdeck will create a symlink at `/var/www/my-app/releases/12345678/web/wp/uploads` pointing to `/var/www/my-app/shared/web/wp/uploads` (where `12345678` is the auto-generated release ID). You'll be able to see exactly how this is done in the release log for every release.
Let's deploy!
The build automation feature allows you to define any number of arbitrary commands necessary to prepare your source code (bundle) for deployment, and those commands will be executed in a safe and isolated build environment. In some cases, you might want to run commands on the actual destination server, for example to clear caches, run migrations, or restart your server process. If you're using the (zero-downtime) SSH strategy to deploy, you can use SSH commands to do precisely that!
Happy deploying!