Borg Documentation (v0.0.1)

This information is designed to make you immediately productive with Borg v0.0.7.
When additional versions are released, previous versions will be linked here.
Let’s get started!

1. Getting Started

Feel free to watch the 5-minute Quick Start video, which covers this chapter:

1.1. Install Node.js

Using NVM (Node Version Manager) to install Node.js is recommended.

curl -L | sh
source ~/.bashrc # or close and reopen the terminal
nvm install v0.10.35 # latest stable preferred
nvm use v0.10.35
nvm alias default v0.10.35

1.2. Install Borg

npm install borg -g

1.3. Generate a New Project

borg init Devops
cd Devops/

Or, hit the ground running by cloning our existing sample project and following along with the 10-minute Sample Project video to assimilate your first machine:

1.4. Explore Help

The borg Command Line Interface (CLI) contains many commands that are not documented here; their documentation is meant to be accessed directly from the CLI itself, beginning with the following command:

borg help

NOTICE: Most borg CLI commands require your working directory to be the project root.

2. Cloud Integration

Though abstracted to integrate with any cloud provider, only Amazon Web Services integration is implemented at present.

ADVERTISEMENT: Accepting pull requests to support other Cloud Provider APIs.

2.1. Amazon Web Services

This integration depends on the AWS CLI utility being installed:

cd /tmp
curl "" -o ""
sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
aws --version # test

and configured:

aws configure
Access Key ID: your-aws-access-key-id
Secret Access Key: your-aws-secret-key

For more information, please see:
Setting Up the Amazon EC2 Command Line Interface Tools

3. Introduction

3.1. Vernacular

  • script: file containing code listing steps to complete orchestration.

  • resource: named function organizing common steps for reuse across multiple scripts; often with the secondary goal of becoming operating system agnostic.

  • server: file containing code defining how remote machines are named and addressed, and which scripts must be applied in order to complete their orchestration.

  • attribute: server variable; representing static data, dynamic runtime calculation functions, or personal preferences/overrides for use by scripts at runtime.

  • network: file containing a list of servers and their attributes--as well as the relationship of one server to another in a hierarchy of datacenters, groups, server types, and server instances.

  • datacenter: the outermost unit available for the grouping of reusable server definitions (e.g., one datacenter may contain two or more instances of the same server)

  • group: another layer of organization available for the grouping of servers within datacenters; typically named after the environment and project (e.g., a datacenter may contain a group for the production environment of Project A as well as the staging environment for Project B, both of which contain separate instances of the same type of server)

  • borg create: to provision a new empty remote server via cloud provider apis.

  • borg assimilate: to orchestrate a remote server.

  • borg assemble: to both create and then assimilate.

3.2. Directory structure

attributes/		your datacenter and server hierarchy as JSON and CSON files; see Attributes chapter below
  memory.json		remembers details from cloud provider api interactions
scripts/		your Scripts; see Scripts chapter below
  servers/		code defining which Servers use which Scripts
  vendor/		third-party Scripts as Git submodules

4. Attributes

Attribute files are regular CoffeeScript files, appropriately suffixed .coffee, but some of them might be more effectively thought of as CoffeeScript Object Notation (CSON): a variation of JSON that allows you, among other things, to use comments, string interpolation, function values, and conditional logic such as ternary and switch statements. These features make it a more versatile alternative to YAML.
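For example, a hypothetical attribute file (all key names here are invented for illustration) might exercise several of these features at once:

```coffeescript
# a hypothetical attribute file; key names are invented for illustration
global:
  tld: '.myproject.tld'                                # comments are allowed
  motd: -> "Welcome to #{@server.fqdn}"                # function value + string interpolation
  aws_size: ->                                         # conditional logic
    if @server.env is 'prod' then 'm3.xlarge' else 't2.micro'
```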

It is recommended to take advantage of these features to achieve, as close as possible, the overall goal of DRY (Don’t Repeat Yourself) and readability. Strive to simplify the job of an application maintainer who will only want to look in a single place to change any attribute value generated by the devops scripts.

There are times when it is appropriate for an attribute to be defined in more than one place. Borg knows which value to use in these cases because it merges attribute files in a specific order at runtime. For example, your attribute definitions will override any defaults provided by third-party submodules.

The resulting attribute values are accessible to all scripts from within the @server object, which is also pretty printed to the debug log at the beginning of each run.

4.1. Attribute Precedence

Although it may seem straightforward, it's important to be sure any changes to attribute values won't be accidentally overridden. To help illustrate both the various locations where attributes can be defined, and the order in which they are merged to produce the final result, we've assembled the handy chart below, in order of precedence:

Name, Location, Precedence Description

1. Hard-coded Attributes
(un-overridable precedence)

Always scrutinize whether there are values in your scripts which would be more useful as attributes that others can see and modify.

Define local variables which remain private to only one script and its templates.

Calculations, for example, may be based on a combination of attributes defined elsewhere--such as in a concatenation operation, or a hashing operation, or any kind of last-minute reformatting.

Defining constants or overriding attributes within the template code itself is discouraged, because it's difficult for other users to change these and remain compatible with your upstream. The only exception is when a script author is certain the resulting output is unlikely to be a desired change.

As it happens, most scripts end up 100% hard-coded as a convenience for authors in a hurry to get testing and building servers. However, these aren't useful for sharing publicly until all values are abstracted as overridable attributes.

2. CLI --locals= Attributes
Process arg w/ CSON value
(high precedence)

Define short-term instance attribute values unique to this run.

Commonly these are one-time values that are expected to change by the end of a successful run.

For example, if a script was expected to change the sshd listen port, but aborted due to an error before that step was reached, you may want to override that setting temporarily until you are done debugging and retrying.
(e.g., ssh: port:, user:, pass:, key:)

3. Global Attributes
in the global: key

Define long-term instance attribute default values in the most general way possible.

Rarely useful for global user preferences.
(e.g., ssh: port or tz:)

4. Datacenter Attributes
in the datacenters: key

Define long-term attribute values per-datacenter, per-environment, per-group, per-type, and/or per-instance.

Perhaps the most commonly used place to specify attributes.

Holds information specific to an instance, such as the AWS AMI, instance size, region id, zone, and security group, as well as information used by scripts, such as memory settings, the number of instances to create in each datacenter, and how they are grouped (e.g., by environment).

Also provides individual scripts with a hierarchical graph of all defined systems in the local and extended network, represented by the @networks variable, which is intended to be useful when dynamically configuring firewalls, monitoring, whitelists, or other lists that need relational information about servers other than the one currently assimilating.

5. Server Attributes
within the exported function assimilate: ->

Usually there is at least one of these files for each type of server.

Default attributes unique to servers of one type.

These files also define the order that scripts are executed to complete assimilation of the machine.

For example, a set of attribute values shared by servers "web01", "web02", ... "web09" could all be defined once in a file called scripts/servers/

6. Script Attributes

The most appropriate place for a script author to declare all script attributes and set default values which can be later overridden, because it is packaged together with the script when shared.

Also the best place to look first for a list of attributes you can override if you are a new user of a third-party script.

It is recommended to define defaults for all script attributes so the user only has to define overriding values to address unusual cases.

7. Memory Attributes
(lowest precedence)

Users should avoid modifying this file directly.

Data remembered by Borg after successfully interacting with the cloud provider API.

Great for things you don’t want to have to constantly insert into an attribute file manually; things Borg can figure out on its own, or that can only be figured out during/after a run.

These are commonly used when automatically connecting to a newly made server, or deleting an existing server.
(e.g., @server.instance_id, @server.public_ip)
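As a purely hypothetical illustration (keys and values invented), attributes/memory.json might come to hold entries like:

```json
{
  "instances": {
    "": {
      "instance_id": "i-0abc1234def567890",
      "public_ip": ""
    }
  }
}
```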

4.2. Cascading Attributes

This next section applies specifically to the ./attributes/ file. The goal with this file is to create an object hierarchy like: datacenters: D1: groups: G1: servers: S1: instances: I1: PROPERTY: VALUE where ALL-CAPS keys are names you would invent. This hierarchy of cascading attributes (like cascading style sheets) allows the most generic definitions at the root (least specificity) to be inherited all the way down to the most specific definitions in the deepest leaves (most specificity; in this case, I1) to define the network.

Observe how the keys are then inherited by instances through a _.merge() to create the final fully-detailed instance-level leaves, a.k.a. the final object representing all the keys attributed to a specific server instance. Objects are merged in the following order; the latter overrides the former:

  1. global.*
  2. datacenters.*.* (except key: groups)
  3. datacenters.*.groups.*.* (except key: servers)
  4. datacenters.*.groups.*.servers.*.* (except key: instances)
  5. datacenters.*.groups.*.servers.*.instances.*.*
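The cascade can be sketched in plain JavaScript (an illustration of the merge semantics only, not Borg's actual implementation; the server names are hypothetical):

```javascript
// Each level of the hierarchy is deep-merged over the previous one,
// with its child-hierarchy key excluded.
function merge(target, source) {
  for (const key of Object.keys(source)) {
    if (target[key] && typeof target[key] === 'object' &&
        source[key] && typeof source[key] === 'object') {
      merge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// Hypothetical network definition.
const network = {
  global: { ssh_port: 3562 },
  datacenters: { 'aws-ca': { provider: 'aws', groups: {
    'prod-myproject': { env: 'prod', tld: '.myproject.tld', servers: {
      'my-app': { aws_size: 't2.micro', instances: {
        '02': { secondary_ip: '' } } } } } } } }
};

const dc = network.datacenters['aws-ca'];
const group = dc.groups['prod-myproject'];
const server = group.servers['my-app'];
const except = (obj, key) =>
  Object.fromEntries(Object.entries(obj).filter(([k]) => k !== key));

// Merge least-specific to most-specific; latter overrides former.
const instance = [
  network.global,
  except(dc, 'groups'),
  except(group, 'servers'),
  except(server, 'instances'),
  server.instances['02'],
].reduce((acc, level) => merge(acc, level), {});

console.log(instance);
// { ssh_port: 3562, provider: 'aws', env: 'prod',
//   tld: '.myproject.tld', aws_size: 't2.micro', secondary_ip: '' }
```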

4.3. @networks object

A structure like:

global:
  ssh_port: 3562
datacenters:
  'aws-ca':
    provider: 'aws'
    groups:
      'prod-myproject':
        env: 'prod'
        tld: '.myproject.tld'
        servers:
          'my-app':
            aws_size: 't2.micro'
            instances:
              '01':
                aws_size: 'm3.xlarge'
                secondary_ip: ''
              '02':
                secondary_ip: ''

Will be merged into a @networks object, in scope from within scripts:

@networks.datacenters['aws-ca'].groups['prod-myproject'].servers['my-app'].instances['01'] =
  aws_size: 'm3.xlarge'
  secondary_ip: ''
  env: 'prod'
  tld: '.myproject.tld'
  provider: 'aws'
  ssh_port: 3562
@networks.datacenters['aws-ca'].groups['prod-myproject'].servers['my-app'].instances['02'] =
  aws_size: 't2.micro'
  secondary_ip: ''
  env: 'prod'
  tld: '.myproject.tld'
  provider: 'aws'
  ssh_port: 3562

So our script can now reach attributes for all servers, and it can lookup attributes by their relationship to other servers in the network hierarchy. (e.g., what is the secondary_ip of each my-app server in the same datacenter and group as the current server?)

4.4. @server object

But there is also a shortcut to the current server's attributes, which in the above case, if we pretend the current server is my-app02, would be:

@server =
  aws_size: 't2.micro'
  secondary_ip: ''
  env: 'prod'
  tld: '.myproject.tld'
  provider: 'aws'
  ssh_port: 3562

4.5. Attribute Functions

Notice you can define function values which are [re-]evaluated (as a JavaScript getter) at runtime every time they are referenced, and which have access to the @server object within the function.

For example, we can use this to dynamically reference other attributes, like so:

  aws_security_groups: -> [ @server.env + '-' + @server.type ]

We love that we can take this type of data-as-code approach, and it's one of the most compelling reasons why we prefer a JavaScript-based devops solution.

4.6. Calculated Attributes

Finally, some attributes are calculated and appended for you by Borg at runtime, even though you didn't specify them anywhere. These can be based on parts of the server name (e.g., @server.datacenter, @server.type, @server.subproject, @server.env, @server.tld, @server.fqdn) or position in the hierarchy (e.g.,

5. Resources

All resources are just functions. Within every script and callback--anyplace the average devops scripter would typically occupy--there is a carefully crafted object provided as the reference of this in JavaScript or @ in CoffeeScript. It is where the @server object lives, and it is also where all resources can be found.

Borg's resources are divided into three categories:

  • Core Resources: Resources referenced by Borg core itself; these are shipped with and are inseparable from Borg. They can generally be considered part of the Domain Specific Language (DSL) which all scripts use. There are deliberately as few as possible defined.

  • Common Resources: Resources most people expect to be there, but aren't needed by Borg core; these are packaged externally like a third-party resource, and installed as a Git submodule by borg init with all new projects. That is so anyone who might decide one of its resources isn't good enough for them can replace, modify, or override it.

  • Third-Party Resources: Resources which clearly wouldn't be used by every project; these are packaged externally and installed as submodules by borg install as-needed. That is so anyone can author a set of resources and share with others as easily as uploading to

Each of these are discussed further in their own separate chapters below.

6. Scripts

Writing scripts that define how you want your servers built is the whole purpose of Borg; every other feature is only a facilitator toward that goal. The focus is complete control and convenience for a programmer: someone who dreams in code and works in shells every day.

6.1. Defining Servers

A server definition links your scripts and your datacenter attributes to a single Fully Qualified Domain Name (FQDN) which you can use on the CLI, according to the custom naming convention below.

FQDN Format:



  • datacenter: Must uniquely match a key you define inside the datacenters: key.
    (e.g., aws-ca might signify the Amazon Web Services datacenter in California)

  • env: Must match an env: key value you define.
    Unique match determined by datacenter+env.
    (e.g., dev, stage, prod are recommended)

  • type: Must match a key you define inside a servers: key.
    Unique match determined by datacenter+env+type.
    (e.g., web might represent horizontally scaling servers hosting your website)

  • instance: Must match a key you define inside an instances: key.
    Unique match determined by datacenter+env+type+instance.
    (e.g., 01 might represent the first instance of many more servers like it)

  • subproject: (Optional): Must match a subproject: key value you define anywhere below the datacenters: key.
    Unique match determined by datacenter+subproject.
    (e.g., mobile might represent the mobile counterpart to your desktop product, if the architecture were significantly different)

  • tld: (Optional): Must match a tld: key value you define anywhere below the datacenters: or global: keys. Does not have to be unique.
    (e.g., might be your corporate domain)

All definitions of valid values for your project happen inside the file.

The motivation is to simplify command-line interactions so:

  1. Commands remain simple.
  2. Complicated logic connecting everything together remains in code where it belongs.
  3. Borg can guess your intentions when naming previously undefined new servers, and do the right thing. For example, if all you have defined is:

        tld: ''
             env: 'dev'

    and a scripts/servers/ server definition, Borg will only pause briefly to prompt for human confirmation that you mean to permanently define new servers (via memory.json) when you specify commands like:

    borg create
    borg assimilate
    borg assemble

    ...and carry on to do exactly what you had intended. Any further commands referencing those FQDNs will be treated exactly like any other pre-defined server, since they are remembered by Borg until borg destroy is called, or they are otherwise deleted from memory.json.

The basic server definition looks like this:

# scripts/servers/
module.exports =
  target: ->
    @server.type is 'web'
  assimilate: ->
    @import @cwd, 'scripts', 'web'

The callback function target: -> is expected to return a boolean determining whether the current @server object matches this server definition. Since this is CoffeeScript, all statements are expressions, and the last expression of any function is always returned, unless otherwise specified. So that's exactly what this example does.

NOTICE: Borg will only process the first match found in scripts/servers/*.coffee.

When a match is found, the next callback function assimilate: -> is executed. From there it is up to you to specify any commands or @import declarations that act upon the remote server to complete orchestration.

6.2. Importing Code

The @import declaration takes arguments similarly to path.join(). The @cwd variable is a string provided by Borg holding the result of process.cwd(), which is expected to be the absolute path to the root of your Borg project.

  @import @cwd, 'scripts', 'vendor', 'redis', 'server'

This will require() a script located at scripts/vendor/redis/, which in this case can be expected to be provided by a third-party resource named redis. For example, if someone had run the CLI command:

borg install redis

at some point in the project's history.

NOTICE: It's important to use @cwd because it can point to other projects if Borg is being loaded as a library inside of another application.

NOTICE: @import is what actually overrides the scope of this or @ for the module.

6.3. Asynchronous Flow Control

There are a lot of ways to do this, but we like Continuation-Passing Style (CPS). One well-known caveat to this approach is remembering that all JavaScript flow-control statements--such as if, else, try, catch, etc.--don't normally apply.

The popular solution is to select a third-party library providing the equivalent behavior as a set of user-defined functions. Borg provides its own set of functions for this purpose.

From the script developer perspective, all you need to know is @then(); you can think of it as an alias for Array::push() on an array holding a list of your functions which will be executed in-order, all-at-once, later after all scripts have been processed.

For example:

@then (cb) =>
  console.log "This won't be executed until later."

Except--instead of passing an anonymous function you've just defined--most of the time you are passing strings into predefined resource functions which do the heavy lifting, and hide the passing of callbacks behind an alluringly simple syntax. Your scripts look more like this:

@then @log "This won't be executed until later."

Which means, from the resource developer perspective, your resource functions are defined with a compatible signature, like this:

module.exports = -> _.assign @,
  some_resource: (names, [o]...) => (cb) =>
    # your code here

Actually, most times you want to continue using async flows inside your resource. For that we have @inject_flow() which looks like this:

module.exports = -> _.assign @,
  sync_clock: (names, [o]...) => @inject_flow =>
    @then @execute "sudo ntpdate -s"
    # no callback hell pyramids here
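The queue-then-drain idea behind @then() can be sketched in plain JavaScript (an illustration only, not Borg's actual internals):

```javascript
// A minimal CPS queue: collect (cb) => ... functions now,
// then a runner drains them in order later.
const queue = [];
const then_ = (fn) => queue.push(fn); // what @then() conceptually does

const order = [];
then_((cb) => { order.push('step 1'); cb(); });
then_((cb) => { order.push('step 2'); cb(); });

// Later, after all scripts have been processed, run every queued step.
function run(done) {
  const next = () => {
    const fn = queue.shift();
    if (!fn) return done();
    fn(next); // each step calls its callback to continue the flow
  };
  next();
}

run(() => console.log(order.join(', '), '-- all steps complete'));
// → step 1, step 2 -- all steps complete
```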

There are three major reasons why we require asynchronous control flow to get anything done in Borg, versus a DSL that is strictly blocking, or simply pasting one long bash script in a string block for that matter.

  1. It's critical to the extensibility and dynamism of the attribute system that we have a two-pass system of script evaluation. The first pass defines attributes and enqueues functions to the giant event loop in the sky. The second pass actually performs actions on the remote server during orchestration--possibly re-evaluating attributes since the first pass, in reaction to a server response or another script.

  2. Eventually, your script could be super-parallelized; for example, multiple SSH connections to one remote server performing complementary steps at the same time. Admittedly, this application has yet to be demonstrated.

  3. Some resources might actually be using functions that are asynchronous even though your script code may not.

There are more async-related resources that might become useful as you get further along. You can find them by reviewing examples in the common resources repository, and in

6.4. Cryptography

You are encouraged to use the following resources to obfuscate strings in your project repository. It's a smart thing to do with sensitive information, which would then remain relatively safe even if your devops scripts were accidentally leaked.

@then @die @encrypt "example utf8 string"
decrypted_string = @decrypt "example base64 string"

The cipher is OpenSSL AES-256-CBC. The key is derived from a file in your project root named ./secret which typically contains a random 512-byte base64 string generated by borg init when your project is first initialized. This file should never be shared in the same way the project source is, or the encryption is useless.

Likewise, you can also encrypt binary data files your scripts are expected to upload to remote servers, such as individualized software licenses, using the Borg CLI:

borg encrypt # see help

and then passing the decrypt: true option when using resources like @upload() to transmit the local file to its remote location on the server.

6.5. Console Debugging

Some people prefer to temporarily sprinkle log statements throughout their code and then run it to see what order they appear on the console log:

@then @log "Reaching here? Let's see what variable x is: #{x}."
console.log() # the non-async way

Sometimes also aborting just after the point of interest, to prevent going too far or taking a long time between iterations:

@then @die "I am debugging. This is only temporary."
process.exit 1 # the non-async way

It's a valid strategy and occasionally faster than other methods.

6.6. Interactive Debugging

You can execute any borg CLI command with debug as the first parameter to launch a Chrome browser using DevTools. This will let you set and catch debugger breakpoints; pause, step over, step into, step out, and continue; and inspect the stack, backtrace, variables, etc. From there the experience is very similar to debugging any other JavaScript, Node.js, or CoffeeScript application.

borg debug assimilate

6.7. Test Provisioning

There is a test mode you can enter with the Borg CLI, which keeps a separate list of servers provisioned in the cloud. This way, while testing, servers you create are still created at your cloud provider--so as not to differ from the production environment hardware--but they have a "test-" prefix to set them apart.

For example:

borg test assimilate

While in test mode, you can perform bulk operations such as provisioning every server from your dev environment to see if recent script modifications broke anything.

borg test assimilate aws-ca-dev

The fourth argument is matched as a regular expression.

For more information, see:

borg help test

6.8. Integration Testing

NOTICE: This feature is currently in development.

A feature using Mocha is planned to run tests in CoffeeScript that are able to execute commands for testing purposes. For example, it might be useful if, after a new machine is cooked, Borg were able to automatically connect to one or more of its peer servers as defined in and perform a command like nc -vz <public_ip> <some_port> in order to determine whether important ports were open to them.

This type of test would essentially be attempting to reproduce, from the end-user's perspective, whether or not the assimilation resulted in a working service. You can imagine the output would look something like:

user@host:~/project$ borg checkup
test: web:
  web01 was able to connect to redis01 on tcp/6379
  web02 was NOT able to connect to redis01 on tcp/6379
    nc: getaddrinfo: Name or service not known

2 test(s) run in 204ms. 1 passed, 1 failed.

6.9. Jobs

NOTICE: This feature is currently in development.

Jobs are essentially partial scripts, which are invoked from the command-line and perform specific actions of a periodic nature on a remote server. For example, it could restart a service, or perform a software deployment, or truncate logs, or any other routine or mundane task.

Jobs are located in scripts/jobs/*.coffee, which have the benefit of being distributed with and utilizing all the accompanying resources. For example, tasks related to the maintenance of a Percona server, such as performing a backup snapshot, could be distributed along with the percona resource which installs the service.

7. Common Resources

There is a third-party submodule that comes with every new borg init: the borg-scripts/resources repository. It holds the most basic functions you would expect to use in any project.

Notice that these are kept very lightweight. The list is short, but they do a lot by themselves. The preference, though, is biased heavily toward bash scripting muscle. In the author's opinion, this makes more sense for several reasons:

  1. Someone not familiar with borg but who is familiar with bash could more readily read / QA a borg script and copy/paste from it to achieve the same outcome.
  2. Typically the process for making a new script involves first making the server manually, which results in a bash history which is used as a template for the devops script. It saves time to be able to simply paste the bash history into Borg, rather than have to translate it to yet another domain specific language (DSL), and back out to bash when debugging/troubleshooting.
  3. Well-written bash script tends to be terse and powerful, resulting in far less boilerplate and complexity than the equivalent heavy resource.
  4. Though other operating systems are expected to be supported eventually, most resources favor Ubuntu Server when forced to choose, at the time of writing.

ADVERTISEMENT: Accepting pull requests to support other Server Operating Systems.

7.1. @execute()

Execute shell command(s) on remote host. Perhaps the most used of all resources.


@then @execute "echo --no-rdoc --no-ri | sudo tee /etc/gemrc"


  • sudo: (Optional): If true, the command will be prefixed with the string sudo. If false, no prefix is added. If typeof string, the prefix will be sudo -u#{string}. This option can be useful when you want to sudo a resource that depends on other resources; in that case, they all support this option and forward it recursively. But sometimes when piping is involved, it's easier to just include sudo in the command yourself and omit this option. Default is false.

  • su: (Optional): Must be a string containing a username. Will prefix the command with sudo su - #{string}. Alternatively, you can also just prefix the command yourself. Default is null, i.e., no prefix. Mutually exclusive with sudo:.

  • retry: (Optional): Integer representing the number of times to retry the command if it fails, before giving up and @die()ing. Default is 0.

  • ignore_errors: (Optional): Boolean representing whether to @die() if failure is encountered. Default is true.

  • expect: (Optional): A kind of assertion for testing the result of the command. If an integer, it must match the exit code. If a RegExp, it must match the combined stdout and stderr output. If a string, it must be a case-sensitive match against the combined output. Failed matches subsequently @die(), while successful matches continue.

  • test: (Optional): A callback Function of signature ({code, out}) => where code is the exit code, and out is the combined stdout and stderr output string. Can be used as an alternative to expect: to define your own assertions, or to parse data from the output, or to transform the @execute() resource into a kind of asynchronous if statement alternative, as in the following example:
    @then @execute "date", test: ({out}) =>
      if out.match /Sep/
        @then @log "Yay, a birthday month!"
      else
        @then @log "Boo."

    NOTICE: @execute() internally wraps @inject_flow() before calling our test: callback function.

7.2. @package_update()

Updates the local cached copy of the OS package manager repository. Usually done prior to @install(), especially if a new third-party repository was just added to the list.


@then @package_update()

This resource can be a little awkward because you have to remember to use the parentheses, as it takes no arguments.

NOTICE: In the future this resource will check if it succeeded recently or if the repo list changed since the last run, and skip if it's not productive to run again.

7.3. @install()

Install new software package(s) via the OS package manager.


@then @install "git build-essential unzip"

This resource only takes one argument, which is a space-delimited list of the packages to install. It also assumes sudo: true when calling @execute().

PROTIP: Almost all the common resources follow a conventional function signature where the first argument represents a list of things, and the second argument is an object containing optional key values. That list argument can be a string, or it can also be an array of strings, which may come in handy if your items naturally contain spaces that you don't intend to be parsed as delimiters.
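That convention can be sketched as a small normalizer (an assumption about the shape of the internals, not Borg's actual code):

```javascript
// Accept either a space-delimited string or an array of strings.
function normalizeNames(names) {
  return Array.isArray(names) ? names : names.trim().split(/\s+/);
}

console.log(normalizeNames('git build-essential unzip'));
// → [ 'git', 'build-essential', 'unzip' ]
console.log(normalizeNames(['name with spaces', 'other']));
// → [ 'name with spaces', 'other' ]
```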

7.4. @uninstall()

The opposite of @install().


@then @uninstall "whoopsie samba"

This resource only takes one argument, which is a space-delimited list of the packages to uninstall. It also assumes sudo: true when calling @execute().

7.5. @directory()

Make one or more directories on the remote server, and/or set their ownership and permissions.


@then @directory "/var/www/",
  recursive: true
  owner: 'www-data'
  group: 'www-data'
  mode: '0755'
  sudo: true


  • recursive: (Optional): Boolean indicating whether to create parent directories if they do not already exist.

  • Inherits all options from @chown(), except recursive:.

  • Inherits all options from @chmod(), except recursive:.

  • Inherits all options from @execute().

7.6. @chown()

Set ownership of a file or directory.


@then @chown "/etc/nginx",
  owner: 'nginx'
  group: 'nginx'
  recursive: true
  sudo: true


  • owner: (Required): String owner name.

  • group: (Optional): String group name.

  • recursive: (Optional): Boolean indicating whether to modify ownership of child directories and files, if they exist.

  • Inherits all options from @execute().

7.7. @chmod()

Set permission mode of a file or directory.


@then @chmod "/etc/nginx",
  mode: '0755'
  recursive: true
  sudo: true


  • mode: (Required): String MODE.

  • recursive: (Optional): Boolean indicating whether to modify ownership of child directories and files, if they exist.

  • Inherits all options from @execute().

7.8. @template()

Create or replace a text file on the remote machine with the result of a local template.


@then @template [__dirname, 'templates', 'default', 'logrotate'],
  to: "/etc/logrotate.d/example_org.conf"
  owner: 'root'
  group: 'root'
  mode: '0600'
  sudo: true
  variables:
    paths:
      '/var/log/example_org/*.log': # path key reconstructed for illustration
        weekly: true
        rotate: 4
        missingok: true
        copytruncate: true
        create: "555 #{@server.web_user} www-data"
        compress: true

The first argument in this example is a path.join() style location of the local template, while __dirname evaluates to the absolute path of the current script's directory. It's easiest to reference your template files relative to the script because they are usually distributed together, but you can also use @cwd.


  • to: (Required): String absolute path of the remote file.

  • variables: (Optional): Object containing a hierarchy of keys that become the context of this (or @) inside the template. (e.g., variables: pet_name: 'trudy' is accessed as @pet_name within the scope of the template tags.)

    PROTIP: The @server and @networks objects are always in template scope.

  • content: (Optional): String representing entire file contents. When specified, to: is omitted, and the path to the remote file is specified as the first argument to the resource, instead.

    PROTIP: Sometimes, instead of creating a separate template file, especially when the template is very short, mostly variables, or requires complex logic to create, it's easier to pass the contents inline as a string to this option.

    @then @template "/etc/logrotate.d/example_org.conf",
      content: """
        rotate 4
        create 555 #{@server.web_user} www-data
        """
      owner: 'root'
      group: 'root'
      mode: '0600'
      sudo: true
    PROTIP: You can @decrypt() part or all of content: to secure sensitive info.

  • Inherits all options (transitively) from @upload().

Template Markup:

The template markup is similar to Embedded Ruby (ERB), adjusted for CoffeeScript. For example, the companion template for the variables: passed in the first example given above could be:

<% for key, o of @paths: %>
<%=key%> {
<% for k, v of o: %>
  <%=k%><%=if v is true then '' else ' '+v%>
<% end %>
}
<% end %>

...and it would have produced the same string as passed to the second example above's content: option. Have a look at the parser implementation for details. You can also find examples while perusing various third-party resource templates.

Template files are located within the scripts/*/templates/default/*.coffee path nearest to your script. All template files should have a .coffee extension, even though the suffix isn't required in the first argument.

PROTIP: Borg includes Sugar.js globally for both templates and scripts.
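For illustration, assuming the variables: hierarchy nests the logrotate options under a single path key (e.g., variables: paths: '/var/log/example_org/*.log': …, with the path and @server.web_user value of 'deploy' both hypothetical), the template above would render roughly this logrotate stanza:

```
/var/log/example_org/*.log {
  weekly
  rotate 4
  missingok
  copytruncate
  create 555 deploy www-data
  compress
}
```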

7.9. @upload()

Upload a local file to the remote server. Overwrites existing files unless their checksums match.


for file in ['', '', 'godaddy_chain.crt']
  @then @upload [ __dirname, 'files', 'default', file ],
    to: '/etc/ssl/'+file
    decrypt: not file.match /godaddy/
    owner: 'root'
    group: 'root'
    mode: '0400'
    sudo: true

The first argument is the local file to upload, provided similarly to @template().


  • to: (Required): String absolute path of the remote file.

  • decrypt: (Optional): Boolean indicating whether to @decrypt() the local file prior to transmission. (e.g., if borg encrypt was used to secure the file on disk)

  • Inherits all options from @chown(), except recursive:.

  • Inherits all options from @chmod(), except recursive:.

  • Inherits all options from @execute().

7.10. @download()

Cause the remote server to download a file from the Internet.


@then @download '',
  checksum: '29bb08abfc3d392b2f0c3e7f48ec46dd09ab1023f9a5575fc2a93546f4ca5145'
  to: '/tmp/redis.tar.gz'
  mode: '0400'

The first argument is a string representing the remote file url to download.


  • to: (Required): String absolute path of the remote file.

  • checksum: (Optional): String holding the sha256sum which must match.
    If specified, a non-match will result in @die().

  • Inherits all options from @chown(), except recursive:.

  • Inherits all options from @chmod(), except recursive:.

  • Inherits all options from @execute().
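The checksum: behavior can be sketched in plain shell; here a local file stands in for the downloaded one, and a mismatch would abort just as @download would @die():

```shell
# Sketch of @download's verify step, using a local file instead of a real URL.
file=/tmp/borg_download_demo.txt
printf 'hello\n' > "$file"
expected=5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03  # sha256 of "hello\n"
actual=$(sha256sum "$file" | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "checksum ok"
else
  echo "checksum mismatch" >&2; exit 1  # Borg would @die() here
fi
```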

7.11. @link()

Create a symlink on the remote host.


current_dir = '/var/www'
# symlink logs for convenience
@then @link "#{current_dir}/logs",
  target: '/var/log/'
  sudo: true

The first argument is a string representing the real file or directory which will be linked.


  • target: (Required): String path at which the link will be created.

  • Inherits all options from @execute().
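In shell terms, @link amounts to an ln -s from the real path to the link location; a sketch under /tmp (both paths are stand-ins for those in the example above):

```shell
# Sketch of @link: the first argument is the real path, target: is where the link goes.
real=/tmp/borg_link_demo/var/www/logs
link=/tmp/borg_link_demo/var/log/www-logs
mkdir -p "$real" "$(dirname "$link")"
ln -sfn "$real" "$link"
readlink "$link"
```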

7.12. @append_line_to_file()

Append a new line to an existing file, but only if no line matching the RegExp already exists.


@then @append_line_to_file '/etc/apache2/apache2.conf',
  unless_find: '^ServerName' # avoids commented lines
  append: 'ServerName'

The first argument is a string representing the absolute file path on the remote host.

This resource is handy when you're not confident you should replace the entire remote file (e.g., an Apache configuration template would vary significantly by Apache release, OS distro, and even distro version).


  • unless_find: (Required): String regular expression formatted for grep.

  • append: (Required): String to append.

  • Inherits all options from @execute().
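The guard-then-append idiom is essentially a grep check before an append; a sketch against a scratch file ('ServerName localhost' is a hypothetical stand-in for the append: value):

```shell
# Sketch of @append_line_to_file's idempotent append.
conf=/tmp/borg_apache_demo.conf
printf '# stub apache config\n' > "$conf"
append_line() {
  # append only when no line matches unless_find:
  grep -q '^ServerName' "$conf" || echo 'ServerName localhost' >> "$conf"
}
append_line
append_line   # second call is a no-op, so the line appears exactly once
grep -c '^ServerName' "$conf"
```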

7.13. @replace_line_in_file()

Replace a line in an existing file, only if and where the first matching RegExp is found.


@then @replace_line_in_file '/etc/redis/redis.conf',
  find: '^bind [\w:.]+$'
  replace: 'bind'

The first argument is a string representing the absolute file path on the remote host.

This resource is handy when you don't feel like templating the entire remote file for a simple change, or when doing nothing is acceptable if no match is found.


  • find: (Required): String regular expression formatted for sed.

  • replace: (Required): String serving as the replacement.

  • Inherits all options from @execute().
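Likewise, find:/replace: maps naturally onto sed; a sketch using GNU sed against a scratch redis.conf ('bind 0.0.0.0' is a hypothetical stand-in for the replace: value):

```shell
# Sketch of @replace_line_in_file: replace only the first line matching find:.
conf=/tmp/borg_redis_demo.conf
printf 'bind 127.0.0.1\nport 6379\n' > "$conf"
# GNU sed: the 0,/pattern/ address limits the substitution to the first match
sed -i '0,/^bind [0-9a-zA-Z_:.]\+$/s//bind 0.0.0.0/' "$conf"
cat "$conf"
```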

7.14. @user()

Create a new user, if the username doesn't exist already.


@then @user 'regal',
  comment:    'Reggie Almus'
  password:   '$1$UPmdzNfV$k6U33XIPlWuE1z1cPJ/QQ/'
  group_name: 'developers'
  groups:   [ 'regal', 'sudo' ]
  ssh_keys: [ 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCt2pTY9+k/PwmGuEwmXOQMrq/fnHHJ+LIexmB172hlf1ytMaXPy4lcNPFX3j7Q5lI8z+L3SFl66Lcakc6i/BIQR8Jr7Vz2UmtaeF21sQKHS1Bw5he4l9F/EkikgIetVGU8+X7xaBkV4+2v6ELE1UMVot0YI7DurVj3aLRpsCYyihj9ju/J6Few4ffxP3ef4FsM89Tsnj0SYi8lnjEfKBbLm5ydu9oJH6vG6AHGWjVstznrMwvzZiKiX8gOJmZSCLNL1SAEYrnuGx3rWa5BzdWuLcvrbhJk2Gge4T4iUItt9VsqJ/vFbV95JwYjLLr6nTC3i7Exd7SOn+GKYJVldY/H' ]
  sudo:        true

The first argument is a string representing the new username.


  • password: (Required): String password, as encrypted by the command:
    openssl passwd -1 "theplaintextpassword"
  • comment: (Optional): String usually used to indicate full name in proper case.

  • group_name: (Optional): String representing the user's default/primary group.

  • groups: (Optional): Array of strings representing the user's additional groups.

  • ssh_keys: (Optional): Array of strings, each containing the typical one-line contents of any OpenSSH public key file(s) to be installed in the user's ~/.ssh/ directory.

  • Inherits all options from @execute().
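The encrypted password: value can be produced with openssl, as noted above; a sketch (the salt matches the example's hash, while the plaintext is a stand-in):

```shell
# Generate an MD5-crypt hash suitable for the password: option.
# -salt pins the salt so the output is reproducible; omit it for a random salt.
hash=$(openssl passwd -1 -salt UPmdzNfV 'theplaintextpassword')
echo "$hash"
```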

7.15. @group()

Create a new group, if the group name doesn't exist already.


@then @group 'developers', sudo: true

The first argument is a string representing the new group name.


7.16. @deploy()

Download a version control repository within a Capistrano-style directory structure.


@then @deploy 'borg-docs',
  deploy_to: '/srv/borg-docs-mirror'
  git:
    repo: ''
    branch: 'master'
    deployKey: @decrypt 'NOuYeXFv6YS022wwlcUps6WX31QZeLmHTb72hNdj6XnKPpeSaxDm4ec+Mi6Sl2C4MqHuWIyxS5DMydOd118InDyhbmkv7L1Xeon719QsakI6NKbWm1jLahHwX/RfWgVWwVyX8ARVEqjQOa3rjST/LUpnNWukbjXvOxRV6tbwDhrZeKT/W0vCxBeifxT6zVEGDkE9hTEt8kb1l3eFjJFirVvdD1RBVJfqa/UL4OtndJMG9bV/7VVnaLwyRjH0EI6Z/Wbahf7VAtyn3LdtMcBQHmkYgZc8H8S56UrbdAa+GQYnvDSIC2kYEZBcEtrvkxFo3+Jx6G+zLRGfqgTg026cwJYOuGIMWCOxqS+h+pngRSYP6JB2zH5KTZ30QUI3srWQj08rQMpqY/0XJJKDV69bJL3/0oRP90qcYKuhxnTsAcG4CI3EbgT/it2DVzIpNPlOXDq+0OyDbWP9oZdefLw4+GloDD/By4aZMc1sI+ZsShay/aV6+LH85xAjNXehjg18Ik2mTyGyHvXUtltf/ELBsik5DJx1Iepz6BHEWdruuB9W9a/ycS9TnSkyGYKvyHFc9lusJEzNYFeSek1Ae0J7jPrMlojeyz/VFxvUZLpT5Ju/38XRE2t94bdQoYroNHISvHK4kNbWDNjy8fMa4zGvx/Yd5wjQlCfk4FJotkRPNkarazrFrPJ+dENeFJfRRa2B1KXbBtf/olVJPBGyiImjiwr2pVB/2jJpjEI6/zyp+wNDLSFHwXvwi54+vnQL/mBUTHZuTn9WS7sa+gjdV0LjcUSWsxahGIDpmc+WvAUPGUD89f6lQJMIx37Lp9Ij9kpl3SLrrBdJbrzjr0bgF82bq6gki4cBqNMswC1Ol/JzRmwg4jGr3wjF0kZV4/4U4Ok12qbOsyKhMOknDljbrBQb+6rZZC9CiB1np4g1BUd5WAq64Q5NpwprjThWipPOK5kxV4Rart1GyEkuOFKvnFX3jPLxHwHMGiwsaeI4qY9xK1F+TBoouP2IWi4Qv3asnWaUzo08GEKvtAJBmj8dREVnPvPwcBBgO9Q01y9nqaGdY34XJkdCdPy9iFOP4PJ6ol44U8FyeMc7gKPWIS1epLtPmmagT02Ii5AucZOMUVYth7+UC6q2lW2mEi3xIQklvxTmST/jpJ1ZILzPvK5ksq2P3nnFbahj4ylQSd8EZ8tIR7wA62cp8KlRSWAbU0Fc4IYEJCv3kvaupihPBFdW9pSQrAMAudb7OdTtWr2UnT5AjtfVNeKobRYYkNaY278Y16oR+PIRGsQmdFruugCEArk3MOGs6VqG4/Eb3yfzqOm2Ht63/SjWdpfu6NjrotL0H3yIg6oUNMijtmAlqjMmTJyzSEwlkX+9N+GkGpjAel+4O7ff06Cts7PqVBRKJTmv8b8JPTyh+JlqVZs7MwtGAwN2LoE+aygAXTeEexR+pNOUbJtGZm4nO+KSlxQn6J47yz+W3EGRHyEWBupnT+3lkQ9qka/hJIDzr37RvsP9haBLwtjId/dmnJRu1r58WBl/gfZgftjNGtlG2bhLzWSfbAv6cA7+nU02pozfQBIhXkKivkcDJD23yZ92cjbF31VHh1GKxv3rSQbXIBkywn5sv5jpoppQTDVoBADag+RAuVHGskjWyk/eezAdP4ndBixB98NDVKG64ICA4F7O7BM+91bcFdjDQKHS0OVXvERNKkJTBLkkDXNPnmxS4incVMAMCDyweEZQlR+FJkMsVCOQxXP4YADEnBDXe3Sx4XnGxRUKal/dLHJiQcxZ6XRgTJMbVKEFYrpVa9EqTVRv8wcxJNMOfZW1Ne6h3SQOljkcz2tLL2lDbqQEk/aGHGWQxQ7EZPiaFNEKhtz8uMiyBQ0gw9ihV8vKTebhbObXt07yZtAyGD580Vjx3syzSYx0FT77hWe4ViHmNxJSS3QMp//nJOZFC4yEgb178/hg47rS+u6lCn0VXE1ZMY00N6i
dMLu7BNruwBv7kIju7khJeeMsWkKYBm/CtOqRbZSHp+S10gdk0e/9ktqiZPE9k+3rNf+cf4w0DGnMRM/HfpEwUB/lXQfVV/pUqE/FqGe/gcFtRVmQ7Bkm+keiE8uihRpn2KP7SyJuW7uBrHXXDQ6rrfkr1bVBCy90sLFIzwQt/aPp3aowt0yls2Lj71yHrq8rUOOm8W7mWMIG5rtysDYPR5okr8Q+BXLnF+lO7Pskn4msBMenx64COR9v2kp9CZ/aQ66yIPIr4GS4zS3niKklUQUFYm6lDRq/mo1xFNwZJoU4tGfvkH5zeIJUSc+jWrKDwlNLIO5IYgFd3ADtbuvH22S9QF08FiDYg3m4BCx+oY6IHj8/6fmrgiCnPgxecEdhGFrBxQLVdcBdReqrupb/g8HhoeKeOl7XrWmQQDx74T2Q8TDOndAZ0udONsHECb1aHpDTGh7JFJOZKPLwOn8XtgxA+14rlyXNenOugE+gsCd/RS5k2zAil14f2TMrKcuctOCaXSVdut9YrTzMZk2i+M3OcAGvchblbGPbfOOzhJg4omT/4kKd/utSdIBVjGVgQvjjaD0cRGsNAn1sbOZBLhAIT+3Bfeympp5N/i6meCxaSBigWvB5Xey1+VyDZJPG8zXN5NPzlxva0+Bm8SJtP9LeJesO+ocBIqx3MHSCWGmeLjhFg/vS5qO191XFtulAOW9IvSKk7yy0Mk+adEsf915Hz2aGjm6ZFXjP0gJ6ZOkOugY+b2XzYt4KFqtzyCEPJEA5bP+pv/qZzeb7k4ViPsYf0u9YMkVRoc5q65Bfw8ubAr6PvFLX6ccYO6c9uIVHtL5qJRu8cmXvYute/dMPD1AFN7unzw=='
  keep_releases: 3
  owner: 'nodejs'
  group: 'developer'

The first argument should be an alias for the repo. It is reserved for use in a later version.


  • deploy_to: (Required): String representing remote path to the directory to download into.

  • git: (Required): Object containing this structure:
    • repo: (Required): String URI to the hosted Git repository.

    • branch: (Required): String branch, tag, or git ref.

    • deployKey: (Optional): The OpenSSH private key, if one is required to gain read access to the hosted repository.

      PROTIP: It is recommended to use @decrypt() to secure private keys, as their contents could grant access to everywhere the corresponding public key was installed, if leaked.
  • keep_releases: (Optional): Integer count of previous ./releases/*/ directories to keep before older ones are deleted to save disk space. Default is 3.

  • Inherits all options from @chown(), except recursive:.

  • Inherits all options from @chmod(), except recursive:.

  • Inherits all options from @execute().

Right now only Git via CLI is supported.
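The Capistrano-style structure @deploy maintains under deploy_to: can be sketched as a releases/ directory plus a current symlink (all paths here are hypothetical):

```shell
# Sketch of the Capistrano-style layout @deploy manages under deploy_to:.
base=/tmp/borg_deploy_demo
mkdir -p "$base/releases/20240101120000" "$base/shared"
# each deploy clones into a new timestamped release, then flips the symlink
ln -sfn "$base/releases/20240101120000" "$base/current"
# keep_releases: 3 would prune all but the newest 3 entries in releases/
ls "$base"
```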

ADVERTISEMENT: Accepting pull requests to support other Source Code Managers (SCMs).

7.17. @reboot()

Command the remote server to reboot itself.


@then @reboot wait: 3 # min

Useful when the remote Linux kernel was upgraded, or a hardware driver was installed that depends on a different kernel version, or if you want to test that your configuration survives a restart.


  • wait: (Optional): Minutes to wait for the server to finish rebooting before attempting to reconnect and resume the SSH session where the script left off. Default is 1.

    PROTIP: It's important to tweak this for slow servers, because Borg's @ssh client will only retry 3 times before giving up and calling @die() in the event the reboot takes longer than expected.

    NOTICE: Borg will always call @die() in the event it gets stuck or confused. This is a safety convention you should try to observe in your own scripts as well. During mass assimilation, you want failures to be obvious; it's hard to recover from what might otherwise be a hang or silently skipped commands. In most cases, it's much easier to embrace a disposable hardware strategy: destroy the remote machine and try again later.

8. Third-Party Resources

borg install
borg update

The documentation for resources can be found in their individual repos.

For a listing of available repos, see:

9. Contributing

Your contributions are welcome via

9.1. Issuing a Pull Request

  1. Fork and clone

  2. Edit to your heart's content

  3. Publish to your fork:

    git commit && git push

  4. Issue a pull request to the official repo, and we will review and approve or provide feedback.