Paul Venezia
Senior Contributing Editor

Review: Salt keeps server automation simple

Oct 17, 2013 | 12 mins

Salt brings simplicity, flexibility, and high scalability to Linux and Unix server infrastructure management -- it does Windows too

Like Puppet, Chef, and Ansible, Salt is an open source server management and automation solution with commercial, officially supported options. Based on command-line-driven server and client services and utilities, Salt is primarily focused on Linux and Unix server management, though it offers significant Windows management capabilities as well. While Salt may look simple on its face, it’s surprisingly powerful and extensible, and it has been designed to handle extremely large numbers of clients.

Salt uses a push method of communication with clients by default, though there’s also a means to use SSH rather than locally installed clients. Using the default push method, the clients don’t actively check in with a master server; rather, the master server reaches out to control or modify each client based on commands issued manually or through scheduling. But again, Salt can also operate in the other direction, with clients querying the master for updates. Salt functions asynchronously, and as such, it’s very fast. It also incorporates an asynchronous file server for file deployments.


Salt can be distilled into a few simple core concepts. You can issue raw commands to clients or groups of clients, or you can create YAML configuration templates called “states” to control the behavior of those clients. Further, you can extend and abstract functions and parameters through the use of extended templates called “pillars.” All of this is combined with the capability to use a few different scripting interpreters to extend functionality through the use of Python or PyDSL (Python Domain Specific Language).

In addition to the free open source version, Salt is available in supported Professional and Enterprise versions (which are also completely open source) from SaltStack. SaltStack Enterprise costs $150 per node per year, with volume discounts and site licenses available.

Installing Salt

Salt is installed quite easily on Linux through Git or through the package manager for your distribution. You may have to add a repository, but after that, installing the master is generally as simple as issuing a yum install salt-master or apt-get install salt-master command. You can install the clients, called minions, in the same way.

On each minion, you can edit a hosts file to direct the minion to the master, or simply configure a DNS A record for Salt to point to the master’s IP address. Generally, this is the simplest way to start.
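If editing hosts files or DNS isn't convenient, the minion can also be pointed at the master explicitly in its configuration file. A minimal sketch, with a placeholder address (substitute your master's IP or hostname):

```yaml
# /etc/salt/minion -- point this minion at its master
master: 192.168.1.10

# Optional: set an explicit minion ID (defaults to the hostname)
id: saltubuntu.iwlabs.net
```

Restart the salt-minion service after editing so the change takes effect.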

InfoWorld Scorecard: SaltStack Enterprise 0.17.0
Value (10%): 9.0
Scalability (20%): 9.0
Management (20%): 9.0
Availability (20%): 9.0
Interoperability (20%): 8.0
Performance (10%): 9.0
Overall Score (100%): 8.8

Once you have a master and a selection of minions installed and running, you’ll find that your master already has a key request in place for each minion that has tried to contact the master. You can view all of the keys on the master by issuing a salt-key -L command.

It’s interesting to note that you may wind up with multiple key requests per minion if there are multiple reverse DNS entries for that minion’s IP address. In practice, that shouldn’t happen, but you can delete the extra requests later.

On the master, you can accept all pending keys by issuing a salt-key -A command, which will allow the master and the minions to communicate. You can then test connectivity with the general salt command:

[root@saltmaster salt]# salt "*" test.ping
saltcentos.iwlabs.net:
    True
saltubuntu.iwlabs.net:
    True

This will display all known Salt minions and their status. From there, you can begin issuing commands and building up “states” and “pillars.”

Controlling Salt minions

On the master, you can issue specific commands to single minions or groups of minions with the same salt command:

[root@saltmaster salt]# salt "*" pkg.install bind
saltubuntu.iwlabs.net:
    ----------
saltcentos.iwlabs.net:
    ----------
    bind:
        ----------
        new:
            9.8.2-0.17.rc1.el6_4.6
        old:
    bind-libs:
        ----------
        new:
            9.8.2-0.17.rc1.el6_4.6
        old:
            9.8.2-0.17.rc1.el6_4.4
    bind-utils:
        ----------
        new:
            9.8.2-0.17.rc1.el6_4.6
        old:
            9.8.2-0.17.rc1.el6_4.4

This shows a successful installation of the bind package on the minion named saltcentos.iwlabs.net. Note, however, that there was no result from the minion named saltubuntu. This is because the equivalent package is named “bind9” on Ubuntu, so no package called “bind” could be installed. This is where states, pillars, and some scripting come in handy, allowing the right packages to be installed regardless of naming differences among distributions. (More on this later.)

You can also poll minions for information such as network interface configuration:

[root@saltmaster salt]# salt "*" network.interfaces
saltubuntu.iwlabs.net:
    ----------
    eth0:
        ----------
        hwaddr:
            00:50:56:96:7f:07
        inet:
            ----------
            - address:
                172.16.32.70
            - broadcast:
                172.16.32.255
            - label:
                eth0
            - netmask:
                255.255.255.0
        inet6:
            ----------
            - address:
                fe80::250:56ff:fe96:7f07
            - prefixlen:
                64
        up:
            True
    lo:
        ----------
        hwaddr:
            00:00:00:00:00:00
        inet:
            ----------
            - address:
                127.0.0.1
            - broadcast:
                None
            - label:
                lo
            - netmask:
                255.0.0.0
        inet6:
            ----------
            - address:
                ::1
            - prefixlen:
                128
        up:
            True

This truncated example shows the network interface information for the minion, including IP addressing, MAC addressing, and interface status.

Grains, states, and pillars

You can segment minions into logical groups using “grains,” which reference commonalities among the minions in order to perform required tasks. You could run this command to address only CentOS hosts:

[root@saltmaster salt]# salt -G 'os:CentOS' network.interfaces

Alternatively, you can use:

[root@saltmaster salt]# salt -G 'kernelrelease:2.6.32-358.11.1.el6.x86_64' network.interfaces

The grains.items directive will pull an inventory of one or more minions:

[root@saltmaster salt]# salt "*" grains.items

You can also assign grains to specific minions, classifying them as Web servers, for example, or by data center location or any other desired method. Grains can be specified within the minion configuration on each host, either in the main configuration file or in a static file that can be managed via Salt. You can also write dynamic grains that will return data based on simple scripts that inspect the minion for information such as the version of a custom application.
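Custom grains of this sort are just key-value data. A minimal sketch of role grains in the minion configuration; the “roles” and “datacenter” keys and their values here are arbitrary labels of our own choosing, not built-in grains:

```yaml
# /etc/salt/minion (or a separate /etc/salt/grains file)
grains:
  roles:
    - webserver
  datacenter: east
```

After restarting the minion, those hosts can be targeted just like the built-in grains, such as with salt -G 'roles:webserver' test.ping.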

To accomplish tasks more complex than command-line directives, you need to create states, which are YAML templates that tell Salt how to configure minions.

Here’s an example for ntpd:

ntp:
  pkg:
    - installed
  service:
    - name: ntpd
    - running
    - require:
      - pkg: ntp
/etc/ntp.conf:
  file.managed:
    - source: salt://ntpd/ntp.conf
    - mode: 644
    - user: root
    - group: root

This state will inspect the client for the presence of the ntp package and install it if it’s not there. It will also place the referenced ntp.conf file in the right location, then start the service.

In Salt, everything is a file, more or less. This means that the ntp.conf file referenced in this state can live in a directory hierarchy that lends itself to easy organization. For instance, this state file (which would be called init.sls) would be placed in a directory called ntpd under the default Salt state directory, /srv/salt, and the ntp.conf file it references would be stored in the same directory. Our directory structure looks like so:

/srv/salt/ntpd/init.sls
/srv/salt/ntpd/ntp.conf

We then run the salt command to reference that directory:

[root@saltmaster ntpd]# salt "saltcentos.iwlabs.net" state.sls ntpd
saltcentos.iwlabs.net:
----------
    State: - file
    Name:      /etc/ntp.conf
    Function:  managed
        Result:    True
        Comment:   File /etc/ntp.conf updated
        Changes:   diff: New file
----------
    State: - pkg
    Name:      ntp
    Function:  installed
        Result:    True
        Comment:   The following packages were installed/updated: ntp.
        Changes:   ntp: { new : 4.2.4p8-3.el6.centos old : }
----------
    State: - service
    Name:      ntpd
    Function:  running
        Result:    True
        Comment:   Started Service ntpd
        Changes:   ntpd: True

We can see that Salt has installed the ntp package, added the custom configuration file, and started the service.

We can get a little deeper in state files to accommodate issues like the aforementioned package naming disparity. We can use simple if/then constructs to alter the package name depending on the OS running on the minion:

{% if grains['os'] == 'RedHat' %}
    - name: httpd
{% endif %}
{% if grains['os'] == 'Ubuntu' %}
    - name: apache2
{% endif %}

This allows the same state file to address multiple distributions. We can also use for loops in the Jinja-based YAML state files. Alternatively, state files can be written in Python or PyDSL for extended programmatic options.
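As a sketch of that for-loop support, a state file could iterate over a list of packages to install; the package names here are purely illustrative:

```yaml
{% for pkg in ['ntp', 'rsync', 'tcpdump'] %}
{{ pkg }}:
  pkg:
    - installed
{% endfor %}
```

At render time, the loop expands into one ordinary state stanza per package, so the minion sees plain YAML.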

Here’s a Python example that will make sure that ntp is installed and running:

#!py

def run():
    return {'include': ['python'],
            'ntp': {'pkg': ['installed'],
                    'service': [{'name': 'ntpd'},
                                'running',
                                {'require': [{'pkg': 'ntp'}]}]}}

You can also pass variables to files in order to dynamically alter the contents when the file is placed on the minion. This can be used for specific alterations in a configuration file for a particular set of clients, for example.
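A hedged sketch of how that might look, using Jinja templating on a managed file; the file path, source, and context variable are hypothetical examples, not taken from this review:

```yaml
/etc/motd:
  file.managed:
    - source: salt://motd/motd.jinja
    - template: jinja
    - context:
        environment: production
```

Inside motd.jinja, a {{ environment }} placeholder would then be expanded to "production" when the file lands on the minion.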

The upshot of all this is that you can do simple things easily with the default YAML syntax and leverage Python and PyDSL to accomplish more complex tasks.

Next up are pillars, which are sets of data that can be made available to states while they’re running. This allows you to collect common data points in a central file hierarchy and reference them elsewhere. For instance, you might have a list of user names and user IDs, or a simple script that determines package names based on the OS in use, or any number of other parameters that need to be globally available.

Pillars are maintained similarly to states in that they occupy their own directory structure (/srv/pillar by default) and are constructed like state files. As an example, if we had a pillar that contained the variable ntpconf: salt://ntpd/ntp.conf, we could use that variable in the ntp example by referencing the pillar:

file.managed:
    - source: {{ pillar['ntpconf'] }}
...
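The pillar side of that example might be sketched as follows, assuming the default pillar root and a top file that assigns the pillar data to all minions:

```yaml
# /srv/pillar/top.sls -- assign the ntp pillar data to all minions
base:
  '*':
    - ntp

# /srv/pillar/ntp.sls -- define the variable referenced by the state
ntpconf: salt://ntpd/ntp.conf
```

Because the top file controls which minions receive which pillar data, the same state can pull different values on different groups of hosts.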

Cloud, Windows, and Web-based management

In addition to managing local infrastructure, Salt can provision and manage server instances on clouds such as Amazon and Rackspace through an extension called Salt Cloud. Salt Cloud was a bit of a challenge to configure due to inconsistencies with the distributed versions of Salt in various package repositories. Cloning the Salt Cloud Git repo and installing Salt Cloud with pip produced a functional Salt Cloud instance, but installing the RPM from the same repository as the rest of Salt proved fruitless. This appears to be a problem in upstream packaging, not in Salt itself.

With Salt Cloud, you can configure predefined profiles for cloud server instances and spin them up with a single command. You can also rename, modify, tag, and destroy instances directly from Salt Cloud.
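A profile is another YAML stanza. Here is a hedged sketch of what an EC2-style profile might look like; the provider name, image ID, and size are placeholders, not values from this review:

```yaml
# /etc/salt/cloud.profiles -- hypothetical EC2 profile
base_ec2:
  provider: my-ec2-config
  image: ami-00000000
  size: t1.micro
  ssh_username: ec2-user
```

A new instance could then be launched with a command along the lines of salt-cloud -p base_ec2 newminion1, which provisions the VM and bootstraps it as a Salt minion.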

Salt can also manage Windows instances, though the toolset is not quite as complete as for Unix systems. The major parts are there, such as the ability to install software, manage network settings, deploy files, and manage the registry. Grains are visible for Windows as well. In order to facilitate software deployments, Salt has developed a repository for Windows packages that prompts Windows minions to download and install packages from Salt’s asynchronous file server.

A Python-based Web UI for Salt, called Halite, can be installed and run on the same server as the Salt master. As with many elements of Salt, you’re best off using the development version of Halite for now, as there are inconsistencies between versions that will cause unexpected behavior. Using the development versions of Salt from the Git repo for both Salt and Halite brought up a functional Web interface.

Configuring the Web interface is relatively straightforward, requiring changes to be made to the Salt master config file. This is also where the configuration for external authentication must be completed in order to allow users to log into Halite. By default this uses the local PAM authentication method on the server itself.

Halite is a new, bare-bones Web UI that offers views of running jobs, minion status, and event logging, and it allows you to execute commands on minions. While it does work, it is not as complete as it could be, and it suffers from the bugs you would expect from new code. In practice, most Salt admins will use Halite as an overview tool for now, and look forward to a greater feature set in the future.

Deep Salt

Beyond the simple aspects of Salt lie a multitude of heavy-duty features. For instance, minions can attach to multiple masters, allowing for redundancy. In addition, masters can themselves be minions to upstream masters, which allows for high scalability. You can issue commands on an upstream master that will flow down to the masters below, then on to the minions controlled by those masters.
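On the minion side, attaching to multiple masters is a small configuration change; a sketch with placeholder hostnames:

```yaml
# /etc/salt/minion -- list more than one master for redundancy
master:
  - salt1.example.com
  - salt2.example.com
```

Each listed master must have accepted the minion's key for the redundancy to work.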

Further, the peer system allows minions to ask questions of a Salt master that may require data collection from other servers. For instance, a minion may need to populate data in a configuration derived from a database residing on another server. This system allows the minion to query the master to retrieve that data, which is then used for the configuration. Minions can send out flags when they are back in operation after some manual changes, and trigger events such as being added to a list of available database servers or being re-added to a load-balancer configuration.
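The peer system is switched on in the master configuration by whitelisting which minions may call which functions on their peers; a deliberately restrictive sketch, with a placeholder hostname:

```yaml
# /etc/salt/master -- allow one minion to run network.interfaces via peers
peer:
  webserver.example.com:
    - network.interfaces
```

Keeping this whitelist narrow is prudent, since the peer interface lets one compromised minion query others.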

Overall, Salt is a capable infrastructure management tool that is not only developing rapidly, but is already quite robust and extensible. Salt is designed to be highly scalable, the CLI tools are fast and functional, and the layout is solid, though the Web UI is not yet up to snuff compared with the competition. That said, Salt is a well-engineered tool that will fit very well into the workflow of systems administrators.

This article, “Review: Salt keeps server automation simple,” was originally published at InfoWorld.com.