What is Salt?
Salt is an event-driven infrastructure management tool. It sounds complex, but it is not. Salt resembles most of the configuration management tools we use every day to configure infrastructure, but there are key differences in how Salt is architected. Salt is unique in that it runs a high-speed ZeroMQ messaging bus between the Salt minions (in our case Arista switches) and a master, which is typically a Linux server. Salt can configure devices from data and templates written in formats such as YAML and Jinja, and it can target devices and run ad-hoc commands against multiple switches at once. The best feature of Salt, though, is events. All events are logged on the high-speed ZeroMQ messaging bus, so Salt can react to any event: a network switch being added, a new host coming up, a new BGP peering session and so on. This blog article simply scratches the surface of what is possible with Salt and EOS.
Salt High Speed Network Bus
Salt is a publish-subscribe system. In our case the switches need an agent installed. The agent simply runs as a Linux process on each of the Arista switches and communicates directly with the master. The instructions to install the agent can be found here. In those instructions, the agent is launched and uses the SALT_MASTER environment variable to find the Salt master. Another nice benefit of Salt is that it does not need a service account.
We will walk through an example of how the minion/master exchange works. Within that exchange, a secure key is exchanged between the two components, so there is no need for a username and password for the communication between them.
Pillars – Pillars are Salt's interface for data that the master assigns to matching minions. For example, a pillar might describe the configuration all leaf switches need, such as VLANs and VARP. The same pillar structure might cover spine switches, which would not include VLANs but would carry BGP configuration, while every device receives Syslog/SNMP settings.
Grains – Grains are details about a device. Since we have an agent on each switch, the agent can tell the master about the operating system on the switch, its interfaces and other system information.
Roots – Roots are a simple file structure that tells Salt where files such as pillars, states and reactors live. Salt, like other configuration management systems, has a best-practice file structure, which is defined in the master configuration.
Reactors – Salt reactors are exactly what they sound like. A reactor listens for messages on the ZeroMQ bus and reacts to specific events. We will give a few reactor examples in this post.
States – A state is a preconfigured operation or task supported by Salt. A state file, like a pillar file, uses the .sls extension.
Templates – Templates are configuration files which can be used multiple times. In our case templates will be written in Jinja2.
Salt Lab on Vagrant
The next step is to clone the repo with the following command.
git clone https://github.com/burnyd/Arista-Vagrant-Saltstack
Everything within this article was tested on the following software versions.
Management OS: Ubuntu 16.04 Xenial
Salt master: 2017.7.2 Nitrogen
Salt minions: 2017.7.1 Nitrogen
Note: In this case, Vagrant had to be run as the root user, so you may have to execute sudo su before proceeding.
vagrant up mgt1 leaf1a leaf1b spine1
This may take a while, so it is a good time to talk about the Salt authentication that needs to take place. salt-key is the system responsible for managing the public keys used for authentication. As we have previously mentioned, Salt does not need a service account or a username to manage devices; it secures each transaction with keys.
By now, hopefully, the Vagrant lab is up and running. If we vagrant ssh mgt1 we are on the management host. Make sure to sudo su to root in order to run salt commands.
Accepting the Salt Keys from the Minions
salt-key -L will show all of the pending requests for minions wanting to join the salt master.
We can see that leaf1a, leaf1b and spine1 have keys pending on the master. Once we accept every key with the salt-key -A command, we can control our minions.
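The key-management steps above look like this on the master; the exact minion list depends on your lab:

```shell
# List all keys known to the master; new minions show up under "Unaccepted Keys"
salt-key -L

# Accept all pending keys (-y skips the confirmation prompt)
salt-key -A -y
```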
The keys have now been accepted. We can now run a quick test with Salt's test module to make sure the minions are functioning properly, using the salt '*' test.ping command.
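As a sketch, the command and the kind of per-minion output to expect when the minions are up:

```shell
salt '*' test.ping
# Each responding minion returns the boolean True, e.g.:
# leaf1a:
#     True
```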
We can see the response was the boolean True so there is connectivity.
Now that the minions are under the control of the master, let's see some examples of the information we can observe using Salt's built-in grains.
Let's find the operating system of leaf1a using grains.
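One way to do this, assuming the minion exposes the usual os grain:

```shell
# Query a single grain on a single minion
salt 'leaf1a' grains.get os
```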
Let's find all the IP addresses of any minion whose name starts with 'leaf'.
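A sketch using glob targeting; the grain name is an assumption (ipv4 on most minions):

```shell
# Glob-target every minion whose id starts with "leaf"
salt 'leaf*' grains.get ipv4
```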
Grains are extremely powerful. As discussed previously, grains are a built-in feature: the minion passes them to the master. To list all of the grains available on each device, run salt '*' grains.ls for the grain names or salt '*' grains.items for the names and values.
Within Salt we can target devices. Targeting is another way of scoping ad-hoc commands: we can run commands against a group of devices or a list of devices. Keep in mind we can also run ad-hoc commands on any device that matches a specific grain, so, as in our previous example, we can run a command only on minions whose operating system is EOS. All of the examples below use the net module.
Display the arp table on all leaf switches.
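A sketch of that command; net.arp comes from Salt's NAPALM network module:

```shell
# Show the ARP table on every leaf minion
salt 'leaf*' net.arp
```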
Find the version of EOS on spine1.
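One way to get this, assuming the NAPALM net module is loaded; net.facts returns device facts including an os_version field:

```shell
# Device facts for spine1, including the running EOS version
salt 'spine1' net.facts
```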
Any minion that has EOS run a test ping with results to 22.214.171.124.
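A sketch of that run; the -G flag targets by grain, and the destination address here is simply the one from the article's example:

```shell
# Run a ping from every minion whose os grain is "eos"
salt -G 'os:eos' net.ping 22.214.171.124
```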
Configuring a Device by Loading Config and Rendering Templates
We can use the very same net module that we used for targeting to configure devices, using the load_config function for one-time configuration changes. In this short example we add VLAN 500 to only the leaf switches.
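A sketch of what that call might look like; the text= argument carries raw EOS configuration, and commit behavior depends on the platform:

```shell
# Push a one-off config snippet to every leaf switch
salt 'leaf*' net.load_config text='vlan 500'
```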
Salt can also render templates. In the next example we render a Jinja2 template, passing in VLAN 10.
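A hedged sketch with net.load_template; the template path and variable name here are assumptions based on the lab layout described below, not the repo's exact files:

```shell
# Render templates/vlans.jinja with vlans=[10] and apply it to the leaf switches
salt 'leaf*' net.load_template salt://templates/vlans.jinja vlans='[10]'
```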
A state, as referenced before, is exactly what it sounds like: a file describing a desired state of a minion. These files are typically YAML and located in the Salt file structure under the /state/ directory. Here is an example of an external state file which also uses a Jinja2 template to render the configuration.
This will render the following file
To walk through this file, which is named leafs.vlans:

Add vlans example # An arbitrary name used to describe the state.
netconfig.managed # Uses the netconfig state module's managed function to pass in configuration.
template_name # Points at the vlans.jinja file under /srv/salt/templates.
vlans # Passes in a Python list for the vlans.jinja template to iterate over.
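Putting the walkthrough together, the state file and its template might look like the following; the exact names and paths are reconstructions, not the repo's verbatim files:

```yaml
# leafs/vlans.sls -- hypothetical reconstruction
Add vlans example:
  netconfig.managed:
    - template_name: salt://templates/vlans.jinja
    - vlans:
      - 300
      - 400
```

```jinja
{# templates/vlans.jinja -- iterates the vlans list passed by the state #}
{% for vlan in vlans %}
vlan {{ vlan }}
{% endfor %}
```

Extra keys on netconfig.managed, such as vlans above, are handed to the template as variables.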
We have mentioned that Salt uses a ZeroMQ bus between the minions and the master. Every Salt event is recorded on that bus and sent to the master, so everything we have just done, targeting devices, test pings and configuration changes, shows up as events on the bus. If you are following along with the Vagrant guide, please destroy the environment, as we are going to start over to show how reactors work. Issue the vagrant destroy -f command to tear down all of the VMs.
When all the VMs are powered down, bring up only the mgt1 VM.
vagrant up mgt1
vagrant ssh mgt1
service salt-master stop
We need to stop the master service because we are going to watch a debug trace of events as they happen, as well as review our very first reactor.
Before looking at the reactor and how it works, let's look at where a reactor lives. Reactors are declared in the master configuration file, /etc/salt/master, and look like the following.
We can see that under the reactor list there are two entries. We will work with the salt/minion/leaf*/start reactor first, then follow up with slackgenerate for the Slack notifications.
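A hedged reconstruction of that reactor section, since the original snippet is not reproduced here; the second event tag and the .sls paths are assumptions:

```yaml
# /etc/salt/master -- reactor section (hypothetical reconstruction)
reactor:
  - 'salt/minion/leaf*/start':          # fires when a leaf minion starts
    - /srv/salt/reactors/leafs_vlans.sls
  - 'salt/auth':                        # fires on key/auth events
    - /srv/salt/reactors/slackgenerate.sls
```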
Since the salt-master is stopped, let's start it with debugging enabled. On the master, issue the following command.
salt-master -l debug
In a separate window, bring up both leaf1a and leaf1b.
vagrant up leaf1a leaf1b
If we watch the debug on mgt1 we will see the following:
We can see the Salt minion leaf1a attempt the key exchange with the master. Let's go ahead and accept the key for the minion on the master and observe what happens.
The reactor sees this tag on the ZeroMQ bus and reacts to it. In this case it uses the leafs vlans template we saw earlier, so any minion whose name starts with leaf will receive VLANs 300 and 400.
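A minimal sketch of what that reactor .sls could contain, assuming it applies the leafs.vlans state to the minion that just started; the file name is hypothetical:

```yaml
# reactors/leafs_vlans.sls -- hypothetical reconstruction
apply_leaf_vlans:
  local.state.apply:
    - tgt: {{ data['id'] }}     # the minion id pulled from the start event
    - arg:
      - leafs.vlans
```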
Let's look at the debug message on the ZeroMQ bus.
Slack Notifications and Reactors
Now that we have a firm grasp on how reactors work: every time we bring up a device, make a change, or do anything else that is caught on the Salt ZeroMQ bus, we can send out a Slack notification if we would like. Salt integrates with many third parties, Slack being one of them, so this is simple to do with reactors. We saw the second entry for it in the previous /etc/salt/master configuration.
Let's take a look at the slackgenerate.sls state file that runs once this reactor is triggered.
Looking at the state file, it simply sends a notification to the #ops channel, under the username burnyd, once the reactor is triggered. The message pulls the id key/value pair out of the data dictionary in the event. The Slack channel looks as follows after bringing up a device.
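Since the file itself is not reproduced here, a hedged sketch built from that description, using Salt's slack execution module; the target minion and API token are placeholders:

```yaml
# reactors/slackgenerate.sls -- hypothetical reconstruction
notify_slack:
  local.slack.post_message:
    - tgt: 'mgt1'                 # placeholder: any minion holding the Slack API key
    - kwarg:
        channel: '#ops'
        from_name: 'burnyd'
        message: "Minion {{ data['id'] }} generated an event"
        api_key: 'xoxp-XXXX'      # placeholder token
```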
Reactors are very interesting, and this only scratches the surface of what is possible with them. Reactors can automate far more than the deployment of a device: combined with a good logging system or broker, a reactor can automate a network based on events. For example, if a BGP session dropped because of an admin-down situation, Salt would be able to bring the device back online. The ZeroMQ message bus is extremely powerful.