On Exoscale, you can attach a private network to your VMs, but it comes unmanaged. To ease communication between your instances on this private link, let’s set up a DHCP server answering requests on the private interface of every VM using this second network.

How to create a DHCP server with Ansible

We are going to use Ansible to ease the process of deploying one DHCP server and 3 sample virtual machines to validate the setup.

If you’re not familiar with Ansible, it’s an open source automation tool for managing your infrastructure as code. The only language you’ll need is YAML, which is used to write the configuration files.

Setup

First you have to clone the repository:

$ git clone https://github.com/marcaurele/ansible-exoscale-privnet.git
$ cd ansible-exoscale-privnet

Create a new virtual environment for Python, preferably with Python 2.7 since Ansible is not fully compatible with Python 3.x:

$ virtualenv -p <location_of_python_2.7> venv
# Activate the virtual environment
$ . ./venv/bin/activate

Install the requirements for the playbook (ansible (>=2.4), cs, sshpubkeys):

$ pip install -r requirements.txt

In your shell, you need to export these 3 variables, as per the cs documentation:

$ export CLOUDSTACK_ENDPOINT=https://api.exoscale.ch/compute
$ export CLOUDSTACK_KEY=<your-api-key>
$ export CLOUDSTACK_SECRET=<your-api-secret-key>

Or if you’re already using a .cloudstack.ini file, you only need to export:

$ export CLOUDSTACK_REGION=<section_name>
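For reference, a region section in your ~/.cloudstack.ini typically looks like this (the section name and credentials are placeholders):

[exoscale]
endpoint = https://api.exoscale.ch/compute
key = <your-api-key>
secret = <your-api-secret-key>

With CLOUDSTACK_REGION=exoscale exported, cs reads its credentials from that section.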

Your API key can be found at https://portal.exoscale.com/account/profile/api.

Now you are all set to run the playbook. To verify the setup, from your terminal run:

$ cs listZones

You should get a JSON output of the zones currently available on Exoscale:

{
  "count": 3, 
  "zone": [
    {
      "allocationstate": "Enabled", 
      "dhcpprovider": "VirtualRouter", 
      "id": "1128bd56-b4d9-4ac6-a7b9-c715b187ce11", 
      "localstorageenabled": true, 
      "name": "ch-gva-2", 
      "networktype": "Basic", 
      "securitygroupsenabled": true, 
      "tags": [], 
      "zonetoken": "ccb0a60c-79c8-3230-ab8b-8bdbe8c45bb7"
    }, 
    {
      "allocationstate": "Enabled", 
      "dhcpprovider": "VirtualRouter", 
      "id": "91e5e9e4-c9ed-4b76-bee4-427004b3baf9", 
      "localstorageenabled": true, 
      "name": "ch-dk-2", 
      "networktype": "Basic", 
      "securitygroupsenabled": true, 
      "tags": [], 
      "zonetoken": "fe63f9cb-ff75-31d3-8c46-3631f7fcd533"
    }, 
    {
      "allocationstate": "Enabled", 
      "dhcpprovider": "VirtualRouter", 
      "id": "4da1b188-dcd6-4ff5-b7fd-bde984055548", 
      "localstorageenabled": true, 
      "name": "at-vie-1", 
      "networktype": "Basic", 
      "securitygroupsenabled": true, 
      "tags": [], 
      "zonetoken": "26d84c22-f66d-377e-93ab-987ef477cab3"
    }
  ]
}

Quick run

I will discuss the playbook setup afterwards. If you’re eager to run the playbook and see the result, run:

$ ansible-playbook deploy-privnet-dhcp.yml

Playbook roles

Common

This role creates an SSH key named “privnet”, used to authenticate on all virtual machines deployed by this playbook instead of a password. This new key, generated by CloudStack, is saved under ~/.ssh/id_rsa_privnet on your local machine.
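As a sketch, and not necessarily the exact tasks from the repository, key creation with the cs_sshkeypair module could look like this (cs_sshkeypair only returns the private key when it generates a new pair):

- name: create the privnet SSH key
  local_action:
    module: cs_sshkeypair
    name: privnet
  register: sshkey

- name: save the private key locally
  local_action:
    module: copy
    content: "{{ sshkey.private_key }}"
    dest: "~/.ssh/id_rsa_privnet"
    mode: "0600"
  when: sshkey.private_key is defined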

Infra

This role provisions the VMs on Exoscale, as well as a new SSH key and the security groups, and adds the private network interface to each of them. What you might not see often is the user_data provided for the VM deployment in create_vm.yml: it lets cloud-init manage the hostname nicely, as done when starting a VM from the Exoscale portal:

    user_data: |
      #cloud-config
      manage_etc_hosts: true
      fqdn: {{ dhcp_name }}
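
For context, the VM creation task in create_vm.yml could look roughly like the sketch below; the template and service offering names are illustrative assumptions, not necessarily the values used in the repository:

- name: "dhcp server : deploy vm"
  local_action:
    module: cs_instance
    name: "{{ dhcp_name }}"
    zone: "{{ zone }}"
    template: "Linux Ubuntu 16.04 LTS 64-bit"
    service_offering: "Micro"
    ssh_key: "privnet"
    security_groups: ["privnet"]
    user_data: |
      #cloud-config
      manage_etc_hosts: true
      fqdn: {{ dhcp_name }}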

The private network interface is added through create_private_nic.yml using the Ansible cs_instance_nic module:

- name: "dhcp server : add privnet nic"
  local_action:
    module: cs_instance_nic
    network: "{{ private_network }}"
    vm: "{{ dhcp_name }}"
    zone: "{{ zone }}"

This instructs Ansible to attach a new NIC to the VM {{ dhcp_name }} in the {{ zone }} zone on the privNetForBasicZone network. A new eth1 interface then appears on your Ubuntu box, for the DHCP server to bind to.
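Once the playbook has run, you can confirm the device is present from a shell on the instance:

$ ip link show eth1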

DHCP/server

This role configures the DHCP server. We configure a static IP address for its privnet interface eth1 in configure_private_nic.yml and activate the interface:

- name: upload network interface configuration
  template:
    src: privnet.cfg.j2
    dest: /etc/network/interfaces.d/01-privnet.cfg
    force: yes

- name: enable privnet interface
  shell: "ifup eth1"
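
The privnet.cfg.j2 template itself is not reproduced here; given the 10.11.12.0/27 subnet used below, it could render to something like this, with the server’s static address being an assumption:

auto eth1
iface eth1 inet static
    address 10.11.12.1
    netmask 255.255.255.224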

In setup_dhcp_server.yml we install the ISC DHCP server with a basic configuration to serve IP addresses in the range 10.11.12.2 - 10.11.12.30:

- name: install packages
  apt:
    name: "{{ item }}"
    state: present
  with_items:
    - isc-dhcp-server

- name: set listening interfaces
  lineinfile:
    path: /etc/default/isc-dhcp-server
    line: "INTERFACES=\"eth1\""
    regexp: "^INTERFACES"
  notify: restart dhcp server

- name: set configuration
  template:
    dest: /etc/dhcp/dhcpd.conf
    src: dhcpd.conf.j2
    owner: root
    group: root
  notify: restart dhcp server
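
The dhcpd.conf.j2 template is not shown here either; rendered, a minimal configuration matching that range could look like this (the lease times are assumptions):

subnet 10.11.12.0 netmask 255.255.255.224 {
  range 10.11.12.2 10.11.12.30;
  default-lease-time 600;
  max-lease-time 7200;
}

Both tasks notify a “restart dhcp server” handler, which can be as simple as:

- name: restart dhcp server
  service:
    name: isc-dhcp-server
    state: restarted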

DHCP/client

This role is the simplest: configure_private_nic.yml uploads the network interface configuration file for the privnet interface and enables it:

- name: copy network interface configuration
  copy:
    src: privnet.cfg
    dest: /etc/network/interfaces.d/01-privnet.cfg
    force: yes

- name: enable privnet interface
  shell: "ifup eth1"
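
The privnet.cfg file pushed to the clients is presumably the DHCP counterpart of the server’s static configuration, along these lines:

auto eth1
iface eth1 inet dhcp

Once a client is up, ip addr show eth1 should report a lease from the 10.11.12.2 - 10.11.12.30 range.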

Going further

This setup could be extended to also configure the DHCP server with static DHCP mappings for your VMs based on their private interface MAC address.
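For example, a host declaration in dhcpd.conf pins an address to a MAC (the name, MAC, and IP below are placeholders):

host vm-privnet-01 {
  hardware ethernet 0a:2f:00:00:00:01;
  fixed-address 10.11.12.10;
}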