Backbeat Software

Configuring Prometheus targets with Consul

How to configure dynamic Prometheus targets using Consul.

Glynn Forrest
Sunday, May 31, 2020

In the last post we used the SaltStack Mine to get targets for Prometheus to scrape. This time we’ll use Consul, a service discovery tool by HashiCorp that we use heavily in all Backbeat infrastructure.

The SaltStack Mine technique is useful, but can be quite slow to update. New IP addresses will be published to the Mine every mine_interval (e.g. 60 minutes), and Prometheus will only update when a highstate is run on the Prometheus server.

Consul gives us information far more rapidly. By configuring Prometheus with consul_sd_configs, we tell it how to pull addresses from Consul’s service catalog. When a new service is added to the catalog, Prometheus will start scraping it immediately.

Goal for this post

Imagine we have these servers:

  • two load balancers that run HAProxy;
  • one server that runs Prometheus;
  • one server that runs a Consul server in a --bootstrap-expect=1 setup.

Our goal is to scrape Prometheus metrics from each load balancer, using Consul to get the addresses to scrape.

To do this, we need to run a Consul agent on the load balancers and Prometheus server. Each load balancer will register itself in Consul’s service catalog, and Prometheus will query that catalog for all addresses under the haproxy service.
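Under the hood, Prometheus polls Consul’s HTTP catalog API (e.g. GET /v1/catalog/service/haproxy_exporter on the local agent). A trimmed, illustrative response looks roughly like this (node names and addresses are stand-ins):

```json
[
  {
    "Node": "lb1",
    "Address": "10.0.0.11",
    "ServiceName": "haproxy_exporter",
    "ServicePort": 9101
  }
]
```

Prometheus turns each entry into a scrape target, exposing fields like Node and ServicePort as __meta_consul_* labels, which we’ll use later.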

HAProxy exporter

HAProxy exposes a wealth of information about itself with the stats configuration options. With haproxy-exporter we can convert these stats into a format Prometheus can consume, available over HTTP.

Note: HAProxy 2.0 has native Prometheus metrics, but we’ll assume an earlier version.
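For reference, on HAProxy 2.0+ the built-in exporter can be enabled with something like the following sketch (the port 8405 is an arbitrary choice):

```
frontend prometheus
    bind :8405
    mode http
    http-request use-service prometheus-exporter if { path /metrics }
    no log
```

With that in place, no separate exporter process is needed. The rest of this post assumes a pre-2.0 HAProxy.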

Start by updating the HAProxy configuration on each load balancer to make stats available locally:

frontend stats
    bind localhost:8404
    mode http
    stats enable
    stats uri /stats
    stats refresh 10s

Use your favourite installation process to get the haproxy-exporter binary installed to /usr/local/bin. It’s distributed as a native binary, so installation is as easy as extracting an archive downloaded from the GitHub releases page (substitute the release URL for <release-url>):

curl -L -o haproxy-exporter.tar.gz <release-url>
tar xfv haproxy-exporter.tar.gz
mv haproxy_exporter-0.10.0.linux-amd64/haproxy_exporter /usr/local/bin/
chmod +x /usr/local/bin/haproxy_exporter

Then run the exporter as a systemd service. Here’s a simple example:

[Unit]
Description=HAProxy Exporter
After=network.target

[Service]
ExecStart=/usr/local/bin/haproxy_exporter \
    --web.listen-address=":9101" \
    --haproxy.scrape-uri="http://localhost:8404/stats?stats;csv"
Restart=on-failure

[Install]
WantedBy=multi-user.target

Run systemctl daemon-reload, then systemctl enable --now haproxy_exporter to start it.


The exporter is now available on port 9101:

curl -s localhost:9101/metrics
# ...
# # HELP haproxy_backend_bytes_in_total Current total of incoming bytes.
# # TYPE haproxy_backend_bytes_in_total gauge
# haproxy_backend_bytes_in_total{backend="some-app"} 748236
# haproxy_backend_bytes_in_total{backend="another-app"} 2345621
# ...

We now need to register it in the Consul service catalog so Prometheus can find it. Create /etc/consul.d/haproxy_exporter.json:

{
  "service": {
    "name": "haproxy_exporter",
    "port": 9101
  }
}
Make sure to run Consul with -config-dir=/etc/consul.d so it reads the file when it starts. On an already-running agent, consul reload will pick up the new file.

After doing the same for the other load balancer, we should see the haproxy_exporter service in Consul:

consul catalog services
# ...
# haproxy_exporter
# ...

dig @<consul-server-ip> -p 8600 haproxy_exporter.service.consul
# ...
# haproxy_exporter.service.consul. 0 IN   A <lb1-address>
# haproxy_exporter.service.consul. 0 IN   A <lb2-address>
# ...

Consul configs

Add a new scrape config to the Prometheus configuration:

  - job_name: 'haproxy'
    consul_sd_configs:
      - server: 'localhost:8500'
        services:
          - 'haproxy_exporter'

This registers a haproxy job that will use the local Consul agent (localhost:8500) to discover and scrape all addresses of the haproxy_exporter Consul service.
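consul_sd_configs accepts more options than just server. For example (a sketch; the field names come from the Prometheus documentation, the values are illustrative):

```yaml
scrape_configs:
  - job_name: 'haproxy'
    consul_sd_configs:
      - server: 'localhost:8500'
        datacenter: 'dc1'   # limit discovery to one Consul datacenter
        services:
          - 'haproxy_exporter'
        tags:
          - 'prod'          # only discover instances carrying this tag
```

Leaving services empty discovers every service in the catalog, which can then be filtered with relabelling.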

Reload Prometheus and check out the targets page.

Great! The HAProxy metrics have been discovered by Prometheus.


As we did with Instance labelling in the last post, it’d be cool if we could show a hostname instead of an IP address and port.

Enter relabel_configs, a powerful way to change metric labels dynamically.

Add the following to the haproxy job:

    - job_name: 'haproxy'
      consul_sd_configs:
        - server: 'localhost:8500'
          services:
            - 'haproxy_exporter'
+     relabel_configs:
+       - source_labels: ['__meta_consul_node']
+         replacement: '$1.example.com'
+         target_label: instance

This tells Prometheus to set the instance label from the value of the __meta_consul_node label, with our domain appended by the replacement pattern. Assuming all our machines have hostnames ending in the same domain, this will automatically update the instance label to the hostname of each machine.
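To build intuition for what a replace rule does, here is a toy sketch of the mechanics in Python. This is not Prometheus’s actual code: the label names come from the example above, and example.com stands in for your own domain.

```python
import re

def relabel(labels, source_labels, regex, replacement, target_label):
    """Apply one Prometheus-style 'replace' relabel rule (simplified)."""
    # Join the source label values (Prometheus uses ';' by default).
    value = ';'.join(labels[name] for name in source_labels)
    match = re.fullmatch(regex, value)
    if match is None:
        return labels  # no match: the rule does nothing
    out = dict(labels)
    # Prometheus writes capture groups as $1; re's expand() wants \1.
    out[target_label] = match.expand(replacement.replace('$', '\\'))
    return out

def finalize(labels):
    """After relabelling, Prometheus drops all __-prefixed labels."""
    return {k: v for k, v in labels.items() if not k.startswith('__')}

# Labels as Consul discovery might hand them over (address illustrative).
discovered = {
    '__address__': '10.0.0.11:9101',
    '__meta_consul_node': 'lb1',
    'job': 'haproxy',
}
result = finalize(relabel(
    discovered, ['__meta_consul_node'],
    regex='(.*)', replacement='$1.example.com', target_label='instance',
))
print(result)  # {'job': 'haproxy', 'instance': 'lb1.example.com'}
```

Note that the regex defaults to (.*) in Prometheus, so $1 is simply the whole node name; we only had to supply the replacement and target_label in the config above.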

The labels are a lot clearer now:

Where did the __meta_consul_node label come from, and what other labels are available?

__meta_* labels change depending on the service discovery mechanism in use. For consul_sd_configs, the documentation lists the following:

  • __meta_consul_address: the address of the target
  • __meta_consul_dc: the datacenter name for the target
  • __meta_consul_tagged_address_<key>: each node tagged address key value of the target
  • __meta_consul_metadata_<key>: each node metadata key value of the target
  • __meta_consul_node: the node name defined for the target
  • __meta_consul_service_address: the service address of the target
  • __meta_consul_service_id: the service ID of the target
  • __meta_consul_service_metadata_<key>: each service metadata key value of the target
  • __meta_consul_service_port: the service port of the target
  • __meta_consul_service: the name of the service the target belongs to
  • __meta_consul_tags: the list of tags of the target joined by the tag separator
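These meta labels can drive filtering as well as renaming. For instance, this sketch (pattern from the Prometheus docs; the metrics tag is a hypothetical tag of our choosing) only keeps targets whose Consul service carries that tag:

```yaml
relabel_configs:
  - source_labels: ['__meta_consul_tags']
    regex: '.*,metrics,.*'
    action: keep
```

The tag list is joined and surrounded by the tag separator (a comma by default), which is why the regex matches a comma on either side of the tag.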

On top of that, we have the default labels present on every target:

  • __address__: The IP address / port combination that Prometheus scrapes. If no value is set for instance after relabelling, the value of __address__ will be used.
  • __scheme__: The scheme of the target, e.g. http.
  • __metrics_path__: The path to scrape metrics from, e.g. /metrics.
  • job: The job name, e.g. haproxy.

A great way to debug the relabelling is to hover over a label in the targets page:

This shows the original labels before relabelling. In this case we can see the __meta_consul_node value of lb1 was used to set the instance label. Prometheus drops all labels that begin with __, leaving our final two labels: instance and job=haproxy.

Conclusion and next steps

In a few steps, we’ve added metrics collection to two HAProxy servers. Thanks to Consul, we can add another instance, register it in the service catalog, and have Prometheus scrape its metrics automatically.

This dynamic target discovery works especially well in fast-moving environments, e.g. when using Nomad. Nomad can register every job in Consul, which Prometheus could then scrape.

To learn more about relabelling, Life of a label on the Robust Perception blog is an excellent deep-dive into the available metric labels, including how and when they get assigned.

Need any help getting your Prometheus stack running? Send us an email!
