Let’s say you have just finished installing Prometheus. Full of enthusiasm, you want to take the next step: build out your exporters and decide exactly which services you want to harvest metrics from.
If you use it on a small scale, source control is not your biggest concern, but when you want to collect metrics from your whole infrastructure, you definitely want to know exactly which binaries you are running.
You could go through all the exporters you need, build them once, and forget about them. But since we Linux admins are generally lazy and automate everything, why don’t we do the same for Prometheus exporters?
Prometheus has client libraries for many programming languages, since not everyone prefers the same one. If you have to build a binary, going one by one and building each by hand is going to drive you mad. That’s where Jenkins comes into play. For example, you may want to rebuild an exporter binary every week, or on every new commit to the repository.
Now that we know what we want, the planning part comes. Think about the process you go through when you build anything manually: clone a repository, check the code, get dependencies, build the binary, and test it. And suddenly we have our Jenkins Pipeline structure! We will not discuss exactly how to create Pipelines in Jenkins, just the concept behind them.
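As a sketch, the manual steps above map naturally onto a declarative Jenkins Pipeline. This is only an illustration, assuming a Go-based exporter; the repository URL and the exact commands in each stage are hypothetical, not from the original article:

```groovy
// Jenkinsfile — a minimal sketch of the pipeline structure described above.
// Repository URL and build commands are hypothetical examples.
pipeline {
    agent any
    stages {
        stage('Clone') {
            steps {
                // fetch the exporter's source code
                git url: 'https://example.com/your/exporter.git'
            }
        }
        stage('Check & Dependencies') {
            steps {
                sh 'go vet ./...'     // static check of the code
                sh 'go mod download'  // fetch dependencies
            }
        }
        stage('Build') {
            steps {
                sh 'go build -o exporter .'  // build the binary
            }
        }
        stage('Test') {
            steps {
                sh 'go test ./...'
            }
        }
    }
}
```

Each stage corresponds to one of the manual steps, which keeps the Pipeline easy to read in the Jenkins UI.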
If we return to the structure, the first step is getting the source code, in this case from a repository. After that, we need to process the code somehow. The next step is building the binary. You will have to use one of Jenkins’s build-environment plugins (Go versioning, Python, …) so you can separate projects written in, for example, Golang and Ruby, since the two build in different ways. The last thing is the distribution of your binary. You will have to distribute your exporters somehow, for instance via a private APT repository or with Ansible. And do not forget to connect these jobs together using build triggers and other Jenkins magic.
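To tie the jobs together, Jenkins can trigger rebuilds on a schedule or on new commits, and a final stage can hand the binary off for distribution. A hedged sketch, again in declarative Pipeline syntax — the cron spec, the polling interval, and the `deploy-exporter.yml` playbook name are assumptions for illustration, not from the original:

```groovy
// Sketch of triggering and distribution; file names and schedules are hypothetical.
pipeline {
    agent any
    triggers {
        cron('H 3 * * 0')        // rebuild once a week (Sunday night)
        pollSCM('H/15 * * * *')  // or rebuild when a new commit appears
    }
    stages {
        stage('Build') {
            steps {
                sh 'go build -o exporter .'
            }
        }
        stage('Distribute') {
            steps {
                // keep the binary as a Jenkins artifact...
                archiveArtifacts artifacts: 'exporter'
                // ...and/or push it out, e.g. to a private APT repo or via Ansible
                sh 'ansible-playbook deploy-exporter.yml'
            }
        }
    }
}
```

With `cron` and `pollSCM` together you cover both cases mentioned above: the weekly rebuild and the rebuild on every new commit.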
Original author: David, Junior DevOps Engineer, cloudinfrastack