Prometheus configuration

Prometheus is an Open Source monitoring system with some interesting features for event monitoring and alerting. You can find a more detailed description at https://prometheus.io/.

Prometheus support is included in OpenKM since v7.1.21. To enable it, you need to add these two lines to the openkm.properties configuration file:

management.endpoint.prometheus.enabled=true
management.metrics.export.prometheus.enabled=true

Once modified, you need to restart OpenKM and the endpoint http://127.0.0.1:8080/openkm/actuator/prometheus will be available. This endpoint is protected and is only accessible to users with the ROLE_ACTUATOR role.
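
As a quick check that the protection is in place, an unauthenticated request should be rejected (typically with a 401 Unauthorized status). For example, using HTTPie:

$ http http://127.0.0.1:8080/openkm/actuator/prometheus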

Setup validation

To verify that the access is working, open a terminal and execute this command:

$ http --auth actuator:s3cr3t0 http://localhost:8080/openkm/actuator/prometheus

If the user and password match, and the user has the ROLE_ACTUATOR role, the output will be something like:

# HELP tomcat_global_request_max_seconds
# TYPE tomcat_global_request_max_seconds gauge
tomcat_global_request_max_seconds{name="http-nio-0.0.0.0-8080",} 0.449
tomcat_global_request_max_seconds{name="ajp-nio-127.0.0.1-8009",} 0.0
# HELP jvm_threads_peak_threads The peak live thread count since the Java virtual machine started or peak was reset
# TYPE jvm_threads_peak_threads gauge
jvm_threads_peak_threads 128.0
# HELP system_cpu_usage The "recent cpu usage" for the whole system
# TYPE system_cpu_usage gauge
system_cpu_usage 0.04721888755502201
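
If you prefer curl over HTTPie, an equivalent request with the same example credentials would be:

$ curl -u actuator:s3cr3t0 http://localhost:8080/openkm/actuator/prometheus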

Running Prometheus with Docker

The easiest way to get Prometheus running is to use the Docker image, so first we need to pull it:

$ docker pull prom/prometheus

Once we have the image, we need to create the prometheus.yml configuration file:

global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
    - targets: ['127.0.0.1:9090']

  - job_name: 'spring-actuator'
    metrics_path: '/openkm/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
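    # 172.17.0.1 is the default Docker bridge gateway, so the Prometheus
    # container can reach OpenKM running on the Docker host; adjust if needed.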
    - targets: ['172.17.0.1:8080']
    basic_auth:
      username: 'actuator'
      password: 's3cr3t0'
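
Optionally, you can validate the syntax of prometheus.yml with promtool before starting anything. The command below is a sketch that assumes the promtool binary is available at /bin/promtool inside the prom/prometheus image:

$ docker run --rm -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml \
    --entrypoint /bin/promtool prom/prometheus check config /etc/prometheus/prometheus.yml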

Let's create the appropriate Docker Compose file:

version: '3.2'

services:

  prometheus:
    image: prom/prometheus
    container_name: prometheus
    hostname: prometheus
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
    ports:
      - 9090:9090
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml

And you can start it by running:

$ docker-compose up -d
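
Once the container is running, you can confirm that Prometheus is scraping both targets by opening http://localhost:9090/targets in a browser, or by querying the HTTP API (the exact JSON output depends on your Prometheus version):

$ curl http://localhost:9090/api/v1/targets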

For more info, take a look at the Prometheus documentation.

 