Here at StackHPC we've used and experimented with Monasca in a variety of ways, contributing upstream wherever possible.
For the benefit of those using or considering either Monasca or Kayobe we thought we'd share some of our tips for deploying and configuring it.
This tutorial will follow on from the Kayobe a-universe-from-nothing tutorial on Train to demonstrate how to deploy and customise Monasca with Kolla-Ansible.
Assuming you've got a Kayobe environment (see our helpful a-universe-from-nothing blog post if you haven't), you're only a few steps away from a deployed Monasca stack. Here's how.
Before we begin
From your designated Ansible control host, source the Kayobe virtualenv, kayobe-env and admin credentials files. Assuming the virtualenv and kayobe-config locations are the same as in the a-universe-from-nothing tutorial:
$ source ~/kayobe-venv/bin/activate
$ cd ~/kayobe/config/src/kayobe-config/
$ source kayobe-env
$ source etc/kolla/admin-openrc.sh
Any reference to a filesystem path from this point in the guide will be relative to the kayobe-config directory above.
Optional
Optionally, enable Kayobe shell completion:
$ source <(kayobe complete)
Containers
First you'll need Kolla container images. These can either be pulled from Docker Hub or built using Kayobe. Kolla images come in source and binary varieties depending on how they were built, and this is reflected in the image name. Note that not every component supports both build types, and Monasca is only available as a source build. In practice, this means we'll need to tell Kolla which image type to build (unless pulling from Docker Hub) and Kolla-Ansible which to deploy.
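On the deployment side, since the tutorial's base install type is binary, one way to handle this (assuming the standard Kolla-Ansible per-service override variable, monasca_install_type) is a one-line addition to etc/kayobe/kolla/globals.yml:
# Monasca images are only available as the source type, so override the
# install type for Monasca alone; all other images remain binary.
monasca_install_type: source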
Pulling from Docker Hub
If you've followed a-universe-from-nothing, the following script can be used to pull the relevant images from the Kolla repositories on Docker Hub and push them to the registry on the seed:
#!/bin/bash
set -e
tag=${1:-train}
images="kolla/centos-binary-zookeeper
kolla/centos-binary-kafka
kolla/centos-binary-storm
kolla/centos-binary-logstash
kolla/centos-binary-kibana
kolla/centos-binary-elasticsearch
kolla/centos-binary-influxdb
kolla/centos-source-monasca-api
kolla/centos-source-monasca-notification
kolla/centos-source-monasca-persister
kolla/centos-source-monasca-agent
kolla/centos-source-monasca-thresh
kolla/centos-source-monasca-grafana"
registry=192.168.33.5:4000
for image in $images; do
    ssh stack@192.168.33.5 sudo docker pull $image:$tag
    ssh stack@192.168.33.5 sudo docker tag $image:$tag $registry/$image:$tag
    ssh stack@192.168.33.5 sudo docker push $registry/$image:$tag
done
Building using Kayobe
Building your own containers is the recommended approach for production OpenStack and is required if customising the Kolla Dockerfiles. The following Kayobe commands can be used to build Monasca and related containers:
$ kayobe overcloud container image build kafka influxdb kibana elasticsearch zookeeper storm logstash --push
$ kayobe overcloud container image build monasca -e kolla_install_type=source --push
The --push argument will push these containers to the Docker registry on the seed node once built.
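Whichever route you take, it's worth confirming the images are present on the seed before moving on, for example:
$ ssh stack@192.168.33.5 sudo docker images | grep monasca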
Configuring Kayobe
StackHPC usually recommends a cluster of 3 separate nodes for monitoring infrastructure, but with sufficient resources available it is possible to configure the controllers as monitoring nodes. For separate monitoring nodes, see here for an example of adding another node type.
If you are instead running monitoring services on the controllers, add the following to etc/kayobe/inventory/groups:
[monitoring:children]
# Add controllers to monitoring group
controllers
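If you do opt for dedicated monitoring nodes instead, a minimal sketch (with illustrative hostnames; the linked example covers the accompanying network and group configuration) is to list them in etc/kayobe/inventory/hosts:
[monitoring]
mon0
mon1
mon2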
Configuring Kolla-Ansible
Add the following to the contents of etc/kayobe/kolla/globals.yml:
# Roles which grant read/write access to Monasca APIs
monasca_default_authorized_roles:
- admin
- monasca-user
# Roles which grant write access to Monasca APIs
monasca_agent_authorized_roles:
- monasca-agent
# Project name to send control plane logs and metrics to
monasca_control_plane_project: monasca_control_plane
This configures Kolla-Ansible with some sane defaults for the user and agent roles, and names the OpenStack project to which control plane logs and metrics are sent as monasca_control_plane.
Configuring Monasca
StackHPC makes regular use of the Slack notification plugin for alerts. To demonstrate how this works we'll enable and customise this feature. Customising Monasca requires creating configuration under directories that do not yet exist, so first create both the Kolla config Monasca directory and a subdirectory for alarm notification templates:
$ mkdir -p etc/kayobe/kolla/config/monasca/notification_templates
Monasca-Notification configuration
Populate the monasca-notification container's configuration at etc/kayobe/kolla/config/monasca/notification.conf to enable Slack webhooks and set the notification template:
[notification_types]
enabled = slack,webhook
[slack_notifier]
message_template = "/etc/monasca/slack_template.j2"
timeout = 5
ca_certs = "/etc/ssl/certs/ca-bundle.crt"
insecure = False
[webhook_notifier]
timeout = 5
Slack webhook notification template
Custom Slack notification templates should be placed in etc/kayobe/kolla/config/monasca/notification_templates/slack_template.j2. If you've followed a-universe-from-nothing then the following Jinja will work as is:
{% raw %}{% set base_url = "http://{% endraw %}{{ aio_vip_address }}{% raw %}:3001/plugins/monasca-app/page/alarms" -%}
Alarm: `{{ alarm_name }}`
{%- if metrics[0].dimensions.hostname is defined -%}
{% set hosts = metrics|map(attribute='dimensions.hostname')|unique|list %} on host(s): `{{ hosts|join(', ') }}` moved to <{{ base_url }}?dimensions=hostname:{{ hosts|join('|') }}|status>: `{{ state }}`
{%- else %} moved to <{{ base_url }}|status>: `{{ state }}`
{%- endif %}.{% endraw %}
If you've prepared your own deployment then {{ aio_vip_address }} will need to be replaced with the address of an accessible VIP interface as defined in etc/kayobe/networks.yml.
Astute Jinja practitioners may notice that the notification template is wrapped in {% raw %} tags everywhere except the VIP address: this allows Kayobe to insert a variable which is not visible at the time Kolla-Ansible templates the file, while the raw sections protect the notification template's own Jinja from being rendered prematurely.
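To illustrate, a hypothetical alarm named disk_usage_high firing on controller0 would render to a Slack message along these lines (assuming the tutorial's VIP of 192.168.33.2):
Alarm: `disk_usage_high` on host(s): `controller0` moved to <http://192.168.33.2:3001/plugins/monasca-app/page/alarms?dimensions=hostname:controller0|status>: `ALARM`.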
Adding Dashboards & Datasources
Monasca-Grafana will need to be configured with the monasca-api address as a datasource for metrics. Note that Elasticsearch can also be configured as a datasource to visualise log data. Optionally, custom dashboards can be defined in the same file, etc/kayobe/grafana.yml:
# Path to git repo containing Grafana dashboards. Eg.
# https://github.com/stackhpc/grafana-reference-dashboards.git
grafana_monitoring_node_dashboard_repo: "https://github.com/stackhpc/grafana-reference-dashboards.git"
# Dashboard repo version. Optional, defaults to 'HEAD'.
grafana_monitoring_node_dashboard_repo_version: "stable/train"
# The path, relative to the grafana_monitoring_node_dashboard_repo_checkout_path
# containing the dashboards. Eg. /prometheus/control_plane
grafana_monitoring_node_dashboard_repo_path: "/monasca/control_plane"
# A dict of datasources to configure. See the stackhpc.grafana-conf role
# for all supported datasources.
grafana_datasources:
  monasca_api:
    port: 8070
    host: "{{ aio_vip_address }}"
  elasticsearch:
    port: 9200
    host: "{{ aio_vip_address }}"
    project_id: "{{ monasca_control_plane_project_id | default('') }}"
Pulling containers to the overcloud
Once the configuration is in place, it is recommended to prepare for the next step by pulling the new containers from the seed registry to the relevant overcloud nodes:
$ kayobe overcloud container image pull
This also serves to check all required containers are available.
Deploying Monasca
Deploying Monasca and friends using Kayobe can take a considerable length of time due to the number of checks Kayobe and Kolla-Ansible both perform. If you are familiar with Kolla-Ansible, some of these tasks can be skipped with the --kolla-tags argument:
$ kayobe overcloud service deploy --kolla-tags monasca,elasticsearch,influxdb,mariadb,kafka,kibana,grafana,storm,zookeeper,haproxy,common
The above command will deploy only Monasca and related services. A word of caution, however: limiting tasks in this fashion can have unexpected consequences for inexperienced users, and honestly doesn't save much time. If in doubt about which tags are required, run a full deploy with:
$ kayobe overcloud service deploy
Now would be a good point to grab a cup of tea.
Run in a production environment, this command shouldn't cause any disruption to tenant services (on sufficient hardware), but HAProxy will restart, potentially interrupting connections to the API for a brief period.
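Once deployment completes, a quick sanity check is to confirm that the Monasca containers are running on the monitoring host; for example, assuming the tutorial's controller (which doubles as the monitoring node here) at 192.168.33.3:
$ ssh stack@192.168.33.3 sudo docker ps --filter name=monasca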
Applying Dashboards & Datasources
Assuming the deployment completed successfully, additional tasks are still required to configure Grafana with the datasources and dashboards defined in etc/kayobe/grafana.yml:
$ kayobe overcloud post configure --tags grafana
Testing
You should now be able to navigate to Grafana and Kibana, found by default on ports 3001 and 5601 respectively.
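A quick way to check both are responding, assuming the tutorial's VIP of 192.168.33.2 (substitute your own if it differs):
$ curl -sL -o /dev/null -w '%{http_code}\n' http://192.168.33.2:3001
$ curl -sL -o /dev/null -w '%{http_code}\n' http://192.168.33.2:5601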
To start using the Monasca CLI, install it from PyPI and assign the roles required to authenticate against the Monasca project. Create and activate a fresh virtualenv for the purpose:
$ deactivate
$ python3 -m venv ~/monasca-venv
$ source ~/monasca-venv/bin/activate
$ pip install python-openstackclient
$ pip install python-monascaclient
$ source etc/kolla/admin-openrc.sh
Optionally enable shell completion:
$ source <(openstack complete)
$ source <(monasca complete)
Add the admin user to the monasca_control_plane project (and double check):
$ openstack role add --user admin --project monasca_control_plane admin
$ openstack role assignment list --names --project monasca_control_plane
Switch to the monasca_control_plane project and view available metric names:
$ export OS_PROJECT_NAME=monasca_control_plane
$ unset OS_TENANT_NAME
$ monasca metric-name-list
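As a quick smoke test, you can also post a metric by hand and read it back; the metric name and dimension here are arbitrary:
$ monasca metric-create test.metric 42 --dimensions hostname=testhost
$ monasca metric-list --name test.metric
$ monasca measurement-list test.metric 2020-01-01T00:00:00Z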
Alarms!
Don't forget to install the Slack Incoming Webhooks integration in order to make use of Monasca alerts. Once that is installed and configured in a channel, you'll be provided with a webhook URL - since this is a private URL it should be secured before being added to kayobe-config (for more information see the Kayobe documentation on secrets).
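For example, you might keep the webhook in etc/kayobe/secrets.yml under the variable name used by the playbook further down (the URL here is a placeholder):
secrets_monasca_slack_webhook: "https://hooks.slack.com/services/..."
and then encrypt the file with Ansible Vault:
$ ansible-vault encrypt etc/kayobe/secrets.yml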
Deploying Alarms from a custom playbook
Alarm and notification definitions can be created using the Monasca CLI, but in keeping with the configuration-as-code approach taken thus far we'd recommend our Ansible role for the task - it contains a reasonably sane set of alarms for monitoring both overcloud nodes and OpenStack services.
With some additional configuration, the role can be installed and used by Kayobe.
First create the directory and provide symlinks to Kayobe Ansible as per the documentation:
$ mkdir -p etc/kayobe/ansible
$ cd etc/kayobe/ansible
$ ln -s ../../../../kayobe/ansible/filter_plugins/ filter_plugins
$ ln -s ../../../../kayobe/ansible/group_vars/ group_vars
$ ln -s ../../../../kayobe/ansible/test_plugins/ test_plugins
$ cd -
Then add etc/kayobe/ansible/requirements.yml to specify the role:
---
- src: stackhpc.monasca_default_alarms
version: 1.3.0
An example playbook to deploy only the system-level alerts (CPU, disk and memory usage) can be placed in etc/kayobe/ansible/monasca_alarms.yml. This assumes you've created a variable for your Slack webhook called secrets_monasca_slack_webhook and that the Monasca CLI virtualenv is in ~/monasca-venv:
---
- name: Create Monasca notification method and alarms
  hosts: localhost
  gather_facts: yes
  vars:
    keystone_url: "http://{{ aio_vip_address }}:5000/v3"
    keystone_project: "monasca_control_plane"
    monasca_endpoint_interface: ["internal"]
    notification_address: "{{ secrets_monasca_slack_webhook }}"
    notification_name: "Default Slack Notification"
    notification_type: "SLACK"
    monasca_client_virtualenv_dir: "~/monasca-venv"
    virtualenv_become: "no"
    skip_tasks: ["misc", "openstack", "monasca", "ceph"]
  roles:
    - {role: stackhpc.monasca_default_alarms, tags: [alarms]}
The Ansible Galaxy role can be installed using Kayobe with:
$ kayobe control host bootstrap
And the playbook invoking it can be executed with:
$ kayobe playbook run ${KAYOBE_CONFIG_PATH}/ansible/monasca_alarms.yml