There are clear benefits to mirroring queues in RabbitMQ. Message redundancy helps to minimise the impact of a RabbitMQ node going down, whether due to an unexpected failure or as part of a planned upgrade. For years, Kolla-Ansible therefore configured RabbitMQ queues for classic queue mirroring, meaning that message queues were replicated across multiple RabbitMQ nodes. However, these queues were not made durable, as there were concerns that doing so would impact performance at large scales. Classic queue mirroring is not designed to work with transient queues, so issues were commonplace. This, combined with other bugs that were not on the radar of the Kolla-Ansible team, left RabbitMQ in a particularly fragile state: the high availability wasn't particularly high.

As the messaging system is integral to many moving parts of an OpenStack system, this problem would rear its head in many different forms. We found that it would often break during OpenStack upgrades, and some of our customers had issues during the regular running of their systems, typically caused by outages such as switch resets or power failures. In some cases, it felt like half of our job was just picking up the pieces after more messages were lost. As such, the Kolla-Ansible team decided to remove queue mirroring entirely. After all, some downtime is preferable to a system getting stuck in a broken state.

The guiding principle in removing replicated queues was that the OpenStack services should retry and recover in the event of message loss; that is, the loss of one node in a cluster should result in successfully retrying on another node. However, it turned out that many services are actually very bad at handling message loss. So classic queue mirroring was removed, but an HA service was still retained in some sense: Kolla-Ansible continued deploying and supporting HA RabbitMQ clusters and exchanges, and RabbitMQ clients remained free to fail over to other nodes if there was an outage.
Revisiting HA
In November of last year, I was tasked with this goal: revive classic queue mirroring in RabbitMQ, and aim to minimise message loss to support a greater degree of high availability. Here's what has been done since then. Many of these ideas come from the Large Scale SIG's resources on configuring RabbitMQ.
First up is the big one: bringing back classic queue mirroring. This time it follows the supported configuration and must be enabled alongside durable queues; the flag om_enable_rabbitmq_high_availability enables both options together, or neither of them. Some queues are not mirrored, notably "reply" and "fanout" queues, which are excluded by a pattern in the classic mirroring policy:
^(?!(amq\.)|(.*_fanout_)|(reply_)).*
Originally, these exclusions were designed to improve the efficiency of RabbitMQ in OpenStack-Ansible, as these queues are not expected to be long-lived. But fanout and reply queues are also not made durable, and as we now know, mirroring transient queues is unsupported and causes issues around failover. If a fanout queue is being mirrored when a node is shut down, old incarnations of the queue are often left behind without any active consumers. However, some services will still send messages to these queues, grinding their communication to a halt.
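To make this concrete, enabling the behaviour in Kolla-Ansible comes down to a single flag in globals.yml, and the result is broadly equivalent to defining a classic mirroring policy by hand. A minimal sketch, where the policy name "ha-all" and the default vhost "/" are illustrative placeholders rather than the exact names Kolla-Ansible uses (durable queues themselves are a client-side setting, oslo.messaging's amqp_durable_queues, rather than part of the policy):

om_enable_rabbitmq_high_availability: true

rabbitmqctl set_policy --vhost / --apply-to queues ha-all \
    "^(?!(amq\.)|(.*_fanout_)|(reply_)).*" \
    '{"ha-mode": "all"}'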
Next, we explored an issue with how classic queue mirroring handles the leader queues of a cluster. By default, ha-promote-on-shutdown is set to "when-synced", meaning a follower queue will only be promoted to leader if its messages are fully synchronised. However, this assumes that the node is going to recover quickly after a soft shutdown. If it takes a long time to return, or never comes back at all, the messages in these queues cannot be consumed and the system fails to recover from the node outage. To resolve this, ha-promote-on-shutdown is set to "always" in Kolla-Ansible, so a follower queue is immediately promoted to the new leader if the current leader is shut down. While this does open up the possibility of some messages being lost because they were not yet synchronised, it has been deemed an acceptable tradeoff: slightly reduced consistency in exchange for higher availability. It also eliminates the need for operator intervention, as the system can now recover automatically.
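In policy terms this is just one extra key in the mirroring policy definition. A sketch of the equivalent hand-written policy, with the same placeholder names as above:

rabbitmqctl set_policy --vhost / --apply-to queues ha-all \
    "^(?!(amq\.)|(.*_fanout_)|(reply_)).*" \
    '{"ha-mode": "all", "ha-promote-on-shutdown": "always"}'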
I mentioned earlier that there were bugs which, while not directly related, made it much harder to diagnose the problems with RabbitMQ HA. One such bug was polluting the logs with errors about heartbeat connections being dropped. These had very similar stack traces to the HA failures, so it took a while to discern where the real problems were.
Many OpenStack APIs run under mod_wsgi. When this is the case, the RabbitMQ heartbeat thread is changed to run in a green thread, instead of the intended native thread. Instead of checking the AMQP sockets every 15 seconds, it is suspended and resumed by eventlet. This can take a very long time if mod_wsgi isn't processing traffic regularly, which causes RabbitMQ to close the AMQP connection. The oslo.messaging team addressed this by setting [oslo_messaging_rabbit] heartbeat_in_pthread to true. However, this then caused issues for non-wsgi services, so it was reset to false and the choice of an appropriate value is now left up to the user. In Kolla-Ansible, heartbeat_in_pthread is now set to true only for wsgi applications, to allow the RabbitMQ heartbeats to function. As the default value of this option has changed between releases, it is set explicitly for all services from Wallaby onwards. For more information, check out Hervé Beraud's blog, which does a great job of explaining this issue in more detail.
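In practice this amounts to a small override in each wsgi-based service's oslo.messaging configuration; the exact file depends on the service (nova.conf, cinder.conf and so on), but the snippet itself looks like:

[oslo_messaging_rabbit]
heartbeat_in_pthread = true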
An OpenStack system is a complex collection of different projects and services all working together, so it is unsurprising that some of the issues with RabbitMQ were not caused directly by Kolla-Ansible. After working on all the above changes for quite some time and finally getting RabbitMQ within Kolla-Ansible into a stable state in a Wallaby test environment, I moved on to making sure that the system would remain stable as I upgraded through OpenStack releases. Sadly, I immediately hit a new issue in Xena: if a RabbitMQ node was shut down, OpenStack was unable to launch any new VMs. I was concerned that this was another bug in Kolla-Ansible that wasn't present in Wallaby, but eventually tracked it down to a regression caused by a change in oslo.messaging. Thankfully, fixes are already being proposed for this bug. In the meantime, we have been able to apply these changes locally in our downstream forks, and the issue will be resolved without any further changes to Kolla-Ansible.
Supporting more features
Now a few months deep, it's safe to say we're fully down the rabbit hole. So why not keep this momentum going? There are some additional patches in Kolla-Ansible to support more configuration of RabbitMQ. These aim to make RabbitMQ more reliable in a Kolla-Ansible managed OpenStack system. (Note that, as of writing this blog, some of these are still in progress).
Support has been added to Kolla-Ansible for changing the replication factor of mirrored queues. Prior to this, queues were replicated across every node. The change is mainly motivated by performance, as we now default to following the advice in the RabbitMQ docs: queues are replicated across n/2 + 1 nodes (rounded down), where n is the total number of nodes in the cluster. There are also hopes that this will help to speed up recovery from node outages, as messages will need to be replicated fewer times.
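For example, on a three-node cluster the majority is two nodes, so instead of mirroring to all nodes the policy can use RabbitMQ's "exactly" mode. A sketch with the same placeholder policy name as before:

rabbitmqctl set_policy --vhost / --apply-to queues ha-all \
    "^(?!(amq\.)|(.*_fanout_)|(reply_)).*" \
    '{"ha-mode": "exactly", "ha-params": 2, "ha-promote-on-shutdown": "always"}'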
Two new configurable options have also been added to Kolla-Ansible: queue expiry and message TTL. Messages will expire after 10 minutes, and queues after an hour of inactivity. The former ensures that old messages with no consumers are removed (note that this is intentionally longer than the 300-second timeouts often used across OpenStack services). The latter ensures that stale queues left behind by removed or renamed nodes are cleaned up, as they may otherwise grow over time.
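Both options map onto standard RabbitMQ policy keys, message-ttl and expires, which take values in milliseconds. A sketch using the values above (10 minutes and one hour); the policy name and the set of queues it targets are placeholders, as the exact policy Kolla-Ansible generates may differ:

rabbitmqctl set_policy --vhost / --apply-to queues ttl-expiry ".*" \
    '{"message-ttl": 600000, "expires": 3600000}'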
There is work underway to switch the Kolla-Ansible upgrade of RabbitMQ from a full-stop upgrade to a rolling upgrade. This will involve using RabbitMQ's feature flags to ensure the upgrades can take place, and taking advantage of node maintenance mode to minimise the impact of individual node downtimes.
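For a flavour of what that involves, both mechanisms are exposed through RabbitMQ's CLI tools. A rough sketch of the per-node steps such an upgrade might take (not the exact commands Kolla-Ansible will run):

# make sure all feature flags are enabled before starting the upgrade
rabbitmqctl list_feature_flags
rabbitmqctl enable_feature_flag all

# on each node in turn: drain it, upgrade it, bring it back
rabbitmq-upgrade drain
# ... upgrade and restart the RabbitMQ container on this node ...
rabbitmq-upgrade revive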