HAProxy

From Open-Xchange
Revision as of 13:13, 22 October 2014 by Dominik.epple

HAProxy Load Balancer

Introduction

Where a Keepalived-based approach to Galera load balancing is not feasible, the next best alternative is HAProxy.

System Design

We present a solution where each OX node runs its own HAProxy instance. This avoids the need for failover IPs or IP forwarding, the lack of which is often the reason the Keepalived-based approach is unavailable.

We create two HAProxy listeners: a round-robin one for read requests and an active/passive one for write requests.
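This split maps directly onto Open-Xchange's database configuration. As a sketch, assuming the listener ports 3306 (reads) and 3307 (writes) configured below, and assuming the readUrl/writeUrl property names used by your OX version, configdb.properties would point at the local HAProxy instance rather than at a database host:

 readUrl=jdbc:mysql://127.0.0.1:3306/configdb
 writeUrl=jdbc:mysql://127.0.0.1:3307/configdb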

Software Installation

HAProxy is shipped with most distributions.

Wheezy note: haproxy is provided in wheezy-backports, see http://haproxy.debian.net/

Short version:

 echo "deb http://http.debian.net/debian wheezy-backports main" > /etc/apt/sources.list.d/wheezy-backports.list
 apt-get update
 apt-get -t wheezy-backports install haproxy
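After installation it is worth checking the installed version (the configuration below assumes 1.5-style syntax) and, whenever you change the configuration, validating it before reloading the daemon:

 haproxy -v
 haproxy -c -f /etc/haproxy/haproxy.cfg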

Configuration

The following is an HAProxy configuration file, assuming the Galera nodes have the IPs 192.168.1.101 to 192.168.1.103:

 global
     log 127.0.0.1     local0
     log 127.0.0.1     local1 notice
     user              haproxy
     group             haproxy
     # this is not recommended by the haproxy authors, but seems to improve performance for me
     #nbproc 4
     maxconn           256000
     spread-checks     5
     daemon
     stats socket      /var/lib/haproxy/stats 
 
 defaults
     log               global
     retries           3
     maxconn           256000
     timeout connect   60000
     timeout client    5m
     timeout server    5m
     option            dontlognull
     option            redispatch
     option            allbackups
     # the http options are not needed here
     # but may be reasonable if you use haproxy also for some OX HTTP proxying
     mode              http
     no option         httpclose
 
 listen mysql-cluster
     bind 127.0.0.1:3306
     mode tcp
     balance roundrobin
     option httpchk
     server dav-db1 192.168.1.101:3306 check port 9200 inter 12000 rise 3 fall 3
     server dav-db2 192.168.1.102:3306 check port 9200 inter 12000 rise 3 fall 3
     server dav-db3 192.168.1.103:3306 check port 9200 inter 12000 rise 3 fall 3
 
 listen mysql-failover
     bind 127.0.0.1:3307
     mode tcp
     balance roundrobin
     option httpchk
     server dav-db1 192.168.1.101:3306 check port 9200 inter 12000 rise 3 fall 3
     server dav-db2 192.168.1.102:3306 check port 9200 inter 12000 rise 3 fall 3 backup
     server dav-db3 192.168.1.103:3306 check port 9200 inter 12000 rise 3 fall 3 backup
 
 #
 # can configure a stats interface here, but if you do so,
 # change the username / password
 #
 #listen stats
 #    bind 0.0.0.0:8080
 #    mode http
 #    stats enable
 #    stats uri /
 #    stats realm Strictly\ Private
 #    stats auth user:pass

You can see we use the httpchk option, which means HAProxy performs HTTP requests to determine node health. We therefore need to configure a service that answers those requests on port 9200.
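The check logic itself is simple: HAProxy connects to the check port, reads the HTTP response, and marks the node healthy on a 2xx status line. A minimal sketch of that decision (not HAProxy's actual code), using a sample status line as returned by the check service described below:

```shell
# Sketch: classify a health-check response by its HTTP status line,
# the way "option httpchk" does.
status_line='HTTP/1.1 200 OK'    # first line of a healthy check response
case "$status_line" in
  'HTTP/1.1 200'*) health=up ;;   # node synced -> haproxy marks it UP
  *)               health=down ;; # e.g. "503 Service Unavailable" -> DOWN
esac
echo "$health"
```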

The Percona Galera packages ship a script, /usr/bin/clustercheck, which can be called like this:

 # /usr/bin/clustercheck <username> <password>
 HTTP/1.1 200 OK
 Content-Type: text/plain
 Connection: close
 Content-Length: 40
 
 Percona XtraDB Cluster Node is synced.
 # 

The packages also ship an xinetd service definition. On RHEL/CentOS the mysqlchk service additionally needs to be added to /etc/services.

The check service needs a MySQL user. Create it as follows:

 mysql -e 'grant process on *.* to "clustercheck"@"localhost" identified by "<password>";'

Substitute "<password>" here (and in the following) with a reasonably strong password.
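To verify the grant works, you can log in as this user and query the Galera state variable that clustercheck evaluates (on a healthy node, wsrep_local_state is 4, i.e. "Synced"):

 mysql -u clustercheck -p'<password>' -e 'SHOW STATUS LIKE "wsrep_local_state%";'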

Then adjust the xinetd configuration:

 # default: on
 # description: mysqlchk
 # this is a config for xinetd, place it in /etc/xinetd.d/
 service mysqlchk
 {
         disable         = no
         flags           = REUSE
         socket_type     = stream
         port            = 9200
         wait            = no
         user            = nobody
         server          = /usr/bin/clustercheck
         server_args     = clustercheck <password>
         log_on_failure  += USERID
         only_from       = 0.0.0.0/0
         per_source      = UNLIMITED
         type            = UNLISTED
 }

Monitoring

Besides using the Galera check service configured above, you can also talk to the HAProxy stats socket using socat:

 echo "show stat" | socat unix-connect:/var/lib/haproxy/stats stdio
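The raw CSV output is unwieldy; a small filter can reduce it to a per-server status summary. In HAProxy 1.5's CSV format the status is the 18th field (verify this against your version). The sample line below is illustrative only; on a live system, feed the socat output from above into the same awk filter:

```shell
# Illustrative "show stat" CSV line (field 18 = status); on a live system use:
#   echo "show stat" | socat unix-connect:/var/lib/haproxy/stats stdio
sample='mysql-cluster,dav-db1,0,0,1,2,,100,0,0,0,0,0,0,0,0,0,UP'
# skip comment/header lines, print "proxy/server: status"
summary=$(printf '%s\n' "$sample" | awk -F, '!/^#/ {print $1 "/" $2 ": " $18}')
echo "$summary"   # mysql-cluster/dav-db1: UP
```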

There are more commands available via this socket, for example to enable or disable servers; see the HAProxy documentation for details. (As of this writing the documentation can be found at http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2, though that URL may change.)
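For example, to take a node out of rotation during maintenance and bring it back afterwards (command names per the 1.5 documentation; note that these admin commands require a writable socket, so the "stats socket" line above may need an additional "level admin"):

 echo "disable server mysql-cluster/dav-db1" | socat unix-connect:/var/lib/haproxy/stats stdio
 echo "enable server mysql-cluster/dav-db1" | socat unix-connect:/var/lib/haproxy/stats stdio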