HAproxy Loadbalancer
Introduction
Where a Keepalived-based approach for Galera load balancing is not feasible, the next alternative is to use HAproxy.
System Design
We present a solution where each OX node runs its own HAproxy instance. This avoids the need for failover IPs or IP forwarding, which is often the reason why the Keepalived-based approach is unavailable.
We create two HAproxy listeners: a round-robin one for the read requests and an active/passive one for the write requests. In the configuration below these bind to 127.0.0.1:3306 and 127.0.0.1:3307 respectively, so the OX node can reach the database cluster via localhost.
Software Installation
HAproxy should be shipped with the distribution.
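If it is not installed yet, it can usually be installed from the distribution repositories, for example (assuming the package is called haproxy, as it is on RHEL/CentOS and Debian/Ubuntu):

 # RHEL/CentOS
 yum install haproxy
 # Debian/Ubuntu
 apt-get install haproxy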
Configuration
The following is an HAproxy configuration file, assuming the Galera nodes have the IPs 192.168.1.101 to 192.168.1.103:
 global
     log 127.0.0.1 local0 notice
     user haproxy
     group haproxy
     # TUNING
     # this is not recommended by the haproxy authors, but seems to improve performance for me
     #nbproc 4
 
 defaults
     log global
     retries 3
     maxconn 256000
     timeout connect 60000
     timeout client 120000
     timeout server 120000
     no option httpclose
     option dontlognull
     option redispatch
     option allbackups
 
 listen mysql-cluster
     bind 127.0.0.1:3306
     mode tcp
     balance roundrobin
     option httpchk
     server dav-db1 192.168.1.101:3306 check port 9200 inter 12000 rise 3 fall 3
     server dav-db2 192.168.1.102:3306 check port 9200 inter 12000 rise 3 fall 3
     server dav-db3 192.168.1.103:3306 check port 9200 inter 12000 rise 3 fall 3
 
 listen mysql-failover
     bind 127.0.0.1:3307
     mode tcp
     balance roundrobin
     option httpchk
     server dav-db1 192.168.1.101:3306 check port 9200 inter 12000 rise 3 fall 3
     server dav-db2 192.168.1.102:3306 check port 9200 inter 12000 rise 3 fall 3 backup
     server dav-db3 192.168.1.103:3306 check port 9200 inter 12000 rise 3 fall 3 backup
 
 #
 # can configure a stats interface here, but if you do so,
 # change the username / password
 #
 #listen stats
 #    bind 0.0.0.0:8080
 #    mode http
 #    stats enable
 #    stats uri /
 #    stats realm Strictly\ Private
 #    stats auth user:pass
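After adjusting the configuration (typically /etc/haproxy/haproxy.cfg; adapt the path if your distribution uses a different one), you can check the syntax and (re)start the service, for example:

 haproxy -c -f /etc/haproxy/haproxy.cfg
 service haproxy restart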
You can see we use the httpchk option, which means that HAproxy makes HTTP requests to determine node health. Therefore we need to configure a service which answers those requests.
The Percona Galera packages ship with a script, /usr/bin/clustercheck, which can be called like this:
 # /usr/bin/clustercheck <username> <password>
 HTTP/1.1 200 OK
 Content-Type: text/plain
 Connection: close
 Content-Length: 40
 
 Percona XtraDB Cluster Node is synced.
 #
They also ship an xinetd service definition. On RHEL/CentOS the service needs to be added to /etc/services.
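For example, an entry like the following (matching the port and service name used in the xinetd definition below) could be appended to /etc/services:

 mysqlchk        9200/tcp                # mysqlchk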
You need a database user for this service. Create it as follows:
mysql -e 'grant process on *.* to "clustercheck"@"localhost" identified by "<password>";'
Of course, substitute "<password>" here (and in the following) with some reasonable password.
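You can verify the grant by running the check script manually with the new user on one of the Galera nodes; it should print the same "200 OK" response as shown above:

 /usr/bin/clustercheck clustercheck <password>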
Then adjust the xinetd configuration:
 # default: on
 # description: mysqlchk
 service mysqlchk
 {
     # this is a config for xinetd, place it in /etc/xinetd.d/
     disable         = no
     flags           = REUSE
     socket_type     = stream
     port            = 9200
     wait            = no
     user            = nobody
     server          = /usr/bin/clustercheck
     server_args     = clustercheck <password>
     log_on_failure  += USERID
     only_from       = 0.0.0.0/0
     per_source      = UNLIMITED
     type            = UNLISTED
 }
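Afterwards, restart xinetd on the Galera nodes and test the health check over the network, for example from one of the OX nodes (IP taken from the example setup above):

 service xinetd restart
 curl -i http://192.168.1.101:9200/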