Hi,
this is rather HAProxy-unrelated, more of a general problem, but anyway..
I did some tests comparing SSL vs. non-SSL performance and wanted to share
the results with you, and also try to solve the actual problem.
So here is what I did:
haproxy.cfg:

global
    user haproxy
    group haproxy
    maxconn 75000
    ca-base /etc/ssl/certs
    # Set secure global SSL options from https://cipherli.st/ and https://bettercrypto.org/
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-DSS-AES128-SHA256:DHE-DSS-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!DHE-RSA-AES128-GCM-SHA256:!DHE-RSA-AES256-GCM-SHA384:!DHE-RSA-AES128-SHA256:!DHE-RSA-AES256-SHA:!DHE-RSA-AES128-SHA:!DHE-RSA-AES256-SHA256:!DHE-RSA-CAMELLIA128-SHA:!DHE-RSA-CAMELLIA256-SHA
    ssl-default-bind-options no-sslv3 no-tls-tickets
    tune.ssl.default-dh-param 1024

defaults
    timeout client 300s
    timeout server 300s
    timeout queue 60s
    timeout connect 7s
    timeout http-request 10s
    maxconn 75000

frontend haproxy_test
    bind :65410
    mode http
    option httpclose
    default_backend backend_test

frontend haproxy_test.ssl
    bind :65420 ssl crt /etc/haproxy/ssl/test.pem
    mode http
    option httpclose
    default_backend backend_test

backend backend_test
    mode http
    errorfile 503 /etc/haproxy/test.error

# vim: set syntax=haproxy:
/etc/haproxy/test.error:

HTTP/1.0 200
Cache-Control: no-cache
Connection: close
Content-Type: text/plain

Test123456
HAProxy itself was then started with the following command:
/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -D -p /var/run/haproxy.pid
So we have HAProxy serving a tiny response directly from memory, which means
there is no extra overhead from a real backend, like a remote Apache or
something like that.
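As a quick sanity check, not part of the benchmark, the canned response can
be fetched with e.g.:

# curl -i http://127.0.0.1:65410/

which should print the headers and the Test123456 body from test.error
verbatim.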
A test without SSL, using "ab":
# ab -k -n 5000 -c 250 http://127.0.0.1:65410/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests
Server Software:
Server Hostname: 127.0.0.1
Server Port: 65410
Document Path: /
Document Length: 11 bytes
Concurrency Level: 250
Time taken for tests: 0.117 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 0
Total transferred: 460000 bytes
HTML transferred: 55000 bytes
Requests per second: 42668.31 [#/sec] (mean)
Time per request: 5.859 [ms] (mean)
Time per request: 0.023 [ms] (mean, across all concurrent requests)
Transfer rate: 3833.48 [Kbytes/sec] received
Connection Times (ms)
             min  mean[+/-sd] median   max
Connect:       1    3   0.4      3       4
Processing:    1    3   0.6      3       6
Waiting:       0    2   0.7      2       4
Total:         4    6   0.3      6       7
Percentage of the requests served within a certain time (ms)
50% 6
66% 6
75% 6
80% 6
90% 6
95% 6
98% 6
99% 6
100% 7 (longest request)
Wow, ~42k requests per second, quite a lot and pretty fast as well. Now
let's see SSL:
# ab -k -n 5000 -c 250 https://127.0.0.1:65420/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests
Server Software:
Server Hostname: 127.0.0.1
Server Port: 65420
SSL/TLS Protocol: TLSv1/SSLv3,ECDHE-RSA-AES128-GCM-SHA256,4096,128
Document Path: /
Document Length: 11 bytes
Concurrency Level: 250
Time taken for tests: 34.688 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 0
Total transferred: 460000 bytes
HTML transferred: 55000 bytes
Requests per second: 144.14 [#/sec] (mean)
Time per request: 1734.425 [ms] (mean)
Time per request: 6.938 [ms] (mean, across all concurrent requests)
Transfer rate: 12.95 [Kbytes/sec] received
Connection Times (ms)
             min  mean[+/-sd] median   max
Connect:     326 1057 236.0   1042    1709
Processing:   35  658 210.9    660    1013
Waiting:      35  658 211.1    659    1012
Total:      1264 1716 109.3   1702    2651
Percentage of the requests served within a certain time (ms)
50% 1702
66% 1708
75% 1712
80% 1714
90% 1720
95% 1779
98% 2158
99% 2211
100% 2651 (longest request)
That's much worse than I expected: ~144 requests per second instead of
~42*k*, a drop of more than 99%. The cipher is moderate but still secure
(for now), and I doubt that changing it will help a lot here. Note that with
option httpclose every request opens a new connection, so each one pays a
full TCP + TLS handshake against the 4096-bit certificate. nginx and HAProxy
perform almost equally in this test, so it's not a problem with the server
software.
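For reference, one can estimate the raw RSA signing rate of a single core
with e.g.:

# openssl speed rsa2048 rsa4096

Since the test certificate uses a 4096-bit key, the rsa4096 sign/s figure
should be in the same ballpark as the ~144 r/s above (exact numbers of
course depend on the CPU and the OpenSSL build, and ab itself eats CPU on
the same box here).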
One could increase nbproc (in my case it only scaled up to nbproc 4 on a
Xeon E3-1281 v3; see the sketch below), but that's just a rather minor
improvement. With those ~144 r/s you're basically lost when under attack.
How did you guys solve this problem? External SSL offloading, hardware
crypto foo, special cipher/settings tuning, simply *much* more hardware,
or not at all yet?
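For anyone who wants to reproduce the nbproc part, a minimal sketch would be
something like this in the global section (assuming four physical cores to
pin the processes to):

global
    nbproc 4
    cpu-map 1 0
    cpu-map 2 1
    cpu-map 3 2
    cpu-map 4 3

All processes then share the listening sockets and the kernel spreads new
connections across them, so the handshakes run on four cores instead of one,
which is presumably why it stops scaling around nbproc 4 on this four-core
CPU.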
--
Regards,
Christian Ruppert