Channel: Serverphorums.com - HAProxy

dev1.9 2018/06/05 threads cpu 100% spin_lock v.s. thread_sync_barrier

Hi List,

I've got no clue how I got into this state ;) and maybe there is nothing
wrong.. (well, I did resume a VM that was suspended for half a day..)

Still, I thought it might be worth reporting, or perhaps it's solved already,
as there are a few fixes for threads after the 6-6 snapshot that I built
with..
Sometimes all that some people need is half an idea to find a problem...
So maybe there is something that needs fixing??

Haproxy is running with 3 threads at 300% CPU usage, .. some Lua, almost no
traffic, in a VM that just resumed operation and is still going through
its passes to initialize its NICs and some stuff that noticed the clock
jumped on its back and its DHCP lease expired or something like that..
anyhow, lots of things going on at that moment..

Below are some of the details I've got about the threads: one is spinning,
the others seemingly waiting for spin_locks?

Like I wrote, I'm not sure if it's 'something' and I don't know yet if I can
reproduce it a second time.
If more info is needed, please let me know and I'll try to provide it.
But at the moment it's a one-time occurrence, I think..
If there is nothing obviously wrong, for now maybe just ignore this mail.
Also I'll update to the latest snapshot 2018/06/08. Maybe I won't see it ever
again :).

haproxy -vv
HA-Proxy version 1.9-dev0-cc0a957 2018/06/05
Copyright 2000-2017 Willy Tarreau <willy@haproxy.org>

Build options :
  TARGET  = freebsd
  CPU     = generic
  CC      = cc
  CFLAGS  = -pipe -g -fstack-protector -fno-strict-aliasing
-fno-strict-aliasing -Wdeclaration-after-statement -fwrapv
-fno-strict-overflow -Wno-address-of-packed-member -Wno-null-dereference
-Wno-unused-label -DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_CPU_AFFINITY=1
USE_ACCEPT4=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_STATIC_PCRE=1
USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with network namespace support.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.40 2017-01-11
Running on PCRE version : 8.40 2017-01-11
PCRE library supports JIT : yes
Built with multi-threading support.
Encrypted password support via crypt(3): yes
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
Built with Lua version : Lua 5.3.4
Built with OpenSSL version : OpenSSL 1.0.2k-freebsd  26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2o-freebsd  27 Mar 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2

Available polling systems :
     kqueue : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.

Available filters :
        [TRACE] trace
        [COMP] compression
        [SPOE] spoe



(gdb) info threads
  Id   Target Id         Frame
* 1    LWP 100660 of process 56253 0x00000000005b0202 in
thread_sync_barrier (barrier=0x8bc690 <thread_enter_sync.barrier>) at
src/hathreads.c:109
  2    LWP 101036 of process 56253 0x000000000050874a in
process_chk_conn (t=0x8025187c0, context=0x802482610, state=33) at
src/checks.c:2112
  3    LWP 101037 of process 56253 0x000000000050b58e in
enqueue_one_email_alert (p=0x80253f400, s=0x8024dec00, q=0x802482600,
    msg=0x7fffdfdfc770 "Health check for server
Test-SNI_ipvANY/srv451-4 failed, reason: Layer4 connection problem,
info: \"General socket error (Network is unreachable)\", check duration:
0ms, status: 0/2 DOWN") at src/checks.c:3396
(gdb) next
110     in src/hathreads.c
(gdb) next
109     in src/hathreads.c
(gdb) next
110     in src/hathreads.c
(gdb) next
109     in src/hathreads.c
(gdb) next
110     in src/hathreads.c
(gdb) next
109     in src/hathreads.c
(gdb) next


Command name abbreviations are allowed if unambiguous.
(gdb) thread 1
[Switching to thread 1 (LWP 100660 of process 56253)]
#0  thread_sync_barrier (barrier=0x8bc690 <thread_enter_sync.barrier>)
at src/hathreads.c:109
109     src/hathreads.c: No such file or directory.
(gdb) bt full
#0  thread_sync_barrier (barrier=0x8bc690 <thread_enter_sync.barrier>)
at src/hathreads.c:109
        old = 7
#1  0x00000000005b0038 in thread_enter_sync () at src/hathreads.c:122
        barrier = 1
#2  0x000000000051737c in sync_poll_loop () at src/haproxy.c:2380
No locals.
#3  0x00000000005172ed in run_poll_loop () at src/haproxy.c:2432
        next = -357534104
        exp = -357534104
#4  0x0000000000514670 in run_thread_poll_loop (data=0x802491380) at
src/haproxy.c:2462
        start_lock = 0
        ptif = 0x8af8f8 <per_thread_init_list>
        ptdf = 0x7fffffffec80
#5  0x0000000000511199 in main (argc=10, argv=0x7fffffffec28) at
src/haproxy.c:3052
        tids = 0x802491380
        threads = 0x8024831a0
        i = 3
        err = 0
        retry = 200
        limit = {rlim_cur = 6116, rlim_max = 6116}
        errmsg =
"\000\354\377\377\377\177\000\000\200\354\377\377\377\177\000\000(\354\377\377\377\177\000\000\n\000\000\000\000\000\000\000\202$B\374|\211\275C\240g\213\000\000\000\000\000
\354\377\377\377\177\000\000\200\354\377\377\377\177\000\000(\354\377\377\377\177\000\000\n\000\000\000\000\000\000\000\300\353\377\377\377\177\000\000\302Z\340\001\b\000\000\000\001\000\000"
        pidfd = 6
(gdb) thread 2
[Switching to thread 2 (LWP 101036 of process 56253)]
#0  0x0000000000508718 in process_chk_conn (t=0x8025187c0,
context=0x802482610, state=33) at src/checks.c:2112
2112    src/checks.c: No such file or directory.
(gdb) bt full
#0  0x0000000000508718 in process_chk_conn (t=0x8025187c0,
context=0x802482610, state=33) at src/checks.c:2112
        __pl_l = 0x8024df5a0
        __pl_r = 4294967300
        check = 0x802482610
        s = 0x8024dec00
        cs = 0x8024f1790
        conn = 0x803091480
        rv = 8
        ret = 38896344
        expired = 0
#1  0x000000000050805e in process_chk (t=0x8025187c0,
context=0x802482610, state=33) at src/checks.c:2281
        check = 0x802482610
#2  0x0000000000503265 in process_email_alert (t=0x8025187c0,
context=0x802482610, state=33) at src/checks.c:3156
        check = 0x802482610
        q = 0x802482600
        alert = 0x7fffdfffde10
#3  0x000000000059b9bf in process_runnable_tasks () at src/task.c:351
        t = 0x8025187c0
        state = 33
        ctx = 0x802482610
        process = 0x503060 <process_email_alert>
        t = 0x8025182c0
        max_processed = 200
        average = 2
#4  0x000000000051717b in run_poll_loop () at src/haproxy.c:2403
        next = -357534145
        exp = -357534147
#5  0x0000000000514670 in run_thread_poll_loop (data=0x802491384) at
src/haproxy.c:2462
        start_lock = 0
        ptif = 0x8af8f8 <per_thread_init_list>
        ptdf = 0x800f037cc
#6  0x0000000800efec06 in ?? () from /lib/libthr.so.3
No symbol table info available.
#7  0x0000000000000000 in ?? ()
No symbol table info available.
Backtrace stopped: Cannot access memory at address 0x7fffdfffe000
(gdb) thread 3
[Switching to thread 3 (LWP 101037 of process 56253)]
#0  0x000000000050b4a5 in enqueue_one_email_alert (p=0x80253f400,
s=0x8024dec00, q=0x802482600,
    msg=0x7fffdfdfc770 "Health check for server
Test-SNI_ipvANY/srv451-4 failed, reason: Layer4 connection problem,
info: \"General socket error (Network is unreachable)\", check duration:
0ms, status: 0/2 DOWN") at src/checks.c:3396
3396    in src/checks.c
(gdb) bt full
#0  0x000000000050b4a5 in enqueue_one_email_alert (p=0x80253f400,
s=0x8024dec00, q=0x802482600,
    msg=0x7fffdfdfc770 "Health check for server
Test-SNI_ipvANY/srv451-4 failed, reason: Layer4 connection problem,
info: \"General socket error (Network is unreachable)\", check duration:
0ms, status: 0/2 DOWN") at src/checks.c:3396
        __pl_l = 0x8024827d0
        __pl_r = 4294967300
        strs = {0x639ab8 "DATA\r\n", 0x0}
        strs = {0x639b0d "QUIT\r\n", 0x0}
        alert = 0x80308f0b0
        tcpcheck = 0x802f22780
        check = 0x802482610
#1  0x0000000000503692 in enqueue_email_alert (p=0x80253f400, s=0x8024dec00,
    msg=0x7fffdfdfc770 "Health check for server
Test-SNI_ipvANY/srv451-4 failed, reason: Layer4 connection problem,
info: \"General socket error (Network is unreachable)\", check duration:
0ms, status: 0/2 DOWN") at src/checks.c:3414
        i = 0
        mailer = 0x8024329c0
#2  0x00000000005035fa in send_email_alert (s=0x8024dec00, level=6,
format=0x6437bd "%s") at src/checks.c:3445
        argp = {{gp_offset = 32, fp_offset = 48, overflow_arg_area =
0x7fffdfdfcb90, reg_save_area = 0x7fffdfdfc680}}
        buf = "Health check for server Test-SNI_ipvANY/srv451-4 failed,
reason: Layer4 connection problem, info: \"General socket error (Network
is unreachable)\", check duration: 0ms, status: 0/2
DOWN\000*\006\002\b\000\000\000\000\332\000\003\b\000\000\000"...
        len = 184
        p = 0x80253f400
#3  0x00000000005000a8 in set_server_check_status (check=0x8024df048,
status=8,
    desc=0x803017880 "Health check for server Test-SNI_ipvANY/srv451-4
failed, reason: Layer4 connection problem, info: \"General socket error
(Network is unreachable)\", check duration: 0ms, status: 0/2 DOWN") at
src/checks.c:294
        s = 0x8024dec00
        prev_status = 7
        report = 0
#4  0x0000000000503f0e in chk_report_conn_err (check=0x8024df048,
errno_bck=51, expired=0) at src/checks.c:669
        cs = 0x8024f1760
        conn = 0x803091000
        err_msg = 0x803017880 "Health check for server
Test-SNI_ipvANY/srv451-4 failed, reason: Layer4 connection problem,
info: \"General socket error (Network is unreachable)\", check duration:
0ms, status: 0/2 DOWN"
        chk = 0x800907228
        step = 0
        comment = 0x0
#5  0x0000000000508ab4 in process_chk_conn (t=0x8027f1b40,
context=0x8024df048, state=5) at src/checks.c:2165
        check = 0x8024df048
        s = 0x8024dec00
        cs = 0x8024f1760
        conn = 0x803091000
        rv = 32767
        ret = 20480
        expired = 1
#6  0x000000000050805e in process_chk (t=0x8027f1b40,
context=0x8024df048, state=5) at src/checks.c:2281
        check = 0x8024df048
#7  0x000000000059b9bf in process_runnable_tasks () at src/task.c:351
        t = 0x8027f1b40
        state = 5
        ctx = 0x8024df048
        process = 0x508000 <process_chk>
        t = 0x8027f1a00
        max_processed = 200
        average = 3
#8  0x000000000051717b in run_poll_loop () at src/haproxy.c:2403
        next = -357534145
        exp = -357534147
#9  0x0000000000514670 in run_thread_poll_loop (data=0x802491388) at
src/haproxy.c:2462
        start_lock = 0
        ptif = 0x8af8f8 <per_thread_init_list>
---Type <return> to continue, or q <return> to quit---
        ptdf = 0x800f037cc
#10 0x0000000800efec06 in ?? () from /lib/libthr.so.3
No symbol table info available.
#11 0x0000000000000000 in ?? ()
No symbol table info available.
Backtrace stopped: Cannot access memory at address 0x7fffdfdfd000

Re: Connections stuck in CLOSE_WAIT state with h2

Hi Milan,

On Fri, Jun 08, 2018 at 01:26:41PM +0200, Milan Petruzelka wrote:
> On Wed, 6 Jun 2018 at 11:20, Willy Tarreau <w@1wt.eu> wrote:
>
> > Hi Milan,
> >
> > On Wed, Jun 06, 2018 at 11:09:19AM +0200, Milan Petruzelka wrote:
> > > Hi Willy,
> > >
> > > I've tracked one of connections hanging in CLOSE_WAIT state with tcpdump
> > > over last night. It started at 17:19 like this:
> > >
> > > "Packet No.","Time in
> > seconds","Source","Destination","Protocol","Length","Info"
> > > "1","0.000000","ip_client","ip_haproxy_server","TCP","62","64311
> > > 443 [SYN] Seq=0 Win=8192 Len=0 MSS=1460 SACK_PERM=1"
> > > "2","0.001049","ip_haproxy_server","ip_client","TCP","62","443
> > > 64311 [SYN, ACK] Seq=0 Ack=1 Win=29200 Len=0 MSS=1460 SACK_PERM=1"
> >
> > Sure, it's extremely useful. It looks like a shutdown is ignored when
> > waiting for the H2 preface. The FD is not being polled for reading, so
> > I tend to conclude that either it was disabled for a reason I still do
> > not know, or it used to be enabled, the shutdown was received then the
> > polling was disabled. But that doesn't appear in the connection flags.
> > So it seems that the transition between the end of handshake and the
> > start of parsing could be at fault. Maybe we refrain from entering the
> > H2 state machine because of the early shutdown. I'm going to have a look
> > in that direction.
>
>
> Hi Willy,
>
> Here is some additional info about the hanging CLOSE_WAIT connection
> described in my last message - screenshot from our log management -
> https://pasteboard.co/HoVCR0b.jpg and text variant:
>
> No.  Time                         hap.ts.accept                hap.client.port  hap.term.state  hap.rq.method  hap.rsp.status  hap.rsp.size
> 1    June 5th 2018, 17:19:22.913  June 5th 2018, 17:19:22.435  64311            ----            GET            200             17,733
> 2    June 5th 2018, 17:19:23.814  June 5th 2018, 17:19:23.806  64311            ----            GET            200             4,671
> 3    June 5th 2018, 17:19:23.814  June 5th 2018, 17:19:23.806  64311            ----            GET            200             4,481
> 4    June 5th 2018, 17:19:23.820  June 5th 2018, 17:19:23.808  64311            ----            GET            200             4,613
> 5    June 5th 2018, 17:19:23.821  June 5th 2018, 17:19:23.808  64311            ----            GET            200             4,584
> 6    June 5th 2018, 17:19:24.967  June 5th 2018, 17:19:24.961  64311            ----            GET            200             2,786
> 7    June 5th 2018, 17:19:28.945  June 5th 2018, 17:19:25.049  64311            ----            GET            200             78,325
> 8    June 5th 2018, 17:19:28.945  June 5th 2018, 17:19:25.049  64311            ----            GET            200             46,830
> 9    June 5th 2018, 17:19:31.617  June 5th 2018, 17:19:24.707  64311            CL--            GET            200             13,086
> 10   June 5th 2018, 17:19:31.618  June 5th 2018, 17:19:25.049  64311            CD--            GET            200             49,162
> 11   June 5th 2018, 17:19:31.618  June 5th 2018, 17:19:25.049  64311            CD--            GET            200             40,328
> 12   June 5th 2018, 17:19:31.618  June 5th 2018, 17:19:25.066  64311            CL--            POST           200             3,198
>
> As you can see, client first made 8 successful requests using http/2
> connection through Haproxy to our backend server. Remaining 4 requests
> finished with termination states CL-- or CD--, probably because of FIN
> packet received form client.

Or possibly because the client aborted with an RST_STREAM, which is
equally possible (eg: the user pressed STOP).

> C : the TCP session was unexpectedly aborted by the client
> D : the session was in the DATA phase.
> L : the proxy was still transmitting LAST data to the client while the
> server had already finished. This one is very rare as it can only happen
> when the client dies while receiving the last packets.
>
> It looks like Haproxy knows about client disconnection in muxed requests
> inside frontend http/2 connection. It just seems not to propagate this
> knowledge to the frontend connection, leaving it hanging in CLOSE_WAIT
> state.

It is indeed a possibility. I'm currently studying the code in hope to
find a clue there. I've been unsuccessful at trying to provoke the issue
for now.

> Sorry for not sending this piece of log last time, i didn't expect
> to see any of those CL-- / CD-- requests.

Hey are you kidding ? You don't have to be sorry, you've sent one of
the most detailed bug reports we've had for a while, and I really
appreciate the amount of work it requires on your side to dig through
this, blur all these fields to be able to still provide us with as much
information as possible! No, thanks *a lot* for your traces, really!

> I'll switch http/2 off on the site because I'll be AFK for 2 weeks. After
> that I'll be ready to provide any additional info to hunt this one down. Or
> to test the patched version if available.

I'll continue to work on this. I really don't like this strange behaviour.

Thanks!
Willy

haproxy with QAT requires root

Hello,

I am testing haproxy with a QAT card (Intel QuickAssist Technology). I am
getting "SSL handshake failure" running haproxy with user nobody and
ssl-engine qat. When running haproxy with user root, the card gets used
and the SSL connection works.
Is running haproxy as root required when using a QAT card?

haproxy v1.8.9
OpenSSL_1_1_0h
QAT_Engine v0.5.36
qat1.7.l.4.1.0
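
For context, a minimal global-section sketch of the kind of setup under
discussion (the engine name comes from the QAT_Engine docs; the algo list and
the ssl-mode-async line are illustrative assumptions, not the actual
configuration in use):

global
    user nobody
    group nobody
    # hand asymmetric crypto to the QAT engine; "algo RSA" is illustrative
    ssl-engine qat algo RSA
    # drive the engine asynchronously
    ssl-mode-async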


Thank you,
Christian

Re: haproxy with QAT requires root

Hi.

On 12/06/2018 12:58, Christian Braun wrote:
>Hello,
>
>i am testing haproxy with a QAT card (Intel QuickAssit-Technology). I
>am getting "SSL handshake failure" running haproxy with user nobody and
>ssl-engine qat. When running haproxy with user root the card gets used
>and the SSL connection works.
>Is running haproxy as root required when using a QAT card?

What's Intel's answer to the question about using the card as a non-root
user?

>haproxy v1.8.9
>OpenSSL_1_1_0h
>QAT_Engine v0.5.36
>qat1.7.l.4.1.0

What do you see when you call `openssl engine -t -c -vvvv qat` as a
non-root user?

I found this command on this page https://github.com/intel/QAT_Engine
as I don't know the QAT Engine.

Do you use haproxy in nbproc?

https://github.com/intel/QAT_Engine#limitations

>Thank you,
>Christian

Best regards
aleks

Re: HAProxy 1.8.x not serving errorfiles with H2

Hi Cyril,

On Mon, Jun 11, 2018 at 10:36:43PM +0200, Cyril Bonté wrote:
> If it looks like HTTP/0.9, I tend to think that your errorfile is not
> properly set. Such files must contain the status line, the headers and the
> response body.
>
> And indeed, from a quick test, if I remove the status line, I get the same
> error with a HTTP/2 request. Once the file is correctly defined, everything
> is OK.
>
> Can you provide the content of your errorfile(s) ?

That's indeed very possible. The H2 to H1 gateway requires that the
server side respects HTTP/1.1 messaging and semantics, otherwise it
will not be able to parse the response. It's obviously the same for
error messages since these ones are converted back to H2 by the same
gateway.

Willy

Re: haproxy and solarflare onload

Hi Elias,

On 05/28/2018 04:08 PM, Elias Abacioglu wrote:
> Hi Willy and HAproxy folks!
>
> Sorry for bumping this old thread. But Solarflare recently released a new Onload version.
> http://www.openonload.org/download/openonload-201805-ReleaseNotes.txt http://www.openonload.org/download/openonload-201805-ReleaseNotes.txt
>
> Here is a small excerpt from the Release Notes:
> "
>
> A major overhaul to clustering and scalable filters enables new data
> center use cases.
>
> Scalable filters direct all traffic for a given VI to Onload so that
> the filter table size is not a constraint on the number of concurrent
> connections supported efficiently. This release allows scalable filters
> to be combined for both passive and active open connections and with RSS,
> enabling very high transaction rates for use cases such as a reverse web
> proxy.
>
> "
>
> So this NIC with Onload features requires tuning and perhaps a better understanding of the Linux network stack than what I got to get working in a high volume/frequency setup.
> I am willing to gift one SFN8522(2x10Gbit/s SFP+) to the HAproxy team(within EU there should be no toll) if you want to test the card and it's capabilities.
> I don't have any hard requirements for gifting this card, just that you got the will to give it a fair shot. The only thing I want in return is that you share your insights, good or bad. Perhaps we can get a working Onload profile for HAproxy. They ship a example profile for nginx in onload-201805. I am still very curious if Onload actually can offload the CPU more than regular a NIC.
>
> I don't have an EnterpriseOnload license. But this card should get a free ScaleoutOnload license(basically it's included in the card, but Dell forgot, so I had to reach out to Solarflare support to get a free License). And ScaleoutOnload is their OpenOnload.
> I could help out with that if needed.
>
> So HAproxy team, you want this card to play with?
>
> /Elias

Yes, we are always interested in testing hardware with our software to advise
end users on whether they could get some benefit.
But currently we are very busy and I think we can't test your NIC before the
last quarter of the year.

R,
Emeric

subscribe

Re: dev1.9 2018/06/05 threads cpu 100% spin_lock v.s. thread_sync_barrier

Hi Pieter,

On Mon, Jun 11, 2018 at 10:48:25PM +0200, PiBa-NL wrote:
> Hi List,
>
> I've got no clue how i got into this state ;) and maybe there is nothing
> wrong..(well i did resume a VM that was suspended for half a day..)
>
> Still thought it might be worth reporting, or perhaps its solved already as
> there are a few fixes for threads after the 6-6 snapshot that i build with..
> Sometimes all that some people need is half a idea to find a problem... So
> maybe there is something that needs fixing??

This one is not known yet, to the best of my knowledge, or at least
not reported yet.

> (gdb) info threads
>   Id   Target Id         Frame
> * 1    LWP 100660 of process 56253 0x00000000005b0202 in thread_sync_barrier
> (barrier=0x8bc690 <thread_enter_sync.barrier>) at src/hathreads.c:109
>   2    LWP 101036 of process 56253 0x000000000050874a in process_chk_conn
> (t=0x8025187c0, context=0x802482610, state=33) at src/checks.c:2112
>   3    LWP 101037 of process 56253 0x000000000050b58e in
> enqueue_one_email_alert (p=0x80253f400, s=0x8024dec00, q=0x802482600,
>     msg=0x7fffdfdfc770 "Health check for server Test-SNI_ipvANY/srv451-4
> failed, reason: Layer4 connection problem, info: \"General socket error
> (Network is unreachable)\", check duration: 0ms, status: 0/2 DOWN") at
> src/checks.c:3396

This is quite odd. Either someone quit a function without releasing the
server's lock or the email_queue's lock (suspicious but possible),
or both just happen to be the same. And at first glance, seeing that
process_chk_conn() is called with a context of 0x802482610 which is
only 16 bytes above enqueue_one_email_alert's queue parameter, this
last possibility suddenly seems quite likely.

So my guess is that we take the server's lock in process_chk_conn(),
that we go down through this :
chk_report_conn_err()
-> set_server_check_status()
-> send_email_alert()
-> enqueue_email_alert()
-> enqueue_one_email_alert()

And here we spin on a lock which very likely is the same even though I
have no idea why. I suspect the way the queue or lock is retrieved
there is incorrect and explains the situation with a recursive lock,
but I'm afraid I don't really know since I don't know this part of
the code :-(
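
For illustration, a minimal self-contained sketch (plain C11, not HAProxy's
actual locking code) of such a recursive-lock scenario: a non-recursive spin
lock re-acquired by the same thread simply spins forever, which is what a
thread stuck at 100% CPU looks like:

#include <stdatomic.h>

static atomic_flag srv_lock = ATOMIC_FLAG_INIT;

static void spin_lock(atomic_flag *l)
{
	while (atomic_flag_test_and_set_explicit(l, memory_order_acquire))
		; /* busy-wait: this is the 100% CPU loop */
}

static void spin_unlock(atomic_flag *l)
{
	atomic_flag_clear_explicit(l, memory_order_release);
}

/* Hypothetical call chain: the check task takes the lock, then the email
 * alert path deeper in the same chain tries to take the same lock again. */
static void check_task(void)
{
	spin_lock(&srv_lock);   /* taken at the top of the check processing */
	/* ... the health check fails, an email alert gets enqueued ... */
	spin_lock(&srv_lock);   /* same lock reached again: spins forever */
	spin_unlock(&srv_lock);
	spin_unlock(&srv_lock);
}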

Note, it could also be that the queue's lock has never been initialized,
and makes the inner lock block there, with the server's lock held, causing
the second thread to spin and the 3rd one to wait for the barrier. I'll
have to ask for some help on this part. I'm pretty confident that 1.8 is
affected as well.

Thanks,
Willy

Re: regression testing for haproxy

On Thu, Jun 07, 2018 at 03:29:24PM +0200, Frederic Lecaille wrote:
> Well... this is patch matching with the previous text file.

Now applied, thanks Fred!

willy

Re: subscribe

Re: remaining process after (seamless) reload

Hello William L,

On Fri, Jun 08, 2018 at 04:31:30PM +0200, William Lallemand wrote:
> That's great news!
>
> Here's the new patches. It shouldn't change anything to the fix, it only
> changes the sigprocmask to pthread_sigmask.

In fact, I now have a different but similar issue.

root 18547 3.2 1.3 986660 898844 ? Ss Jun08 182:12 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 2063 1903 1763 1445 14593 29663 4203 18290 -x /var/lib/haproxy/stats
haproxy 14593 299 1.3 1251216 920480 ? Rsl Jun11 5882:01 \_ /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 14582 14463 -x /var/lib/haproxy/stats
haproxy 18290 299 1.4 1265028 935288 ? Ssl Jun11 3425:51 \_ /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 18281 18271 18261 14593 -x /var/lib/haproxy/stats
haproxy 29663 99.9 1.4 1258024 932796 ? Ssl Jun11 1063:08 \_ /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 29653 29644 18290 14593 -x /var/lib/haproxy/stats
haproxy 4203 99.9 1.4 1258804 933216 ? Ssl Jun11 1009:27 \_ /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 4194 4182 18290 29663 14593 -x /var/lib/haproxy/stats
haproxy 1445 25.9 1.4 1261680 929516 ? Ssl 13:51 0:42 \_ /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 1436 29663 4203 18290 14593 -x /var/lib/haproxy/stats
haproxy 1763 18.9 1.4 1260500 931516 ? Ssl 13:52 0:15 \_ /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 1445 14593 29663 4203 18290 -x /var/lib/haproxy/stats
haproxy 1903 25.0 1.4 1261472 931064 ? Ssl 13:53 0:14 \_ /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 1763 1445 14593 29663 4203 18290 -x /var/lib/haproxy/stats
haproxy 2063 52.5 1.4 1259568 927916 ? Ssl 13:53 0:19 \_ /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 1903 1763 1445 14593 29663 4203 18290 -x /var/lib/haproxy/stats
haproxy 2602 62.0 1.4 1262220 928776 ? Rsl 13:54 0:02 \_ /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 2063 1903 1763 1445 14593 29663 4203 18290 -x /var/lib/haproxy/stats


# cat /proc/14593/status | grep Sig
SigQ: 0/257120
SigPnd: 0000000000000000
SigBlk: 0000000000000800
SigIgn: 0000000000001800
SigCgt: 0000000180300205

kill -USR1 14593 has no effect:

# strace -ffff -p 14593
strace: Process 14593 attached with 3 threads
strace: [ Process PID=14595 runs in x32 mode. ]
[pid 14593] --- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_USER, si_pid=18547, si_uid=0} ---
[pid 14593] rt_sigaction(SIGUSR1, {0x558357660020, [USR1], SA_RESTORER|SA_RESTART, 0x7f0e87671270}, {0x558357660020, [USR1], SA_RESTORER|SA_RESTART, 0x7f0e87671270}, 8) = 0
[pid 14593] rt_sigreturn({mask=[USR2]}) = 7

however, the unix socket is on the correct process:

# lsof | grep "haproxy/stats" ; ps auxwwf | grep haproxy
haproxy 2602 haproxy 5u unix 0xffff880f902e8000 0t0 3333061798 /var/lib/haproxy/stats.18547.tmp
haproxy 2602 2603 haproxy 5u unix 0xffff880f902e8000 0t0 3333061798 /var/lib/haproxy/stats.18547.tmp
haproxy 2602 2604 haproxy 5u unix 0xffff880f902e8000 0t0 3333061798 /var/lib/haproxy/stats.18547.tmp
haproxy 2602 2605 haproxy 5u unix 0xffff880f902e8000 0t0 3333061798 /var/lib/haproxy/stats.18547.tmp

So it means it does not cause any issue for the provisioner, which talks
to the correct process; however, these processes remain.
Should I start a different thread for that issue?

it seems harder to reproduce, I got the issue ~2 days after pushing back.

Thanks,

--
William

stable-bot: WARNING: 29 bug fixes in queue for next release

Hi,

This is a friendly bot that watches fixes pending for the next haproxy-stable release! One such e-mail is sent every week once patches are waiting in the last maintenance branch, and an ideal release date is computed based on the severity of these fixes and their merge date. Responses to this mail must be sent to the mailing list.

Last release 1.8.9 was issued on 2018/05/18. There are currently 29 patches in the queue cut down this way:
- 2 BUILD, first one merged on 2018/05/23
- 2 MAJOR, first one merged on 2018/06/06
- 15 MEDIUM, first one merged on 2018/05/23
- 10 MINOR, first one merged on 2018/05/23

Thus the computed ideal release date for 1.8.10 would be 2018/06/20, which is in one week or less.

The current list of patches in the queue is:
- BUILD : fd: fix typo causing a warning when threads are disabled
- BUILD : threads: unbreak build without threads
- MAJOR : map: fix a segfault when using http-request set-map
- MAJOR : lua: Dead lock with sockets
- MEDIUM : contrib/mod_defender: Use network order to encode/decode flags
- MEDIUM : lua/socket: wrong scheduling for sockets
- MEDIUM : contrib/modsecurity: Use network order to encode/decode flags
- MEDIUM : lua/socket: Notification error
- MEDIUM : fd: Only check update_mask against all_threads_mask.
- MEDIUM : spoe: Return an error when the wrong ACK is received in sync mode
- MEDIUM : threads: handle signal queue only in thread 0
- MEDIUM : lua/socket: Sheduling error on write: may dead-lock
- MEDIUM : servers: Add srv_addr default placeholder to the state file
- MEDIUM : dns: Delay the attempt to run a DNS resolution on check failure.
- MEDIUM : cache: don't cache when an Authorization header is present
- MEDIUM : spoe: Flags are not encoded in network order
- MEDIUM : stick-tables: Decrement ref_cnt in table_* converters
- MEDIUM : lua/socket: Buffer error, may segfault
- MEDIUM : lua/socket: Length required read doesn't work
- MINOR : contrib/modsecurity: Don't reset the status code during disconnect
- MINOR : contrib/spoa_example: Don't reset the status code during disconnect
- MINOR : don't ignore SIG{BUS,FPE,ILL,SEGV} during signal processing
- MINOR : contrib/modsecurity: update pointer on the end of the frame
- MINOR : unix: Make sure we can transfer abns sockets on seamless reload.
- MINOR : signals: ha_sigmask macro for multithreading
- MINOR : contrib/mod_defender: update pointer on the end of the frame
- MINOR : ssl/lua: prevent lua from affecting automatic maxconn computation
- MINOR : contrib/mod_defender: Don't reset the status code during disconnect
- MINOR : lua: Socket.send threw runtime error: 'close' needs 1 arguments.

---
The haproxy stable-bot is freely provided by HAProxy Technologies to help improve the quality of each HAProxy release. If you have any issue with these emails or if you want to suggest some improvements, please post them on the list so that the solutions suiting the most users can be found.

Re: remaining process after (seamless) reload

On Tue, Jun 12, 2018 at 04:00:25PM +0200, William Dauchy wrote:
> Hello William L,
>
> On Fri, Jun 08, 2018 at 04:31:30PM +0200, William Lallemand wrote:
> > That's great news!
> >
> > Here's the new patches. It shouldn't change anything to the fix, it only
> > changes the sigprocmask to pthread_sigmask.
>
> In fact, I now have a different but similar issue.
>

:(

> root 18547 3.2 1.3 986660 898844 ? Ss Jun08 182:12 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 2063 1903 1763 1445 14593 29663 4203 18290 -x /var/lib/haproxy/stats
> haproxy 14593 299 1.3 1251216 920480 ? Rsl Jun11 5882:01 \_ /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 14582 14463 -x /var/lib/haproxy/stats
> haproxy 18290 299 1.4 1265028 935288 ? Ssl Jun11 3425:51 \_ /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 18281 18271 18261 14593 -x /var/lib/haproxy/stats
> haproxy 29663 99.9 1.4 1258024 932796 ? Ssl Jun11 1063:08 \_ /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 29653 29644 18290 14593 -x /var/lib/haproxy/stats
> haproxy 4203 99.9 1.4 1258804 933216 ? Ssl Jun11 1009:27 \_ /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 4194 4182 18290 29663 14593 -x /var/lib/haproxy/stats
> haproxy 1445 25.9 1.4 1261680 929516 ? Ssl 13:51 0:42 \_ /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 1436 29663 4203 18290 14593 -x /var/lib/haproxy/stats
> haproxy 1763 18.9 1.4 1260500 931516 ? Ssl 13:52 0:15 \_ /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 1445 14593 29663 4203 18290 -x /var/lib/haproxy/stats
> haproxy 1903 25.0 1.4 1261472 931064 ? Ssl 13:53 0:14 \_ /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 1763 1445 14593 29663 4203 18290 -x /var/lib/haproxy/stats
> haproxy 2063 52.5 1.4 1259568 927916 ? Ssl 13:53 0:19 \_ /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 1903 1763 1445 14593 29663 4203 18290 -x /var/lib/haproxy/stats
> haproxy 2602 62.0 1.4 1262220 928776 ? Rsl 13:54 0:02 \_ /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 2063 1903 1763 1445 14593 29663 4203 18290 -x /var/lib/haproxy/stats
>
>

Those processes are still using a lot of CPU...

> # cat /proc/14593/status | grep Sig
> SigQ: 0/257120
> SigPnd: 0000000000000000
> SigBlk: 0000000000000800
> SigIgn: 0000000000001800
> SigCgt: 0000000180300205
>
> kill -USR1 14593 has no effect:
>
> # strace -ffff -p 14593
> strace: Process 14593 attached with 3 threads


> strace: [ Process PID=14595 runs in x32 mode. ]

This part is particularly interesting, I suppose you are not running in x32, right?
I had this problem at some point but was never able to reproduce it...

We might find something interesting by looking further..

> [pid 14593] --- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_USER, si_pid=18547, si_uid=0} ---
> [pid 14593] rt_sigaction(SIGUSR1, {0x558357660020, [USR1], SA_RESTORER|SA_RESTART, 0x7f0e87671270}, {0x558357660020, [USR1], SA_RESTORER|SA_RESTART, 0x7f0e87671270}, 8) = 0
> [pid 14593] rt_sigreturn({mask=[USR2]}) = 7


At least you managed to strace when the process was seen as an x32 one, it wasn't my case.

>
> however, the unix socket is on the correct process:
>
> # lsof | grep "haproxy/stats" ; ps auxwwf | grep haproxy
> haproxy 2602 haproxy 5u unix 0xffff880f902e8000 0t0 3333061798 /var/lib/haproxy/stats.18547.tmp
> haproxy 2602 2603 haproxy 5u unix 0xffff880f902e8000 0t0 3333061798 /var/lib/haproxy/stats.18547.tmp
> haproxy 2602 2604 haproxy 5u unix 0xffff880f902e8000 0t0 3333061798 /var/lib/haproxy/stats.18547.tmp
> haproxy 2602 2605 haproxy 5u unix 0xffff880f902e8000 0t0 3333061798 /var/lib/haproxy/stats.18547.tmp
>
> So it means, it does not cause any issue for the provisioner which talks
> to the correct process, however, they are remaining process.

Are they still delivering traffic?


> Should I start a different thread for that issue?
>

That's not necessary, thanks.

> it seems harder to reproduce, I got the issue ~2 days after pushing back.
>
> Thanks,
>

I'll try to reproduce this again...

--
William Lallemand

Re: remaining process after (seamless) reload

On Tue, Jun 12, 2018 at 04:33:43PM +0200, William Lallemand wrote:
> Those processes are still using a lot of CPU...
> Are they still delivering traffic?

they don't seem to handle any traffic (at least I can't see it through strace)
but that's the main difference here, using lots of CPU.

> > strace: [ Process PID=14595 runs in x32 mode. ]
>
> This part is particularly interesting, I suppose you are not running in x32, right?
> I had this problem at some point but was never able to reproduce it...

yup, I can't really understand where it is coming from. I have this
issue until I do a complete restart.

Thanks for your help,
--
William

haproxy bug: healthcheck not passing after port change when statefile is enabled

Hello,

There seems to be a bug in the loading of state files after a configuration change. When changing the destination port of a server, the healthchecks never start passing if the state before the reload was down. This bug was introduced after 1.7.9, as we cannot reproduce it on machines running that version of haproxy. You can use the following steps to reproduce the issue:

Start with a fresh debian 9 install
install socat
install haproxy 1.8.9 from backports

create a systemd file /etc/systemd/system/haproxy.service.d/60-haproxy-server_state.conf with the following contents:
[Service]
ExecStartPre=/bin/mkdir -p /var/run/haproxy/state
ExecReload=
ExecReload=/usr/sbin/haproxy -f ${CONFIG} -c -q $EXTRAOPTS
ExecReload=/bin/sh -c "echo show servers state | /usr/bin/socat /var/run/haproxy.sock - > /var/run/haproxy/state/test"
ExecReload=/bin/kill -USR2 $MAINPID

create the following files:
/etc/haproxy/haproxy.cfg.disabled:
global
maxconn 32000
tune.maxrewrite 2048
user haproxy
group haproxy
daemon
chroot /var/lib/haproxy
nbproc 1
maxcompcpuusage 85
spread-checks 0
stats socket /var/run/haproxy.sock mode 600 level admin process 1 user haproxy group haproxy
server-state-file test
server-state-base /var/run/haproxy/state
master-worker no-exit-on-failure

defaults
load-server-state-from-file global
log global
timeout http-request 5s
timeout connect 2s
timeout client 300s
timeout server 300s
mode http
option dontlog-normal
option http-server-close
option redispatch
option log-health-checks

listen stats
bind :1936
bind-process 1
mode http
stats enable
stats uri /
stats admin if TRUE

/etc/haproxy/haproxy.cfg.different-port:
global
maxconn 32000
tune.maxrewrite 2048
user haproxy
group haproxy
daemon
chroot /var/lib/haproxy
nbproc 1
maxcompcpuusage 85
spread-checks 0
stats socket /var/run/haproxy.sock mode 600 level admin process 1 user haproxy group haproxy
server-state-file test
server-state-base /var/run/haproxy/state
master-worker no-exit-on-failure

defaults
load-server-state-from-file global
log global
timeout http-request 5s
timeout connect 2s
timeout client 300s
timeout server 300s
mode http
option dontlog-normal
option http-server-close
option redispatch
option log-health-checks

listen stats
bind :1936
bind-process 1
mode http
stats enable
stats uri /
stats admin if TRUE

listen banaan-443-ipv4
bind :443
mode tcp
server banaan-vps 127.0.0.1:80 check inter 2000
listen banaan-80-ipv4
bind :80
mode tcp
server banaan-vps 127.0.0.1:80 check inter 2000

/etc/haproxy/haproxy.cfg.same-port:
global
maxconn 32000
tune.maxrewrite 2048
user haproxy
group haproxy
daemon
chroot /var/lib/haproxy
nbproc 1
maxcompcpuusage 85
spread-checks 0
stats socket /var/run/haproxy.sock mode 600 level admin process 1 user haproxy group haproxy
server-state-file test
server-state-base /var/run/haproxy/state
master-worker no-exit-on-failure

defaults
load-server-state-from-file global
log global
timeout http-request 5s
timeout connect 2s
timeout client 300s
timeout server 300s
mode http
option dontlog-normal
option http-server-close
option redispatch
option log-health-checks

listen stats
bind :1936
bind-process 1
mode http
stats enable
stats uri /
stats admin if TRUE

listen banaan-443-ipv4
bind :443
mode tcp
server banaan-vps 127.0.0.1:443 check inter 2000
listen banaan-80-ipv4
bind :80
mode tcp
server banaan-vps 127.0.0.1:80 check inter 2000


start a netcat process to fake a webserver: nc -klp 80
cp haproxy.cfg.disabled to haproxy.cfg and start haproxy.
cp haproxy.cfg.same-port to haproxy.cfg and reload haproxy. You will now see that the servers for banaan-443-ipv4 are marked as down, as expected (nothing is running on port 443).
Now cp haproxy.cfg.different-port to haproxy.cfg and reload haproxy again. banaan-443-ipv4 will still be marked as down, although it uses the same healthcheck as the port 80 configuration: server banaan-vps 127.0.0.1:80 check inter 2000

If we now stop haproxy and delete the statefile located at /var/run/haproxy/state/test and start haproxy again the server will be marked as up.
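
Condensed into a shell sketch of the steps above (assuming the config files
live in /etc/haproxy/ and the service is managed with systemctl, as in the
drop-in earlier):

# fake webserver on port 80
nc -klp 80 &

cp /etc/haproxy/haproxy.cfg.disabled /etc/haproxy/haproxy.cfg
systemctl start haproxy

cp /etc/haproxy/haproxy.cfg.same-port /etc/haproxy/haproxy.cfg
systemctl reload haproxy     # banaan-443-ipv4 goes DOWN, as expected

cp /etc/haproxy/haproxy.cfg.different-port /etc/haproxy/haproxy.cfg
systemctl reload haproxy     # banaan-443-ipv4 stays DOWN despite now checking 127.0.0.1:80

systemctl stop haproxy
rm /var/run/haproxy/state/test
systemctl start haproxy      # the server comes back UP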

Thanks in advance,
Sven

Re: haproxy bug: healthcheck not passing after port change when statefile is enabled

Sven,

On 12.06.2018 at 17:01, Sven Wiltink wrote:
> create a systemd file /etc/systemd/system/haproxy.service.d/60-haproxy-server_state.conf with the following contents:
> [Service]
> ExecStartPre=/bin/mkdir -p /var/run/haproxy/state
> ExecReload=
> ExecReload=/usr/sbin/haproxy -f ${CONFIG} -c -q $EXTRAOPTS
> ExecReload=/bin/sh -c "echo show servers state | /usr/bin/socat /var/run/haproxy.sock - > /var/run/haproxy/state/test"
> ExecReload=/bin/kill -USR2 $MAINPID

While this would not have an effect on your issue, I suggest specifying
RuntimeDirectory to save the manual `mkdir` and manual clean-up:

> # /lib/systemd/system/haproxy.service.d/state.conf
> [Service]
> RuntimeDirectory=haproxy
> ExecReload=
> ExecReload=/usr/sbin/haproxy -f $CONFIG -c -q $EXTRAOPTS
> ExecReload=/bin/sh -c "echo show servers state |nc -U /var/run/haproxy/admin.sock > /run/haproxy/global-state"
> ExecReload=/bin/kill -USR2 $MAINPID

The RuntimeDirectory would automatically be cleaned on `restart` /
`stop`, but not on a `reload`. If this is not wanted take a look at
`RuntimeDirectoryPreserve`
(https://www.freedesktop.org/software/systemd/man/systemd.exec.html#RuntimeDirectoryPreserve=).

Best regards
Tim Düsterhus

RE: HAProxy 1.8.x not serving errorfiles with H2

Hi Cyril,

Thank you so much for your response. [1] is the simplest example of our 503 errorfile. Based on what you're saying it sounds like we need to add some additional information into the file. Is this documented somewhere?

[1]
<html>
<head>
<title>HTTP 503</title>
</head>
<body style="font-family:Arial,Helvetica,sans-serif;">
<div style="margin: 0 auto; width: 960px;">
<h2 >Author is unavailable</h2>
<p>
Either the instance is temporarily unavailable, down or being restarted. If this issue persists, please <a href="REMOVED">file a ticket</a>
<br />
<pre>
W W W
W W W W
'. W
.-""-._ \ \.--|
/ "-..__) .-'
| _ /
\'-.__, .__.,'
`'----'._\--'
VVVVVVVVVVVVVVVVVVVVV
</pre>
</div>
</body>
</html>
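
For comparison, an errorfile has to be a complete raw HTTP/1.x response,
status line and headers included. A minimal sketch of what the top of such a
503 file would look like (header values illustrative, modelled on the stock
examples shipped with haproxy), followed by the HTML body above:

HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html

<html>
... (body as above) ...
</html>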


-----Original Message-----
From: Cyril Bonté [mailto:cyril.bonte@free.fr]
Sent: Monday, June 11, 2018 4:37 PM
To: J. Casalino <casalino@adobe.com>
Cc: haproxy@formilux.org
Subject: Re: HAProxy 1.8.x not serving errorfiles with H2

Hi,

On 11/06/2018 at 16:39, J. Casalino wrote:
> Trying again - has anyone else seen this?
>
> *From:* J. Casalino [mailto:casalino@adobe.com]
> *Sent:* Tuesday, June 5, 2018 12:27 PM
> *To:* haproxy@formilux.org
> *Subject:* HAProxy 1.8.x not serving errorfiles with H2
>
> We are in the process of testing HAProxy 1.8.x with ALPN and H2 on
> some of our servers. We have default 502 and 503 errorfiles defined (ex.
> errorfile 503 /etc/haproxy/errors/503.http), but we've noticed that
> these errorfiles are not served to the user's browser when the error
> occurs (for instance, if the backend is down, a user should get the
> 503 errorfile).
>
> Chrome returns "ERR_SPDY_PROTOCOL_ERROR", Curl [1] returns "curl: (92)
> HTTP/2 stream 1 was not closed cleanly: INTERNAL_ERROR (err 2)", and
> Firefox shows "The connection to <servername> was interrupted while
> the page was loading."
>
> With debug logging turned on, I can see that HAProxy is recognizing a
> 503 if the back-end server is down [2], but it doesn't seem to pass
> that error through to the client browser. If the backend is up and a
> 502 is generated, users do not receive the errorfile either. If we
> turn off H2 and drop back to HTTP/1.1, the errorfiles are displayed
> properly (though via HTTP/0.9)

If it looks like HTTP/0.9, I tend to think that your errorfile is not properly set. Such files must contain the status line, the headers and the response body.

And indeed, from a quick test, if I remove the status line, I get the same error with a HTTP/2 request. Once the file is correctly defined, everything is OK.

Can you provide the content of your errorfile(s) ?


> This has been observed in both 1.8.4 and 1.8.9. Our platform is Amazon
> Linux, using openssl-1.0.2k-12.109.amzn1.x86_64.
>
> Thanks in advance for any thoughts you might have -
>
> [1]
>
> Curl verbose (curl -vvvvI) output:
>
> *   Trying <ipaddr>...
>
> * TCP_NODELAY set
>
> * Connected to <servername> (<ipaddr>) port 443 (#0)
>
> * ALPN, offering h2
>
> * ALPN, offering http/1.1
>
> * Cipher selection:
> ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
>
> * successfully set certificate verify locations:
>
> *   CAfile: /etc/ssl/cert.pem
>
>   CApath: none
>
> * TLSv1.2 (OUT), TLS handshake, Client hello (1):
>
> * TLSv1.2 (IN), TLS handshake, Server hello (2):
>
> * TLSv1.2 (IN), TLS handshake, Certificate (11):
>
> * TLSv1.2 (IN), TLS handshake, Server key exchange (12):
>
> * TLSv1.2 (IN), TLS handshake, Server finished (14):
>
> * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
>
> * TLSv1.2 (OUT), TLS change cipher, Client hello (1):
>
> * TLSv1.2 (OUT), TLS handshake, Finished (20):
>
> * TLSv1.2 (IN), TLS change cipher, Client hello (1):
>
> * TLSv1.2 (IN), TLS handshake, Finished (20):
>
> * SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
>
> * ALPN, server accepted to use h2
>
> * Server certificate:
>
> *  subject: [removed]
>
> *  start date: Mar 20 00:00:00 2017 GMT
>
> *  expire date: Mar 24 12:00:00 2020 GMT
>
> *  subjectAltName: host "<hostname>" matched cert's "<certname>"
>
> *  issuer: C=US; O=DigiCert Inc; CN=DigiCert SHA2 Secure Server CA
>
> *  SSL certificate verify ok.
>
> * Using HTTP2, server supports multi-use
>
> * Connection state changed (HTTP/2 confirmed)
>
> * Copying HTTP/2 data in stream buffer to connection buffer after
> upgrade: len=0
>
> * Using Stream ID: 1 (easy handle 0x7fbded005400)
>
> > HEAD /libs/cq/core/content/welcome.html HTTP/2
>
> > Host: <hostname>
>
> > User-Agent: curl/7.54.0
>
> > Accept: */*
>
> >
>
> * Connection state changed (MAX_CONCURRENT_STREAMS updated)!
>
> * HTTP/2 stream 1 was not closed cleanly: INTERNAL_ERROR (err 2)
>
> * Closing connection 0
>
> * TLSv1.2 (OUT), TLS alert, Client hello (1):
>
> curl: (92) HTTP/2 stream 1 was not closed cleanly: INTERNAL_ERROR (err
> 2)
>
> [2] haproxy[19803]: <ipaddr>:63832 [05/Jun/2018:15:36:24.202]
> incoming_https~ local_author_app_http/<NOSRV> 0/-1/-1/-1/0 503 1441 -
> - SCDN 3/1/0/0/0 0/0 "GET /libs/cq/core/content/welcome.html HTTP/1.1"
>


--
Cyril Bonté

cookie insert method secure

Hi,

Is there a mechanism, with a command like:



cookie <cokie_name> insert indirect preserve nocache httponly secure



to insert 'secure' only if the session is SSL? So that it would be possible to use this command on a common http/https backend without using 2 different redundant backends?



Are there also other new cookie security specifiers such as SameSite=…?





Thank you



Rob



Re: cookie insert method secure

Hi.

On 12/06/2018 16:23, mlist wrote:
>Hi,
>
>there is a mechanism to specify to command like:
>
>cookie <cokie_name> insert indirect preserve nocache httponly secure
>
>to insert secure only if the session is ssl ? So it is possible to use
>this command on a common http/https backend without using 2 different
>redundant backend ?

You mean something like this?

frontend http
...
default_backend common_backend

frontend https
...
default_backend common_backend

backend common_backend
...
cookie <cokie_name> insert indirect preserve nocache httponly if !{ ssl_fc }
cookie <cokie_name> insert indirect preserve nocache httponly secure if { ssl_fc }
...

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-default_backend
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#7.3.4-ssl_fc

>There are also other cockie new security specifiers such as SameSite=… ?

Sorry I don't understand this sentence.

>Thank you
>
>Rob
>
>[APK]
>
>[Unione]
>
>mlist
>
>APKAPPA s.r.l. sede legale Via F. Albani, 21 20149 Milano |
> p.iva/vat no. IT-08543640158
>sede amministrativa e operativa Reggio Emilia (RE) via M. K. Gandhi,
>24/A 42123 - sede operativa Magenta (MI) via Milano 89/91 20013
>tel. 02 91712 000 | fax 02 91712 339 www.apkappa.it
>
>Ai sensi e per gli effetti della Legge sulla tutela della riservatezza
>personale (DL.gs. 196/03 e collegate), questa mail è destinata
>unicamente alle persone sopra indicate e le informazioni in essa
>contenute sono da considerarsi strettamente riservate.
>
>This email is confidential, do not use the contents for any purpose
>whatsoever nor disclose them to anyone else. If you are not the
>intended recipient, you should not copy, modify, distribute or take any
>action in reliance on it. If you have received this email in error,
>please notify the sender and delete this email from your system.

HM, is the mailing list *the intended recipient* ;-) ?!

Best regards
Aleks

RE: cookie insert method secure

Hi Aleksandar,

As far as I can see in the configuration documentation, the cookie command does not seem to support a <condition>.
As I currently use HA-Proxy version 1.8-dev0-530141f 2017/03/02, if I set an "if { ssl_fc }" condition I get:

[ALERT] 162/194855 (10704) : parsing [/etc/haproxy/haproxy.cfg:657] : 'cookie' supports 'rewrite', 'insert', 'prefix', 'indirect', 'nocache', 'postonly', 'domain', 'maxidle, and 'maxlife' options.

Also in the documentation for newer versions I cannot see support for <condition>:

http://cbonte.github.io/haproxy-dconv/1.9/configuration.html#cookie%20(Alphabetically%20sorted%20keywords%20reference)

What you wrote is exactly what I'm looking for!

>>There are also other cockie new security specifiers such as SameSite=.... ?

>Sorry I don't understand this sentence.

I mean whether one can use other options than only those specified in the alert above, i.e.:

cookie <cookie_name> insert indirect preserve nocache httponly SameSite=strict

We can "add" a flag to a cookie passing "through" haproxy with " rspirep ^(set-cookie:.*) \1;\ SameSite=strict ..."

[backend set a cookie] -> [haproxy add SameSite=strict to passing cookie] -> [client get altered cookie]

How can we do that with a cookie completely added by haproxy, as we see the "cookie insert" command doesn't seem to support flags like SameSite=strict:

DOESN'T WORK
[haproxy cookie insert SameSite=strict] -> [client get inserted cookie flag]




mlist





-----Original Message-----
From: Aleksandar Lazic <al-haproxy@none.at>
Sent: Tuesday, 12 June 2018 19:29
To: mlist <mlist@apkappa.it>
Cc: haproxy@formilux.org
Subject: Re: cookie insert method secure

Hi.

On 12/06/2018 16:23, mlist wrote:
>Hi,
>
>there is a mechanism to specify to command like:
>
>cookie <cokie_name> insert indirect preserve nocache httponly secure
>
>to insert secure only if the session is ssl ? So it is possible to use
>this command on a common http/https backend without using 2 different
>redundant backend ?

You mean something like this?

frontend http
...
default_backend common_backend

frontend https
...
default_backend common_backend

backend common_backend
...
cookie <cokie_name> insert indirect preserve nocache httponly if !{ ssl_fc }
cookie <cokie_name> insert indirect preserve nocache httponly secure if { ssl_fc }
...

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-default_backend
https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#7.3.4-ssl_fc

>There are also other cockie new security specifiers such as SameSite=... ?

Sorry I don't understand this sentence.

>Thank you
>
>Rob
>
>[APK]
>
>[Unione]
>
>mlist
>
>APKAPPA s.r.l. sede legale Via F. Albani, 21 20149 Milano |
> p.iva/vat no. IT-08543640158
>sede amministrativa e operativa Reggio Emilia (RE) via M. K. Gandhi,
>24/A 42123 - sede operativa Magenta (MI) via Milano 89/91 20013
>tel. 02 91712 000 | fax 02 91712 339 www.apkappa.it
>
>Ai sensi e per gli effetti della Legge sulla tutela della riservatezza
>personale (DL.gs. 196/03 e collegate), questa mail è destinata
>unicamente alle persone sopra indicate e le informazioni in essa
>contenute sono da considerarsi strettamente riservate.
>
>This email is confidential, do not use the contents for any purpose
>whatsoever nor disclose them to anyone else. If you are not the
>intended recipient, you should not copy, modify, distribute or take any
>action in reliance on it. If you have received this email in error,
>please notify the sender and delete this email from your system.

HM, is the mailing list *the intended recipient* ;-) ?!

Best regards
Aleks