Cache Killer: Russian Intelligence Vector?
A while ago, I mentioned that my LiveJournal account got hacked. I also teased that the circumstances let me infer, with a high degree of certainty, how it was done.
Well, I've finally gotten confirmation that Cache Killer, the Chrome plugin that was ubiquitous among front-end developers, was indeed compromised: it had tracking code injected into it that phoned home, which is why it was yanked from the store. It is worth noting that several people were able to independently confirm this because the rest of the code was well-written and thoroughly commented, which made the injected attack stick out like a sore thumb. One more reason to write clean, readable, maintainable code.
Of probable interest are the exfiltration vectors:
- Yandex, a Russian search engine
- Dropbox, through a compromised account
The first one is interesting because it implies custom backend code that was either written by Yandex or wedged into Yandex's own infrastructure. In an era of both active Russian hacking and false-flag attacks, it's just a point of interest-- but, one more data point.
Why Cache Killer?
Because a huge percentage-- possibly a majority-- of the entire planet's professional web developers used it.
Cache Killer-- the legitimate version-- solved a ubiquitous problem that almost every web developer has: making the site load from scratch every time, the way it will load exactly once for a real user. Browsers are really bad about exposing commands to do this properly. They lump cache clearing in with the privacy settings, but the motivation is completely different.
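For context, the manual workarounds the legitimate extension replaced are clumsy. A common one is appending a throwaway query parameter so the browser treats each request as a brand-new URL. A minimal sketch (the function name and parameter name are mine):

```javascript
// Append a cache-busting query parameter so the browser treats
// every request as a URL it has never seen before.
function bustCache(url) {
  const sep = url.includes('?') ? '&' : '?';
  return url + sep + '_cb=' + Date.now();
}

// Example: fetch(bustCache('https://example.com/app.css'))
```

It works, but it pollutes every URL in your code and your logs-- which is exactly why developers reached for an extension that did it transparently instead.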
That, of course, isn't always clear to people other than developers, and anyone trying to use this extension to "cover their tracks" missed some pretty important points. It is, however, entirely possible that some analyst missed the point as well, and that this hack was done for law enforcement, not intelligence, reasons-- but I really doubt it.
The permissions Cache Killer asked for were so broad that it could read, report on, and modify any page from any site. This is obviously not ideal, but because Google never addressed the underlying developer need with real support in the API or settings pages, it seemed to make sense, so people just put up with it.
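For illustration-- this is a reconstruction, not the extension's actual manifest-- a Manifest V2 permissions block broad enough to do what's described above looks something like this:

```json
{
  "name": "Cache Killer",
  "permissions": [
    "<all_urls>",
    "webRequest",
    "webRequestBlocking",
    "tabs"
  ]
}
```

The `"<all_urls>"` entry is the dangerous one: it grants the extension access to every page on every site, which is also what lets a hijacked version read and modify anything the user sees.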
Keep in mind that permissions like that can do more than just access static content. They can continuously poll the DOM for dynamic changes-- such as keyboard entry. Passwords, form entries, search queries, Bitcoin wallet details, just about anything.
Usernames and passwords, perhaps.
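For concreteness: the polling approach needs no keyboard hooks at all-- just snapshot the form fields periodically and report whatever changed. A minimal sketch, with the DOM-independent diff logic pulled out so it stands alone (all names here are mine, not taken from the actual malware):

```javascript
// Given two snapshots of field values, return the entries that changed --
// exactly what a malicious poller would queue up for exfiltration.
function changedFields(prev, next) {
  const changes = {};
  for (const [name, value] of Object.entries(next)) {
    if (prev[name] !== value) changes[name] = value;
  }
  return changes;
}

// In a content script, the snapshots would come from something like:
//   const snapshot = {};
//   for (const el of document.querySelectorAll('input, textarea')) {
//     snapshot[el.name || el.id] = el.value;
//   }
//   setInterval(() => { /* diff against the last snapshot, report changes */ }, 500);
```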
Probable target: remote login credentials
Because login pages have a clear objective-- acquire two pieces of information, pass them in a secure way to a remote endpoint, receive cryptographic tokens and new content as a result-- detecting them is reasonably straightforward. The matter gets even simpler when you realize that for most users, even developers, there are only a handful of pages where the keyboard is used for data entry at all.
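To show how little cleverness the detection step requires, here is a toy heuristic of my own devising (not taken from the actual malware): a page "looks like" a login form if it has a password field next to a text or email field.

```javascript
// Toy heuristic: does this set of input types look like a login form?
// A password field alongside a text/email field is a strong signal.
function looksLikeLogin(inputTypes) {
  return inputTypes.includes('password') &&
         inputTypes.some(t => t === 'text' || t === 'email');
}
```

A real implementation would also look at form actions and field names, but even this crude version would flag the overwhelming majority of login pages.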
Consequently, you have a situation where a ubiquitous software development tool was given carte blanche permissions-- and was hijacked to abuse them.
Once you realize that many of the captured credentials might well have been the store accounts of developers of other trusted plugins, you start to see how the problem can rapidly snowball.
Lesson 1: Developer needs must be fully met to ensure platform security
The root cause of all this is that Chrome did not listen closely enough to developer needs to address them securely in the browser itself. Instead, developers had to lean on a third-party plugin, which was eventually compromised.
Lesson 2: Counterintelligence agencies need to audit ubiquitous developer tools
That line pretty much says it all, and is probably worth a more complete article itself.
Lesson 3: Anything weird needs to be run in a sandbox
A pretty effective way to neuter this thing would have been to just use it in a browser that was running in a VM. That would provide a number of other benefits, such as being able to put a second VM in as a router to simulate various network conditions. It might have been logging everything-- but with a little discipline, "everything" would have just been test users on pre-release code, and with a little more discipline, the reporting traffic would have been smacked down by a vicious firewall configuration on the VM.
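For what it's worth, the "vicious firewall configuration" doesn't have to be elaborate. On a Linux VM, a default-deny egress policy with a whitelist for the machines under test is a few lines of iptables (the subnet below is a placeholder, not a recommendation):

```shell
# Default-deny all outbound traffic from the test VM...
iptables -P OUTPUT DROP
# ...then whitelist only the pre-release servers under test.
iptables -A OUTPUT -d 10.0.2.0/24 -j ACCEPT
# Allow loopback so local tooling keeps working.
iptables -A OUTPUT -o lo -j ACCEPT
```

Anything the compromised extension tried to phone home would simply be dropped on the floor.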
If a system does something weird, it's weird. Even if you use it on purpose. Even if there's a good reason for it. No matter how much you trust it, remember it's weird, because weird is hard to watch.