If the CIA can sidestep encryption, what makes you think cyberthieves can’t?

Credit to Author: Evan Schuman | Date: Wed, 08 Mar 2017 06:48:00 -0800

Having just spent much of the day browsing through Wikileaks’ latest batch of documents from the intelligence community, in which agents discussed ways to circumvent mobile encryption and to listen in on conversations near smart devices, including smart TVs, I came away with one thing clear: government agents have long had the ability to grab mobile content before it’s encrypted.

Some of the tactics have names that are quite explicit about their function, such as a TV mode called “TV Fake-Off.” These docs provide a fascinating look into the government teams that are emulating cyberthieves, trying to improve on their techniques rather than thwart them.

Personal security products (PSP) “sandboxes typically have a set time limit they analyze a program for before making a decision. PSPs do not want to impose unnecessarily long wait times on the user, which may cause the user to disable PSP components or try other products out of frustration,” said one typical passage. “A common technique of exploiting this mechanism is using a Sleep-like call at the start of a program to ‘run out the clock.’ PSPs caught on and many will skip the sleep calls in their sandbox environment. To counteract this, Malware authors will call a meaningless function which performs some kind of task or calculation that takes a while to complete, before performing any malicious action. This makes it harder/impossible for PSPs to know what to skip, and the Malware can effectively ‘run out the clock’ while in a PSP sandbox.”

Interestingly, the CIA and other intelligence agencies follow the same process as most security firms: they study cyberthieves’ tactics. But instead of using that knowledge to improve defenses, the CIA uses those lessons to craft better attacks.

“This is a very impressive set of tools gathered,” said Doug Barbin, principal cybersecurity leader of Schellman & Co., a CPA firm. “But it wasn’t something that a security researcher would be too surprised by. It’s so detailed, though, that it takes the debate out of whether or not these types of attacks are hypothetical.”

Barbin added, though, that some of the initial reports have been misleading. The CIA’s tested method of monitoring a smart TV, for example, he said, relied on a USB stick inserted into the set to initiate the monitoring. That would require physical access to the TV, as opposed to intercepting data over the air.

Although Barbin’s point is well taken, some of these memos are two years old. Just because the attack was tested with a USB stick doesn’t mean it couldn’t be launched wirelessly today.

Another security professional, Ken Pfeil, the chief architect at the TechDemocracy consulting firm, was equally unimpressed with the CIA’s tactics.

“These are pretty standard. The fact that they are using DLL injection is not surprising. In the exploit world, some of this stuff is pretty basic,” Pfeil said. “There is nothing sitting in front of me [from the Wikileaks data dump] that would surprise me. Absolutely nothing.”

Agreed. Only the dumbest terrorist would opt to hold terror planning meetings in the same room as a smart TV that supports voice recognition. Then again, who ever said terrorists are especially smart? If the effort thwarts even one plan from some IQ-deficient murderer, it’s likely worth it.

Some of the advice in the CIA memos is positively coach-like. Consider: “After verifying that the CTNR was called for thread creation, the kernel code can do some basic checks to see if the thread is being created in an interesting process. The important thing to remember about running code in the CTNR is that NO new threads can be created until each CTNR is finished. If your CTNR code takes 1 minute to run, then you’ve bottlenecked thread creation to 1 new thread a minute — extreme example of course. Whatever you do in the CTNR, make sure it’s quick.”
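
For readers unfamiliar with the acronym, “CTNR” appears to refer to a thread-creation notify routine, the kernel callback a Windows driver registers with PsSetCreateThreadNotifyRoutine (security products register the very same callback to watch thread creation). Here is a minimal, benign sketch of such a registration; the names and the PID check are illustrative assumptions, not taken from the memo. It shows why the memo stresses keeping the callback quick: every new thread on the system waits for the routine to return.

```c
#include <ntddk.h>

// PID of the process we care about; a real driver would set this from
// user mode (e.g., via an IOCTL). It stays NULL in this sketch, so the
// check below never fires.
static HANDLE g_TargetPid = NULL;

// Thread-creation notify routine ("CTNR"). Windows invokes it for every
// thread create and exit, and thread creation does not complete until it
// returns, so keep the body cheap: quick checks only, no heavy work.
static VOID ThreadNotify(HANDLE ProcessId, HANDLE ThreadId, BOOLEAN Create)
{
    UNREFERENCED_PARAMETER(ThreadId);

    if (!Create) {
        return; // Thread exit: nothing to do in this sketch.
    }

    if (ProcessId == g_TargetPid) {
        // "Interesting" process: just flag the event here and hand any
        // real work off to a queued work item elsewhere in the driver.
    }
}

static VOID DriverUnload(PDRIVER_OBJECT DriverObject)
{
    UNREFERENCED_PARAMETER(DriverObject);
    PsRemoveCreateThreadNotifyRoutine(ThreadNotify);
}

NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(RegistryPath);
    DriverObject->DriverUnload = DriverUnload;
    return PsSetCreateThreadNotifyRoutine(ThreadNotify);
}
```

Deferring anything heavier than a cheap comparison to a work item is exactly the “make sure it’s quick” advice the memo gives, since a slow callback would throttle thread creation system-wide.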

Many of the suggestions were aimed at, logically enough, tactics to avoid detection. “Process Hollowing involves starting a benign process — such as Internet Explorer — using Windows’ CreateProcess, with a specific flag set to create the process in a suspended mode. At this point, the component removes the benign process’ code from the suspended process, injects its own malicious code, and resumes the process. PSPs may only do an initial scan when the process is created — even though it’s suspended at the start — and won’t notice the code replacement. Also, dynamic analysis tools such as Procmon will only log/show that a benign process was created.”

The CIA paid particular attention to getting around security defenses from Kaspersky. That might be a compliment of sorts to the product’s sophistication, or it might simply be that Kaspersky has rejected many requests to cooperate with government investigators.

“The Kaspersky AVP.EXE process references a DLL called WHEAPGRD.DLL. This DLL is supposed to be located in one of the Kaspersky directories, which are protected by the PSP. Due to a UNICODE/ASCII processing mistake, the DLL name is prepended with the Windows installation drive letter, rather than the full path to the DLL,” a memo said. “For typical installations, this causes Kaspersky to look for the DLL ‘CWHEAPGRD.DLL’ by following the standard DLL search path order. Loading our own DLL into the AVP process enables us to bypass Kaspersky’s protections.”

Other memos described time-savers. “All function calls need to come from the ese.dll, and not esent.dll. The API appears the same, but exchange does not use esent.dll. Therefore all JET function calls need to be from ese.dll space. Thankfully, its already loaded into mem,” the document said, before adding a smiley emoticon. “Store.exe seems to export a wonderful function EcGetJetInstanceForMDB() that takes a GUID and returns a valid JET instance handle that has already been initialized and setup for use. Appears there is no need to figure out all the right SystemParameters, etc. and in order to create our own sessions from this instance. Use UuidFromString() to convert from String GUID to binary. However, this function isn’t really need as once we are injected in, calling JetGetInstanceInfo() gives us everything we need.”

The most interesting discussions, though, were candid in suggesting ways to bypass security restrictions. “When building a tool, you will almost inevitably have to use some set of strings or sensitive data. When security products or professionals scan a system, we don’t want to make it easy for them to find something malicious by just doing a string search. Thus, in order to obfuscate what the tool is doing, we obfuscate the strings or data being used,” one memo said. “You should also scan the binary you deliver against usernames and names of people on the project as many times mistakes are made and PDB strings — file paths that often include usernames — are left in the final binary. There are many products we use to help us automate portions or all of string/data obfuscation.”

That memo continued, winking to the reader about its intended use. “So you may already have a good idea of where we’re going with this. Memory refers to the volatile memory on the machine while the disk is non-volatile. This difference is important when developing malicious software,” the note said. “As a development shop, we tend to do most of our work in memory and rarely leave unencrypted artifacts on disk. That being said, all persistence is gained by writing to a non-volatile location on the machine. Thus, it is good to keep in mind that anything on disk shouldn’t contain anything too cool for school. Also, on disk artifacts are more likely to be detected by Personal Security Products (PSPs).”

All in all, just a run-of-the-mill day for your friendly neighborhood CIA agents.
