The latest Snowden document released is hilariously named: What’s The Worst That Could Happen. In effect, it documents the Government’s thought process for determining the risk involved in launching a targeted attack on someone. In theory, it served as a guide to remind UK spooks of the various risks involved when trying to exploit someone. (It reads like it was put together by some GCHQ intern, so I wouldn’t pay too much attention to it.)
One of the main goals of your OPSEC plan is to reduce the likelihood of attack. With this document, we can see what an attacker believes the risks to be, what they fear, and exploit that. As with all operational decisions, the rewards must outweigh the risks. So let’s make it risky for them to target you.
Fears of Discovery
Most of the document is a list of the possible consequences of being discovered: if I catch someone in my bedroom, for example, the result is that I could identify the person and punch them in the face. But the discovery section is where it gets interesting. It’s a list of all the ways they’re concerned about being discovered. Their job, before deciding whether an operation should be launched, is to assess the risk of you discovering them. Let’s substantiate their fears by building some OPSEC plans.
Here’s the list:
- Compromise of operation during installation
- Inadequate personnel security controls and subsequent information leak
- Discovery of installed hardware (including post-operation)
- Forensic discovery of installed software
- Discovery of a suspicious audit trail/logs/registry
- Discovery of suspicious RF energy
- Suspicious profile caused by hardware/software malfunction
- Discovery of egressed traffic
- Discovery through other IT leakage
- Vulnerability to HIS or other monitoring
- Inadequate monitoring of profile generated by operation
- Inadequate review of risks during the lifetime of the operation
- Reliance on an uncertain supply chain or other risky dependencies
- Failure by operators to cover tracks, including clearing logs/changing read status of emails
- Novel capabilities and techniques having unknown effects outside of lab testing conditions
- Unforeseen changes to hardware or software leading to compromise of techniques or installation
- Hardware/software malfunctions leading to a change in target behaviour, potentially including forensic investigation (and potential discovery) and/or loss of target access
Compromise During Installation
A relatively easy fear to substantiate is the detection of an exploit or device installation. This could be an application or piece of hardware installed on a device you own, as well as physical surveillance threats like microphones or cameras. Suggestions might be:
- Security cameras to monitor your workspace.
- Custom log that detects when unknown USB devices are installed.
- Tools like tripwire that detect modifications to your OS and prevent installation.
Catching Them In The Act
This refers to catching a person in the middle of their operation. Examples would be sketchy vans parked outside of your flat, cars tailing you over long distances, or people following you on the streets. There are some fun OPSEC tactics we can apply here:
- Take illogical paths (circle around the block).
- Don’t have a predictable pattern of activities.
- Use security cameras to keep a log of the people around your home.
Some of the “bugs” used by these types of people communicate over radio frequencies. If you can show that you know about radio and how to detect signals being relayed, you make yourself risky to target: there’s a real chance you’ll catch them.
- Use a software defined radio (HackRF or BladeRF) to establish a baseline of the normal radio signals around your facility. Regularly monitor for changes and send alerts.
- Use bug detection tools (usually pretty expensive) to sweep your living space.
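The “monitor for changes” step above can be sketched as a simple baseline comparison. This assumes you already have per-frequency-bin power readings in dB from your SDR’s sweep tool; the input format and the 10 dB threshold are my own illustrative choices, not from any particular tool:

```python
# Compare a fresh RF power sweep against a recorded baseline and flag
# bins whose power rose by more than a threshold. A new, persistent
# transmitter near your facility shows up as a sustained jump in one
# or more bins. Threshold and bin layout are assumptions; tune them
# against your own RF environment.

def rf_anomalies(baseline_db, current_db, threshold_db=10.0):
    """Return indices of bins where power rose by >= threshold_db."""
    return [
        i
        for i, (base, cur) in enumerate(zip(baseline_db, current_db))
        if cur - base >= threshold_db
    ]
```

In practice you would average several sweeps into the baseline and only alert when the same bins stay hot across repeated scans, to filter out transient traffic like a neighbour keying a handheld.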
Malfunctions and Crashes
The NSA’s TAO program has a variety of zero-days at their disposal. Most often, these exploits will cause some kind of crash on your system. When a crash happens, don’t react by saying “Oh, it does that sometimes.” Make sure you have a way of detecting crashes and logging them somewhere so you can determine what occurred.
- Review all your logs after a crash. This seems simple, but if you don’t have time to debug a crash, at least store the logs for review later.
- Use tools like mcelog that not only keep logs of hardware events but also help predict when a crash is likely to occur again.
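Storing crash evidence for later review can be as simple as sweeping your syslog for crash indicators. A minimal sketch, with patterns that are illustrative and should be tuned to your own distribution’s log format:

```python
import re

# Pull kernel/userland crash indicators out of syslog-style text so
# they can be archived for later review, even if you can’t debug the
# crash right now. The pattern list is a starting point, not complete.

CRASH_PATTERNS = re.compile(
    r"segfault|general protection|kernel BUG|Oops|mce: |Machine Check",
    re.IGNORECASE,
)

def crash_lines(log_text):
    """Return the log lines that look like crashes or hardware faults."""
    return [
        line for line in log_text.splitlines()
        if CRASH_PATTERNS.search(line)
    ]
```

Point this at `/var/log/syslog` (or your journal export) after any crash, and append the matches to a dated file so you can spot a pattern across incidents.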
Detecting Egress Traffic
If something is installed on one of your devices, it can only collect information on its own. Eventually it will need to exfiltrate that information back to the attacker. There are some simple things you can do to detect when unauthorized connections are being made on your network.
- Use strict egress filtering on your operation’s network segment. Only allow access to hosts you trust, via protocols you expect, over ports you know.
- Alert yourself on unexpected network traffic such as outbound connections to other countries.
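Both suggestions boil down to checking observed outbound connections against a whitelist. A toy sketch of that check; in practice you’d feed it from netflow, firewall logs, or `ss -tn` output, and the whitelist entries below are made up for illustration:

```python
# Flag outbound connections that aren’t on the expected-egress
# whitelist. Each connection is a (destination_host, port) pair; the
# addresses here are illustrative, not a recommended policy.

def unexpected_egress(connections, allowed):
    """Return observed (host, port) pairs not on the whitelist."""
    return [conn for conn in connections if conn not in allowed]
```

Anything this returns is worth an alert: either you forgot to whitelist a legitimate service, or something on your network is talking to a host you never approved.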
Missing From The List
The list, of course, is not exhaustive. They don’t go very deep into the various discovery tactics. For example, you might consider building an extremely public system that everyone knows about and likes. If it’s attacked or exploited, a canary could go off that destroys the service. This is the idea of collateral damage: an attack could harm their public perception, reveal which country originated it, or just cause political turmoil that makes the operation not worth the effort. In some ways, Edward Snowden is an example of this.
Don’t forget, each of these examples isn’t about what you do after you’ve been targeted; it’s about a show of capabilities before you’re targeted. If it looks like you follow tight OPSEC, are very difficult or expensive to target, or just don’t seem worth the effort, you’re less likely to be targeted. Again, it’s a simple question of risk vs. reward.
An unfortunate side note should be mentioned: performing all of these tactics may make you appear to be hiding something of higher value than you truly are. We know that those of us who use things like GPG are specifically targeted solely because we encrypt our messages. Put another way, in order to quantify the reward of targeting you, they would first need to know what you’re doing, and they likely can’t. The level of OPSEC you’ve built will lead them to deduce that whatever you’re doing is of high value. Defending against being targeted makes you a target. That circle is for another day.