I started putting together a piece on the concept of a 'criminality premium' some time ago, but was drawn away to other topics for a while. I was brought back to it after reading a blog by Phil Kernick, of CQR Consulting, titled "Attribution is Easy." I'm not sure whether the title is intended to be serious or to provoke debate, but if you're really interested in attribution, the US held hearings on the topic before the Subcommittee on Technology and Innovation, Committee on Science and Technology of the United States House of Representatives, in 2010. While obviously a few years old now, the content remains excellent and is a must-read for cyber-security folks.
My personal favourite is this submission: Untangling Attribution: Moving to Accountability in Cyberspace, by Robert K. Knake, International Affairs Fellow in Residence, The Council on Foreign Relations.
The following diagram from Knake's submission presents a neat and tidy summary of the key challenges in attribution, varying by the type of incident/attack one is trying to attribute. I would suggest that attribution isn't "easy", but in some cases is a problem with sub-elements which can definitely be resolved.
While the CQR blog entry example of Alice and Chuck, and Chuck peering over Alice's fence with a telephoto lens, is hardly the epitome of 'cyber war', the mechanism of attribution - based on the "small number of capable actors" (ie who could see the designs) and "using out-of-band investigative and intelligence capabilities" is a pretty good match for the above.
The CQR blog also included the following line which raised my eyebrows:
"This is an economic market working perfectly - if it is cheaper to steal the design than license it, economic theory drives theft, until the cost of theft is greater than the cost of licensing."
While the underlying economic premise here may well be correct, it is only true in a world where the only 'cost' of theft to the thief is the actual financial cost of the resources used to steal. The lack of consideration of the potential for either civil or criminal liability for copyright breach (and whatever other miscellaneous crimes may have occurred in the process) renders the example of little use in the real world.
Where this does become relevant, however, is in the consideration of the concept of a 'criminality premium', which first arose after a discussion about crowd sourced security testing, and bugcrowd (for whom I am an Advisor).
The realisation I had is that crowdsourced testing aligns the reward process for the good guys with the reward process for the bad guys. That is, the bad guys don't get 'paid' (ie, don't receive an economic reward) for the time they invest in finding vulnerabilities in systems; they only get 'paid' when they find a vulnerability (generally, through exploiting it). Crowdsourcing aligns the reward system so that the good guys get rewarded for doing the same thing as the bad guys.
This, in turn, got me wondering whether this similarity in reward structure helps level the playing field: the good guys no longer have the economic advantage of stability of earnings (ie getting paid for time, rather than results) and are instead paid like the bad guys - on delivery of results.
Taking this a step further, if we're presenting the same fundamental task (finding security weaknesses), and the same economic incentive structure to both the good guys and the bad guys, then the only reason someone would choose between the two is the size of the reward. I also assume that it is not as simple as just converging the size of the 'good guy' reward pool with the potential size of the criminal 'reward pool', but that logically there is a 'criminality premium', in that given two choices:
1. Earn $50 legally; or
2. Earn $50 illegally for doing exactly the same thing;
anyone making rational decisions will choose (1), as there is a 'cost' associated with (2): the potential for punishment for the illegal act.
Therefore, the question is simply how big we think this criminality premium is. If you have a database of 40,000 credit card numbers, which for argument's sake are worth about 50c each on the black market, the potential 'payment' for accessing that database and selling the contents is $20,000.
How much do you need to pay an economically rational person who identifies the vulnerability allowing access to that data, for them to choose the legal disclosure path rather than the illegal one? (Acknowledging that this concept requires almost everyone in the world having a tacit ongoing bug bounty program!)
$5,000? Seems unlikely.
$10,000? Must be getting close. $10,000 without any worries about the feds kicking in your door would seem a better idea than $20,000 from illegal exploitation of that data set (since there are all the usual 'non-payment' risks that also arise in the black market).
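The trade-off above can be sketched as a simple expected-value comparison. The probability of being caught, the punishment cost, and the black-market non-payment risk below are all assumed figures for illustration, not market data:

```python
# Illustrative sketch of the 'criminality premium' trade-off.
# All probabilities and costs are assumptions, not market data.

def illegal_expected_value(payoff, p_caught, punishment_cost, p_nonpayment=0.0):
    """Expected value of the illegal path: the black-market payoff,
    discounted by the risk of the buyer not paying, minus the
    expected cost of punishment if caught."""
    return payoff * (1 - p_nonpayment) - p_caught * punishment_cost

# 40,000 card numbers at ~50c each on the black market
illegal_payoff = 40_000 * 0.50  # $20,000

# Assumed: 20% chance of being caught, $100,000 punishment cost
# (fines, legal fees, lost future earnings), 25% chance of non-payment.
ev_illegal = illegal_expected_value(illegal_payoff, p_caught=0.2,
                                    punishment_cost=100_000, p_nonpayment=0.25)

# Any legal bounty above ev_illegal beats the illegal path for a
# rational actor; with these assumptions the illegal path is already
# negative expected value.
print(f"Expected value of illegal path: ${ev_illegal:,.0f}")
```

With these (deliberately rough) numbers, even a modest punishment risk drags the illegal path's expected value below zero, which is why a bounty well under the $20,000 face value can plausibly be enough.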
If we can successfully remove the economic incentive to be a 'black hat' rather than a 'white hat', we're left with only the criminally insane and the purely vindictive (ie not economically motivated) attackers to worry about.
And whether organisations have a grip on the potential economic value of their data to an attacker, in order to put together a program that is sufficient to take economically rational hackers out of the pool of bad guys, is a different question again.