Crowdsourcing & the Prisoner's Dilemma

One of the common questions raised about crowdsourced testing (eg Bugcrowd) is how to manage the risk of a tester identifying vulnerabilities and then disclosing, selling, or exploiting them outside the parameters of the officially sanctioned test.

While crowdsourced testing is often presented as an alternative to penetration testing, it is more useful to consider the model in the context of the bug bounty programs run by companies like Google.  

The reason for the distinction is that bug bounty programs are aimed at achieving two related, but distinct, goals:

  1. To have vulnerabilities that would have been identified anyway (ie through unauthorised testing, or through incidental testing on behalf of a third party) responsibly disclosed; and
  2. To have additional vulnerabilities identified by encouraging additional testing, and corresponding responsible disclosure.

That first group is often not considered a goal of a penetration test - the fact that any system of interest is constantly subject to security analysis by Internet-based users wearing varying shades of grey or black hats seems often to be overlooked.  

At the risk of stating the obvious, every vulnerability in a given system is already in that system.  Identifying a vulnerability does not create the weakness - but it may increase the risk associated with that vulnerability as it transitions from 'unknown' to 'known', depending on who comes to know it.  

To use Donald Rumsfeld's categorisation, we could consider the three groups as follows:

  1. Known Knowns: Vulnerabilities we know exist and are known in the outside world (publicly disclosed or identified through compromise);
  2. Known Unknowns: Vulnerabilities that we know exist, and are unsure if they are known in the outside world (either identified by us; or privately disclosed to us);
  3. Unknown Unknowns: Vulnerabilities that we don't know exist, and are unsure if they are known in the outside world (which is the state of most systems, most of the time).

What crowdsourcing seeks to do is reduce the size of the 'unknown unknown' vulnerability population by moving more of those vulnerabilities into the 'known unknown' population, where companies can manage them.  The threat posed by a 'known unknown' is significantly lower than that posed by an 'unknown unknown'.
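
To make the categories concrete, here is a minimal Python sketch of the three populations and the transition a disclosure triggers (the names and the disclose() helper are illustrative, not any real API):

```python
# A minimal sketch of the three vulnerability populations. The names and
# the disclose() helper are illustrative only - not a real API.
from enum import Enum

class Knowledge(Enum):
    KNOWN_KNOWN = "we know it exists; so does the outside world"
    KNOWN_UNKNOWN = "we know it exists; outside knowledge is uncertain"
    UNKNOWN_UNKNOWN = "we don't know it exists"

def disclose(state: Knowledge) -> Knowledge:
    """Responsible disclosure moves a vulnerability from 'unknown unknown'
    to 'known unknown' - it existed all along, but now it can be managed."""
    if state is Knowledge.UNKNOWN_UNKNOWN:
        return Knowledge.KNOWN_UNKNOWN
    return state

print(disclose(Knowledge.UNKNOWN_UNKNOWN).name)  # KNOWN_UNKNOWN
```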

Which brings us to the risk that a vulnerability identified through a crowdsourced test is not reported, and hence remains an 'unknown unknown' to us.  This risk of non-disclosure is effectively mitigated by game theory - the situation is closely analogous to the classic 'Prisoner's Dilemma'.

The Prisoner's Dilemma is a classic of game theory, demonstrating why individuals may not cooperate, even if it is in their best interests to do so.  The Dilemma goes like this:

"Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don't have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain. If he testifies against his partner, he will go free while the partner will get three years in prison on the main charge. Oh, yes, there is a catch ... If both prisoners testify against each other, both will be sentenced to two years in jail."

Effectively, the options are as presented in this table:

                      B stays silent            B testifies
  A stays silent      A: 1 year,  B: 1 year     A: 3 years, B: free
  A testifies         A: free,    B: 3 years    A: 2 years, B: 2 years

The beauty of the dilemma is that, because they cannot communicate, each prisoner must choose without knowing what the other will do.  And whichever choice the other makes, each prisoner gets a better outcome by betraying him.  For Prisoner A looking at his options: if Prisoner B keeps quiet, A faces 1 year in jail (if he also keeps quiet) or no jail time at all (if he testifies against Prisoner B), so testifying gives the better outcome.  And if Prisoner B testifies against him, A faces 3 years in jail (if he keeps quiet) or 2 years (if he also testifies)... again, testifying gives the better outcome.

Hence, economically rational prisoners will not cooperate: both will testify and serve 2 years each, even though mutual silence would have cost them only 1 year each - an apparently sub-optimal outcome.
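
The dominant-strategy argument can be checked mechanically. Here is a minimal Python sketch encoding the payoff matrix above (years in jail, so lower is better) and confirming that testifying is A's best response to either of B's choices:

```python
# A sketch of the payoff matrix above: (years for A, years for B),
# indexed by (A's move, B's move). Lower is better.
SILENT, TESTIFY = "stay silent", "testify"

payoffs = {
    (SILENT, SILENT):   (1, 1),  # both keep quiet: 1 year each
    (SILENT, TESTIFY):  (3, 0),  # A quiet, B testifies: A gets 3 years, B walks
    (TESTIFY, SILENT):  (0, 3),  # A testifies, B quiet: A walks, B gets 3 years
    (TESTIFY, TESTIFY): (2, 2),  # both testify: 2 years each
}

def best_response_for_a(b_move: str) -> str:
    """A's move that minimises A's jail time, given B's move."""
    return min((SILENT, TESTIFY), key=lambda a_move: payoffs[(a_move, b_move)][0])

for b_move in (SILENT, TESTIFY):
    print(f"If B chooses to {b_move}, A does best to {best_response_for_a(b_move)}")
# If B chooses to stay silent, A does best to testify
# If B chooses to testify, A does best to testify
```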

What does this have to do with crowdsourcing?

In crowdsourcing there are obviously far more than two participants, but the decision table we are interested in is the one facing any individual tester.  The situation they face is this:

                 No other tester finds it     Another tester finds it
  Report it      Bounty paid                  Bounty if first to report; otherwise nothing
  Hold it        Possible future sale         Vulnerability patched; no reward

Essentially, each tester only knows the vulnerabilities they have identified.  They do not know who else is testing, or what those other testers have discovered.

Only the first tester to report a vulnerability gets rewarded.

Any tester seeking to 'hold' an identified vulnerability for future sale or exploitation (rather than reporting it for the bounty) has to be confident that no one else identified the same vulnerability during the test; otherwise they are likely to end up with nothing - the vulnerability gets patched, and they receive no reward.  

Since Bugcrowd tests to date have had large numbers of participants, and have found that over 95% of vulnerabilities are reported by more than one tester, this is a gamble that will rarely pay off.

As a result, economically rational testers will disclose the vulnerabilities they find as quickly as possible.  
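
A rough expected-value comparison illustrates the point. In the sketch below, the bounty, the black-market price, and the chance of being first to report are hypothetical numbers chosen purely for illustration; only the 'over 95% reported by more than one tester' figure comes from the Bugcrowd observation above:

```python
# A back-of-the-envelope sketch of the tester's choice. The bounty,
# sale price, and p_first are assumptions for illustration; only the
# ">95% found by more than one tester" figure comes from the article.
bounty = 1_000       # assumed reward for the first valid report
sale_value = 10_000  # assumed value of holding the bug for later sale
p_unique = 0.05      # chance nobody else finds it (<5% per the article)
p_first = 0.5        # assumed chance of being first among duplicate finders

# Reporting pays when the find is unique, or when it's duplicated
# but this tester reports first.
ev_report = (p_unique + (1 - p_unique) * p_first) * bounty

# Holding pays only if nobody else finds the bug; otherwise it gets
# patched and is worth nothing.
ev_hold = p_unique * sale_value

print(f"Expected value of reporting: ${ev_report:,.0f}")  # $525
print(f"Expected value of holding:   ${ev_hold:,.0f}")    # $500
```

Even with the black-market price set at ten times the bounty, reporting comes out ahead under these assumptions - and holding only gets worse as the crowd grows and the chance of a unique find shrinks.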

For organisations getting tested, cliched as it is, the crowd truly does provide safety in numbers.

Disclaimer: I'm an Advisor to Bugcrowd.