Extrapolating the US penetration testing market size

One of the questions I have had a few times following on from my analysis of the Australian penetration testing market is the implied size of the global penetration testing market.  Or at least the size of the US penetration testing market, on the assumption that it is the largest.  With a few minutes to spare, I thought I would try to kludge together a number that at least seems plausible given the (admittedly very few) external reference points available.

IBIS World released a research report in August 2012 (the "IT Security Consulting in the US Market Research Report") which provides a couple of free snippets of data - a revenue figure of $5 Billion, and, interestingly, the statement that "there are no companies with a dominant market share in this industry" - which is exactly the conclusion I came to when looking at the Australian penetration testing market.

So there's our first data point:  The US IT Security Consulting Market (2012) is estimated at $5 Billion.  

[Image: 5bil.png]

Global Industry Analysts, Inc have estimated the 2013 global information security products & services market at $104 Billion, and RNCOS has estimated the global IT security market at $96 Billion (both figures from this interesting analysis of the Turkish IT security market).  Not wildly dissimilar numbers, which is always a nice start.  A PricewaterhouseCoopers report in 2011 apparently put the estimated market size at $60 Billion - so a bit smaller, but with forecast growth, probably closer to a $75 Billion estimate by 2013.  Gartner has put the global market at $55 Billion in 2011, with a forecast growth path that would imply something like $67 Billion for 2013.

The US is estimated to make up close to half of all cyber-security spending globally, which seems quite plausible when one considers the size of both defence-led Government cyber-security expenditure and the US economy as a whole.  That would put the US cyber-security market in the vicinity of $35-45 Billion for 2013.

[Image: 35bil.png]

One potentially useful stat we can gather from the above is that IT security consulting is ~10-15% of the overall IT security market size.

So how do Australia's numbers compare?

This fairly old data set from 2009 has Gartner estimating the Australian IT security market size at about $250 Million.  Let's add on 20% year-on-year growth since then, and we're at $500 Million-ish today.  Given my previous analysis of the Australian penetration testing market put it at $200-300 Million on its own, I think this is a pretty low estimate.  A 2008 estimate by IDC forecast the market would hit $1.5 Billion by 2011, which actually sounds a bit more workable.
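As a quick sanity check, that compounding can be reproduced in a couple of lines (the 2009 base figure is Gartner's; the 20% growth rate is the assumption above):

```python
# Gartner's 2009 estimate of the Australian IT security market
base_2009 = 250_000_000   # $250 Million

# Assumed 20% year-on-year growth, compounded over four years (2009 -> 2013)
value_2013 = base_2009 * 1.20 ** 4

print(f"Implied 2013 market size: ${value_2013 / 1e6:.0f} Million")
```

Which lands at roughly $518 Million - the "$500 Million-ish" figure above.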

[Image: 1point5.png]

If this is correct, and if my previous penetration testing market estimates are plausible, then at a macro level organisations are spending 10-20% of their security budget on penetration testing and vulnerability assessment.  This feels a bit high (probably reflecting the fact that less is being spent than the bottom-up estimate of penetration testing expenditure would suggest), and it also sits oddly against the US estimate of 10-15% of IT security spend going to consulting, since that consulting figure would contain a great deal of 'non-penetration testing' services.  For penetration testing alone, let's go with something closer to 5% to be a bit more conservative.

[Image: 1to3.png]

So as rubbery as these data sets may be, they would suggest that the US penetration testing market is in the $1.5 - 3 Billion range... which makes it 8-10 times the size of the Australian market.  Given that the US economy (GDP $15.094 Trillion) is an order of magnitude larger than the Australian economy (GDP $1.37 Trillion), that would seem to make sense.
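The whole extrapolation chain can be written down in a few lines.  The inputs are the figures quoted above; applying the conservative 5% penetration testing share to the $35-45 Billion US market gives a band that sits comfortably inside the $1.5 - 3 Billion range:

```python
# US cyber-security market estimate for 2013, from the figures above
us_security_market = (35e9, 45e9)

# Conservative assumption from above: ~5% of security spend goes to pentesting
pentest_share = 0.05

low, high = (x * pentest_share for x in us_security_market)
print(f"Implied US pentest market: ${low / 1e9:.2f}B - ${high / 1e9:.2f}B")
```

That gives $1.75 - 2.25 Billion; the wider $1.5 - 3 Billion range simply allows for the slack in the inputs.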

And just to recap my favourite point once again... "there are no companies with a dominant market share in the [IT security consulting] industry".  As I said at the end of the Australian analysis, this is a great market to be a part of; and on a global scale that is no different.

Crowdsourcing & the Prisoner's Dilemma

One of the common questions that gets raised in the crowdsourced testing process (eg Bugcrowd) is how it's possible to manage the risk of a tester identifying vulnerabilities, then disclosing them or selling them or using them, outside the parameters of the officially sanctioned test.

While crowdsourced testing presents an alternative to penetration testing in many cases, it is somewhat more useful to consider the model in the context of the bug bounty programs run by companies like Google.

The reason for the distinction is that bug bounty programs are aimed at achieving two related, but distinct, goals:

  1. To have vulnerabilities that would have been identified anyway (ie through unauthorised testing, or through incidental testing for a third party), be responsibly disclosed; and
  2. To have additional vulnerabilities identified by encouraging additional testing, and corresponding responsible disclosure.

That first group is often not considered a goal of a penetration test - the likelihood that any system of interest is constantly being subjected to security analysis by Internet-based users with varying shades of grey or black hats seems often to be overlooked.

At the risk of stating the obvious, the reality is that every vulnerability in a given system is already in that system.  Identifying vulnerabilities does not create those weaknesses - though it may increase the risk associated with a vulnerability as it transitions from being 'unknown' to being 'known', depending on who knows it.


To use Donald Rumsfeld's categorisation, we could consider the three groups as follows:

  1. Known Knowns: Vulnerabilities we know exist and are known in the outside world (publicly disclosed or identified through compromise);
  2. Known Unknowns: Vulnerabilities that we know exist, and are unsure if they are known in the outside world (either identified by us; or privately disclosed to us);
  3. Unknown Unknowns: Vulnerabilities that we don't know exist, and are unsure if they are known in the outside world (which is the state of most systems, most of the time).

What crowdsourcing seeks to do is reduce the size of the 'unknown unknown' vulnerability population by moving more of those vulnerabilities into the 'known unknown' population, so that companies can manage them.  The threat of a 'known unknown' is significantly lower than the threat of an 'unknown unknown'.

Which brings us to the risk that a vulnerability identified through a crowdsourced test is not reported, and hence remains an 'unknown unknown' to us.  The risk of non-disclosure of vulnerabilities identified through a crowdsourced test is effectively mitigated by game theory - it is somewhat similar to the classic 'Prisoner's Dilemma'.

The Prisoner's Dilemma is a classic of game theory, demonstrating why individuals may not cooperate, even if it is in their best interests to do so.  The Dilemma goes like this:

"Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don't have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain. If he testifies against his partner, he will go free while the partner will get three years in prison on the main charge. Oh, yes, there is a catch ... If both prisoners testify against each other, both will be sentenced to two years in jail."

Effectively, the options are as presented in this table:

[Image: prisonersdilemma.png]

                       B stays quiet          B testifies
  A stays quiet        A: 1 yr,  B: 1 yr      A: 3 yrs, B: 0 yrs
  A testifies          A: 0 yrs, B: 3 yrs     A: 2 yrs, B: 2 yrs

The beauty of the dilemma is that, as they cannot communicate, each prisoner must evaluate their own actions without knowing the actions of the other.  And each prisoner gets a better outcome by betraying the other.  For Prisoner A looking at his options: if Prisoner B keeps quiet, Prisoner A has the choice of 1 year in jail (if he also keeps quiet) or no jail time at all (if he testifies against Prisoner B) - so testifying gives a better outcome.  And if Prisoner B testifies against him, Prisoner A has the choice of 3 years in jail (if he keeps quiet) or 2 years in jail (if he also testifies)... again, testifying gives a better outcome.

Hence, economically rational prisoners will not cooperate, and both prisoners will serve 2 years in prison, despite that appearing to be a sub-optimal outcome.
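The dominance argument above is mechanical enough to check in code.  A small sketch of the payoff table (years in jail for Prisoner A, lower is better) confirms that testifying is the best response whatever Prisoner B does:

```python
# Years in jail for Prisoner A, indexed by (A's action, B's action);
# lower is better. Numbers taken from the dilemma as stated above.
SENTENCE = {
    ("quiet", "quiet"): 1,
    ("quiet", "testify"): 3,
    ("testify", "quiet"): 0,
    ("testify", "testify"): 2,
}

def best_response(b_action):
    """Prisoner A's best action given what Prisoner B does."""
    return min(("quiet", "testify"), key=lambda a: SENTENCE[(a, b_action)])

for b in ("quiet", "testify"):
    print(f"If B plays {b!r}, A's best response is {best_response(b)!r}")
```

Both lines print 'testify' - a dominant strategy, which is exactly why the rational outcome is mutual betrayal.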

What does this have to do with crowdsourcing?

In crowdsourcing there are obviously far more than 2 participants; but the decision table we are interested in is the one facing any particular tester.  The situation they face is this:

[Image: crowdtest2.png]

Essentially, each tester only knows the vulnerabilities they have identified.  They do not know who else is testing, or what those other testers have discovered.

Only the first tester to report a vulnerability gets rewarded.

Any tester seeking to 'hold' an identified vulnerability for future sale/exploitation (as opposed to payment via the bounty system) has to be confident that the vulnerability was not identified by anyone else during the test, since otherwise they are likely to end up with nothing - the vulnerability gets patched, plus they don't get any reward.  

Since Bugcrowd tests to date have had large numbers of participants, and have found that over 95% of vulnerabilities are reported by more than one tester, this is a risk that will rarely pay off.
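A back-of-envelope expected-value calculation makes the point.  The 95% duplicate rate is the Bugcrowd figure above; the dollar amounts are purely hypothetical placeholders for illustration:

```python
p_duplicate = 0.95        # chance another tester also finds (and reports) the bug
bounty = 1_000            # hypothetical bounty for reporting it first
outside_value = 10_000    # hypothetical value of holding it for later sale/use

# Holding only pays off if nobody else reports the bug and gets it patched
ev_hold = (1 - p_duplicate) * outside_value

print(f"Expected value of holding: ${ev_hold:.0f} vs reporting: ${bounty}")
```

This deliberately simplifies the reporting side (only the first reporter is paid), but the asymmetry is the point: even with an assumed 10x premium for taking the bug outside, holding it is worth less in expectation than the bounty - and the faster you disclose, the better your odds of being first.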

As a result, economically rational testers will disclose the vulnerabilities they find, as quickly as possible.  

For organisations getting tested, cliched as it is, the crowd truly does provide safety in numbers.

Disclaimer: I'm an Advisor to Bugcrowd.