Sendmail, Sensory Networks & PacketLoop - Pondering Interesting Transactions

Sendmail - Watch this space 

Proofpoint - a serial acquirer in the cyber-security industry - has acquired Sendmail for about $23 Million in cash, paying a revenue multiple of something like 10, and no meaningful profit multiple, since by the sounds of the announcement Sendmail as a commercial enterprise has been losing money pretty consistently.

[Image: sendmail.png]
"For the fourth quarter of 2013, Proofpoint expects Sendmail to have an immaterial impact on revenue while widening the company's non-GAAP net loss by approximately $2 million or $0.06 per share, as the company takes on the costs associated with this new team and begins to build a recurring revenue stream."  (http://finance.yahoo.com/news/proofpoint-inc-acquires-sendmail-inc-201000890.html)
"Sendmail brings a global community of open source users and a compelling set of enterprise customers, but little in the way of near-term recurring revenue due to their legacy business model built around the sale of appliances and perpetual licenses."  (http://finance.yahoo.com/news/proofpoint-inc-acquires-sendmail-inc-201000890.html)

So why are they buying it?  It seems the strategy is primarily about supply chain protection and/or integration:

"Noting that ProofPoint's enterprise protection solution is built on Sendmail's MTA, ProofPoint CEO Gary Steele said, "Acquiring Sendmail gives Proofpoint ownership of this definitive industry-standard technology...""  (http://www.fool.com/investing/general/2013/10/01/proofpoint-makes-another-acquisition.aspx)

Although the opportunity could well be larger than that.  There is certainly precedent for taking a semi-open-source software product and surrounding it with commercial services and support (Snort/Sourcefire and Nessus/Tenable being two prime examples in the cyber-security industry), creating significant value in the process.  Key to success will be ensuring the community continues to participate in the open source project, and sees the overarching commercial organisation now stewarding it as one whose values they share.  That Proofpoint has already started reaching out to the community (eg http://www.sendmail.com/sm/open_source/community_letter/) is a positive start to that relationship.

 

Sensory Networks - A mixed result

The same day as the Sendmail transaction, it was announced that Intel is acquiring Australian cyber-security tech company Sensory Networks for $21.5 Million (http://www.smh.com.au/it-pro/business-it/intel-to-acquire-australian-tech-company-sensory-networks-for-21-million-20131001-hv1un.html).  Intel is listed on the Sensory website as a partner, so as with the Sendmail acquisition, it could simply be a case, from Intel's perspective, of protecting the supply chain.

[Image: sensory.png]

I have a soft spot for Sensory Networks as it was on Matt Barrie's recommendation that a number of our earliest team members at SIFT were recruited, and without exception they turned out to be some of the best and brightest minds in security that I have had the privilege to work with.  That being said, early media reports of the Sensory Networks sale really wanted to be able to present it as a success story, but that became progressively more difficult when additional context was added to the deal and the company.  

Like the fact Sensory had raised about USD $30M in venture capital to get to this point.  Like the fact Sensory was not a 'start-up', but had been running since 2003.  Like the fact Sensory started life as a hardware company (and by all accounts was excellent at it, from an engineering standpoint) and in 2009 changed tack to be software focused.  And the fact that at the date of the transaction the company had only five (5) employees.

Does anyone actually make any money in a deal like this?  It's an interesting question, and the answer is... It depends.

It depends on a few things, like:

  • The terms under which the venture capitalists invested
  • The degree to which the early shareholders were diluted in the various funding rounds
  • The importance of the remaining key employees and their ability to renegotiate equity plans over time
  • Other technical things like whether it's an asset sale or a share sale, and what the balance sheet of the company looks like

The first of those is probably the most significant.  Essentially, a venture capitalist is likely to get 'Preferred Stock' rather than 'Common Stock'.  One of the benefits of preferred stock is that it will generally have 'liquidation preferences' attached to it.  At the simplest level, the 'preference' referred to in the name of the stock is that it gets paid before the common stock.  There are a few different approaches to preferred stock (broadly known as 'Straight Preferred', 'Participating Preferred', or 'Partially Participating Preferred' - http://venturebeat.com/2010/08/16/beware-the-trappings-of-liquidation-preference/), but the crux of the issue is the same: if you've got preferred stock, you will generally get back the cash you put in before the common stockholders get anything.  And if you put in $30M and the company sells for $20M, that means there is nothing left for anyone holding non-preferred shares.
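To make that waterfall arithmetic concrete, here is a minimal sketch assuming a single class of straight (non-participating) preferred stock with a 1x preference.  The figures are purely illustrative - I have no knowledge of the actual Sensory terms.

```python
def straight_preferred_waterfall(sale_price, preference, preferred_pct):
    """Split sale proceeds between straight (1x, non-participating) preferred
    stockholders and common stockholders.

    Straight preferred takes the better of (a) its liquidation preference or
    (b) its as-converted share of the proceeds - never both.
    """
    as_converted = sale_price * preferred_pct
    preferred_payout = min(sale_price, max(preference, as_converted))
    return preferred_payout, sale_price - preferred_payout

# Illustrative only: $30M invested with a 1x preference, company sells for $20M.
pref, common = straight_preferred_waterfall(20e6, 30e6, preferred_pct=0.40)
print(f"Preferred: ${pref:,.0f}  Common: ${common:,.0f}")
# Preferred: $20,000,000  Common: $0
```

Under those assumptions the preference swallows the entire sale price, and every share of common stock - founders and employees included - is worth nothing.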

Now to be clear, I don't have inside information on any of these transactions, and don't know what the terms were in any of the agreements.  It's likely that the share register at Sensory changed a great many times over the years as funds were raised, investors came and went, founders departed, and the employee share scheme ebbed and flowed (since it is in everyone's interests to ensure key team members remain motivated and incentivised to make the company succeed).  Perhaps at the end a few people were holding enough of the right shares to do reasonably well after years of hard work... but it's also possible that nobody did.

My intention here is simply to highlight, for aspiring tech entrepreneurs who heard the figure "$21.5 Million" and thought "Pay day! I'm starting a company!", that life often isn't that simple.  While it's fairly self-evident that a company going bust doesn't make the founders rich, it's less self-evident that a company being sold for an eight-figure sum also may not make the founders a fortune.

I do hope that the team who worked so hard, for so long, to build the technology and the business of Sensory, did reasonably well out of this.  Looking to build an engineering-heavy cyber-security hardware company in Australia in the early 2000s was ambitious and courageous, and they contributed significantly to the cyber-security talent pool that we now have.

 

PacketLoop - The next generation

A month before the Sensory Networks and Sendmail transactions, it was announced that Arbor Networks (www.arbornetworks.com) had acquired PacketLoop (www.packetloop.com) - see http://www.arbornetworks.com/recent-in-the-news/4983-news-packetloop for the official press release.  While both are innovative cyber-security technology companies, in many ways PacketLoop is the antithesis of the Sensory Networks story.  It was started in 2011 and sold just 2 years later, and as far as I know was bootstrapped throughout that period, without external venture capital involvement (although I could be wrong in that assumption).

[Image: packetloop.png]

For those who are new to the industry, it is worth noting that the PacketLoop team have experience in this area - their previous cyber-security consulting firm ThinkSecure was sold to Infoplex in 2007 (http://www.computerworld.com.au/article/188385/infoplex_acquires_thinksecure_/).  

The great thing about this transaction, from my perspective, is that PacketLoop is genuinely innovative, IP-driven, and Australian.  The company focused on research and development, and on getting the product right before taking it hard to market.  The attraction of PacketLoop to Arbor can only have been the IP - while I'm sure they have some clients and revenue, an acquisition at this early stage of a company's life is about getting access to the technology.  That is really exciting, a great credit to Scott Crane, Michael Baker and the others involved, and a really powerful message to others that it can be done.

The financial details of the deal haven't been made public and I don't know what they are, but I hope the founders and others have done well out of it, and I am also very confident that the deal would have been structured to provide significant incentive to stay and build the company further with Arbor's support and backing - which is great for the industry, the technology, and for cyber-security research and development in Australia. 

Attribution, Economics, and 'The Criminality Premium'

I started putting together a piece on the concept of a 'criminality premium' some time ago, but was drawn to other topics for a while.  I was brought back to it after reading a blog post by Phil Kernick of CQR Consulting, titled "Attribution is Easy".  I'm not sure whether the title is intended to be serious or to provoke debate, but if you're really interested in attribution, the Subcommittee on Technology and Innovation of the United States House of Representatives' Committee on Science and Technology held hearings on the topic in 2010.  While obviously a few years old now, the content remains excellent and is a must-read for cyber-security folks.

My personal favourite is this submission: "Untangling Attribution: Moving to Accountability in Cyberspace", by Robert K. Knake, International Affairs Fellow in Residence, The Council on Foreign Relations.

The following diagram from Knake's submission presents a neat and tidy summary of the key challenges in attribution, varying by the type of incident/attack one is trying to attribute.  I would suggest that attribution isn't "easy", but in some cases it is a problem whose sub-elements can definitely be resolved.

[Image: attribution.png]

While the CQR blog entry's example of Alice and Chuck, with Chuck peering over Alice's fence with a telephoto lens, is hardly the epitome of 'cyber war', the mechanism of attribution - based on the "small number of capable actors" (ie who could see the designs) and "using out-of-band investigative and intelligence capabilities" - is a pretty good match for the above.

The CQR blog also included the following line which raised my eyebrows:

"This is an economic market working perfectly - if it is cheaper to steal the design than license it, economic theory drives theft, until the cost of theft is greater than the cost of licensing." 

While the underlying economic premise here may well be correct, it is only true in a world where the only 'cost' of theft to the thief is the actual financial cost of the resources used to steal.  Ignoring the potential for civil or criminal liability for copyright breach (and whatever other miscellaneous crimes may have occurred in the process) renders the example of little use in the real world.

Where this does become relevant, however, is in considering the concept of a 'criminality premium', which first arose for me after a discussion about crowdsourced security testing and Bugcrowd (for whom I am an Advisor).

The realisation I had is that crowdsourced testing aligns the reward process for the good guys with the reward process for the bad guys.  That is, the bad guys don't get 'paid' (ie, don't receive an economic reward) for the time they invest in looking for vulnerabilities in systems; they only get 'paid' when they find a vulnerability (generally, through exploiting it).  Crowdsourcing aligns the reward system so that the good guys get rewarded for doing the same thing as the bad guys.

This, in turn, got me wondering whether this similarity in reward structure helps level the playing field: the good guys no longer have the economic advantage of stable earnings (ie getting paid for time rather than results) and are instead paid like the bad guys - on delivery of results.

Taking this a step further, if we're presenting the same fundamental task (finding security weaknesses) and the same economic incentive structure to both the good guys and the bad guys, then the only reason to choose between the two is the size of the reward.  I also assume it is not as simple as matching the size of the 'good guy' reward pool to the potential size of the criminal 'reward pool'; logically there is a 'criminality premium', in that given two choices:

  1. Earn $50 legally;
  2. Earn $50 illegally for doing exactly the same thing;

Anyone making rational decisions will choose (1), as (2) carries an additional 'cost' that must be considered: the potential for punishment for the illegal act.

Therefore, the question is simply how big we think this criminality premium is.  If you have a database of 40,000 credit card numbers, which for argument's sake are worth about 50c each on the black market, the potential 'payment' for accessing that database and selling the contents is $20,000.

How much do you need to pay for the person identifying the vulnerability that allows access to that data - assuming they are economically rational - to choose the legal disclosure path rather than the illegal one?  (Acknowledging that this concept requires almost everyone in the world to have a tacit, ongoing bug bounty program!)

$5,000?  Seems unlikely.

$10,000?  Must be getting close.  $10,000 without any worries about the feds kicking in your door would seem a better idea than $20,000 from illegal exploitation of that data set (since there are all the usual 'non-payment' risks that also arise in the black market). 

$15,000?  Surely.
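To sketch that reasoning in code: every probability and the punishment 'cost' below is an illustrative guess on my part, not data; only the $20,000 black-market value comes from the example above.

```python
# A rough sketch of the 'criminality premium' reasoning.
black_market_value = 40_000 * 0.50   # 40,000 cards at ~50c each = $20,000
p_paid            = 0.80             # black markets carry 'non-payment' risk
p_caught          = 0.10             # chance of the feds kicking in your door
punishment_cost   = 30_000           # rough dollar-equivalent of a conviction

# Expected value of taking the illegal path:
ev_illegal = p_paid * black_market_value - p_caught * punishment_cost
print(f"Illegal path expected value: ${ev_illegal:,.0f}")   # $13,000

# An economically rational finder chooses legal disclosure whenever the
# bounty on offer exceeds that expected value.
```

Under those guesses the break-even bounty is about $13,000 - consistent with the gut feel that $10,000 is getting close and $15,000 is surely enough.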

If we can successfully remove the economic incentive to be a 'black hat' rather than a 'white hat', we're just left with the criminally insane and the purely vindictive (ie not economically motivated) attackers to worry about.  

And whether organisations have a grip on the potential economic value of their data to an attacker - so they can put together a program sufficient to take economically rational hackers out of the pool of bad guys - is a different question again.

Want to maximise your sale price? Build a product

When you run a cyber-security consulting firm, servicing hundreds of clients and delivering thousands of projects over the course of many years, you get a pretty good idea of the problems organisations are experiencing, as well as the problems you are experiencing and would like to have solved.  From that position, a discussion invariably occurs within the leadership of the company about whether to stay 'pure' as a consulting firm - doing what you know well: recruiting, delivering, and tracking utilisation - or to reallocate some of the brainpower in your consulting team towards research & development, and more specifically towards developing some kind of 'product' that will solve the problems you have identified.

The obvious attraction is that products are (often) scalable.  People are not.

Part of the consideration in deciding whether to make this investment is the expected return at the point of 'exit' - particularly the valuation differential that could be commanded at the point of a trade sale.  Having analysed data from over 600 cyber-security industry transactions completed in the last decade, this is what that premium looks like:

Comparative valuation multiples - software, hardware & consulting led cyber-security businesses, 2004-2013

[Image: comparative-valuations.png]

So what does the data tell us?

Breaking the organisations into consulting-led, software-led, and hardware-led categories (noting that not enough managed services company data is available for this category to stand alone), and comparing valuation multiples for revenue and profit, with consulting-led firms normalised for each category to '100%', we get the following differentials:

  • Compared to consulting-led firms, hardware-led firms have sold for revenue multiples between 3%-45% higher.
  • Compared to consulting-led firms, software-led firms have sold for revenue multiples between 101%-177% higher.
  • Compared to consulting-led firms, software-led firms have sold for profit multiples between 69%-109% higher.
  • (Insufficient comparative profit multiple data is available for hardware-led firms, so they aren't included.)

To put those figures in perspective, if your consulting-led cyber-security business is expected to sell for a revenue multiple of about 2 or a profit multiple of 6, a software-led cyber-security business next door will likely sell for a revenue multiple of between about 4 and 5.5, or a profit multiple of between 10.1 and 12.5.  That is a significant difference.

In other words, if you have both consulting and software parts to your business, when valuing the business it is likely that $1 of profit from your in-house developed software is worth roughly twice as much as $1 of profit from your consulting business.
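The implied multiples above fall straight out of the published differentials; here is a minimal sketch of the conversion, where the 2x revenue and 6x profit consulting baselines are the illustrative figures used in the text rather than transaction data.

```python
# Convert the premium ranges into implied multiples for software-led firms.
consulting_revenue_multiple = 2.0     # illustrative baseline from the text
consulting_profit_multiple  = 6.0     # illustrative baseline from the text

software_revenue_premium = (1.01, 1.77)   # 101%-177% higher than consulting
software_profit_premium  = (0.69, 1.09)   # 69%-109% higher than consulting

rev = [consulting_revenue_multiple * (1 + p) for p in software_revenue_premium]
pft = [consulting_profit_multiple * (1 + p) for p in software_profit_premium]

print(f"Software revenue multiple: {rev[0]:.1f}x to {rev[1]:.1f}x")  # 4.0x to 5.5x
print(f"Software profit multiple:  {pft[0]:.1f}x to {pft[1]:.1f}x")  # 10.1x to 12.5x
```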

Of course, this isn't without its exceptions.  Just looking at listed companies, it's easy enough to find cases of services-driven firms being valued more highly than product-driven firms.  As an example:

[Image: PE-mature.png]

(Of course, I acknowledge the significant growth of Check Point and Symantec in the services areas of their businesses, particularly Symantec with regard to managed services.  But I would be pretty confident that investors still see them first and foremost as product companies.)

But then those are all very mature businesses and realistically are well past the point of 'explosive growth'.  When you look at the younger crop of cyber-security product companies, you get some pretty crazy numbers:

[Image: PE-fastgrowth.png]

To give some perspective on what a P/E of 319 means... Sourcefire's income (profit) for the last 12 month reporting period was a tad over $5 million.  Their current market capitalisation is $1.57 Billion.

But these companies have massive growth potential (Sourcefire has been growing revenue at 25-35% a year), and are also obvious acquisition targets for the more established firms in the market.  The enormous market capitalisations reflect this growth profile and the fact that investors are comfortable the companies will find a way to provide a return to shareholders.

It is also important to recognise, however, that building a successful product business is significantly more difficult than building a consulting practice, and the likelihood of a 'moderate' success is much lower.  With a consulting practice, it is reasonably easy to run a small team, build up a client base, and operate at a healthy level of profitability for as long as you are willing to keep driving the business.  With a product business, that kind of viability-without-being-the-market-leader is harder to come by, and success is much more likely to be all or nothing.  So while the payoff may be higher, the likelihood of getting any payoff at all is most likely lower.

Also important to consider is that the 'buyer universe' changes significantly when your consulting firm starts building a product-led business unit.  Companies that previously may have been interested suitors may not want the R&D or support and maintenance expenditure necessary for an ongoing product-led operation.

Ultimately, there are many ways to build a valuable company that will appeal to enough potential buyers to achieve a healthy exit for the founders.  What is important is understanding where the value lies within your business, and how to stitch it together into a coherent story that maximises value during the sale process.

Crowdsourcing & the Prisoner's Dilemma

One of the common questions raised about the crowdsourced testing process (eg Bugcrowd) is how to manage the risk of a tester identifying vulnerabilities, then disclosing, selling, or using them outside the parameters of the officially sanctioned test.

While crowdsourced testing is in many cases presented as an alternative to penetration testing, it is more useful to consider the model in the context of the bug bounty programs run by companies like Google.

The reason for the distinction is that bug bounty programs are aimed at achieving two related, but distinct, goals:

  1. To have vulnerabilities that would have been identified anyway (ie through unauthorised testing, or through incidental testing for a third party), be responsibly disclosed; and
  2. To have additional vulnerabilities identified by encouraging additional testing, and corresponding responsible disclosure.

That first group is often not considered a goal of a penetration test - the likelihood that any system of interest is constantly being subjected to security analysis by Internet-based users wearing varying shades of grey or black hats seems often to be overlooked.

At the risk of stating the obvious: every vulnerability in a given system is already in that system.  Identifying vulnerabilities does not create those weaknesses - though it may increase the risk associated with a vulnerability as it transitions from being 'unknown' to being 'known', depending on who knows it.


To use Donald Rumsfeld's categorisation, we could consider the three groups as follows:

  1. Known Knowns: Vulnerabilities we know exist and are known in the outside world (publicly disclosed or identified through compromise);
  2. Known Unknowns: Vulnerabilities that we know exist, and are unsure if they are known in the outside world (either identified by us; or privately disclosed to us);
  3. Unknown Unknowns: Vulnerabilities that we don't know exist, and are unsure if they are known in the outside world (which is the state of most systems, most of the time).

What crowdsourcing seeks to do, is to reduce the size of the 'unknown unknown' vulnerability population, by moving more of them to the 'known unknown' population so that companies can manage them.  The threat of a 'known unknown' is significantly lower than the threat of an 'unknown unknown'.

Which brings us to the risk that a vulnerability identified through a crowdsourced test is not reported, and hence remains an 'unknown unknown' to us.  This risk of non-disclosure is effectively mitigated by game theory - the situation is somewhat similar to the classic 'Prisoner's Dilemma'.

The Prisoner's Dilemma is a classic of game theory, demonstrating why individuals may not cooperate, even if it is in their best interests to do so.  The Dilemma goes like this:

"Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don't have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain. If he testifies against his partner, he will go free while the partner will get three years in prison on the main charge. Oh, yes, there is a catch ... If both prisoners testify against each other, both will be sentenced to two years in jail."

Effectively, the options are as presented in this table:

[Image: prisonersdilemma.png - the payoff matrix, reconstructed from the quote above:]

                         B stays silent             B testifies
  A stays silent         A: 1 year,  B: 1 year      A: 3 years, B: free
  A testifies            A: free,    B: 3 years     A: 2 years, B: 2 years

The beauty of the dilemma is that, since they cannot communicate, each prisoner must evaluate his own actions without knowing the actions of the other.  And each prisoner gets a better outcome by betraying the other.  For Prisoner A, if Prisoner B keeps quiet, A faces a choice between 1 year in jail (if he also keeps quiet) and no jail time at all (if he testifies against B) - testifying gives the better outcome.  And if Prisoner B testifies against him, A faces a choice between 3 years in jail (if he keeps quiet) and 2 years (if he also testifies) - again, testifying gives the better outcome.

Hence, economically rational prisoners will not cooperate, and both prisoners will serve 2 years in prison, despite that appearing to be a sub-optimal outcome.
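The dominant-strategy logic can be checked mechanically; here is a minimal sketch using the sentences from the quote above.

```python
# Years in jail for (A, B), indexed by (A's choice, B's choice),
# taken straight from the quoted version of the dilemma.
QUIET, TESTIFY = "quiet", "testify"
payoffs = {
    (QUIET, QUIET):     (1, 1),  # both keep quiet: 1 year each
    (QUIET, TESTIFY):   (3, 0),  # A quiet, B testifies: A gets 3, B walks
    (TESTIFY, QUIET):   (0, 3),  # A testifies, B quiet: A walks, B gets 3
    (TESTIFY, TESTIFY): (2, 2),  # both testify: 2 years each
}

# Whatever B does, A serves less time by testifying (and vice versa)...
for b_choice in (QUIET, TESTIFY):
    a_quiet = payoffs[(QUIET, b_choice)][0]
    a_testify = payoffs[(TESTIFY, b_choice)][0]
    better = TESTIFY if a_testify < a_quiet else QUIET
    print(f"If B chooses {b_choice}: A gets {a_quiet} vs {a_testify} years -> {better}")
# ...so both rational prisoners testify and serve 2 years each.
```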

What does this have to do with crowdsourcing?

In crowdsourcing there are obviously far more than 2 participants, but the decision table we are interested in is the one facing any individual tester.  The situation they face is this:

[Image: crowdtest2.png]

Essentially, each tester only knows the vulnerabilities they have identified.  They do not know who else is testing, or what those other testers have discovered.

Only the first tester to report a vulnerability gets rewarded.

Any tester seeking to 'hold' an identified vulnerability for future sale or exploitation (as opposed to payment via the bounty system) has to be confident that the vulnerability was not identified by anyone else during the test; otherwise they are likely to end up with nothing - the vulnerability gets patched, and they receive no reward.

Since Bugcrowd tests to date have had large numbers of participants, and have found that over 95% of vulnerabilities are reported by more than one tester, this is a risk that will rarely pay off.
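To put rough numbers on that gamble: the dollar figures below are illustrative assumptions, with only the 95% duplicate-report rate taken from the Bugcrowd experience above.

```python
# Expected value for a tester deciding whether to report or 'hold' a bug.
bounty             = 1_000    # illustrative: paid to the first reporter
black_market_value = 5_000    # illustrative: what a held bug might fetch later
p_duplicate        = 0.95     # Bugcrowd figure: odds another tester finds it too

# A held bug is worth something only if nobody else reports it first;
# once it is reported and patched, it is worth nothing.
ev_hold = (1 - p_duplicate) * black_market_value

# Reporting promptly earns the bounty (assuming you get in first).
ev_report = bounty

print(f"Expected value of holding:   ${ev_hold:,.0f}")    # $250
print(f"Expected value of reporting: ${ev_report:,.0f}")  # $1,000
```

Even with a black-market price five times the bounty, holding is the worse bet under these assumptions.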

As a result, economically rational testers will disclose the vulnerabilities they find, as quickly as possible.  

For organisations getting tested, cliched as it is, the crowd truly does provide safety in numbers.

Disclaimer: I'm an Advisor to Bugcrowd.