One of my favorite reads this year was Nate Silver's The Signal and the Noise, which has the subtitle "Why so many predictions fail, but some don't." It covers a ton of different topics, from weather to politics to gambling, and I couldn't help but read it with a startup/tech point of view.
After all, the industry of technology startups is all about prediction- we try to predict what will be a good market, what will be a good product, as we "iterate" and "pivot" on our predictions. And of course the business of venture capital is even more directly about knowing how to pick winners- especially the seed and Series A investments.
And yet, we're all so bad at predicting what will work and what won't. I've written about my embarrassing skepticism about Facebook, but hey, I'm just a random tech guy. The folks whose job it is to pick winners professionally, the venture capitalists, aren't doing very well either. It's been widely noted that the venture capital asset class, after fees, has lagged the public markets- you'd be better off buying some index funds.
Startup exceptionalism = sparse data sets = shitty prediction models
One of the most challenging aspects of predicting the next breakout startup is that there are so few of them. It's been widely discussed that 10-15 startups a year generate 97% of the returns in tech, and each one seems like a crazy exception. And as an industry we get myopically focused on each one of them.
With these kinds of odds, our brains go crazy with pattern-matching. When a once-in-a-generation startup like Google comes around, for the next few years after that, we all ask, "OK, but do you have any PhDs on the team? What's the 'PageRank' of your product?" And now that we have Airbnb, we've gone from being skeptical of designer-led companies to being huge fans of them. With so few datapoints, the prediction models we generate as a community aren't great- they're simplistic and amplified by the swirl of attention-grabbing headlines and soundbites.
These simplistic models result in generic startup advice. As I wrote about earlier, there's a whole ecosystem of vendors, press, consultants, and advisors who go on advice autopilot and give the same advice regardless of situation. Invest in great UX, charge users right away, iterate quickly, measure everything, launch earlier, work long hours, raise more money, raise less money – all of these ideas are helpful to complete newbies but dangerous when applied recklessly to every situation.
We all know how to parrot this common wisdom, but how do we know when we're hearing good versus bad advice? If there are only 10-15 companies every year who are breakouts, how many people really have first-hand experience making the right decisions to start and build breakout companies?
Hedgehogs and pundits
I was reminded of my dislike of generic startup advice when, in his book, Nate Silver writes about hedgehogs versus foxes and their approaches towards generating predictions – here's the Wikipedia definition of the concept:
[There are] two categories: hedgehogs, who view the world through the lens of a single defining idea, and foxes, who draw on a wide variety of experiences and for whom the world cannot be boiled down to a single idea.
Silver clearly identifies as a fox, and contrasts his approach with the talking-head pundits that dominate political talk shows on TV and radio. For the pundits, the more aggressive, contrarian, and certain they seem, the more attention-grabbing they are. It's rather similar to what we see in the blogosphere, where people are rewarded for writing headlines like "10 reasons why [hot company] will be killed by [new product]." Or "Every startup should care about [metric X]" or whatever.
This hedgehog-like behavior is amplified by the fact that there's always pressure to articulate a thesis on what's going on in the market. People in the press are always trying to spot trends or boil down complex ideas, and investors are constantly asked, "What kinds of startups are you investing in? Why?" And entrepreneurs are always forced to fit their businesses into the broader trends of the market, to find sexy competitors, all in the chase to find a simple narrative that describes what's going on.
The solution to all of this isn't easy- to be a fox means to draw from a much broader set of data, to look at the problem from multiple perspectives, and to reach a conclusion that combines all of those datapoints. There's been some great work on the science of forecasting by Philip Tetlock of UPenn, who's set up an open contest to study good forecasting here. There's an interview with him at Edge.org here and a video describing some of his academic research below. Worth watching.
My personal experience
Over my 5 years in Silicon Valley, the biggest lesson I've learned from trying to predict startups is calibration. They talk about it in the video above, but the short way to describe it is to be careful about what you think you know versus what you don't. I've found that my area of expertise where I can make good decisions is actually pretty narrow- I've done a bunch of work in online ads, analytics, and consumer communication/publishing, and I think my judgment is pretty good there, but it's much shakier outside of that area.
When I do an analysis, I try to match my delivery with how much I think I know- and these days, that means I sound a lot more tentative than the younger, brasher version of myself when I first came to SF. I've also tried to be diligent in avoiding "advice autopilot" – if I meet with entrepreneurs and find myself saying the same thing multiple times, then I try to refine the idea to take into account the specifics and nuances of that product. It's easier and lazier, but less helpful, to just say the same thing over and over again.
Be the fox, not the hedgehog.
(Andrew Chen is an entrepreneur and blogger based in Palo Alto, CA)