How We Can Thrive in a World of Chaos: a great TED talk by Fred Destin
Don’t launch your startup at an event BUT use events to catalyze growth
One interesting finding that seems to be emerging from my research project on The Science of Growth at CMU is that breakout companies, while no more or less likely to “launch” with a big bang, do seem to find events after they’ve gotten started that catalyze growth.
I mentioned this during my panel at the MIT Enterprise Forum this week and a lot of people asked me about it after, so thought I’d expand here.
A lot of the popular “launch” stories you hear were not actually launches at all. For example, here is Evan Williams on Quora talking about what actually happened with Twitter at SxSW in 2007:
we didn’t actually launch Twitter at SXSW — SXSW just chose to blow it up. We launched it nine months before — to a whimper …
A lot of the “overnight successful launch stories” told today seem to have this characteristic. The product had actually been used by a core group of users for months before the “overnight success.”
That said, many of the companies we have been looking at do seem to have a well-known catalyzing event later. Sometimes it’s a physical event like SxSW (Twitter) or the DNC Convention (AirBnB), and sometimes it’s just a news story the product or company can draft off. In one case, Movable Type vs. WordPress, the event was self-inflicted: Movable Type’s licensing fiasco, which WordPress jumped all over.
Early results suggest this will be one of the roughly half-dozen best practices we ultimately highlight.
My Entrepreneurial Journey
I spoke in my friend & mentor Dave Mawhinney’s class at Tepper this morning. He asked me to talk about the importance of networking & dive deep into Birchmere Labs and how it complements Birchmere’s standard venture fund.
I decided to walk through my career and how my network catalyzed that. For the first part of my journey as a founder / entrepreneur, I used the LinkedIn maps app to visualize the network and show connections.
Then I talked about driving across the country from Tucson to Pittsburgh after 9/11 with Ned Renzi, who over 10 years later would invite me to join him as a partner at Birchmere.
Finally, I talked about some of the key assumptions around Birchmere Labs and our thesis / approach.
The slides are pretty visual, but I’m sharing them below. I like this much better than a standard “networking is important” talk, which felt too self-evident to present to a bunch of graduate students.
Interesting analysis. I suspect some of this is larger seed rounds, but there’s also the point Nikhil Basu Trivedi makes that momentum matters.
Are you hearing “no” enough?
I was talking to a Birchmere portfolio company yesterday about an amazing business development partnership they had just secured. It was a really fun call as it promises to be transformational.
I mentioned casually how impressed I was with their tenacity, and the guy who had secured the deal said: “Look, it was low probability, but you miss 100% of the shots you don’t take.”
It’s a great line. It’s easy to fall into the trap of only doing things you think will be successful. One of my mentors, Dave Mawhinney, likes to remind his students that a good definition of entrepreneurship is “insane perseverance in the face of complete resistance.”
I was trying to think about how to measure whether I was doing this. (After all, you measure the things you care about most.) I’ve decided to start tracking how many times each week I hear “no” because I took a shot I thought I’d miss.
In these early hours, we have enough willpower and energy to tackle things that require internal motivation, things the outside world does not immediately demand or reward.
That’s the argument for scheduling important priorities first. But there’s more to the muscle metaphor. Muscles can be strengthened over time. A bodybuilder must work hard to develop huge biceps, but then he can go into maintenance mode and still look pretty buff. Paradoxically, with willpower, research has found that people who score high on measures of self-discipline tend not to employ this discipline when they do regular activities that would seem to require it, such as homework or getting to class or work on time. For successful people, these are no longer choices but habits.
Need Your Help: Research Project for The Science of Growth - Suggested Cases?
I’ve been talking a lot recently about “The Science of Growth” (slides below from standard presentation that I customize).
The basic concept is focusing on answering the question: what do you do once you have product-market fit? I argue there is an entire science to generating growth once a company has achieved that first milestone of p/m fit, and it’s not as simple as “hire a growth hacker.”
This fall in addition to my course on Lean Entrepreneurship during mini 1 & 3 at CMU, I’ll be adding a second course focusing on this topic taught during mini 2 & 4.
I’ve started working with a graduate student at CMU to develop some case studies. So here is the request: we’ve identified a bunch of interesting examples where two companies had a similar level of p/m fit at about the same point in their development, but one succeeded and the other failed to achieve scale.
This includes software examples, but it is broader (ex: Tesla vs. Fisker) and not a new phenomenon (ex: McDonald’s vs. White Castle).
I’d like to have a really broad set of case studies, so any suggestions would be welcome. Please respond below or email me seanammirati at gmail dot com.
Thanks in advance!
Are Your Experiments Actually Delivering Validated Learning?
I’m currently enjoying Nate Silver’s book The Signal and the Noise. In the introduction, he has a great anecdote about published scientific research regularly being unreproducible. I thought this was really interesting, so I searched around to find a little more information.
In September 2011, Nature Reviews Drug Discovery published an analysis by Dr. Khusru Asadullah and his colleagues at Bayer that tested:
67 target-validation projects, covering the majority of Bayer’s work in oncology, women’s health and cardiovascular medicine over the past 4 years. Of these, results from internal experiments matched up with the published findings in only 14 projects, but were highly inconsistent in 43 (in a further 10 projects, claims were rated as mostly reproducible, partially reproducible or not applicable) …
This means only about one in five of the published findings Bayer tried to replicate (and theoretically build upon) could actually be replicated in their lab; in 43 of the 67 projects, the results were “highly inconsistent.” While surprising to me, this apparently isn’t a new phenomenon, as the article in Nature goes on to cite other published studies showing similar challenges replicating results. The Wall Street Journal covered the same phenomenon, summarizing it as:
one of medicine’s dirty secrets: Most results, including those that appear in top-flight peer-reviewed journals, can’t be reproduced.
The obvious question is: why? John Ioannidis of Stanford University’s School of Medicine seems to be the academic expert on this phenomenon. He wrote a paper titled “Why Most Published Research Findings Are False,” where he walks through the statistics of the post-study positive predictive value (PPV). Based on his statistical framework, he outlines six corollaries, each of which decreases the likelihood that a research finding is true:
- The smaller the sample size of the study
- The smaller the effect size of the study
- The greater the number and the lesser the selection of tested relationships
- The greater the flexibility in designs, definitions, outcomes, and analytical modes
- The greater the financial and other interests and prejudices
- The hotter a scientific field (with more scientific teams involved)
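To make the corollaries above concrete, here is a small sketch of the post-study PPV formula from Ioannidis’s paper, PPV = (1−β)R / (R − βR + α), where R is the prior odds that a tested relationship is true. The specific numbers plugged in below are illustrative assumptions, not figures from the paper:

```python
def post_study_ppv(alpha, power, r):
    """Post-study positive predictive value (Ioannidis, 2005).

    alpha: significance threshold (Type I error rate)
    power: 1 - beta, the probability of detecting a true effect
    r:     prior odds that a tested relationship is true
    """
    beta = 1.0 - power
    return (power * r) / (r - beta * r + alpha)

# A reasonably powered study (80% power, alpha = 0.05) in a "hot" field
# where only 1 in 10 tested hypotheses is actually true:
ppv = post_study_ppv(alpha=0.05, power=0.80, r=0.1)
print(round(ppv, 2))  # about 0.62: nearly 4 in 10 positive findings are false
```

Shrinking the sample (lower power) or chasing a crowded hypothesis space (lower r) drops the PPV further, which is exactly what the first and sixth corollaries predict.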
So What? / How does this apply to startups?
I don’t invest in life science companies, but I think this is actually very relevant for all entrepreneurs, especially now that the lean startup movement (which I’m a big fan of; I teach the Lean Entrepreneurship graduate course at Carnegie Mellon University) focuses on build/measure/learn experiment cycles and draws inspiration from applying the scientific method to validating or invalidating hypotheses.
You might argue that the first four are obvious on the surface given a basic understanding of statistics. However, I’d wager that most of the scientists publishing their research understand statistics better than most people reading this. It’s just tempting to find patterns that don’t exist and to run “quick and dirty” experiments. This is often especially true for startups going through accelerators and trying to squeeze every validation they can from their limited investment dollars before demo day.
The last two are quite interesting given entrepreneurs’ extreme financial interests and the “herd mentality” my partner Ned Renzi wrote about recently, which results in a lot of startups chasing the same trend. In the paper, Ioannidis makes an interesting statement discussing his sixth corollary: “This may explain why we occasionally see major excitement followed rapidly by severe disappointments in fields that draw wide attention.” This certainly rings true for “the next big startup trend.”
So I’m curious: what techniques do you use to make sure your build/measure/learn experiments are truly delivering validated learning and not false positives?
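One basic guardrail, offered here as a sketch rather than the “right” answer, is to run a standard significance test on experiment results before declaring learning validated. The conversion numbers below are made up to show how a seemingly big lift can still be noise:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: is variant B's conversion rate
    really different from A's, or plausibly just sampling noise?"""
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# A "quick and dirty" experiment: 12/100 vs 18/100 signups looks like a
# 50% lift, but the p-value shows it could easily be chance.
z, p = two_proportion_z(12, 100, 18, 100)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With numbers like these the p-value comes out well above the conventional 0.05 threshold, so “ship variant B” would be exactly the kind of false positive Ioannidis’s first corollary (small sample size) warns about.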
4 Paid Solutions I Love to See Used at Birchmere Labs
I don’t like to see entrepreneurs waste money, but it’s just as dangerous for a startup to save itself out of business. Sometimes this includes spending on tech solutions.
I’ve found myself regularly telling entrepreneurs to pay for the following four solutions, so I figured it’d be worth quickly talking about each of them here.
- UserTesting.com: Remote usability testing with videos delivered to your inbox in about an hour.
This great service allows you to design quick usability tests with people across the country (filtered for your demographic criteria) and records their screen & microphone while doing tasks on your app or website.
One particularly powerful test that can be included is a 10-second test, where a page is shown to a user and then hidden after 10 seconds. You can then ask them basic questions like “what does the site do?” At a certain point in a company’s development, it’s worth automatically running the same user test with five new people each week from the service. It costs $49 per test, so that is roughly a $250 weekly investment.
As entrepreneurs, we stare at our apps all day and often overlook basic usability problems such as “where do I click to sign up?” or “what does that buzzword on the homepage mean?”
- Unbounce: The easiest way to build A/B tests without writing a line of code.
While the technique can certainly be overused, A/B testing different marketing messages is often low-hanging fruit for optimizing conversion rates and, earlier in a business’s development, for generally understanding your customers’ needs.
There are some free solutions, including Google Website Content Experiments, but I’ve found Unbounce is worth the $50/month to save you time.
- iMockups for iPad: Create wireframes much faster than Keynote.
Wireframes are a really powerful way to express an idea and get quick feedback. I’ve lost a lot of cycles trying to mock up a wireframe in Keynote or PowerPoint. Using this handy iPad app eliminates a lot of the tedious parts of building wireframes and seems to avoid those time sinks. (Note: I realize a lot of “real designers” love Balsamiq, but I’ve found iMockups to be a nice blend of easy to use and powerful enough.)
- DesignPax: Crowdsourced design for logos & landing pages.
I mentioned last week when talking about misunderstandings of MVPs that "Viable ≠ Crappy: Remember things that are ugly or confusing may introduce false positives or negatives into the hypothesis you are looking to test."
A lot of people have told me that they “aren’t designers.” DesignPax solves that problem: for a couple hundred dollars, you can ensure that your landing page is well designed. It’s well worth the investment for any idea you are serious about.
We are NOT investors in any of these companies, but give each of them that magical non-dilutive funding called revenue each month :)