In these early hours, we have enough willpower and energy to tackle things that require internal motivation, things the outside world does not immediately demand or reward.
That’s the argument for scheduling important priorities first. But there’s more to the muscle metaphor. Muscles can be strengthened over time. A bodybuilder must work hard to develop huge biceps, but then he can go into maintenance mode and still look pretty buff. Paradoxically, with willpower, research has found that people who score high on measures of self-discipline tend not to employ this discipline when they do regular activities that would seem to require it, such as homework or getting to class or work on time. For successful people, these are no longer choices but habits.
Need Your Help: Research Project for The Science of Growth - Suggested Cases?
I’ve been talking a lot recently about “The Science of Growth” (slides below from standard presentation that I customize).
The basic concept focuses on answering the question: what do you do once you have product market fit? I argue there is an entire science to generating growth once a company has achieved that first milestone of p/m fit, and it's not as simple as "hire a growth hacker."
This fall, in addition to my course on Lean Entrepreneurship during minis 1 & 3 at CMU, I'll be adding a second course focusing on this topic, taught during minis 2 & 4.
I've started working with a graduate student at CMU to develop some case studies. So here is the request: we've identified a bunch of interesting examples where two companies had a similar level of p/m fit at about the same point in their development, but one succeeded and the other failed to achieve scale.
This includes software examples, but is broader (ex: Tesla vs Fisker) and not a new phenomenon (ex: McDonald's vs White Castle).
I’d like to have a really broad set of case studies, so any suggestions would be welcome. Please respond below or email me seanammirati at gmail dot com.
Thanks in advance!
Are Your Experiments Actually Delivering Validated Learning?
I'm currently enjoying Nate Silver's book The Signal and the Noise. In the introduction, he has a great anecdote about published scientific research regularly being unreproducible. I thought this was really interesting, so I searched around to find a little more information.
In September 2011, Nature Reviews Drug Discovery published an analysis by Dr. Khusru Asadullah and his colleagues at Bayer that tested:
67 target-validation projects, covering the majority of Bayer's work in oncology, women's health and cardiovascular medicine over the past 4 years. Of these, results from internal experiments matched up with the published findings in only 14 projects, but were highly inconsistent in 43 (in a further 10 projects, claims were rated as mostly reproducible, partially reproducible or not applicable).
This means only about 21% (14 of 67) of the published research Bayer tried to replicate (and theoretically build upon) could actually be replicated in their lab. In roughly three out of every four experiments, the results did not hold up. While surprising to me, this apparently isn't a new phenomenon, as the article in Nature goes on to cite other published studies that show similar challenges replicating results. The Wall Street Journal covered the same phenomenon, summarizing it as:
one of medicine’s dirty secrets: Most results, including those that appear in top-flight peer-reviewed journals, can’t be reproduced.
The obvious question is: why? John Ioannidis of Stanford University's School of Medicine seems to be the academic expert on this phenomenon. He wrote a paper titled "Why Most Published Research Findings Are False" in which he walks through the statistics of the post-study probability that a finding is true, which he calls the PPV (positive predictive value). Based on his statistical framework, he outlines six corollaries, each of which decreases the likelihood that a research finding is true:
- The smaller the sample size of the study
- The smaller the effect size of the study
- The greater the number and the lesser the selection of tested relationships
- The greater the flexibility in designs, definitions, outcomes, and analytical modes
- The greater the financial and other interests and prejudices
- The hotter a scientific field (with more scientific teams involved)
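Ioannidis's PPV has a compact closed form: if R is the pre-study odds that a tested relationship is real, α the significance level, and β the type II error rate, then PPV = (1 - β)R / ((1 - β)R + α). A minimal sketch in Python (the input numbers below are illustrative assumptions of mine, not figures from the paper):

```python
def ppv(r, alpha=0.05, beta=0.20):
    """Post-study probability that a 'positive' finding is true.
    r     -- pre-study odds that the tested relationship is real
    alpha -- type I error rate (significance level)
    beta  -- type II error rate (1 - statistical power)
    """
    return (1 - beta) * r / ((1 - beta) * r + alpha)

# Well-powered study, even pre-study odds: positives are mostly real.
print(ppv(r=1.0))            # ≈ 0.941
# Long-shot hypothesis (r = 0.1) tested with an underpowered study
# (power 0.2): a "significant" result is more likely false than true.
print(ppv(r=0.1, beta=0.8))  # ≈ 0.286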
So What? / How does this apply to startups?
I don't invest in life science companies, but I think this is actually very relevant for all entrepreneurs, especially now that the lean startup movement (which I'm a big fan of; I teach the Lean Entrepreneurship graduate course at Carnegie Mellon University) focuses on build / measure / learn experiment cycles and draws inspiration from applying the scientific method to validate or invalidate hypotheses.
You might argue that the first four are obvious on the surface given a basic understanding of statistics. However, I'd wager that most of the scientists publishing their research understand statistics better than most people reading this. It's just tempting to find patterns that don't exist and to run "quick and dirty" experiments. This is especially true for startups going through accelerators, trying to squeeze every validation they can out of limited investment dollars before demo day.
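To see concretely how "quick and dirty" experimentation manufactures false positives, here is a small simulation sketch (all parameters are illustrative assumptions of mine, not from any cited study). It runs A/A tests, where both arms have the identical 10% conversion rate so any "significant" result is by definition false, while peeking at a two-proportion z-test after every batch of visitors and stopping at the first p < 0.05:

```python
import math
import random

def peeking_false_positive_rate(n_experiments=1000, peeks=10,
                                batch=100, p=0.10, seed=1):
    """Simulate A/A tests: both arms convert at rate p, so there is no
    real difference. We 'peek' after every batch of visitors per arm and
    declare victory the first time |z| > 1.96 (nominally p < 0.05).
    Returns the fraction of experiments falsely declared significant."""
    rng = random.Random(seed)
    false_positives = 0
    for _ in range(n_experiments):
        conv_a = conv_b = n = 0
        for _ in range(peeks):
            conv_a += sum(rng.random() < p for _ in range(batch))
            conv_b += sum(rng.random() < p for _ in range(batch))
            n += batch
            pooled = (conv_a + conv_b) / (2 * n)
            se = math.sqrt(2 * pooled * (1 - pooled) / n)
            if se > 0 and abs(conv_a - conv_b) / n / se > 1.96:
                false_positives += 1
                break
    return false_positives / n_experiments

print(peeking_false_positive_rate())  # typically well above the nominal 0.05
```

Stopping the moment the dashboard shows significance, instead of fixing the sample size in advance, inflates the error rate several-fold over the 5% the test nominally promises.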
The last two are quite interesting given entrepreneurs' extreme financial interests and the "herd mentality" my partner Ned Renzi wrote about recently, which results in a lot of startups chasing the same trend. Discussing his sixth corollary in the paper, Ioannidis makes an interesting statement: "This may explain why we occasionally see major excitement followed rapidly by severe disappointments in fields that draw wide attention." This certainly rings true for "the next big startup trend."
So I'm curious: what techniques do you use to make sure your build / measure / learn experiments are truly delivering validated learning and not false positives?
4 Paid Solutions I Love to See Used at Birchmere Labs
I don’t like to see entrepreneurs waste money, but it’s just as dangerous for a startup to save itself out of business. Sometimes this includes spending on tech solutions.
I've found myself regularly telling entrepreneurs to pay for the following four solutions, so I figured it'd be worth quickly talking about each of them here.
- UserTesting.com: Remote usability testing with videos delivered to your inbox in about an hour.
This great service allows you to design quick usability tests with people across the country (filtered for your demographic criteria) and records their screen & microphone while they do tasks on your app or website.
One particularly powerful option is the 10 second test, where a page is shown to a user and then hidden after 10 seconds. You can then ask basic questions like "what does the site do?" At a certain point in a company's development, it's worth automatically running the same user test with five new people each week through the service. It costs $49 / test, so that is roughly a $250 / week investment.
As entrepreneurs, we stare at our apps all day and often overlook basic usability problems such as “where do I click to sign up?” or “what does that buzzword on the homepage mean?”
- Unbounce: The easiest way to build A/B tests without writing a line of code.
While the technique can certainly be overused, A/B testing different marketing messages is often low-hanging fruit for optimizing conversion rates and, earlier in a business's development, for generally understanding your customers' needs.
There are some free solutions, including Google Website Content Experiments, but I've found Unbounce is worth the $50 / month to save you time.
- iMockups for iPad: Create wireframes much faster than Keynote.
Wireframes are a really powerful way to express an idea and get quick feedback. I've lost a lot of cycles trying to mock up a wireframe in Keynote or PowerPoint. This handy iPad app eliminates a lot of the tedious parts of building wireframes and avoids those time sinks. (Note: I realize a lot of "real designers" love Balsamiq, but I've found iMockups to be a nice blend of easy to use and powerful enough.)
- Design Pax: Crowd sourced design for logos & landing pages.
I mentioned last week when talking about misunderstandings of MVPs that "Viable ≠ Crappy: Remember things that are ugly or confusing may introduce false positives or negatives into the hypothesis you are looking to test."
A lot of people have commented to me that “they aren’t designers.” DesignPax solves that problem by allowing you to pay a couple hundred dollars and ensure that your landing page is well designed. Well worth the investment for any idea you are serious about.
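On the A/B testing point above: the arithmetic behind "variant B won" is a standard two-proportion z-test. A minimal sketch (the function name and the conversion numbers are mine for illustration; this is the textbook test, not any particular vendor's implementation):

```python
import math

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate different
    from A's beyond what chance alone would explain?"""
    pa, pb = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (pb - pa) / se
    return pa, pb, z

pa, pb, z = ab_significance(conv_a=48, n_a=1000, conv_b=70, n_b=1000)
print(f"A: {pa:.1%}  B: {pb:.1%}  z = {z:.2f}")
# |z| > 1.96 corresponds to p < 0.05 (two-sided)
```

Note how many visitors it takes: with 1,000 visitors per variant, a 4.8% vs 7.0% split barely clears significance; with a few dozen visitors per variant, the same difference would not come close.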
We are NOT investors in any of these companies, but give each of them that magical non-dilutive funding called revenue each month :)
Ned Renzi convinced us all to get standing desks in the new Birchmere offices. At the time, I thought he was crazy and that within a day Dan, Sean & I would all have lowered ours back to "normal desks".
However, a few weeks into the new space, all of us continue to stand through the day. For me, it’s amazing how much I like it. If you’ve been considering making the transition, give it a try.
Very thoughtful post by Albert this morning on our transition to an information age.
One part of this disruption we’ve been thinking about a lot at Birchmere is the reality that routine cognitive “white collar” jobs are going to be automated by big data + machine learning in the same way that a lot of “blue collar” jobs were automated by robotics.
The whole post is worth a read, but something jumped out at me in the middle as he documents the coming transition to the information age. He says:
in each transition it first got worse before it got better. There is evidence that early farmers lived shorter lives than hunter gatherers and worked much, much harder. The early period of industrialization was marked by child labor, squalor in the cities and terrific pollution.
Albert’s post ends asking:
what are you and I personally doing to help navigate the transition?
It's an important question for each of us to ask. Personally, I believe improving and changing education is a key area for navigating this transition, because a lot of "entry level" jobs will go away and the skills employers look for are going to be very different.
Some of the investments we've made in education technology (including TenMarks, now part of Amazon) have certainly helped, and will continue to help, with this reform.
Beyond that on a more direct level, I believe the courses I teach at CMU do a good job preparing students for exciting careers in this new world.
Helpful Primer of Links re: Bitcoin
The increasing velocity of conversations around bitcoin in 2013 was amazing.
My partner Ned Renzi likes to quote William Gibson during our discussions at Birchmere: "The future is already here; it's just not evenly distributed", which is a helpful reminder with topics like bitcoin.
I'm still not sure of the relevant investment areas / thesis for a firm with the focus and fund size of Birchmere, but below are some links that helped me understand the phenomenon better.
- Explain Bitcoin Like I’m Five
- Bitcoin is a money platform with many APIs
- Paul Kemp-Robertson’s TED talk: Bitcoin. Sweat. Tide. Meet the future of branded currency.
- Khan Academy’s Series of Lessons on Bitcoin
- Chris Dixon’s post on Coinbase
- For a quick contrarian perspective, Alex Payne’s response Bitcoin, Magical Thinking, and Political Ideology
5 Misunderstandings about MVPs
I believe the concept of an "MVP" (or minimum viable product) is both one of the most powerful concepts for entrepreneurs thinking through their product strategy and one of the most misunderstood / misused terms by entrepreneurs today.
If you've heard me talk about the decision at Birchmere to encourage our founders to talk about their minimally awesome products, you know this is not a new theme for me. It's the first misunderstanding listed below.
I threw together a quick deck over the weekend (embedded from SlideShare below) on the five most common misunderstandings around MVPs. In no specific order, they are:
- Viable ≠ Crappy: Remember things that are ugly or confusing may introduce false positives or negatives into the hypothesis you are looking to test.
- Not a destination: At least a few times a week I meet with an entrepreneur who has a "six to nine month roadmap until they launch their MVP". Calling it an MVP doesn't make it any less of a big-bang launch. The idea is that you end up having multiple iterations, each testing different core assumptions about your business based on different customer interactions.
- Should validate or invalidate a key hypothesis: Rarely is the most critical hypothesis "can we build X?" Remember, most startups fail because no one wants what they built, not because they couldn't build it.
- Doesn’t have to be a product at all: Often one of the most powerful early “MVPs” is a simple sketch or “paper prototype” that you can walk prospects through.
- Not always a landing page: I love the concept of building a landing page and driving some prospects to the page to test different assumptions. However, this is one of many techniques and not a silver bullet.
Just remember that Eric Ries defined an MVP as:
“that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort”
The First App You Open In The Morning
You wake up. You grab your phone. What’s the first app you open?
This sounds like a silly question — or worse, an insulting one. But I find it's a rather enlightening question. Depending on when the question is asked, the answer can be telling either about the current state of apps or the current state of you.
Personally, right now, the first app I open in the morning is Twitter. But it hasn’t always been. A year ago, that app was Path. A year before that, that app was Instagram. Before that, it was probably Twitter again. Or Foursquare. Or Techmeme (technically, the web browser). At some point it was Facebook. And way back when it was probably — shudder — email.
Very interesting post. For me it's Twitter, but it should be Wunderlist given my 2014 goals.
Great presentation on growth from LeWeb by @JamesCurrier