The Minimum Viable Stack — Part 1: Process and Guidelines

Eric Cobb
8 min read · Jul 27, 2018

--

As our team at vspr.ai applies the scientific method to product management, designing experiments and validating products, we inevitably find ourselves at the “build” stage of the build-measure-learn loop. Faced with the sometimes-daunting task of putting fingers on keyboards and writing “just enough” code to prove (or disprove) a hypothesis, it’s easy to make missteps along the way: over-engineering the tech stack, overlooking security or privacy concerns, losing valuable time in quagmires of complexity for relatively small or insignificant features, sacrificing quality and craftsmanship in the name of speed and time-to-market, and more.

This article details the process we follow to build an MVP and the guidelines we use to make technical implementation decisions, using a concrete example: our new machine-learning-powered bookmarking tool, Pensive. Like any process, ours isn’t perfect, but we are happy with how it has evolved and what we have accomplished with it.

The Product

The latest iteration of our Pensive experiment: a Chrome extension

The idea for Pensive grew from a couple of simple observations:

  • I know several people, myself included, who tend to text themselves links to things they want to remember.
  • The user experience of finding those links later isn’t great (to put it mildly).
  • My business partner, Danny, has incredibly well-curated bookmarks and while I want that for myself, I’m not willing to put in the work.

Our initial experiment then became: “What if we gave people an iPhone app to which they could share links, so those links could be organized and indexed for later retrieval?”

The keen observers among you have probably realized that the gif above does not include an iPhone app! The third part of this series details our pivots and compromises. There, I’ll talk about how we got to a Chrome extension and how we’re still using the mobile app today.

Our team sat down to sketch out what we thought the minimal product to test our hypothesis would look like. We imagined an iPhone app with a list of links, looking not all that dissimilar from iMessage. The links would have labels, automatically applied with machine learning. At the top, a search field would let you filter links by their content, not just their titles or user-provided metadata, and the bottom would have a few navigation buttons, one taking you to a browse view where you could filter by label and scroll through the links you saved.

Next, we came up with some metrics that we wanted to track: number of links added, how often links were automatically classified, how often users classified the links themselves, and how often users followed a link from within the app.

Minimum Viable Analytics, courtesy of CloudWatch
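As a rough sketch of how counters like these can be emitted from a Node.js back end (the namespace, metric names, and region below are made up for illustration and are not necessarily Pensive’s actual values), a few lines of the AWS SDK are enough to publish custom CloudWatch metrics:

```javascript
// Hypothetical sketch: publishing custom metrics to CloudWatch from Node.js.
// Namespace, metric names, and region are illustrative, not Pensive's actual values.
const AWS = require('aws-sdk');
const cloudwatch = new AWS.CloudWatch({ region: 'us-east-1' });

async function recordLinkAdded(classifiedAutomatically) {
  await cloudwatch.putMetricData({
    Namespace: 'Pensive',
    MetricData: [
      { MetricName: 'LinksAdded', Unit: 'Count', Value: 1 },
      {
        MetricName: classifiedAutomatically ? 'LinksAutoClassified' : 'LinksUserClassified',
        Unit: 'Count',
        Value: 1,
      },
    ],
  }).promise();
}
```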

With these decisions made, we were ready to start implementation.

The Process

Developers typically face several, sometimes competing priorities and concerns. When I am building a solution, whether I’m adding a single feature to a large codebase or designing the architecture for a greenfield project, I keep three main priorities in mind:

  • Speed. By this I mean “time to market”, or how quickly I can deliver features to my users. Time is money, and a developer’s time is notoriously expensive.
  • Stability. How quickly a developer delivers code means nothing if the code doesn’t work. We should strive for software and designs that minimize bugs and downtime.
  • Maintainability. Even if our design is sacrificial or our experiment fails, the chance that we’ll never have to change our code is practically zero. In fact, I would say that even a project that eventually ends up in the trash is likely to require hundreds of small changes and iterations along the way as you test your hypotheses.

In everyday conversation, you’ll often hear me combine these last two bullet points into a single idea: craftsmanship. This is convenient because we now have two principles, speed and craftsmanship, which often feel at odds with one another. At some level this is certainly true: most of us can relate to the idea of a shoddily built house, thrown up in a few months, that can be sold more quickly than a well-built house with a strong foundation.

The analogy breaks down, however, when you consider the life-span of software. It’s safe to assume that someone (probably you, but maybe not) is going to have to make many changes to your code over time. Perhaps counterintuitively, the more successful your project, the more changes will be required. The cumulative effect of poor stability and maintainability is that you move slower, while the cumulative effect of high stability and maintainability is that you move quicker. If the walls of your house collapse when you try to hang a picture on the wall, you’ll be having heated words with the construction company.

It is important to draw a distinction between high craftsmanship and over-engineering. It is easy to conflate the two, and every developer is guilty of this at some point, including me (especially me). Continuing with the house analogy: though you’d make sure the foundation of your well-built house is strong, you wouldn’t build it deep enough to support a high-rise in a residential neighborhood.

You’ll hear me call writing code with high speed, stability, and maintainability but low over-engineering pragmatic development. Next, I’ll give a few guidelines for pragmatic MVP development and how we applied them when building Pensive.

Pick technologies and design patterns that your team knows, that have high adoption in the community, and that are optimized for speed instead of efficiency.

Side projects can be a great way to learn new technologies, but if you are seriously trying to launch an MVP, you should go with something that you and your team know well. At vspr.ai, we primarily use Java and JavaScript, so our main stack for Pensive is React/React Native on the front end and Node.js on the back end. Node.js might not be the most efficient technology in terms of scaling and speed, but it allows us to leverage our full-stack development expertise. Additionally, the popularity of JavaScript means we shouldn’t have too much difficulty scaling our team if it becomes necessary. Node.js is also a very productive platform, both in terms of the modern JavaScript language itself and the open source community providing tools and utilities for it.

For the Pensive API, we embraced a monolithic architecture. In the past, I have gone straight to microservices for new projects. Microservices are an excellent tool for scaling large teams (efficiency) but represent significant overhead for a greenfield project with a small team (speed). Following best practices like Separation of Concerns, Inversion of Control, and DRY will make adding additional services, or breaking your monolith into microservices, manageable if that becomes a priority for you in the future. Parts of Pensive are, in fact, “service based”: the text classification model runs in a separate service. Machine learning is a great example of something that Node.js does not do well, and our design allowed us to add a service whose sole responsibilities are training a model and classifying text, with minimal friction.
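As a hypothetical sketch of that kind of separation (the module, route, and function names below are made up for illustration, not Pensive’s actual code), the rest of the monolith can depend on a narrow classifier interface, so whether it is backed by an in-process model or a remote service stays an implementation detail:

```javascript
// Hypothetical sketch: the monolith depends only on a narrow classifier interface.
// Whether it is backed by an in-process model or a remote service is hidden here,
// so swapping implementations touches a single module. URL and route are made up.
const fetch = require('node-fetch');

function createRemoteClassifier(baseUrl) {
  return {
    // Returns a list of labels for the given text.
    async classify(text) {
      const res = await fetch(`${baseUrl}/classify`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ text }),
      });
      if (!res.ok) throw new Error(`Classifier returned ${res.status}`);
      const { labels } = await res.json();
      return labels;
    },
  };
}

// Elsewhere, the classifier is injected rather than imported directly
// (Inversion of Control), so tests can substitute a stub.
async function saveLink(db, classifier, url, text) {
  const labels = await classifier.classify(text);
  return db.insertLink({ url, labels });
}

module.exports = { createRemoteClassifier, saveLink };
```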

👏 You 👏 Must 👏 Write 👏 Tests 👏

We’re not bragging…

I am amazed to still encounter developers who insist they don’t need to write tests for their code. In my experience, code with tests is not always good, and good code does not always have tests, but good code is always easily testable. This means I can use how easy or difficult it is to write my tests as one indicator of the quality of my code. If you find that tests are slowing you down or that maintaining them is difficult when you make small changes, then the tests are actually doing their job: you probably have a bit of a mess on your hands.

I also believe that tests reduce bugs, improve stability, and increase maintainability by serving as documentation. I don’t think those are radical statements, and we’ve established the importance of stability and maintainability already. Put differently, tests are pragmatic because they improve stability and maintainability but are not by their nature over-engineering.
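To make the “tests as documentation” point concrete, here is a small, hypothetical example (labelForUrl and the classifier interface are made up for illustration, and a Jest-style runner providing describe/it/expect is assumed):

```javascript
// Hypothetical example of tests that double as documentation.
// labelForUrl and the classifier interface are made up for illustration;
// a Jest-style runner provides describe/it/expect.
async function labelForUrl(classifier, url) {
  const labels = await classifier.classify(url);
  return labels.length > 0 ? labels[0] : 'unlabeled';
}

describe('labelForUrl', () => {
  it('falls back to "unlabeled" when the classifier suggests nothing', async () => {
    const stub = { classify: async () => [] };
    await expect(labelForUrl(stub, 'https://example.com')).resolves.toBe('unlabeled');
  });

  it('uses the first label the classifier suggests', async () => {
    const stub = { classify: async () => ['recipes', 'food'] };
    await expect(labelForUrl(stub, 'https://example.com')).resolves.toBe('recipes');
  });
});
```

Reading the test names alone tells you how the function is supposed to behave, which is exactly the documentation effect described above.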

TDD is recommended but in no way required. There is, however, an aspect of TDD that I suggest everyone try: after you write your tests, refactor the code you just wrote. This might be the biggest value of writing tests: reading what you just wrote to understand it better, thinking through the edge cases, and considering how you can refactor it to make it more maintainable.

In Pensive, this means we use Codecov.io to guarantee a minimum test coverage of 90% for every project and that new code never decreases our coverage. This happens automatically on every pull request via continuous integration so nobody is responsible for being the ‘bad guy’ during a code review.
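The Codecov check itself runs on each pull request in CI; as a complementary, hypothetical sketch, a similar threshold can also be enforced locally in the test runner’s configuration (assuming Jest, which isn’t named here), so a drop below 90% fails fast on a developer machine:

```javascript
// jest.config.js — hypothetical local analogue of the 90% gate described above,
// assuming Jest as the test runner. The pull-request check itself runs through
// Codecov in CI; this only makes the same threshold fail fast locally.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      statements: 90,
      branches: 90,
      functions: 90,
      lines: 90,
    },
  },
};
```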

Deliver, Deploy, and Validate

Setting up a production environment should be a top priority. The first thing I build when I’m creating a new back-end service is a “status” API that returns a pre-defined ‘ok’ message once the server boots successfully. I don’t consider this task done until I can test it in a production environment on the public internet. To accomplish this, a laundry list of configuration, build, and deployment automation tasks needs to be resolved first (some of which I’ll go over in part 2 of this series). As your product grows in complexity, the demands of deploying it will likely increase. It is much easier to figure these changes out piecemeal rather than all at once at some indeterminate point in the future.
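As a hypothetical sketch of that first endpoint (assuming Express, which isn’t named here):

```javascript
// Hypothetical sketch of the first "status" endpoint, assuming Express.
const express = require('express');
const app = express();

// Returns a pre-defined payload that only confirms the server booted successfully.
app.get('/status', (req, res) => {
  res.json({ status: 'ok' });
});

const port = process.env.PORT || 3000;
app.listen(port, () => console.log(`Listening on port ${port}`));
```

Trivial as it is, getting this endpoint reachable on the public internet forces the build, configuration, and deployment pipeline to exist on day one.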

We take this further and only call tasks “done” when they are validated in production, usually by someone other than the person who wrote the code. Getting into the habit of continuous delivery early pays dividends later; early on, you will likely make mistakes that cause downtime or break backwards compatibility. It is much better to make, and learn from, these mistakes before you have active users.

On the topic of users, get them as soon as possible. Well before you do a broader MVP, make sure your team is regularly using and testing the product (in production!). This is always easier with consumer apps; if you’re making an enterprise app or something that you and your team wouldn’t normally use, set the expectation that your team regularly puts on the hat of a user and validates the app. Not only does this improve stability, but it also helps your team stay aligned and empathetic to your users.

What’s Next?

Building an MVP is a broad subject, touching topics ranging from the architecture and design of our production stack to how we iterate on tricky features with our product team. In the next article, we will take a deep dive into our tech stack, including the specific technologies we use and how we write and review code.

If you are an entrepreneur (or intrapreneur!) looking to build an MVP and think vspr.ai could help, drop us a line at buildit@vspr.ai.

If you’re a developer and this kind of work interests you, contact me at jobs@vspr.ai.

Lastly, if you want to actually use Pensive, check it out on the Chrome Web Store (it’s free!). We’re looking for feedback on our MVP, so send your thoughts and questions to feedback@vspr.ai.


Eric Cobb

Full-Stack Software Developer and Entrepreneur in Denver, Colorado. Twitter @ericdcobb