Friday, July 8, 2022 · 3 min read
How to safeguard quality during post-MVP scaling
You’ve done it at last: your MVP is ready to hit the app store, and now you can look forward to developing all the other features in your pipeline. But as you start to scale your fledgling product, how can you make sure you don’t lose any of the quality you’ve worked so hard to build in?
You might think the answer lies in the developers you hire. Getting the best talent on board always helps, but it’s not just about assembling a developer super-team. It’s also about empowering them with the right tools and setting up the right policies to guide your scaling.
Release updates little and often
One of the biggest mistakes firms make when they start scaling after their minimum viable product is taking too long on their next release. It’s easy to put a lot of pressure on your first post-MVP update. Now that you’ve got real users on board, you don’t want to put anything out that might break your app and undo all that hard work. Or maybe you only want to release something when it can make a significant change to your app’s feature set.
But if you wait for your next release to be perfect before rolling it out, it’ll never make it off your development server. Instead of a digital product that scales flawlessly, you’ll end up with users frustrated that your key features keep getting kicked down the road.
That's not all. The longer your next release gets held back, the more code builds up around it. And when you finally add that code to the actual product, the difference between the old and new versions is so huge that you'll get a whole heap of bugs flaring up at once.
There will always be bugs, no matter how many months you spend testing. Instead of chasing one perfect update, build a culture of small, regular iterations. You might encounter bugs more often, but it’ll be easier to deal with a few at a time than to track down what’s going wrong in a mammoth overhaul.
Streamline your testing with automation
Releasing regularly doesn’t mean chucking out updates for the sake of it. Your code still needs to be tested if you want to safeguard quality – the trick is to make that testing as efficient as possible so it doesn’t keep anything from reaching your active users.
One way to speed things up is to build automation into your testing process. In an ideal world, you only need two humans involved in writing code: one developer to write it, and one to peer review it. Once the pull request has been approved, automated tests are all you need to check it’s ready for production.
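As a minimal sketch of the kind of check that can run automatically after peer review, here's a hypothetical pytest-style test for a sign-up validator (the `validate_email` helper and its rules are illustrative assumptions, not from any real codebase):

```python
import re

def validate_email(address: str) -> bool:
    """Hypothetical production helper: accept only simple name@domain.tld addresses."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}", address) is not None

def test_accepts_well_formed_address():
    assert validate_email("user@example.com")

def test_rejects_missing_domain():
    assert not validate_email("user@")

if __name__ == "__main__":
    # In CI this would be run by `pytest` on every pull request;
    # called directly here just for illustration.
    test_accepts_well_formed_address()
    test_rejects_missing_domain()
    print("all checks passed")
```

Because checks like these run identically on every pull request, the reviewer only has to think about design and intent, not re-verify basic behaviour by hand.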
Rolling out updates without any manual testing might scare your product manager, but automated testing is no more fallible than a human pair of eyes. Automated tests don’t get tired or distracted – they won’t reach 4pm on a Friday and rush through the last checks to get home.
If anything does slip through the automated net, it’ll be something caused by a specific user request on a specific device with a specific app and operating system version. Those are bugs so obscure that you’ll almost never encounter them until the update goes into the wild – and that’s where monitoring comes in.
For automated testing, see Moropo.
Build a tight feedback loop
When you're making a minimum viable product, bug monitoring probably isn't high on the priority list. You've got key features to build and an idea to validate – maybe there's a line of Google Analytics code or some basic Sentry bug-catching in there, but that's it. Once you start scaling your app with real users though, you're going to need to invest in your monitoring before you start rolling out more features.
Without stringent bug reporting, you’re not going to know something’s gone wrong for specific users unless they let you know about it. But if you’re going to fix those bugs you need to know exactly what’s broken and under what circumstances – and ideally, before you read about it in an angry tweet or a one-star review.
One way to help that process of identifying and fixing bugs is with correlation IDs. These are unique identifiers that are attached to every request in your app, and they follow those requests all the way through the system.
That means that if something's broken for one of your users, your tech team can track down exactly where and why the request failed without conducting a huge manual investigation. And the faster you can find the bug, the faster you can roll out a fix.
Enabling that feedback loop also means setting the right expectations from the start. When you start evolving your MVP, the non-technical side of the business will want to crack on with the product ideas that add the most value to the user experience. But if that's where all your sprints are focused, you won't have any resources left for monitoring, bug fixes and refactoring, and the quality of the app will suffer.
It’s up to the developers to make it clear why bug reporting code and automated testing are so important to set up first. Once those elements are in place, scaling your app becomes faster and less frightening. Because even when a release does kick up bugs, you’ll have the tools in place to find, address and resolve them before they can hurt your app.
If you need support in scaling beyond your MVP or you’re looking to get your app project off the ground, get in touch.