How to build a companion product and win user trust

In this article, Oleksii Ianchuk, Product Lead at Railsware, shares his key insights from 13 years of building Mailtrap.io. He reflects on major mistakes that could have been avoided and lessons that helped them grow.


For product managers, mistakes aren’t just bumps in the road—they’re a constant. But not every misstep spells disaster; some of them pave the way for innovation. Our journey began with what could have been a major blunder — accidentally sending test emails to real users. Instead of sinking us, it sparked the creation of our first product.

Every setback along the way forced us to rethink, adapt, and ultimately grow. Today, I am sharing our most significant slip-ups and the hard-earned insights that came from them. In this field, the key to success isn’t avoiding mistakes — it’s knowing how to turn them into opportunities.

Mailtrap started as Railsware’s first product, born out of a simple internal pet project—a sandbox for email testing. Initially, it was developed by just one engineer on a part-time basis, with minimal resources. Despite these limitations, the tool grew organically, gaining traction through word of mouth within the community.

Fast forward eight years, and with nearly a million users and paid plans, we decided to expand Mailtrap by adding email-sending capabilities via an Email API/SMTP.

For the implementation, we chose a lesser-known but flexible MTA solution (Mail Transfer Agent, the software responsible for SMTP delivery).

Another challenge was working with logs: email history and related events (sends, opens, link clicks, and so on). This data doesn’t exist during the testing phase but is crucial for email sending and further analysis. We tested several databases and chose Redshift to store logs (spoiler: this choice turned out to be less than ideal).
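To make the logging challenge concrete, here is a minimal sketch of the kind of event record such a log might store. The event types and field names are illustrative assumptions, not Mailtrap’s actual schema.

```python
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Illustrative event types an email log might track (assumed, not Mailtrap's schema).
EVENT_TYPES = {"sent", "delivered", "opened", "clicked", "bounced"}

@dataclass
class EmailEvent:
    message_id: str
    event: str                      # one of EVENT_TYPES
    recipient: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    url: Optional[str] = None       # populated only for "clicked" events

    def to_doc(self) -> dict:
        """Serialize to a plain dict suitable for writing to a log store."""
        if self.event not in EVENT_TYPES:
            raise ValueError(f"unknown event type: {self.event}")
        return asdict(self)

doc = EmailEvent("msg-123", "opened", "user@example.com").to_doc()
```

Every sent email generates a stream of such records, which is why write throughput and query latency of the chosen database matter so much here.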

With this context in mind, let's dive into our mistakes. 

When I joined the Mailtrap team, work had just begun on the second part of the product—Email API/SMTP. At that time, the decision was made to keep email testing and sending in separate repositories and VPC-level isolated projects. From the engineers' perspective, such isolation was supposed to make the product easier to maintain and simplify development. However, problems started to arise over time. We realized that maintaining communication between the services via gRPC or AWS SQS drained team resources.

For example, to obtain user data held by one service, the other has to send a cross-project request. The same goes for changes in one project that need to be replicated in the other. This creates challenges when scaling the product or adjusting its operations across regions, such as Europe or the US. However, merging these services is currently impossible due to resource constraints. Therefore, we continue to operate under this scheme and plan to change the current architecture gradually.
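A toy sketch of the replication pattern described above: one service publishes a user-change message (as it might over AWS SQS), and the other applies it idempotently, since queues can redeliver. The envelope shape and function names are invented for illustration.

```python
import json
import uuid

# Hypothetical message envelope for replicating a user change from one
# isolated service to another over a queue such as AWS SQS.
def make_user_change_message(user_id: int, changes: dict) -> str:
    return json.dumps({
        "message_id": str(uuid.uuid4()),  # lets the consumer deduplicate
        "type": "user.updated",
        "user_id": user_id,
        "changes": changes,
    })

# Consumer side: apply the change idempotently, skipping duplicate deliveries.
def apply_user_change(raw: str, users: dict, seen: set) -> bool:
    msg = json.loads(raw)
    if msg["message_id"] in seen:
        return False                  # already applied, redelivery is a no-op
    seen.add(msg["message_id"])
    users.setdefault(msg["user_id"], {}).update(msg["changes"])
    return True

users, seen = {}, set()
raw = make_user_change_message(42, {"plan": "paid"})
applied = apply_user_change(raw, users, seen)
redelivered = apply_user_change(raw, users, seen)  # duplicate, skipped
```

Even this tiny example shows the overhead: every shared piece of state needs an envelope, a consumer, and deduplication logic that a single codebase would get for free.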

Look at planned changes as broadly as possible. Before deciding to split, we discussed it from various angles, but that was insufficient. It’s worth dedicating a bit more time to considering all possible risks and edge cases. This is especially important for a product team, which needs to evaluate ideas from multiple perspectives rather than focusing solely on the technological, marketing, or visual aspects. Also, ask more questions to avoid similar mistakes in the future.

To best analyze all possible scenarios, use various frameworks and tools. For example, conduct BRIDGeS sessions. Recently, our entire team met offline to thoroughly explore challenges, opportunities, and issues.

Even if you think you have gathered all the information you can before making a decision, the choice may still turn out to be wrong. We experienced this with Redshift. Before using it in the product, we ran the relevant tests (scaling, cost, performance), and it seemed like the best solution (we also considered PostgreSQL, DynamoDB, and BigQuery).

However, we did not account for our lack of real-world experience, which hindered our ability to foresee challenges specific to our product. For example, over time, we encountered delays in logs. This is critical for Mailtrap, as it’s important for users that sent emails appear in logs immediately, not after 10 minutes. We spent a lot of time optimizing and reduced this delay from 10 minutes to 1 minute. However, it was still too long, and data volume would only grow in the future.
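The latency figure above can be measured with a simple send-then-poll loop: record the send time, poll the log store until the event becomes visible, and report the delta. The in-memory store below stands in for a real database and is entirely made up for illustration.

```python
import time

# Toy in-memory "log store" standing in for a real database such as Redshift.
# Events become visible only after a simulated ingestion delay.
class LogStore:
    def __init__(self, ingest_delay: float):
        self._visible_at = {}       # message_id -> time it becomes queryable
        self._delay = ingest_delay

    def write(self, message_id: str) -> None:
        self._visible_at[message_id] = time.monotonic() + self._delay

    def contains(self, message_id: str) -> bool:
        arrival = self._visible_at.get(message_id)
        return arrival is not None and time.monotonic() >= arrival

def measure_log_latency(store, message_id, poll_interval=0.01, timeout=5.0):
    """Return seconds from send until the event is visible in the logs."""
    sent_at = time.monotonic()
    store.write(message_id)
    while time.monotonic() - sent_at < timeout:
        if store.contains(message_id):
            return time.monotonic() - sent_at
        time.sleep(poll_interval)
    raise TimeoutError(f"{message_id} never appeared in logs")

store = LogStore(ingest_delay=0.05)   # simulate 50 ms of ingestion lag
latency = measure_log_latency(store, "msg-123")
```

Tracking this one number over time is what turns “logs feel slow” into a concrete target, such as going from 10 minutes down to 1.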

So, we returned to analyzing alternatives. We also started looking for consultants with significant experience in Redshift. We thought we were simply misusing it. Yet, our conversations with experts revealed that Redshift was not suitable for our project.

After considering other options, we chose OpenSearch, a fork of Elasticsearch within the AWS ecosystem. This time, we consulted with people who had substantial real-world experience with the platform. Additionally, AWS itself was helpful—their architects advised us directly. As a result, we quickly created an MVP based on our data—and found it worked for us. Email data appears in logs very quickly, data aggregation is good, and everything scales well.
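The aggregation capability mentioned above is a core OpenSearch strength. As a sketch, here is a query body that counts email events per day for one sender using a `date_histogram` aggregation; the index and field names (`sender_id`, `event`, `timestamp`) are assumptions for illustration, not Mailtrap’s actual schema.

```python
# Sketch of an OpenSearch aggregation: count a sender's email events per day.
# Field names ("sender_id", "event", "timestamp") are illustrative assumptions.
def events_per_day_query(sender_id: str, event_type: str) -> dict:
    return {
        "size": 0,  # we only want aggregate buckets, not raw hits
        "query": {
            "bool": {
                "filter": [
                    {"term": {"sender_id": sender_id}},
                    {"term": {"event": event_type}},
                ]
            }
        },
        "aggs": {
            "per_day": {
                "date_histogram": {
                    "field": "timestamp",
                    "calendar_interval": "day",
                }
            }
        },
    }

query = events_per_day_query("acct-1", "opened")
# Against a live cluster this would be submitted with e.g. opensearch-py:
#   client.search(index="email-events", body=query)
```

This kind of filtered time-bucketed count is exactly the query shape behind opens-over-time charts, and it is where a search engine outperforms a data warehouse tuned for batch analytics.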

If something hasn’t worked for a long time, don’t waste time on endless optimization. When choosing a technology stack, it's not enough to study the available options—evaluate how those solutions will behave in your product specifically. Sure, you can get there through trial and error (as we did). But if we had talked to teams with real-world experience from the start, we would have saved the year we spent solving this problem.

And this leads us to the next lesson learned.

No matter how great your team is, you can’t know everything. Especially when it comes to niche issues, like with Redshift. Or consider high-load and high-reliability services with complex architecture. Another example is the current challenge for Mailtrap—email deliverability—which also has many nuances that you won’t discover through simple research.

Moreover, as your product grows, you may need additional specialists. For example, we didn’t have a dedicated DevOps function for a long time: engineers themselves handled these tasks and implemented changes. But at a certain stage, it became clear that a full-time specialist was needed.

Today, we are working on the next part of Mailtrap—email marketing—and are immediately looking for those who have already solved similar problems. Based on their experience, they can provide real solutions and evaluate our cases.

A smart approach is to buy time from external experts to save your own. It's crucial to clearly define your request to find the right specialist. For instance, we first align on expectations within the team and then pass them on to our people management team. They create a pool of potential candidates, often sourced from LinkedIn and other platforms. Next, we arrange brief introductory calls to assess their expertise and fit for our needs. If everyone is satisfied, we sign a collaboration agreement. For specific, focused tasks, hourly consulting works perfectly.

When mistakes lead to significant losses, challenges become particularly costly. However, correct decisions can prevent new problems. Fortunately, not all our knowledge was gained from failures. Here are some of the most interesting cases.

Product reliability is the foundation of the users’ trust. In our case, Mailtrap.io earned a good reputation among engineers even before launching marketing activities for Email API/SMTP. For instance, we were recommended on Twitter (now X) when someone had an email failure. So, we didn’t want to lose that. Negative reviews on social media could have undermined the investments and efforts of our team in developing the Email API/SMTP direction. Gaining trust is hard, but losing it is easy.

During our beta testing phase, we ran into an issue during deployment—emails weren’t being sent for ten minutes. Although we were ready to launch our marketing campaigns at that stage, we decided to hold off.

Honestly, part of the product team was tempted to take the risk, given the success of our previous testing efforts. But now we realize that pausing was the right decision. The email service market is highly competitive, with clients who value reliability and expect a flawless experience. In such a billion-dollar industry, technical issues can cost you users' trust. So, it was crucial to build an exceptionally robust product foundation to avoid letting our users down.

Over the next 7-8 months, we focused on improving our processes.

Experimentation is crucial, but not with critical core processes. In our case, that means emails must be sent reliably and quickly.

After working on a project for a long time and feeling confident in its reliability and competitive features, it's tempting to rush for real user feedback and engagement. This approach works well for interface tweaks or marketing campaigns but not for critical infrastructure. Even if your processes seem perfect, there's always room for improvement. The "build fast, fail fast" mentality has its place—but definitely not when a minor failure could jeopardize the entire product's success.

After launching Email API/SMTP, we expected users to quickly adopt the new feature. However, as you might guess, that didn’t happen. Mailtrap is still strongly associated with email testing alone. We considered rebranding but quickly dismissed the idea—Mailtrap’s name shouldn’t limit us to just one function.

Instead, we focused on understanding why users were hesitant to adopt the new email-sending feature.

We quickly dismissed the idea of competing on price, as it's a losing strategy in the long run. Instead, we focused on communication to ensure our current and potential users are fully aware of what the product can do. This included increasing content across various formats to explain interface changes and their benefits, highlighting these topics in conversations with users and colleagues at conferences, and revamping our website to be more informative and user-friendly. We also added features that make it easier for users to manage both testing and email sending within our product.

We also pay special attention to large customers migrating from other email services. These users send hundreds of thousands of emails each month and use paid plans rather than free or the cheapest options. We provide them with the necessary features first, assist with migration for free, and don’t charge for the transition period to avoid simultaneous payments for two services. Additionally, our deliverability team monitors their accounts separately.

Creating a product is a long journey, one that takes more than a day or a week and requires readiness for various scenarios. Even if you’re 100% confident in your decisions, always have a backup plan for when things don’t go as expected.

Time is important, but that doesn’t mean you should rush. Instead, give yourself time to test hypotheses and find ways to save it, including bringing in external expertise. Most importantly—keep engaging with your users and listen to their feedback.

In our 13 years of experience, we’ve become faster and better. But we’re still learning from our own mistakes. Mistakes are inevitable, but the key is to learn from them and keep moving forward.