A monolith-to-microservices migration is a software architecture transformation that decomposes a single-codebase application into independently deployable services. Realistic timelines for medium-complexity migrations are 12–16 weeks with a team of 15–20 engineers, during which feature development typically stalls because the migration consumes nearly all engineering capacity. The key PM challenge is communicating business value — framing the migration in customer-facing outcomes (e.g., "40% faster checkout") rather than technical abstractions. Success requires understanding the dependency graph before, during, and after migration to track extraction progress and identify ownership changes.
At Salesken, we had 47 microservices. The architecture was technically sound. The problem was that no one person understood how they all connected.
By Priya Shankar
A team I know spent six months planning a monolith-to-microservices migration. They spent three months communicating to stakeholders. Then they spent nine months actually executing it. During those nine months, the feature roadmap didn't move. Not because they didn't try, but because the migration consumed almost all engineering capacity.
When the CTO presented the migration to the board, he said it would "improve our ability to ship faster and scale better." Which is true. But it took him three minutes to say that and nine months to deliver it. By the time the benefit became visible, everyone had already decided the migration was a disaster because nothing shipped.
That's the problem with monolith-to-microservices migrations from a product perspective: they are the most PM-hostile engineering work possible. They take quarters of engineering time. They produce nothing users can see. They are nearly impossible to explain to stakeholders without sounding like you're avoiding shipping real work.
But they're also sometimes necessary. So here's what a PM needs to know to survive one.
What Actually Breaks During a Migration
The first thing that's hard to grasp is that a migration doesn't just change how the code is organized. It changes the entire map of who owns what and how systems interact.
In a monolith, responsibility is fuzzy. The payments team touches the payments module, but the payments module also touches the user service, which touches the notification system. When you own the payments module, you kind of own all of those dependencies too, even if nobody explicitly said so.
When you migrate to microservices, those relationships become explicit and boundaries become rigid. The payments service can no longer reach into the user database. It has to call an API. That API now has a contract that the user service has to maintain. That contract is now something the payments team has to think about.
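A minimal sketch of what that boundary change looks like in code. All names here (`UserProfile`, `UserServiceClient`, the endpoint path) are illustrative, not from any real codebase:

```python
# Hypothetical sketch: the same user lookup after extraction.
# Before the migration, payments code could query the user table
# directly; after, every read goes through an API with a fixed shape.
from dataclasses import dataclass


@dataclass(frozen=True)
class UserProfile:
    user_id: str
    email: str
    # This shape IS the contract: once the payments team depends on
    # these fields, the user service must coordinate any change to them.


class UserServiceClient:
    """Stand-in for an HTTP client calling the user service's API."""

    def __init__(self, fetch):
        self._fetch = fetch  # injected transport; an HTTP GET in production

    def get_profile(self, user_id: str) -> UserProfile:
        raw = self._fetch(f"/users/{user_id}")
        # Parsing at the boundary enforces the contract explicitly.
        return UserProfile(user_id=raw["user_id"], email=raw["email"])


# Fake transport so the sketch runs without a network.
fake_transport = lambda path: {
    "user_id": path.rsplit("/", 1)[-1],
    "email": "a@example.com",
}
client = UserServiceClient(fake_transport)
profile = client.get_profile("u-42")
```

The point of the sketch is the dataclass: in the monolith, that shape was implicit in a database schema; in the microservice, it is a published contract that two teams now have to negotiate over.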
This sounds like an improvement (and it is, architecturally) but it means a ton of rework. Every implicit dependency becomes an explicit API. Every API has to be designed. Every API change has to be coordinated between teams.
The other thing that breaks is ownership. In a monolith, it's not clear who owns what. In a microservice architecture, it has to be clear. So you end up with conversations like: "Who owns the notification service? Is it the product team that uses it, or a platform team? If the product team owns it, what happens when another team needs to use it?" These conversations take weeks.
The third thing: testing strategy changes completely. You can't integration test the way you used to because the services are separate. You have to invest in contract testing. You have to invest in monitoring. You have to learn how to debug production issues across service boundaries.
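To make "contract testing" concrete, here is a toy consumer-driven contract check sketched with plain assertions (real setups typically use a dedicated tool such as Pact; every name below is illustrative):

```python
# The consumer (payments) records the exact response shape it relies on.
PAYMENTS_EXPECTS = {
    "user_id": str,
    "email": str,
}


def provider_response(user_id: str) -> dict:
    """Stand-in for the user service's real handler."""
    return {"user_id": user_id, "email": "a@example.com", "plan": "pro"}


def satisfies_contract(response: dict, contract: dict) -> bool:
    # The provider may return extra fields ("plan"), but every field the
    # consumer depends on must be present with the expected type.
    return all(
        key in response and isinstance(response[key], expected_type)
        for key, expected_type in contract.items()
    )


# Run in the provider's CI: a contract break fails here, before deploy,
# instead of surfacing as a production incident in the payments service.
assert satisfies_contract(provider_response("u-1"), PAYMENTS_EXPECTS)
```

This is the trade the migration forces: you lose the ability to run one big integration test against the whole monolith, and you buy it back with many small checks like this one at each service boundary.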
All of this is necessary and real work. But it's invisible to someone outside the engineering team. To stakeholders, it looks like "we're working on infrastructure and not shipping."
How to Set Expectations
The move is to be brutally honest upfront about what a migration costs and what it delivers.
Tell your board: "For the next three quarters, feature velocity will drop by 40–50%. During this time, we will be shipping internal infrastructure that customers won't see but that everything after the migration will be built on."
That's a hard thing to say. But saying it upfront is better than surprising people with it mid-migration.
Then say: "After the migration, we'll be able to ship features 30% faster because our architecture supports scaling. We'll have fewer architectural bottlenecks. Here are the specific features we plan to ship in Q1 after the migration that will prove the ROI."
The second part is important. You have to commit to concrete features that the new architecture enables. Not "we'll ship faster someday." But "we'll ship X, Y, and Z because the microservices architecture lets us parallelize work in ways we couldn't before."
If you're right about that, the migration will have obvious ROI. If you can't point to specific features that the new architecture enables, the migration is probably not worth it.
What the PM Can Actually Monitor
The temptation during a migration is to just give engineering complete autonomy and check back in six months. Don't do that. You need to monitor progress and flag if things are going sideways.
Here's what a PM can usefully track:
First, incident rate. A migration should not increase production incidents. If it does, you're breaking things as you migrate them. That's bad. The incident rate should stay flat or decrease because you're building a cleaner architecture.
Second, deployment frequency. How often are you shipping to production? In a monolith, it might be once a day. During a migration, it might slow to a few times a week. But it should not stop. If deployment frequency drops to zero, you've lost the ability to ship anything, which is a problem.
Third, time-to-change in migrated services. This is the best metric. After a team migrates to microservices, how much faster are they at shipping changes? If a feature that took two weeks in the monolith takes three weeks in the new microservice, the migration is failing. If it takes one week, you're on the right track.
Fourth, number of dependencies on the old monolith. As you migrate, you should see this number decrease. If you're halfway through a migration and you still have 30 dependencies on the monolith, you're doing it wrong. You should be at 10–15.
Fifth, knowledge distribution. Before the migration, maybe three people understand the payment flow. After the migration, is it clearer? Can more people make changes to the payments service? If the answer is no, the new architecture isn't an improvement.
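Two of these metrics are easy to compute from data a PM can already export from an issue tracker and a dependency scan. A minimal sketch, with all field names and numbers invented for illustration:

```python
# Illustrative sketch: remaining monolith dependencies and the
# time-to-change speedup in migrated services. All data is made up.
from statistics import median

# One row per extracted service: how many calls still reach the monolith?
dependency_scan = [
    {"service": "payments", "monolith_deps": 2},
    {"service": "notifications", "monolith_deps": 0},
    {"service": "users", "monolith_deps": 7},
]

# Cycle times (days) for comparable changes, before and after extraction.
monolith_cycle_days = [14, 10, 12, 18]
microservice_cycle_days = [6, 8, 5, 9]

remaining_deps = sum(row["monolith_deps"] for row in dependency_scan)
speedup = median(monolith_cycle_days) / median(microservice_cycle_days)

print(f"Dependencies still on the monolith: {remaining_deps}")
print(f"Time-to-change speedup: {speedup:.1f}x")
```

Median rather than mean keeps one pathological ticket from distorting the trend. If `speedup` is below 1.0, the rule from above applies: the migration is failing, and the number gives you something concrete to raise with engineering.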
The Reality of Parallel Shipping
Here's what rarely happens: you ship new features while you're migrating. Every organization says "we'll have parallel teams, one doing migration and one shipping features."
In practice, this is hard because migration and feature work are competing for the same engineers. The ones who know the system best are the ones doing the migration. The newer engineers are trying to ship features in an architecture that's actively changing underneath them.
It's doable, but you have to accept: the features will be slower. Budget for it. Build smaller features. Pick features that don't touch the parts of the system being migrated.
The teams that succeed at shipping during migrations do it by being very deliberate about which features they ship. Not "all features," but "features that demonstrate the benefits of the new architecture while not competing with the migration for engineering time."
Communicating to Stakeholders
The conversation with stakeholders should be: "We are investing in the foundation that will let us ship X, Y, and Z faster. The investment takes 10 weeks (or whatever your timeline is). During this time, other feature shipping will slow. After, we'll demonstrate the ROI by shipping the things that are now possible."
That narrative works because it's honest and it connects the investment to concrete benefits.
What doesn't work: "We're improving our architecture." That sounds important but vague. Stakeholders hear "we're working on technical debt," which sounds like avoiding real work.
What also doesn't work: "This will let us scale to 10 million users." That's probably true but it's not something a stakeholder can evaluate in three months.
The move is to tie the migration to a specific product benefit that will be visible within a quarter of the migration finishing. "After the migration, we can ship the multi-tenant feature." "After the migration, we can support a checkout flow that's 40% faster." That's a narrative that works.
Glue's Role in Migrations
The reason migrations are so PM-hostile is that the codebase map changes completely during a migration. Before the migration, you understand which teams own what. After the migration, everything changes. Services that didn't exist are created. Dependencies shift. Ownership gets reorganized.
AI-powered tools for product managers can show you the codebase map before, during, and after a migration, helping you track progress visually. You can see "we've extracted 15 services from the monolith, we have 8 to go." You can see "this service now has 5 dependencies instead of 40." You can see which teams have capacity and which are overloaded during the migration.
This doesn't change the migration. But it makes it visible and trackable, which lets you communicate about it much more effectively.
Frequently Asked Questions
Q: How do we know if a monolith-to-microservices migration is actually worth it?
Look at the constraints the monolith imposes. Can you not deploy different parts of the system independently? Can you not scale different parts at different rates? Do architectural changes to one part slow changes to another part? If the answer to these is yes, migration might be worth it — use dependency mapping to understand the current architecture before planning extraction. If not, you're probably fine staying monolithic.
Q: What's the fastest a migration can go?
Realistically, 12–16 weeks for a team of 15–20 people with a medium-complexity monolith. Faster than that and you're probably cutting corners. Longer than that and something is wrong with your planning — track cycle time per service extraction to identify bottlenecks.
Q: Can we do this with a vendor?
You can hire help, but the vendor can't do the migration for you. Too much domain knowledge lives in your team. What vendors can do is accelerate the process and help you avoid mistakes. But the engineering work still has to happen internally.
Related Reading
- C4 Architecture Diagram: The Model That Actually Works
- Conway's Law: Why Your Architecture Mirrors Your Org Chart
- Software Architecture Documentation: A Practical Guide
- Code Dependencies: The Complete Guide
- Dependency Mapping: A Practical Guide
- Technical Debt: The Complete Guide for Engineering Leaders