By Priya Shankar, Head of Product at Glue
Last quarter, I spent two full days building a competitive analysis deck for a board meeting. I mapped every competitor's features, identified gaps, ranked priorities. It looked thorough. Then our principal engineer glanced at it and said, "We already have half of what you listed as gaps." I had built an entire strategic document without knowing what our own product actually contained. If you have ever tried to figure out how to do competitive analysis for your SaaS product, you have probably hit this exact wall.
This is not a skills problem. I have been doing competitive intelligence for years. It is a visibility problem. And it affects far more PMs than most people realize.
The Blind Spot in Competitive Analysis
Competitive analysis in SaaS has a well-documented set of best practices. Monitor competitor websites. Track their changelogs. Read their reviews on G2. Talk to prospects who evaluated both products. Survey your sales team.
According to Crayon's 2024 State of Competitive Intelligence report, 65% of sales professionals say competitive intelligence has a measurable impact on win rates. The value is not in question.
But there is a structural problem hiding inside every competitive analysis process: the analysis compares what competitors have against what you think you have. Not what you actually have.
The Internal Knowledge Gap
When I was a Senior PM at my last company, I managed the roadmap for a product with over 300,000 lines of code. I could tell you what we shipped last quarter from the release notes. I could tell you what was on the roadmap from the planning docs. But I could not tell you the full inventory of capabilities that existed in our codebase right now.
Nobody could. Not even the engineers, because no single engineer understood the entire system. Knowledge was distributed across the team, and much of it was undocumented.
This means every competitive analysis I produced was working from incomplete information on our own side. I was comparing a detailed external view of competitors against a fuzzy, partial internal view of our own product.
Crayon's research backs this up: competitive intelligence professionals rate their own effectiveness at just 3.8 out of 10. That is a remarkably honest self-assessment, and I believe the internal blind spot is a primary reason for it.
Why Feature Gap Analysis Fails
Traditional feature gap analysis follows a predictable pattern. You build a matrix with competitors on one axis and features on the other. You mark who has what. You identify the gaps.
The problem is the second step. When you mark your own product, you are working from memory, from documentation that may be outdated, and from conversations with engineers who each know a piece of the puzzle.
The Documentation Decay Problem
I once flagged "automated invoice generation" as a competitive gap. We needed to build it, I argued. It was on the roadmap for Q3. Two weeks later, a backend engineer mentioned that we had actually built basic invoice generation 18 months ago as part of a billing refactor. It was never exposed in the UI, but the logic was there.
That feature sat in our codebase, unknown to the product team, while I was writing business cases for building it from scratch. Multiply that kind of miss across dozens of features and you start to see why competitive analysis so often leads to building things that already partially exist.
According to Crayon, 58% of competitive intelligence teams struggle to keep their intelligence current. On the competitor side, that is hard enough. On your own side, it should not be a problem at all. But it is, because the source of truth for what you have built is locked inside code that the product team cannot read.
What Gets Missed
The most common categories of missed internal capabilities:
- Backend logic without UI exposure. Features that were built but never surfaced to users.
- Partial implementations. Work that was started, reached maybe 70% completion, then was deprioritized and forgotten.
- Configuration-based capabilities. Things the system can already do with a flag change or configuration update.
- API-only features. Capabilities available through the API that never made it to the product interface.
Every one of these represents wasted competitive analysis effort if you do not know about them.
Connecting CI to Code Reality
The fix is not better competitor research. Most teams do a reasonable job tracking what competitors are doing. The fix is better internal visibility.
This is where the competitive analysis process needs to evolve. Instead of just mapping the external landscape, you need to ground your analysis in what actually exists in your codebase.
What Code-Grounded Analysis Looks Like
Imagine being able to ask your system: "Do we have anything that handles referral tracking?" and getting back not a vague answer from an engineer's memory, but a specific response pointing to the exact files, functions, and database tables involved.
That is the approach Glue takes. Glue connects to your repository, indexes your codebase, and lets you query it in plain language. When you are running competitive analysis, you can check each competitor capability against your actual code. You can read more about this approach in our competitive intelligence guide for SaaS.
The workflow shifts from:
- List competitor features
- Guess what we have
- Flag gaps based on incomplete knowledge
To:
- List competitor features
- Query your codebase for matching capabilities
- Flag genuine gaps with evidence
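Even without a dedicated tool, the "query your codebase" step can be roughly approximated with a keyword sweep. This is a minimal sketch, not Glue's actual method: it maps a few representative terms for each claimed gap (the keywords and the `find_mentions` helper are hypothetical) to the files that mention them, as a triage signal before you flag anything as missing.

```python
import os
import re

def find_mentions(repo_root: str, keywords: list[str]) -> dict[str, list[str]]:
    """Map each keyword to the files under repo_root that mention it.

    A hit is not proof a capability exists; it is a prompt to read the
    code (or ask the owning engineer) before declaring a gap.
    """
    patterns = {kw: re.compile(re.escape(kw), re.IGNORECASE) for kw in keywords}
    hits: dict[str, list[str]] = {kw: [] for kw in keywords}
    for dirpath, _, filenames in os.walk(repo_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue  # skip unreadable files (binaries, permissions)
            for kw, pat in patterns.items():
                if pat.search(text):
                    hits[kw].append(path)
    return hits

# Example: triage the "automated invoice generation" gap from earlier.
# find_mentions("path/to/repo", ["invoice", "billing", "referral"])
```

A keyword sweep produces false positives and misses anything named unconventionally, which is exactly why semantic indexing is the stronger approach; but even this crude check would have caught the invoice-generation miss described above.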
The Competitive Gap Atlas
Glue's Gap Atlas feature takes this further. You can add a competitor, and Glue will analyze their publicly available features against your indexed codebase. It maps where you are ahead, where you are behind, and where gaps exist, all grounded in what the code actually shows.
This is not a replacement for talking to customers or monitoring competitor marketing. It is a supplement that adds a layer of internal truth to competitive analysis. The result is a competitive gap analysis that does not fall apart the moment an engineer says, "Actually, we already have that."
A Better Framework
Based on my experience running competitive analysis both the old way and with codebase visibility, here is a framework that actually works.
Step 1: Build the External View First
Start with traditional competitive intelligence. Monitor competitors, read their documentation, talk to your sales team. This part has not changed. Build your matrix of competitor capabilities.
Step 2: Ground Every Gap Internally
Before marking anything as a gap, verify it against your codebase. Ask whether any implementation, even partial, already exists. If you have a tool like Glue, query it directly. If not, at minimum schedule a focused session with engineering to review each claimed gap.
Step 3: Categorize Gaps by Effort Reality
Not all gaps are equal. "Competitor has X and we have nothing" is very different from "Competitor has X and we have the backend logic but no UI." Categorize each gap:
- True gap: No existing implementation. Requires full build.
- Exposure gap: Logic exists but is not user-facing. Requires UI work.
- Completion gap: Partial implementation exists. Requires finishing.
- Configuration gap: Capability exists but is not enabled. Requires configuration.
This categorization changes prioritization dramatically. Exposure gaps and configuration gaps can often be addressed in days, not sprints.
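The four categories above can be encoded directly in a backlog so the cheapest wins sort to the top. A minimal sketch; the feature names and `Gap` structure are illustrative, not from any real analysis:

```python
from dataclasses import dataclass
from enum import Enum

class GapKind(Enum):
    """Gap categories from the framework, roughly ordered by build effort."""
    CONFIGURATION = 1  # capability exists, just not enabled
    EXPOSURE = 2       # backend logic exists, needs UI work
    COMPLETION = 3     # partial implementation, needs finishing
    TRUE_GAP = 4       # nothing exists, full build required

@dataclass
class Gap:
    feature: str
    competitor: str
    kind: GapKind

def prioritize(gaps: list[Gap]) -> list[Gap]:
    """Surface configuration and exposure gaps first: days, not sprints."""
    return sorted(gaps, key=lambda g: g.kind.value)

# Hypothetical entries for illustration:
backlog = prioritize([
    Gap("automated invoicing", "Acme", GapKind.EXPOSURE),
    Gap("SSO via SAML", "Acme", GapKind.TRUE_GAP),
    Gap("usage-based billing caps", "Initech", GapKind.CONFIGURATION),
])
# backlog[0] is the configuration gap, the cheapest item to close.
```

The point of the encoding is that effort category, not competitor pressure alone, drives the sort order.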
Step 4: Reassess Quarterly with Live Data
Competitive analysis is not a one-time exercise. Your product evolves, competitors evolve, and the gaps shift. If your analysis is grounded in a live codebase index that updates with every commit, your competitive view stays current without manual effort.
Making It Stick
The hardest part of this framework is not the analysis itself. It is building the habit of checking internal reality before making external comparisons. For years, I defaulted to assuming I knew what our product could do. I was wrong often enough that it cost real roadmap time.
If you are running competitive analysis for your SaaS product and want to stop guessing about your own capabilities, try Glue to see what is actually in your codebase. The first time you discover a "gap" that turns out to already be partially built, the value becomes obvious.
Frequently Asked Questions
How do you do competitive analysis for SaaS?
Effective SaaS competitive analysis combines external monitoring with internal grounding. Start by tracking competitor features through their changelogs, documentation, review sites, and sales conversations. Then verify each identified gap against your own codebase to confirm it is a true gap, not a documentation or visibility issue. Use a framework that categorizes gaps by implementation status so you can prioritize based on actual effort required, not assumptions.
What is the biggest mistake in competitive analysis?
The biggest mistake is assuming you know what your own product can do. Most competitive analysis processes are thorough on the competitor side but rely on memory and outdated documentation on the internal side. This leads to flagging features as gaps when partial or complete implementations already exist in the codebase. According to Crayon, CI professionals rate their own effectiveness at just 3.8 out of 10, and incomplete internal visibility is a primary driver of that low score.
How do you connect competitive intel to your codebase?
Connecting competitive intelligence to your codebase requires a tool that can index and query your code in plain language. Platforms like Glue parse your Git repository and build a semantic understanding of your system, allowing you to ask questions like "Do we have anything that handles X?" and get specific, file-level answers. This grounds your competitive analysis in technical reality instead of tribal knowledge and prevents you from planning to build features that already exist.