ChatGPT for product managers excels at drafting artifacts (PRDs, customer communications, competitive frameworks) and structuring thinking, but fails at product-specific questions requiring codebase knowledge — feasibility assessments, architecture understanding, dependency analysis, and technical risk evaluation. ChatGPT generates plausible-sounding answers about your product without access to your actual code, creating a dangerous illusion of understanding. The effective PM AI stack combines ChatGPT for general drafting with codebase intelligence tools for system-specific questions that require real data about your architecture, dependencies, and code health.
At Salesken, our product managers were some of the smartest people in the room. But they were making decisions with incomplete information because the codebase was a black box to them.
By Vaibhav Verma
ChatGPT is useful for product managers. I use it regularly. But it's useful in very specific ways, and the ways most people think it's useful are mostly wrong.
I watched a PM yesterday use ChatGPT to draft a PRD, and it was genuinely good. Clear structure, good thinking about edge cases. Thirty minutes of ChatGPT plus thirty minutes of editing, versus three hours of blank-page struggle. That's valuable. But then they asked it: "What's the actual impact on our checkout flow if we implement this feature?" ChatGPT made something up. Sounded confident. Completely wrong. The difference between drafting and understanding is the difference between useless and dangerous.
This is the core thing about ChatGPT for product: it's the smart colleague who has never seen your code. It's phenomenal at tasks that don't require specific knowledge about your product, and unreliable at everything that does.
What ChatGPT Actually Does Well
ChatGPT is exceptional at generative drafting tasks. You give it a prompt, it generates something structured and usable. You iterate and refine. This is where it's best:
Writing PRDs from outlines. "Here's what I want to build and why. Structure this as a professional PRD." ChatGPT will give you something with the right sections, the right tone, the right level of rigor. Better than blank-page writing. You still need to fill in the actual product thinking.
Generating user story formats. "Take this requirement and turn it into five user stories." Done. Multiple valid formats, all usable. You pick the ones that work.
Synthesizing pasted text. You paste three customer interviews and ask "what are the common themes here?" ChatGPT will extract them. In my experience, not as insightful as a human would be, but 80% of the value in 20% of the time.
Writing competitive summaries from pasted content. "Here's the pricing page and feature list for three competitors. Summarize how we compare." ChatGPT will do this. Accurately? Often enough. Will it miss things? Almost certainly, but it's a starting point.
Brainstorming feature names. "We're building a feature that lets users save their configurations. Suggest 20 names that are distinctive and memorable." Get ideas fast. Filter them. Move on.
These are all tasks where ChatGPT is playing the role of "competent colleague who has general knowledge." It's good at that role. This is genuinely useful.
What ChatGPT Fails At
Here's where it breaks down: anything that requires real knowledge about your specific product or company.
"What's actually in our product?" ChatGPT doesn't know, but it will answer anyway. It will confidently tell you a feature exists or behaves a certain way when it doesn't. I watched a PM ask: "Does our product support SSO?" ChatGPT said yes, confidently, based on the product description it was given. It didn't. The PM was about to tell a prospect we had it.
"Why did we build it this way?" ChatGPT can't answer this. It doesn't know your product history, your constraints, your past decisions. You need someone who was there.
"What are the actual dependencies of this feature?" ChatGPT will guess. And guess wrong. It doesn't have access to your code. It doesn't understand your architecture. It will sound authoritative while being completely incorrect.
"How long will this take?" ChatGPT will give you a number. The number will be wrong. Not because ChatGPT is bad at estimation, but because it doesn't know your codebase, your team's velocity, or your constraints.
These are all questions where you need actual context about your product, your code, and your company. ChatGPT can't provide that. It can only guess, and its guesses are often confidently wrong.
The Hallucination Problem
This is important enough to emphasize separately. ChatGPT hallucinates. It generates plausible-sounding answers that are completely made up. It does this confidently. The problem for product managers is that hallucination is most dangerous on questions that seem like they should be answerable.
"Does our API support webhooks?" Sounds like a factual question ChatGPT should be able to answer. It can't. If it doesn't have specific documentation about your API, it will guess. And if it guesses, it will be confident, and you might believe it.
This is why ChatGPT is good for brainstorming and drafting - domains where wrong answers don't cost you. It's bad for anything where factual accuracy matters and you're supposed to be the expert.
ChatGPT vs. Codebase-Specific Intelligence
Here's the actual line: ChatGPT is a general AI trained on internet text. Glue and tools like it are specific AIs grounded in your actual codebase. If you're comparing codebase intelligence tools, see Glue vs CodeSee for a detailed comparison.
For a PM, the gap matters most when you need to understand your own product. Not write about it. Understand it.
You ask ChatGPT: "Is this technically feasible?" You get a thoughtful analysis about feasibility in general. Not whether it's feasible for your system.
You ask a codebase-specific tool: "Is this feasible?" You get an answer based on what's actually in your code. What dependencies exist. What constraints are real. What would actually need to change.
This is why I built Glue specifically for codebases. ChatGPT is useful for general product thinking. But for understanding your specific product - what's in it, how it works, what the constraints are - you need a tool that's trained on your code.
The Practical Mix
Here's how I think about using both:
Use ChatGPT for: drafting, structure, brainstorming, synthesizing open-ended research, anything where you're looking for ideas or structure, not facts.
Use codebase analysis for: understanding what's in your product, discovering dependencies, understanding why constraints exist, making decisions that depend on knowing your system.
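To make "discovering dependencies" concrete, here is a toy sketch of the simplest possible version: parsing Python source with the standard-library ast module to list which modules a file imports. Real codebase intelligence tools go far beyond this (call graphs, cross-repo links, change history); the snippet and the `checkout_src` example are purely illustrative assumptions, not any particular tool's implementation.

```python
import ast

def imported_modules(source: str) -> set[str]:
    """Return the top-level module names a Python source file imports."""
    tree = ast.parse(source)
    mods = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            # "import payments.core" -> "payments"
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            # "from inventory.stock import reserve" -> "inventory"
            mods.add(node.module.split(".")[0])
    return mods

# Hypothetical checkout handler: which modules does it actually touch?
checkout_src = """
import payments
from inventory.stock import reserve
import logging
"""
print(sorted(imported_modules(checkout_src)))  # ['inventory', 'logging', 'payments']
```

The point of the sketch: this answer comes from the code itself, not from a model's guess, which is exactly the distinction the section is drawing.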
These don't replace talking to engineers or reading docs. They're complements. ChatGPT speeds up thinking work. Codebase analysis speeds up understanding work. Engineers are still essential - they understand the "why" and the tradeoffs that even good tools can't capture.
But if you're trying to decide whether to build something, understand whether it's feasible, or figure out what actually happens in your system - the tool trained on your code will get you to the right answer. ChatGPT will give you a plausible-sounding wrong answer faster.
Frequently Asked Questions
Q: Should PMs use ChatGPT to draft customer-facing communication? Yes, as a first draft. Write the rough outline, feed it to ChatGPT, iterate the result. Gets you to something coherent much faster. But read it before it leaves your hands - ChatGPT can introduce weird phrasing or unsupported claims. It's a starting point, not a finished product.
Q: How do I know if ChatGPT is hallucinating about my product? You already know your product. If ChatGPT says something that sounds wrong, it probably is. The danger is when it says something that sounds plausible but you're not sure about. In that case, verify with someone who actually knows. But honestly, if you're second-guessing what ChatGPT said about your own product, you might not know your product as well as you should.
Q: Can I use ChatGPT instead of building codebase-specific tools? Depends on what you need. For drafting and thinking, yes - ChatGPT is free and good enough. For understanding what's actually in your code, no. ChatGPT doesn't have access to your codebase and will guess. If you need accurate answers about your system - dependency mapping, architecture questions, or technical debt analysis - you need codebase intelligence tools trained on your system.
Related Reading
- AI for Product Management: The Difference Between Typing Faster and Thinking Better
- The Product Manager's Guide to Understanding Your Codebase
- AI Product Discovery: Why What You Build Next Should Not Be a Guess
- Cursor for Product Managers: The Next AI Shift Nobody Is Talking About
- Product OS: Why Every Engineering Team Needs an Operating System
- Software Productivity: What It Really Means and How to Measure It
- Glue vs ChatGPT
- AI for Product Managers Guide