How to Improve Developer Experience: A 90-Day Playbook for Engineering Leaders
At Salesken, I said "we need to improve developer experience" in at least four different meetings before anything actually changed. The problem was always the same: DX improvement felt important but never urgent. There was always a feature to ship, a bug to fix, a customer escalation to handle. It wasn't until I framed DX as a 90-day project with measurable checkpoints that it finally got traction — and within that first 90 days, our build times dropped 70% and new engineer onboarding went from three weeks to four days.
"We need to improve developer experience," the VP of Engineering says in a meeting. Heads nod. Everyone agrees.
Then the meeting ends. And nothing changes.
Why? Because "improve DX" is too vague to act on. Where do you start? CI/CD speed? Tooling? Documentation? Onboarding? Monitoring? You could spend a year on DX initiatives without a clear strategy.
This playbook gives you a structured 90-day approach to systematically improve developer experience. It's designed for engineering leaders (VPs, CTOs, directors) who want to make meaningful progress without overwhelming their teams.
Why Developer Experience Matters (The Business Case)
Before diving into the playbook, let's establish why this matters:
- Retention: Poor DX is a leading predictor of engineering turnover, and improving it often has a higher retention ROI than salary increases.
- Speed: Better DX directly correlates with faster delivery. Less time fighting tools = more time shipping.
- Recruiting: Strong DX becomes a selling point for hiring. "We invest in developer experience" attracts better talent.
- Sustainability: High-velocity teams with poor DX burn out. Good DX enables sustainable high performance.
Now, let's get to work.
The 90-Day Framework: Three Phases
The playbook is divided into three phases, each lasting roughly 30 days:
- Phase 1 (Days 1-30): Listen – Understand where DX is broken
- Phase 2 (Days 31-60): Quick Wins – Implement high-impact, low-effort improvements
- Phase 3 (Days 61-90): Systemic Change – Build the infrastructure for sustained DX
Each phase has specific actions, success metrics, and outcomes. You won't transform your organization in 90 days, but you'll establish momentum and prove value.
PHASE 1: LISTEN (DAYS 1-30)
Goal: Understand where developer experience is actually broken.
Many engineering leaders make decisions based on intuition or personal experience. You wrote code 10 years ago, so you assume onboarding is still that hard. Your favorite tool is X, so you assume everyone loves X. This phase is about replacing assumptions with data.
Action 1.1: Developer Experience Survey
What to do: Create a focused survey and send it to all engineers. Keep it short (5-10 minutes). Use a consistent 1-10 rating scale for the closed questions.
Sample questions:
- How satisfied are you with your development environment? (1-10)
- How much time per week do you spend waiting for builds/deploys? (hours)
- How clear are our architectural decisions and design patterns? (1-10)
- How painful is onboarding for new hires? (1-10)
- What's the biggest friction point in your daily work? (open-ended)
- What's one tool or process you'd eliminate? (open-ended)
Why this matters: You'll identify pain points directly from the source. The open-ended responses are gold—they tell you what engineers actually care about.
Success metric: 70%+ response rate. Aim for 15+ responses from engineers at different tenure levels (new hires, mid-career, senior).
Action 1.2: Friction Logging (Lightweight)
What to do: Ask 5-10 volunteers from different teams to log friction points for a week. Each time they hit a blocker (slow build, unclear documentation, confusing deploy process), they capture:
- What happened
- How long it took
- Impact (blocked me, slowed me down, or minor annoyance)
How to run it:
- Shared Google Sheet or Notion database
- 5 minutes per entry
- Run for exactly one week
- Debrief with volunteers afterward
Why this matters: Surveys tell you what people think. Friction logs tell you what's actually happening. You'll discover problems people have stopped complaining about (because they've accepted them).
Success metric: 20+ friction entries. Categories should emerge (e.g., "CI/CD slow," "docs unclear," "environment setup hard").
Action 1.3: New Hire Interviews
What to do: Talk to engineers who joined in the last 3 months. Ask:
- What was the hardest part of onboarding?
- What took longer than expected to understand?
- What surprised you (good or bad)?
- What would have made your first month easier?
Why this matters: New hires have fresh perspective. They notice problems veterans have normalized. Their onboarding experience is a leading indicator of team health.
Success metric: Interview 5-8 new hires. Common themes should emerge (e.g., "environment setup took 2 days," "architectural docs are outdated").
Action 1.4: Engineering Manager 1:1s
What to do: Schedule 30-min conversations with each engineering manager. Ask:
- Where are your engineers getting frustrated?
- What's blocking faster delivery?
- Where are you seeing churn or burnout risk?
- If you had one week and unlimited budget, what DX problem would you fix first?
Why this matters: Managers observe patterns at team level. They'll surface systemic issues and reveal where different teams have different needs.
Success metric: Interview all managers. Synthesize findings into priority themes.
Phase 1 Deliverable: DX Baseline Report
By day 30, you have:
- Survey results (quantitative)
- Friction logs (behavioral)
- New hire feedback (qualitative)
- Manager perspective (systemic)
Compile into a one-page report:
- Top 3 pain points (with supporting data)
- Quick wins available (low-effort fixes)
- Systemic problems (require investment)
- Confidence level (based on data sources)
Share transparently with the team. You've just demonstrated that leadership listens.
PHASE 2: QUICK WINS (DAYS 31-60)
Goal: Implement high-impact, low-effort improvements.
You've identified pain points. Now fix the ones that are:
- High impact: Affect many engineers or frequent workflows
- Low effort: Can be fixed in 2-4 weeks with existing resources
Focus here. These wins build momentum and prove that DX investment matters.
Quick Win 2.1: CI Speed Optimization
The problem: Slow builds are a universal pain point. Engineers sitting idle waiting for CI drains productivity and morale.
The fix:
- Measure baseline: What's your average CI time today? Target metric: < 10 min for common workflows.
- Analyze: Identify slow jobs (unit tests? integration tests? linting?). Use build metrics.
- Parallelize: Can tests run in parallel? Split jobs across more workers?
- Cache dependencies: Are you rebuilding dependencies on every run?
- Fail fast: Run fastest feedback loops first (linters before tests).
Timeline: 2-3 weeks
Owner: DevOps or platform engineer
Success metric: 30% reduction in average CI time (e.g., from 15 min to 10 min)
Why it matters: Every 1 minute saved in CI time multiplies across all engineers. Saving 5 minutes per build across 50 engineers = 4+ hours of productivity per day.
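The "analyze" step above can be sketched in a few lines. A minimal sketch, assuming you can export per-job durations from your CI provider as a list of records (the field names here are invented, not any vendor's schema):

```python
from collections import defaultdict

# Hypothetical export: one record per CI job run. The field names are
# assumptions for illustration, not a specific CI provider's schema.
job_runs = [
    {"job": "unit-tests", "duration_sec": 420},
    {"job": "integration-tests", "duration_sec": 910},
    {"job": "lint", "duration_sec": 45},
    {"job": "unit-tests", "duration_sec": 400},
    {"job": "integration-tests", "duration_sec": 950},
]

def slowest_jobs(runs):
    """Rank jobs by average duration to find optimization targets."""
    durations = defaultdict(list)
    for run in runs:
        durations[run["job"]].append(run["duration_sec"])
    averages = {job: sum(d) / len(d) for job, d in durations.items()}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)

for job, avg in slowest_jobs(job_runs):
    print(f"{job}: {avg:.0f}s average")
```

Whatever tops this list (here, integration tests) is where parallelization or caching effort pays off first.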
Quick Win 2.2: Pull Request Review SLA
The problem: Code sits in PR limbo waiting for review. Engineers are blocked or context-switch to other work.
The fix:
- Define SLA: Code reviews should complete within 24 hours (or 4 hours for urgent fixes).
- Establish norms: No code reviews on Friday afternoons. Check PRs first thing in the morning.
- Rotate reviewers: Don't let one person become the bottleneck.
- Automate: Use automated checks (linting, tests) before human review.
- Track metrics: Measure average time-to-review. Publicize progress.
Timeline: 1-2 weeks
Owner: Engineering leads
Success metric: 80%+ of PRs reviewed within 24 hours
Why it matters: Blocked engineers context-switch, which destroys flow and productivity. Fast feedback loops matter more than perfect reviews.
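To make the SLA trackable, here is a minimal sketch of the time-to-first-review calculation, assuming you have already pulled opened-at and first-review timestamps from your code host's API (the data shape below is a stand-in, not a specific API's schema):

```python
from datetime import datetime, timedelta

# Hypothetical PR records: (opened_at, first_review_at); None = never reviewed.
prs = [
    (datetime(2024, 5, 1, 9), datetime(2024, 5, 1, 15)),   # 6h: within SLA
    (datetime(2024, 5, 1, 9), datetime(2024, 5, 3, 9)),    # 48h: SLA miss
    (datetime(2024, 5, 2, 10), datetime(2024, 5, 2, 11)),  # 1h: within SLA
    (datetime(2024, 5, 2, 16), None),                      # never reviewed
]

def sla_hit_rate(prs, sla=timedelta(hours=24)):
    """Fraction of PRs that received a first review within the SLA window."""
    within = sum(
        1 for opened, reviewed in prs
        if reviewed is not None and reviewed - opened <= sla
    )
    return within / len(prs)

print(f"{sla_hit_rate(prs):.0%} of PRs reviewed within 24h")
```

Publishing this one number weekly is usually enough to change review behavior.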
Quick Win 2.3: Onboarding Documentation Refresh
The problem: New hire feedback revealed that onboarding is painful, often because docs are outdated.
The fix:
- Audit: List all onboarding docs (setup guides, architecture overviews, runbooks).
- Identify stale docs: Which ones reference old tools? Use last-updated dates.
- Assign owners: Each doc gets an owner responsible for currency.
- Refresh: Update 5-10 critical docs (setup, architecture, how to deploy, debugging).
- Create checklist: Build a 1-week onboarding checklist so new hires know exactly what to do.
Timeline: 3 weeks
Owner: Tech leads + recent hires
Success metric: New hires can set up their environment in < 1 hour without help. Onboarding checklist exists.
Quick Win 2.4: Local Development Environment Standardization
The problem: "Works on my machine" is a sign of environment inconsistency. Engineers spend time troubleshooting environment issues.
The fix:
- Inventory: What's installed locally? What versions?
- Standardize: Use Docker, Nix, or devcontainers to codify the environment.
- Automate setup: Provide a make dev-setup target (or equivalent) that provisions everything.
- Document: Keep the environment definition in your repo (Dockerfile, flake.nix, devcontainer.json).
- Test: New hires should be able to run make dev-setup and have a working environment in 15 minutes.
Timeline: 2-3 weeks
Owner: DevOps or platform engineer
Success metric: New engineer can run the app locally in < 15 minutes
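One way to make that success metric testable: a small preflight check that compares installed tool versions against a manifest kept in the repo. A sketch with an invented manifest and simulated version strings:

```python
# Hypothetical manifest of required tools and minimum versions; in practice
# you might read this from a versions file checked into the repo.
REQUIRED = {"docker": (24, 0), "node": (20, 0), "python": (3, 11)}

def parse_version(text):
    """Turn '24.0.7' into (24, 0, 7) for tuple comparison."""
    return tuple(int(part) for part in text.split("."))

def check_env(installed):
    """Return a list of tools that are missing or too old."""
    problems = []
    for tool, minimum in REQUIRED.items():
        version = installed.get(tool)
        if version is None:
            problems.append(f"{tool}: not installed")
        elif parse_version(version)[: len(minimum)] < minimum:
            wanted = ".".join(map(str, minimum))
            problems.append(f"{tool}: {version} < required {wanted}")
    return problems

# Example: simulated output of querying each tool's version flag.
print(check_env({"docker": "24.0.7", "node": "18.19.0", "python": "3.12.1"}))
```

Running this as the first step of setup turns "works on my machine" debugging into a checklist.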
Quick Win 2.5: Deployment Friction Removal
The problem: Deploying to production feels risky or complex. Engineers avoid deploying.
The fix:
- Measure current state: How long does a deploy take? How manual is it?
- Automate: Can you deploy with a single command? Remove manual steps.
- Reduce ceremony: Do you need manual approval for every deploy? Consider auto-deploying on merge to main (if tests pass).
- Rollback mechanism: Make rollback simple (one-command). Fear of deploying often comes from fear of getting stuck.
- Deploy tracking: Log all deploys with who, when, what version. Provides confidence.
Timeline: 2-3 weeks
Owner: DevOps or platform engineer
Success metric: Deployment takes < 5 minutes and requires < 2 manual steps
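The deploy-tracking step can start out very simple, e.g., appending one structured record per deploy. A sketch assuming a JSON-lines log file (the path and field names are illustrative, not a prescribed format):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("deploys.jsonl")  # illustrative location; keep it wherever fits

def record_deploy(who, version, environment="production"):
    """Append a who/when/what record so every deploy is auditable."""
    entry = {
        "who": who,
        "when": datetime.now(timezone.utc).isoformat(),
        "version": version,
        "environment": environment,
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

def last_deploy():
    """Most recent deploy; the natural rollback target is the one before it."""
    lines = LOG.read_text().splitlines()
    return json.loads(lines[-1]) if lines else None

record_deploy("alice", "v1.4.2")
print(last_deploy()["version"])
```

Even this crude log answers "what's live, who shipped it, and what do we roll back to" without opening a ticket.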
Phase 2 Deliverable: Quick Wins Dashboard
Track progress on these 5 initiatives:
- CI Speed: baseline → target
- PR Review SLA: % of PRs reviewed in 24 hours
- Onboarding Docs: # of docs refreshed, avg setup time
- Dev Environment: % of new hires with working env in 15 minutes
- Deployment Friction: deploy time, # of manual steps
Update weekly. Share with engineering. Celebrate wins.
By day 60, you've completed 5 initiatives that directly improve daily experience. Engineers see leadership following through. Momentum builds.
PHASE 3: SYSTEMIC CHANGE (DAYS 61-90)
Goal: Build infrastructure for sustained DX improvement.
The quick wins bought you credibility. Now use it to invest in systemic improvements that require more effort but deliver greater returns.
Systemic Change 3.1: Platform Team Investment (or Roadmap)
The insight: Many DX problems are systemic. They require ongoing investment in infrastructure, tooling, and processes.
The action:
- Audit: What problems need platform/infrastructure investment?
  - CI/CD platform upgrade
  - Observability/monitoring (so engineers can debug production)
  - Configuration management (less manual setup)
  - Internal developer platform (self-service deployment, secrets, etc.)
- Prioritize: Which will have the highest DX impact?
- Business case: Build a one-pager showing ROI (e.g., "Better observability will reduce MTTR by 50%, saving X engineering hours per quarter").
- Get buy-in: Present to leadership. Get commitment for 1-2 engineers part-time or full-time.
- Create roadmap: A 3-6 month plan to improve the platform.
Why this matters: Quick wins don't sustain themselves. After you optimize CI, the next bottleneck emerges. A platform team solves this by continuously improving engineering infrastructure.
Systemic Change 3.2: Autonomous DX Monitoring
The insight: After 90 days, how will you know DX is maintained or degraded? You need continuous measurement.
The action:
- Define DX metrics (pick 3-5):
  - Average PR review time
  - Build time (P50, P95)
  - Time to merge after approval
  - Onboarding time for new hires
  - Time spent in meetings vs. coding (if you have calendar data)
- Instrument: Set up dashboards in your CI/CD and communication tools.
- Alert: If build time creeps up 20% or review SLA drops, get a notification.
- Monthly review: Share DX metrics with the team monthly. Celebrate improvements. Investigate regressions.
Why this matters: What gets measured gets managed. By monitoring DX metrics, you ensure improvements stick and catch degradations early.
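The instrument-and-alert steps can be sketched as a small check over exported build durations; the 20% drift threshold mirrors the alert rule above, and the numbers are illustrative:

```python
import statistics

def percentile(values, pct):
    """Simple nearest-rank percentile; fine for dashboard-grade numbers."""
    ordered = sorted(values)
    index = max(0, round(pct / 100 * len(ordered)) - 1)
    return ordered[index]

def build_time_alert(baseline, recent, threshold=0.20):
    """Flag a regression if recent P50 drifts more than `threshold` above baseline."""
    base_p50 = statistics.median(baseline)
    recent_p50 = statistics.median(recent)
    drift = (recent_p50 - base_p50) / base_p50
    return drift > threshold, recent_p50, percentile(recent, 95)

# Illustrative build durations in seconds (last month vs. this week).
baseline = [300, 310, 290, 305, 295]
recent = [380, 390, 370, 400, 385]

alert, p50, p95 = build_time_alert(baseline, recent)
print(f"alert={alert} P50={p50}s P95={p95}s")
```

Wire the boolean to a Slack webhook or CI annotation and regressions surface the week they happen, not at quarter's end.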
Systemic Change 3.3: DX Feedback Loops
The insight: Phase 1 taught you that listening matters. Make it continuous.
The action:
- Quarterly DX survey: Repeat the survey from Phase 1. Track trends.
- Monthly office hours: Spend 30 minutes taking DX complaints and feature requests directly from engineers.
- Annual DX roadmap: Share what you're improving, prioritize based on impact.
- New hire debrief: 30-min conversation with each new hire at day 30 to capture fresh perspective.
Why this matters: DX is never "done." The best organizations maintain feedback loops that let engineers influence priorities.
Systemic Change 3.4: Capability-Based Tools (Optional but High-Impact)
The insight: Once you've identified DX problems and built baseline improvements, the next frontier is autonomous tools that continuously monitor and improve development experience.
The action: Modern agentic platforms can autonomously:
- Monitor DORA metrics (deployment frequency, lead time, failure rate, MTTR)
- Flag performance regressions (build time, review time)
- Triage failing tests and alert on flakiness
- Analyze code quality and architectural debt
- Provide self-serve answers to codebase questions (reducing back-and-forth with senior engineers)
These tools solve the "measurement and monitoring at scale" problem. Instead of manually creating dashboards, an agent continuously tracks DX signals and alerts when they degrade.
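Before adopting any platform, two of the four DORA metrics (deployment frequency and lead time for changes) can already be computed from data you likely have. A sketch with made-up commit and deploy timestamps:

```python
from datetime import datetime

# Illustrative records: when a change's first commit landed and when it shipped.
changes = [
    {"committed": datetime(2024, 5, 1, 9), "deployed": datetime(2024, 5, 1, 17)},
    {"committed": datetime(2024, 5, 2, 10), "deployed": datetime(2024, 5, 3, 10)},
    {"committed": datetime(2024, 5, 3, 8), "deployed": datetime(2024, 5, 3, 12)},
]

def deployment_frequency(changes, days):
    """Deploy events per day over the observed window."""
    return len(changes) / days

def median_lead_time_hours(changes):
    """Median commit-to-deploy time: DORA's 'lead time for changes'."""
    hours = sorted(
        (c["deployed"] - c["committed"]).total_seconds() / 3600 for c in changes
    )
    return hours[len(hours) // 2]

print(deployment_frequency(changes, days=7))  # deploys per day
print(median_lead_time_hours(changes))        # hours from commit to production
```

The remaining two (change failure rate, MTTR) need incident data, which is where the agentic tooling above earns its keep.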
Phase 3 Deliverable: DX Strategy Document
By day 90, you've created a strategic document:
- DX Vision: What great DX looks like at your company
- Baseline metrics: Where you were at day 1
- Improvements: What you've shipped (Phase 2 quick wins)
- Outcomes: Metrics improvements (CI faster, PR reviews faster, onboarding smoother)
- Next 6 months: Platform investment roadmap and priorities
- Governance: How you'll measure DX ongoing (metrics, survey cadence, feedback loops)
- Success stories: Share before/after from engineers who experienced the improvements
How the Three Phases Fit Together
Phase 1 answers: "Where is DX actually broken?"
Phase 2 answers: "What can we fix immediately?"
Phase 3 answers: "How do we sustain and build on this?"
By the end of 90 days:
- Engineers have experienced tangible improvements (faster builds, faster PR reviews, smoother onboarding)
- Leadership has data showing ROI of DX investment
- Your organization has shifted from "DX is nice to have" to "DX is core to how we operate"
- You have a roadmap for continued improvement
Common Pitfalls to Avoid
1. Skipping Phase 1: Don't guess at the DX problems. Listen first. Fixes without data are often wrong.
2. Too many Phase 2 wins: Pick 5. Doing 15 initiatives in parallel dilutes focus and exhausts the team.
3. No Phase 3 commitment: If you don't invest in infrastructure and sustained monitoring, Phase 2 improvements will degrade. Momentum dies.
4. Not celebrating progress: Engineers are skeptical that management listens. Celebrate wins publicly. Share metrics. Build credibility.
5. Isolated DX initiative: Don't silo DX improvement to one person. Make it a cross-functional effort. Involve managers, tech leads, and (most importantly) frontline engineers.
Measuring 90-Day Success
How do you know if you've succeeded?
- Quantitative: 20-30% improvement in key metrics (CI time, PR review time, onboarding time)
- Qualitative: Positive shift in DX survey. "Things are getting better" sentiment.
- Behavioral: Engineers are eager to onboard new people (a sign of healthy culture). Fewer Friday-afternoon resignation emails.
- Strategic: Leadership approved a platform investment. Budget allocated for next phase.
Beyond 90 Days
After day 90, this becomes your normal operating mode:
- Monthly: Review DX metrics. Investigate regressions.
- Quarterly: Run DX survey. Share results.
- Every six months: Refresh the DX roadmap based on feedback.
- Continuously: Monitor and iterate on improvements.
DX is not a project. It's a discipline. The best organizations treat it like product management—continuous listening, iteration, and improvement.
Conclusion
Developer experience improvement doesn't require a grand vision or huge budget. It requires structure, focus, and follow-through.
This 90-day playbook gives you exactly that. Start with listening. Move to quick wins that prove DX matters. Build the infrastructure for sustained improvement. By day 90, you've shifted your organization's culture around developer experience.
The compounding payoff: faster delivery, happier engineers, lower turnover, and sustained competitive advantage. That's worth 90 days of focus.
Related Reading
- Developer Experience: The Ultimate Guide to Building a World-Class DevEx Program
- How to Measure Developer Experience: Frameworks, Metrics & Measurement Stacks
- Developer Experience Strategy: Building a Sustainable DX Program
- DX Core 4: The Developer Experience Framework That Actually Works
- Developer Onboarding Metrics: How to Measure and Accelerate Time-to-Productivity
- Improving Developer Efficiency: Doing Things Right