Improving Developer Efficiency: Doing Things Right
At Salesken, I spent a year focused on productivity — shipping more features, closing more tickets, hitting sprint goals. Then I realized my team was productive but not efficient. They were building the right things but spending 30% of their time on environment setup, flaky tests, and manual deployments. When I shifted focus from "build more" to "waste less," our actual throughput increased without anyone working harder.
There's a critical distinction that most engineering leaders conflate: productivity versus efficiency.
Productivity is doing the right things—building features that customers want, solving real problems, creating business value. It's about impact and outcomes.
Efficiency is doing things right—eliminating waste, optimizing processes, reducing time spent on work that's necessary but not value-creating.
A team can be efficient (minimal waste, smooth processes) but unproductive (building features no one wants, solving non-existent problems). They can be productive (building things customers love) but inefficient (taking 3x longer than necessary with chaotic processes).
The best teams are both. This article is about the efficiency side.
The Economics of Waste
Software development has unavoidable waste. You need meetings to coordinate. You need testing to prevent bugs. You need deploys to get code to production. Some percentage of work is not directly creating customer value—it's creating the conditions for value creation.
But there's also unnecessary waste. Work that serves no purpose, steps that could be eliminated, delays that shouldn't exist.
In manufacturing, lean methodology obsesses over waste. Taiichi Ohno defined seven types of waste: overproduction, waiting, transport, over-processing, inventory, motion, and defects. Most of these map directly to software development.
Waiting: Pull request sits 3 days waiting for review. Code sits waiting for deployment window. Deployment sits waiting for approval.
Over-processing: Unnecessary meetings. Excessive requirements documentation. Redundant testing. Ten-person review for a one-line fix.
Transport: Data moving between systems that don't integrate. Manually copying information from Jira to deployment notes. Information silos requiring re-explanation.
Inventory: Features built but not released. Known bugs not fixed. Undocumented decisions that get rediscovered.
Motion: Tool switching (context switching). Searching for information. Navigating fragmented systems.
Defects: Bugs that have to be fixed. Poor architecture that slows future development. Technical debt.
Each of these types of waste has a concrete cost in time, money, and team sanity. But unlike manufacturing waste, which is tangible and measurable, software waste is often invisible.
Common Efficiency Killers
Let's identify the specific waste patterns that destroy developer efficiency.
Code Review Queues
A pull request created at 10am sits until 3pm waiting for review. The author holds full context for maybe 30 minutes; after an hour it's gone, and they switch to another task. When the review finally arrives, they have to rebuild context. Then the reviewer suggests changes. More back-and-forth, more context rebuilding.
What should be 30 minutes of focused work becomes 3 hours of fragmented, inefficient work.
The solution: Reviews happen within 2 hours. This requires a culture shift (reviewing someone else's code takes priority over writing your own), but the efficiency gain is enormous.
Deployment Gates and Approval Processes
Code is ready. Tests pass. Security scans pass. But deployment requires approval from someone who's in meetings all day. Or approval requires a "change board" that meets once per week. Or deployment can only happen during a maintenance window on Friday night.
Meanwhile, the code sits. The engineer who wrote it has moved on. If something goes wrong post-deploy, debugging is harder because context is cold.
The solution: Deploy when code is ready and safe, not on a schedule. Automated checks (tests, security scanning) should gate deployments, not human sign-off.
Incident Chaos
When something breaks in production, there's a frantic hour of "What's broken? Who should fix it? What's the impact?" Multiple people investigate the same problem. Information gets scattered across Slack channels, call transcripts, and individual notes. Post-incident, there's no clear root cause or prevention plan.
The same problem happens again 3 months later, and the investigation repeats.
The solution: Runbooks for common incidents. Clear escalation paths. Blameless postmortems that identify system problems, not people problems. Treat incident response as a process to optimize, not chaos to endure.
Knowledge Silos
Critical knowledge lives only in one engineer's head. How does the payment system work? Only Sarah knows. Why did we build this that way? Only Marcus knows. How do you deploy to production? Ask the DevOps person.
When Sarah leaves, her knowledge leaves with her. When Marcus is on vacation, blocking questions pile up.
The solution: Document critical knowledge. Make architecture decisions discoverable. Create runbooks. Yes, this takes time upfront. But the time saved from not re-explaining things repeatedly pays back in weeks.
Meeting Overload
An engineer gets pulled out of deep work for a "quick standup." Then another meeting. Then another. By the time the meetings are done, deep work is impossible; the engineer can only do shallow, interruptible work for the rest of the day.
Meetings aren't inherently waste—some coordination is necessary. But most meetings are inefficient coordination: the same information could be shared asynchronously.
The solution: Establish no-meeting blocks. Use async standups (Slack message). Document decisions in writing instead of discussing in meetings. When meetings do happen, require agenda and pre-reads.
Manual Triage and Routing
Bug report comes in. Someone reads it. They talk to people. They figure out which team should handle it. They talk to that team about capacity. Eventually it gets assigned. Meanwhile the bug waits in a queue.
Compare to: Bug report comes in. Automated rules classify it, assess severity, identify impacted systems. High-severity bugs automatically route to on-call. Moderate bugs go to the team's backlog.
Same outcome, but one takes 2 hours of human time and the other takes seconds.
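The automated path can be sketched as a small set of rules. This is a hypothetical example: the keywords, severity levels, and team names are assumptions for illustration, not a real triage configuration.

```python
# Minimal rule-based triage sketch. Keywords, severity levels, and
# team names are illustrative assumptions, not a real configuration.

def triage(report: str) -> dict:
    text = report.lower()

    # Classify severity from simple keyword rules.
    if any(k in text for k in ("outage", "data loss", "payment failed")):
        severity = "high"
    elif any(k in text for k in ("error", "crash", "timeout")):
        severity = "moderate"
    else:
        severity = "low"

    # Identify the impacted system, again by keyword.
    if "checkout" in text or "payment" in text:
        system = "payments"
    elif "login" in text or "signup" in text:
        system = "auth"
    else:
        system = "unknown"

    # Route: high severity pages on-call; everything else goes to a backlog.
    destination = "on-call" if severity == "high" else f"{system}-backlog"
    return {"severity": severity, "system": system, "route": destination}

print(triage("Checkout page shows payment failed for all users"))
# {'severity': 'high', 'system': 'payments', 'route': 'on-call'}
```

A real system would replace the keyword rules with a trained classifier or your tracker's automation rules, but the shape is the same: classify, assess, route, with no human in the loop for the common cases.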
Rework and Context Thrashing
Code is written. It goes to review. Reviewer asks for changes. More review. Then QA rejects it. Back to development. Then deployment fails. More debugging. What should have been done in 3 hours takes 12 because of rework cycles and context rebuilding.
Root causes: Insufficient specification before work starts, inadequate testing before code review, deployment that hasn't been tested, processes that catch problems late instead of early.
The solution: Specification before coding (so you're building the right thing). Testing as you build (so bugs are caught early). Small PRs (so review is fast and rework is limited). Deployment testing in staging (so production surprises are rare).
Measuring Efficiency: Flow Efficiency
The most useful efficiency metric is flow efficiency: what percentage of time is work actually flowing forward vs. waiting, blocked, or being reworked?
How to measure it:
Using your project management system (Jira, Linear, etc.), track the state of work:
- In Progress: Someone is actively working on it
- Waiting: Queued for someone else's attention (review, deployment, clarification)
- Blocked: Can't proceed without external information or decision
For each piece of work, track:
- Total time from start to completion (cycle time)
- Time actively being worked on
- Time waiting
- Time blocked
Flow efficiency = Active time / Total time
For example: A feature takes 8 days end-to-end. Active engineering time is 3 days. Time waiting for review: 2 days. Time blocked on another team: 1 day. Time waiting for deployment: 2 days.
Flow efficiency = 3 / 8 = 37.5%
Interpretation: 62.5% of the cycle time is waste (waiting and blocked). Even if engineering time is efficient, the system is inefficient.
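The worked example can be computed mechanically. A minimal sketch: the durations below mirror the example, and in practice they would come from your tracker's state-transition history.

```python
# Compute flow efficiency from per-state durations (in days).
# Durations mirror the worked example; in a real setup they would
# come from your tracker's state-transition history.

def flow_efficiency(durations: dict) -> float:
    total = sum(durations.values())
    return durations["active"] / total

feature = {
    "active": 3,           # active engineering time
    "waiting_review": 2,   # waiting for code review
    "blocked": 1,          # blocked on another team
    "waiting_deploy": 2,   # waiting for deployment
}

eff = flow_efficiency(feature)
print(f"Flow efficiency: {eff:.1%}")  # Flow efficiency: 37.5%
```

Keeping the per-state breakdown, rather than just the final percentage, is what makes the metric actionable: the largest bucket tells you which lever to pull.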
Targets:
- Below 40%: Significant systemic waste. This is common but fixable.
- 40-60%: Normal. There's some waste but flow is reasonable.
- 60-75%: Good. Most of cycle time is active work.
- 75%+: Excellent. Minimal waste.
The value of this metric is not the absolute number but the breakdown. It tells you exactly where to optimize:
- High waiting time? Optimize reviews, deployments, and approval processes.
- High blocked time? Improve cross-team coordination and reduce dependencies.
- Active time is still long? Improve engineering efficiency (reduce rework, automate manual steps, break work into smaller chunks).
Systematic Efficiency Improvement
Once you've identified where time is being lost, here's how to systematically improve:
1. Make Waste Visible
You can't improve what you can't see. Start measuring and tracking the inefficiency killers.
- Code review turnaround time
- Deployment frequency and duration
- Time from ready-to-merge to production
- Incident response time
- Number of times work gets reopened for rework
- Meetings per engineer per week
This doesn't require perfect data. Rough estimates are fine.
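As a concrete example of rough measurement, review turnaround can be estimated from pull request timestamps. The records below are made-up sample data; field names are assumptions, and in practice you would pull created/first-review times from your code host's API.

```python
# Estimate code review turnaround from PR timestamps.
# The records are made-up sample data; in practice you would pull
# created/first-review times from your code host's API.
from datetime import datetime
from statistics import median

prs = [
    {"created": "2024-03-01T10:00", "first_review": "2024-03-01T15:00"},
    {"created": "2024-03-02T09:00", "first_review": "2024-03-04T11:00"},
    {"created": "2024-03-03T14:00", "first_review": "2024-03-03T14:45"},
]

def hours_to_first_review(pr: dict) -> float:
    created = datetime.fromisoformat(pr["created"])
    reviewed = datetime.fromisoformat(pr["first_review"])
    return (reviewed - created).total_seconds() / 3600

turnarounds = [hours_to_first_review(pr) for pr in prs]
print(f"Median review turnaround: {median(turnarounds):.1f}h")
# Median review turnaround: 5.0h
```

The median matters more than the mean here: one PR that waits two days over a weekend shouldn't hide the fact that typical turnaround is a few hours.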
2. Prioritize by Impact
Not all waste is equal. A 3-day wait for code review affects 100% of engineering. An occasional incident affects 10% of engineering intermittently.
Focus on the biggest sources of wait time first.
3. Implement Targeted Improvements
For each major source of waste:
Code review delays:
- Establish review SLA (2 hours)
- Rotate review responsibility to ensure someone is always available
- Automate trivial reviews (formatting, linting, obvious dependency updates)
- Keep PRs small so reviews are fast
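A review SLA is easy to enforce mechanically. A hypothetical sketch that flags PRs past a 2-hour SLA: the sample records are assumptions, and a real version would query your code host and post reminders to chat.

```python
# Flag open PRs that have waited past the review SLA.
# Sample data is illustrative; a real version would query your
# code host's API and post reminders to a chat channel.
from datetime import datetime, timedelta

REVIEW_SLA = timedelta(hours=2)

open_prs = [
    {"id": 101, "opened": datetime(2024, 3, 1, 10, 0), "reviewed": False},
    {"id": 102, "opened": datetime(2024, 3, 1, 13, 30), "reviewed": False},
]

def overdue(prs, now):
    """Return PRs that are unreviewed and past the SLA at time `now`."""
    return [pr for pr in prs
            if not pr["reviewed"] and now - pr["opened"] > REVIEW_SLA]

now = datetime(2024, 3, 1, 14, 0)
for pr in overdue(open_prs, now):
    print(f"PR #{pr['id']} has waited past the {REVIEW_SLA} review SLA")
```

Run on a schedule, a check like this turns the SLA from an aspiration into a visible, nagging default.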
Deployment delays:
- Move approval gates to automated quality checks
- Automate deployment so it doesn't require manual steps
- Test deployments in staging so production is safer
- Make rollback fast so failure isn't catastrophic
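Moving gates from human approval to automated checks can be as simple as a script that blocks deployment unless every check passes. A sketch: the check names and results are assumptions about what your CI pipeline reports.

```python
# Gate a deployment on automated quality checks instead of human sign-off.
# Check names and results are illustrative; a real version would read
# them from your CI pipeline's API.

def can_deploy(checks: dict) -> bool:
    """Deploy only when every automated gate passes."""
    return all(checks.values())

checks = {
    "unit_tests": True,
    "integration_tests": True,
    "security_scan": True,
    "staging_smoke_test": True,
}

if can_deploy(checks):
    print("All gates passed: deploying")
else:
    failed = [name for name, ok in checks.items() if not ok]
    print(f"Deployment blocked by: {', '.join(failed)}")
```

The point is that the gate is deterministic and instant: when the checks pass, nothing and no one else stands between ready code and production.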
Incident chaos:
- Document runbooks for common incidents
- Create clear escalation and notification rules
- Collect and organize knowledge from each incident
- Run blameless postmortems that identify system improvements
Knowledge silos:
- Document critical systems and architecture decisions
- Require code comments for non-obvious logic
- Create onboarding paths so new engineers can self-serve context
- Make documentation discoverable (good search, cross-linked, updated)
Meeting overload:
- Establish no-meeting blocks
- Require agendas and pre-reads for meetings
- Replace recurring meetings with async updates
- Set a company-wide standard: "Slack first, meetings only when necessary"
4. Measure Impact
After making changes, track the metrics again. Did review turnaround improve? Did flow efficiency increase? Did cycle time decrease?
The goal isn't perfection. It's continuous improvement. Each small win compounds.
Efficiency vs. Heroics
There's a trap that many organizations fall into: when efficiency is poor, they respond with heroics. Crunch time. All-hands-on-deck. Weekends and late nights.
This is treating the symptom, not the disease. Heroics feel productive—there's certainly a lot of activity. But they don't fix the underlying inefficiency.
Moreover, heroics create exhaustion. Exhausted engineers make mistakes. Mistakes create more incidents. More incidents demand more heroics.
The path forward is not harder work. It's smarter systems. It's eliminating the waste that makes work needlessly hard.
The Role of Tooling in Efficiency
Tooling can help, but only if it reduces waste rather than adding tool switching overhead.
Glue represents a different approach: instead of adding another tool that engineers have to use, agentic systems autonomously identify inefficiencies, route work, prevent bottlenecks, and synthesize information—without requiring engineers to change their workflow.
When issues are automatically triaged, you eliminate hours of manual classification and routing work. When bottlenecks are detected proactively, you can fix them before they create delays. When questions can be answered automatically by agents that understand the codebase, engineers don't get interrupted.
The goal isn't more tools. It's less friction. Automation that reduces friction without adding tool tax.
The Compounding Effect of Efficiency
Small improvements in efficiency compound dramatically. If you improve flow efficiency from 40% to 50%, throughput rises by 25%: the same work that took 8 days now takes 6.4 days.
If you also reduce deployment time from 2 hours to 30 minutes, cycle time drops further.
If you also reduce rework by improving specification and testing upfront, active time drops.
Across a team of 50 engineers, these improvements can add up to 10-15 additional person-years of productive capacity per year. That's the equivalent of hiring 10-15 more engineers, without the hiring cost, onboarding time, or added salary expense.
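The capacity arithmetic is straightforward, assuming active time per work item stays constant so throughput scales with flow efficiency:

```python
# Back-of-envelope capacity gain from a flow-efficiency improvement.
# Assumes active time per work item stays constant, so throughput
# scales with flow efficiency.
team_size = 50
flow_before, flow_after = 0.40, 0.50

throughput_gain = flow_after / flow_before - 1   # 25% more delivered work
extra_capacity = team_size * throughput_gain     # equivalent engineers

print(f"Throughput gain: {throughput_gain:.0%}")            # 25%
print(f"Equivalent added engineers: {extra_capacity:.1f}")  # 12.5
```

That 12.5-engineer equivalent is the midpoint of the 10-15 person-year range above; real gains depend on how much of the waste you actually remove.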
That's the power of systematic efficiency improvement.
Developer efficiency isn't about working faster or working harder. It's about removing the obstacles that make work unnecessarily slow and chaotic. It's about designing systems where good work flows naturally, not systems where good work requires heroic effort despite obstacles.
Related Reading
- Developer Productivity: Stop Measuring Output, Start Measuring Impact
- How to Improve Developer Experience: A 90-Day Playbook
- Programmer Productivity: Why Measuring Output Is the Wrong Question
- Cycle Time: Definition, Formula, and Why It Matters
- Engineering Bottleneck Detection: Finding Constraints Before They Kill Velocity
- Software Productivity: What It Really Means and How to Measure It