Code complexity measures how difficult code is to understand, test, and maintain. Higher complexity means higher risk.
Code complexity measurement is the practice of using quantitative metrics to evaluate how difficult a piece of source code is to understand, test, and maintain. Common metrics include cyclomatic complexity, cognitive complexity, and lines of code, each capturing a different dimension of how tangled or involved a codebase has become. These measurements give teams an objective basis for identifying areas of code that carry elevated risk.
Complex code is expensive code. Research by Microsoft found that modules in the top 20% of complexity account for over 60% of post-release defects. High complexity makes it harder for developers to reason about behavior, increases the likelihood of introducing bugs during changes, and slows down code reviews. Over time, unmeasured complexity quietly erodes a team's ability to ship with confidence.
Measurement converts a subjective feeling ("this code is messy") into an objective signal that can be tracked and acted upon. When a team can see that a specific module's cyclomatic complexity has risen from 15 to 45 over the past six months, that trend triggers a targeted refactoring conversation rather than a vague complaint. Connecting complexity data to code health dashboards helps teams maintain visibility across their entire codebase.
Complexity measurement also supports better planning. When a product manager asks why a seemingly simple feature change will take two weeks, complexity metrics provide evidence. Showing that the affected module has a cognitive complexity score in the 90th percentile explains the challenge in concrete terms that non-engineers can understand.
The most widely used metric is cyclomatic complexity, introduced by Thomas McCabe in 1976. It counts the number of linearly independent paths through a function, with higher numbers indicating more branches and decision points. A function with a cyclomatic complexity of 1 to 10 is generally considered manageable, while values above 20 signal a need for refactoring.
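To make the counting rule concrete, here is a minimal Python sketch (the function and its values are invented for illustration) with the decision points annotated; for simple structured code, each branch keyword adds one to a base count of one.

```python
def shipping_cost(weight_kg, destination):
    """Cyclomatic complexity = 4: one base path plus three
    decision points (if, if, elif)."""
    cost = 5.0
    if weight_kg > 10:                   # decision point 1
        cost += 12.0
    if destination == "international":   # decision point 2
        cost *= 2.5
    elif destination == "remote":        # decision point 3
        cost *= 1.5
    return cost
```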
Cognitive complexity, a newer metric developed by SonarSource, attempts to measure how difficult code is for a human to understand. Unlike cyclomatic complexity, it penalizes nested structures more heavily and rewards linear flow. This makes it a better proxy for the actual developer experience of reading and modifying code.
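The difference is easiest to see side by side. In the hypothetical sketch below, both functions behave identically, but a cognitive-complexity style of scoring would typically rate the nested version higher, because each additional level of nesting adds an increment, while the guard-clause version stays close to linear.

```python
# Nested version: each level of nesting raises the cognitive
# complexity score because the reader must track more context.
def find_admin_nested(users):
    if users:
        for user in users:
            if user.get("active"):
                if user.get("role") == "admin":
                    return user
    return None

# Flat version: same behavior, but guard clauses keep the reading
# effort (and the cognitive complexity score) lower.
def find_admin_flat(users):
    for user in users or []:
        if not user.get("active"):
            continue
        if user.get("role") == "admin":
            return user
    return None
```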
Teams typically integrate complexity measurement into their CI/CD pipeline. Automated checks flag pull requests that introduce functions above a defined complexity threshold, preventing new complexity from entering the codebase unchecked. Periodic full-codebase scans identify existing hotspots that accumulated before the checks were in place. Understanding the relationship between complexity and technical debt helps teams frame remediation work in terms of long-term maintainability.
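As a rough illustration of such a gate, the following self-contained Python script walks a file's AST and fails the build when any function exceeds a threshold. The counting rule is deliberately simplified and the threshold of 10 is only an example; in practice teams lean on the tools described below rather than a hand-rolled check.

```python
"""Minimal sketch of a complexity gate for CI. The counting rule
(decision points + 1) is a simplification, and the threshold is an
example; production pipelines typically rely on dedicated tools
rather than a hand-rolled AST walk."""
import ast
import sys

THRESHOLD = 10  # illustrative limit; tune to team policy

DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)


def cyclomatic_complexity(func: ast.AST) -> int:
    # 1 for the function's entry path, +1 per decision point found
    # anywhere inside it (nested functions are counted too, a
    # simplification that keeps the sketch short).
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(func))


def check(path: str) -> int:
    with open(path) as handle:
        tree = ast.parse(handle.read(), filename=path)
    failures = 0
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            score = cyclomatic_complexity(node)
            if score > THRESHOLD:
                print(f"{path}:{node.lineno} {node.name}() "
                      f"complexity {score} exceeds {THRESHOLD}")
                failures += 1
    return failures


if __name__ == "__main__":
    # A non-zero exit code makes the CI job fail when the gate is breached.
    total = sum(check(path) for path in sys.argv[1:])
    sys.exit(1 if total else 0)
```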
Static analysis tools provide the foundation for complexity measurement. SonarQube and SonarCloud compute both cyclomatic and cognitive complexity across multiple languages. Language-specific tools like Radon (Python), ESLint with complexity rules (JavaScript), and Gocyclo (Go) offer targeted analysis. CodeClimate aggregates complexity data into maintainability ratings.
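For example, Radon exposes a small Python API alongside its CLI. The sketch below assumes Radon's documented cc_visit and cc_rank helpers and their result attributes (install with pip install radon); the printed output is approximate.

```python
# Hedged sketch using Radon's Python API (radon.complexity).
from radon.complexity import cc_rank, cc_visit

source = '''
def route(order):
    if order.express:
        return "air"
    elif order.weight > 50:
        return "freight"
    else:
        return "ground"
'''

# cc_visit parses the source and returns one result per function/method.
for block in cc_visit(source):
    print(block.name, block.complexity, cc_rank(block.complexity))
# Expected output along the lines of: route 3 A
```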
Glue enriches complexity measurement by correlating static metrics with change frequency, contributor patterns, and delivery speed. A module with high complexity that rarely changes poses less immediate risk than one with high complexity that multiple developers modify every week. That contextual layering helps teams prioritize refactoring efforts where they will have the greatest impact.
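Glue's internal scoring is not spelled out here, but the general idea can be sketched: weight a module's complexity by how often, and by how many people, it changes. The names and weights below are purely illustrative.

```python
# Illustrative only: one common way to surface refactoring hotspots is
# to rank modules by complexity weighted by recent churn.
from dataclasses import dataclass


@dataclass
class Module:
    path: str
    complexity: int        # e.g. max or total cyclomatic complexity
    commits_last_90d: int  # change frequency from version control
    authors_last_90d: int  # distinct contributors touching the module


def hotspot_score(m: Module) -> float:
    # High complexity that nobody touches is parked risk; high
    # complexity under constant multi-author change is urgent risk.
    return m.complexity * m.commits_last_90d * (1 + 0.25 * m.authors_last_90d)


modules = [
    Module("billing/invoice.py", complexity=42, commits_last_90d=18, authors_last_90d=4),
    Module("legacy/reports.py", complexity=55, commits_last_90d=1, authors_last_90d=1),
]
for m in sorted(modules, key=hotspot_score, reverse=True):
    print(f"{m.path}: score {hotspot_score(m):.0f}")
```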
Functions with a cyclomatic complexity of 1 to 10 are generally considered low risk. Scores between 11 and 20 indicate moderate complexity worth monitoring. Scores above 20 typically signal that a function should be broken into smaller, more testable units.
Reducing complexity usually improves code, but not always. Over-decomposing code into many tiny functions can increase indirection and make the overall flow harder to follow. The goal is to reduce complexity to a level where each function is easy to understand and test, not to minimize the metric at the expense of readability.
High-complexity functions require more test cases to achieve full branch coverage. A function with a cyclomatic complexity of 15 has 15 linearly independent paths, each of which should be exercised by a test. Measuring complexity alongside coverage helps teams identify undertested high-risk areas.
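A small example of that relationship, using invented functions: a function whose cyclomatic complexity is 3 needs three targeted cases to exercise each of its linearly independent paths.

```python
import pytest


# Cyclomatic complexity = 3: the base path plus two decision points,
# so three test cases cover all branches.
def discount(total: float, is_member: bool) -> float:
    if total <= 0:       # path 1: invalid input
        raise ValueError("total must be positive")
    if is_member:        # path 2: member discount
        return total * 0.9
    return total         # path 3: no discount


def test_discount_paths():
    with pytest.raises(ValueError):
        discount(0, False)                              # exercises path 1
    assert discount(100, True) == pytest.approx(90.0)   # exercises path 2
    assert discount(100, False) == pytest.approx(100.0) # exercises path 3
```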