Technical Due Diligence in Software M&A: What It Is and Why It Matters
“In the fields of observation, chance favors only the prepared mind.” — Louis Pasteur
In software deals, tech diligence is business diligence. Architecture, infrastructure, and engineering maturity decide whether a company can sustain growth, protect margins, and hold its valuation. Financials may sparkle, but if the product can’t scale, or if security is shaky, the story falls apart fast.
Technical due diligence (TDD) is the disciplined review of a company’s technology, team, and roadmap. At heart, it answers two investor questions:
Can this platform carry the growth we’re underwriting?
What risks or opportunities in the tech could move enterprise value up or down?
Done well, TDD moves past surface impressions and reveals a product’s real scalability, resilience, and strategic headroom.
Throughout this article, I’ll illustrate each area with examples from a fictional TDD of a company I’ll call Harborview.
Core Areas of Technical Due Diligence
1) Code Quality & Architecture
What to assess: Is the codebase modular and maintainable, or a fragile monolith? Is technical debt visible and managed? Is the stack modern enough to hire and retain talent?
Good: Clear separation of concerns, consistent patterns, and straightforward naming. Automated reviews (linting, tests, security scans) run in CI. Documentation covers the big picture (diagrams, service boundaries) and the basics (how to add an endpoint, run tests, deploy). Debt is tracked and paid down regularly. The stack uses widely adopted languages/frameworks, with infrastructure as code so dev environments spin up quickly and new hires ship within a week.
Bad: A “big ball of mud” where everything is coupled to everything else. Business rules buried in controllers or SQL. Inconsistent libraries, giant PRs, and stale or missing docs. Debt ignored for years, unsupported frameworks hanging around, unreliable CI, risky manual migrations. Obscure tech choices make hiring slow and expensive; onboarding drags on for weeks.
Why it matters: Strong architecture speeds delivery, lowers maintenance, and scales teams and traffic. Weak architecture slows everything, raises costs, and caps growth—hurting valuation.
Harborview’s product is built in TypeScript and Node.js, with React handling the front end. At first glance, the separation between the layers looks reasonable, and there is some unit testing in place, though coverage only extends to about 45% of the most critical modules. As we dug deeper, however, a few red flags stood out. Several services are overloaded with responsibilities; one in particular, the so-called OrderProcessor, not only manages orders but also pricing, routing, and customer notifications. That sort of coupling creates fragility and slows development. Documentation is thin, and new hires say it takes six to eight weeks before they can contribute meaningfully. Technical debt is another sore spot. It isn’t tracked consistently, and in practice only about two percent of Jira tickets are tagged for refactoring. This tells us that while the system works today, it’s carrying a hidden burden that will grow heavier as the company scales.
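To make the coupling concrete, here is a minimal TypeScript sketch of how the OrderProcessor’s responsibilities could be pulled apart behind narrow interfaces. The names and method signatures are illustrative assumptions, not taken from Harborview’s actual codebase.

```typescript
// Hypothetical refactoring sketch: each concern currently bundled into
// OrderProcessor lives behind its own interface, so pricing, routing, and
// notifications can be tested, scaled, and changed independently.
interface PricingService {
  priceOrder(orderId: string): Promise<number>;
}

interface RoutingService {
  assignRoute(orderId: string): Promise<void>;
}

interface NotificationService {
  notifyCustomer(orderId: string, message: string): Promise<void>;
}

class OrderProcessor {
  constructor(
    private readonly pricing: PricingService,
    private readonly routing: RoutingService,
    private readonly notifications: NotificationService,
  ) {}

  // The processor now only orchestrates the workflow; it owns no business rules itself.
  async process(orderId: string): Promise<void> {
    const total = await this.pricing.priceOrder(orderId);
    await this.routing.assignRoute(orderId);
    await this.notifications.notifyCustomer(orderId, `Your order total is ${total}.`);
  }
}
```

The point of a structure like this is not purity for its own sake: it lets a team change pricing logic, swap a notification provider, or load-test routing in isolation, which is exactly what a tightly coupled service prevents.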
2) Infrastructure & DevOps Maturity
What to assess: Cloud-native practices vs. manual server care. CI/CD health. Resilience under failures and spikes.
Good: Infra defined in code (Terraform/Pulumi/CloudFormation). Containers and managed services used where sensible. Standardized CI/CD with tests, security checks, canaries/blue-green deploys, and reliable rollbacks. Monitoring covers uptime, latency, errors, and capacity; autoscaling and tested DR plans are in place; postmortems drive improvements.
Bad: Hand-built “pet” servers and inconsistent environments. Risky, infrequent releases that require all-hands calls. Minimal monitoring, noisy or missing alerts, manual scaling, and untested backups. Outages linger and fixes depend on heroics.
Why it matters: Modern DevOps cuts costs, increases velocity, and improves reliability. Legacy setups hide future capex and execution risk.
On the infrastructure side, Harborview is running on AWS with Kubernetes in place, which on paper sounds modern. A basic CI/CD pipeline exists, and developers can push code without too much ceremony. But when we looked at how environments are managed, it became clear the setup leans heavily on manual effort. There’s no infrastructure-as-code. Staging, for example, was configured manually and only one engineer fully understands it. This creates both risk and bottlenecks. Deployments don’t have automated rollbacks, so when an outage struck in May, it took nine hours of manual work to recover. Monitoring is patchy at best: while logs are collected, there’s no standardized approach to metrics or alerting. Taken together, the infrastructure is functional but fragile, and would likely require significant investment to reach enterprise-grade maturity.
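For comparison, here is a minimal infrastructure-as-code sketch using Pulumi’s TypeScript SDK (Terraform or CloudFormation would serve the same purpose). The resources and names are illustrative assumptions, not Harborview’s real configuration; the point is that an environment defined this way can be recreated by any engineer from the repository rather than from one person’s memory.

```typescript
import * as aws from "@pulumi/aws";

// Declaring a staging resource in code rather than by hand: the definition is
// version-controlled, reviewable, and reproducible by anyone on the team.
const stagingAssets = new aws.s3.Bucket("harborview-staging-assets", {
  tags: { environment: "staging", managedBy: "pulumi" },
});

// Exported so other stacks and scripts can look up the generated bucket name.
export const stagingBucketName = stagingAssets.id;
```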
3) Security & Compliance
What to assess: Patch hygiene, access controls, and third-party/library risk. Data encryption and segregation. Alignment with SOC 2, ISO 27001, HIPAA, GDPR, PCI (as relevant).
Good: Automated scanning and fast patching; least-privilege access with MFA and audit logs; secrets in a vault. Data encrypted in transit and at rest with managed keys; production data segregated; encrypted, tested backups. Certifications are current and controls operate in practice, not just on paper. Regular audits and pen tests.
Bad: Unpatched CVEs, shared admin accounts, secrets in code. Real customer data in test, unencrypted backups, weak environment segregation. Compliance treated as a checkbox; outdated evidence; few or no audits.
Why it matters: Strong security prevents fines, churn, and reputational hits and can be a sales edge in regulated markets. Weak posture creates liabilities and drags on valuation.
Security practices at Harborview show a mixed picture. On the positive side, all services enforce TLS, and passwords are hashed with bcrypt, which is a good baseline. But we found two serious vulnerabilities, one in OpenSSL and another in a payment library, that were flagged months ago and still haven’t been patched. Secrets such as API keys and database credentials are stored in environment files within private repos instead of a secure vault, which creates unnecessary exposure. Compliance is thin as well: the company has not pursued SOC 2 or ISO certification, and no penetration testing has been conducted in the last year and a half. While nothing catastrophic has happened yet, the current posture would not pass muster with enterprise buyers, especially those in regulated industries.
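One straightforward remediation is moving credentials out of committed environment files and into a managed secret store. Below is a sketch using AWS Secrets Manager via the AWS SDK for JavaScript v3; the secret name and payload shape are hypothetical.

```typescript
import {
  GetSecretValueCommand,
  SecretsManagerClient,
} from "@aws-sdk/client-secrets-manager";

const client = new SecretsManagerClient({ region: "us-east-1" });

// Credentials are fetched at runtime from the vault, so nothing sensitive
// lives in the repository and rotation happens without a code change.
export async function getDatabaseCredentials(): Promise<{ user: string; password: string }> {
  const response = await client.send(
    new GetSecretValueCommand({ SecretId: "harborview/prod/db" }), // hypothetical secret name
  );
  return JSON.parse(response.SecretString ?? "{}");
}
```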
4) Scalability & Performance
What to assess: Can the platform handle projected growth in users, data, and integrations? Horizontal scale vs. costly rewrites. Load-test results and behavior under stress.
Good: Stateless services, thoughtful caching, read replicas/partitioning, and async processing via queues/streams. Routine load tests with clear SLOs; autoscaling and dashboards keep performance predictable as demand surges, with no re-architecture required.
Bad: Tight coupling and stateful services that only scale up, not out. Single-instance database bottlenecks. Little to no load testing; spikes cause timeouts, outages, and data loss. Scaling means rewriting major components.
Why it matters: If growth depends on doubling users, the platform must carry the load without massive rework. Otherwise the thesis cracks.
The system handles about three thousand active daily users today without major issues, but it’s clear the architecture isn’t ready for the kind of growth projected in the business plan. The PostgreSQL database is a single instance, with no replication or partitioning strategy. Engineering leaders admitted they don’t really know how the platform behaves beyond ten thousand concurrent users, since no meaningful load testing has been done since 2022. APIs don’t enforce rate limits, leaving the system vulnerable to both accidental and malicious traffic spikes. These are the sorts of weaknesses that don’t matter at small scale but can become show-stoppers if user growth accelerates.
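Of these gaps, rate limiting is typically the cheapest to close. Here is a minimal sketch using Express with the express-rate-limit middleware; the thresholds and routes are illustrative rather than a tuned recommendation for Harborview.

```typescript
import express from "express";
import rateLimit from "express-rate-limit";

const app = express();

// Cap each client at 100 requests per 15-minute window; excess requests get a
// 429 response instead of piling load onto the single database instance.
const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
  standardHeaders: true,
});

app.use("/api/", apiLimiter);

app.get("/api/orders", (_req, res) => {
  res.json({ orders: [] });
});

app.listen(3000);
```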
5) Product Roadmap Alignment
What to assess: Does the roadmap advance strategy, upsell, and retention? Is capacity spent on innovation or stuck maintaining legacy? Are backlog and releases disciplined?
Good: Priorities reflect strategy, customer input, and competitive signals. Most capacity drives new value while debt is managed deliberately. Groomed backlogs, clear acceptance criteria, predictable cadence, and transparent communication.
Bad: Roadmaps drift with the loudest voice. Teams burn cycles on firefighting and manual support. Backlogs rot, priorities whipsaw, and releases slip or ship half-done. Sales and customers lose trust.
Why it matters: Alignment channels engineering into value creation; misalignment forces costly resets after close.
Harborview’s product roadmap is ambitious, featuring predictive analytics modules and deeper third-party integrations. The ideas are good, but execution is hampered by the fact that a majority of engineering capacity, around sixty percent by the company’s own estimate, is consumed maintaining an aging “RouteBuilder v1” module. This drags out innovation timelines. Backlog management is inconsistent, with tickets more than a year old still sitting untouched. Releases come out irregularly, with nine to twelve weeks often passing between them. From a diligence perspective, this means the roadmap is aspirational but not necessarily achievable without freeing up capacity from legacy maintenance.
6) Team & Process Effectiveness
What to assess: Right-sized, scalable org design. Single points of failure. Maturity of QA, incident response, and documentation.
Good: Clear roles (PM owns priority, tech leads own architecture, engineers own delivery). Multiple people know each critical subsystem; docs, runbooks, and diagrams back them up. Layered automated tests; structured incident response with postmortems; smooth onboarding.
Bad: Over- or under-staffed teams with fuzzy ownership, contractor dependence, and “hero” engineers as the only safety net. Sparse docs, inconsistent testing, reactive firefighting, and slow onboarding.
Why it matters: Strong teams reduce execution risk and keep delivery consistent. Fragile orgs delay the roadmap and raise stabilization costs.
The engineering leadership is a bright spot; the CTO is seasoned and well respected. Yet the team structure reveals fragility. Certain subsystems depend entirely on one person: billing is owned by a single engineer, Kubernetes configuration by another. If either were to leave, knowledge gaps would put the business at risk. Cultural markers are concerning as well: employee reviews mention burnout and chaotic release cycles, which aligns with the irregular release history. Onboarding is slow because documentation hasn’t kept pace, forcing new hires to rely on “tribal knowledge.” While the team has talent, the way it’s organized creates single points of failure that add execution risk.
Why TDD Matters
Risk mitigation: Surfaces issues that can hit value (looming rewrites, security gaps, brittle infrastructure) before they surprise you post-close.
Value preservation: Confirms the platform can truly scale to support premium software multiples.
Value creation: Identifies levers that lower cost and speed delivery, such as cloud modernization, DevOps automation, and security/compliance upgrades.
Integration & exit: Sets a clear plan for platform integration and improves readiness for the next diligence cycle.
Executive Takeaway
When you buy a software company, you’re not just buying financials or a customer list. You’re buying the technology itself. And if that technology is fragile, outdated, or unable to scale, the deal can unravel quickly, no matter how strong the revenue looks on paper.
That’s where technical due diligence (TDD) comes in. It’s the process of getting under the hood to see how the product is really built, how the engineering team operates, and whether the platform can actually support the growth story being sold. Done right, TDD answers three critical questions:
Can this product scale? Will it handle more users, more data, and more complexity without massive rework?
Where are the risks? Are there security gaps, brittle infrastructure, or heavy technical debt waiting to surface post-close?
Where’s the upside? What changes (cloud modernization, DevOps automation, security upgrades) could accelerate value creation?
For investors and operators, TDD is about protecting the downside and unlocking the upside. It helps prevent nasty surprises like costly rewrites or hidden compliance failures, while also highlighting opportunities to speed up delivery, cut costs, and strengthen the product’s competitive edge.
At the end of the day, software companies don’t just run on code. They run on the quality of their systems, the strength of their teams, and the clarity of their strategy.