Most custom software projects that fail don't fail because of bad code. They fail because the problem wasn't understood properly before the build started.
The pattern is familiar. A business has a problem: a broken process, a gap in their systems, something manual that should be automated. They find a development team. They describe what they want. The developer builds it. And three months later, the business has a system that technically does what was specified but doesn't solve the actual problem.
This happens because what people describe as the problem is almost never the full problem. It's a symptom, or a proposed solution, or the version of the problem that's easiest to articulate. The real problem, the one that's actually causing the pain, is usually a layer or two deeper.
The request is often not the problem.
When a client says "we need a dispatch system," they're describing a category of solution, not a problem. What they actually need depends on questions that haven't been asked yet: How does dispatch currently work? Where does it break down? Who does it, and what do they need to see? What happens when a driver doesn't get the message? What's the relationship between dispatch and billing?
A developer who skips straight to the build is effectively asking: "How can I build something that fits the description?" A developer who scopes properly is asking: "What problem actually needs to be solved?"
The difference sounds obvious. In practice, it's easy to skip. Scoping takes time. The client wants to move fast. The developer is eager to start building. And everyone is operating on the assumption that the spec sheet is the problem, rather than a first approximation of it.
Map the real process before designing the solution.
Good scoping means spending serious time on the problem before you design any solution. It means mapping the actual process, not the idealised version, and finding the places it breaks down. It means asking who uses the system and what they actually need it to do. It means surfacing the constraints that the client didn't think to mention because they've lived with them so long they've stopped noticing.
At SSS, our scoping process typically involves a two-to-three-hour session where we walk through the problem end to end. We ask about edge cases. We ask about workarounds. We ask about the things that happened that nobody wants to talk about. This is the session where we find out that the drivers use low-end phones with unreliable data, or that the billing process is manual and disconnected, or that there are three people who can authorise something but only one who actually does.
Those details change the design. Sometimes they change the problem statement entirely.
The shortcut usually becomes the expensive part.
Bad scoping is expensive in ways that aren't always obvious at the start. The most visible cost is rework: building something, realising it doesn't fit, and rebuilding it. But there's a subtler cost too: the demoralisation of a team that built something carefully and correctly, only to find that it doesn't actually solve the problem.
The other cost is time. A project that gets scoped properly upfront and takes three months is almost always cheaper than a project that skips scoping and takes six, because every week spent building in the wrong direction is a week that's hard to recover.
The uncomfortable truth is that most bad software projects are predictable. Not because the developers were incompetent, but because the problem was never understood well enough to solve it. Good scoping is the thing that makes the rest of the build work. Skipping it is the most expensive shortcut in software development.
Sharp Software Solutions builds custom software for South African businesses. Every engagement starts with a scoping session. We don't write a line of code until we understand the actual problem.
Start a conversation →