If you like systems thinking, you’ve probably heard that the optimal amount of fraud is non-zero (opens in new tab). The argument is essentially that eliminating the next dollar of fraud costs more than the dollar it saves, and businesses naturally land at this equilibrium because they already internalize fraud losses.
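That marginal argument can be sketched as a toy cost curve. The functions and numbers below are invented purely for illustration, not drawn from any real fraud data: prevention spend buys diminishing returns, so the total-cost minimum sits at a point where some fraud still remains.

```python
import math

# Toy numbers, purely illustrative: prevention spend reduces the
# remaining fraud loss with diminishing returns.
def fraud_loss(spend):
    return 100 * math.exp(-spend / 50)

def total_cost(spend):
    return spend + fraud_loss(spend)

# Scan spend levels and take the one minimizing total cost.
best = min(range(0, 501), key=total_cost)
print(best, round(fraud_loss(best), 2))  # the optimum leaves fraud > 0
```

In this sketch the minimum lands at a spend of 35 with roughly half the fraud still outstanding; every dollar spent past that point eliminates less than a dollar of fraud.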
Where else might the optimal amount of a bad thing be non-zero? Strategy frameworks look like a good candidate, but the way they actually operate is worth dissecting.
Inverting frameworks
In a previous post (opens in new tab) I talked about the strategic value of interrogating the metaphors you live by to identify where their lensing effect is greatest. My suggestion was to imagine the strengths, weaknesses, and conditions of possibility that arise from the logical inverse of your situation. Now I’d like to test this idea against management strategy itself, which is its own kind of metaphor.
Let’s start with Prahalad and Hamel’s core competencies. In “The Core Competence of the Corporation”, they frame success as a coordination problem by asking what a firm uniquely does well, not necessarily within a single business line but across lines. The authors go on to characterize core competencies as providing broad potential market access, contributing to perceived customer benefits, and being difficult to imitate.
This is the standard reading. But what if we flip this conception of the org on its head? Imagine an organization with literally no core competencies, maybe proficient only in common, easily copied areas. Surely such a firm would completely fail to differentiate itself on the simplest terms. But there’s actually a trivial counterexample that interrupts the metaphor! Picture an organization that has access to no markets beyond its own and that is trivially easy to imitate from an operational perspective. This sounds a lot like a public utility. Your energy utility does precisely one thing, is entirely uninterested in entering other markets, isn’t directly downstream of any market incentives, and probably only does things that many industry competitors could match on a technical and operational level.
You may protest the fairness of this point, since utilities exist by statutory fiat, not because they’re necessarily any good at the job. But that causal logic is backwards: the mandate creates the need to provide a good-enough service. A mundane “good enough” service is unintuitively valuable. Your water utility serves you a plentiful resource in a completely imitable way, but the plenitude of water is precisely what makes it valuable to provide at cost and at scale.
You might also object that this inversion suggests only that entrepreneurs shouldn’t try to start a business that supplies a town with water. And that’s probably true! But again, the real point is that the inverse of a traditionally “successful” business can be consistent with success too — it just depends on how you define your market. This exposes the underlying weakness: the core competency model rests on a shaky assumption about your boundary conditions. Success can be selected for by political factors, not just economic ones.
Flow-asymmetry
So frameworks are a bounded metaphor for organizational success, but they’re not the analysis itself. This comes to light when your bank resurrects a 2008-era value-at-risk model or when your shoe company starts selling AI compute. But I’d like to argue in favour of suboptimal frameworks — or at least to explain why they necessarily exist at your natural equilibrium point.
The obvious reason to use a bad framework is that better ones impose coordination taxes that could exceed the marginal increase in value. Let’s take VRIO (opens in new tab): on some level it doesn’t matter that this framework produces “wrong” intuitions about utilities if you can’t agree with your colleagues about what parts of VRIO still work or how to apply them. (Can we just apply VIO? Or is it RIO? Maybe one-half of each?) If you need to write an HBR article just to figure out which fractions of a strategy framework to use, then you’ve wasted valuable time. Agreeing on someone’s VRIO analysis — no matter how much we might privately question the framework’s fitness for purpose — saves us the tax by maximizing the surface area of our shared assumptions.
But this is only true to a first approximation.1 Early in law school, I questioned the value of adopting Driedger’s rule (opens in new tab) or the purposive interpretation approach. And why would anyone prefer incrementalist, judge-made common law when the neatly mathematical and dispute-preempting civil code exists? To some extent, these objections are valid! While precision has a cost that my common-law-advocating readers may point to — maintaining a civil code requires heavy machinery — it also confers significant benefits. But critically, the cost of that precision is displaced onto two substantially resourced branches of government. This is a structurally neat solution because the population that would otherwise bear the cost of vague or incomprehensible laws actually elects the legislature, which closes the accountability feedback loop.
Things are a bit worse in your organization. There are the coordination-tax costs to precision that we discussed above, sure. But more importantly there are costs to vagueness, and they’re systemically trickier to solve because of flow asymmetry. When your strategists make a vaguely articulated decision, your operational teams scramble and your analysts write a post-hoc justification for the play. But the strategists don’t feel any of this; vague language retains optionality and can be reinterpreted without cost as consultancy, whereas a more precise framework doesn’t directly benefit them. And unlike in the civil law, there’s no accountability loop to speak of, because creating an institution to bear that cost would actually shift it back onto those who set the framework. So the suboptimal play emerges from the asymmetry of cost and benefit flows.
The catch is that the two pressures we’ve discussed — between political and market effects in our utility example, and between precision and the cost of framework maintenance — encode the same accountability failure. Business framework developers don’t gain much by controlling for non-market-captured players, and a big-picture strategist doesn’t directly benefit from giving the analysts a more precise playbook.2 In both cases, the party best placed to improve the framework isn’t accountable to the party who would benefit from the fix. This is the gamed counterweight problem (opens in new tab) that I wrote about, just in another domain. Either way, we’re missing an arm’s-length institution with the right incentive structure to preempt path-dependence toward the bad-strategy equilibrium.
The systems fix looks like this: if the framework owner pays the cost of imprecision, you’ve got a non-zero optimum. If they don’t, you have a problem. Call the solution foresight or environmental scanning or a strategy red team or a VSM S4 (opens in new tab) — the name doesn’t matter as much as its placement and what it’s accountable for. Relocating who pays for framework wrongness moves the equilibrium toward zero.
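The placement argument can be made concrete with a similarly toy optimization — again, every function and number here is invented for illustration. If the framework owner doesn’t pay the imprecision cost, their privately optimal precision effort is zero; route the cost back to them and a non-zero effort level becomes rational.

```python
# Invented numbers: vagueness imposes a cost that falls as the
# framework owner invests precision effort.
def imprecision_cost(effort):
    return 100 / (1 + effort)

def owner_cost(effort, internalizes):
    # The owner always pays for their own precision effort; they only
    # feel the imprecision cost if accountability routes it to them.
    return effort + (imprecision_cost(effort) if internalizes else 0)

efforts = range(0, 101)
detached = min(efforts, key=lambda e: owner_cost(e, internalizes=False))
accountable = min(efforts, key=lambda e: owner_cost(e, internalizes=True))
print(detached, accountable)  # detached owner rationally invests nothing
```

With the cost internalized, the owner’s optimum moves from zero effort to a positive level — which is exactly the equilibrium shift that an accountable S4-style institution is meant to produce.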
Footnotes
1. Not surprisingly, “vague is actually good” is a tough sell for a systems thinking blog. ↩
2. There are certainly second-order benefits to setting out a precise strategy, but these eat into your time, and it’s unclear if they’d offset the loss of the first-order optionality that vague strategy enables. ↩