Twins of the Mind: Why the Next Digital Frontier Is Human

I’ve been thinking a lot lately about the human and behavioural dimension of interoperability in the defence landscape, about how digital capability pieces it all together, and about what we can learn when we consider them side by side. Which got me into the realm of twinning.

Picture two commanders sitting side by side in an allied exercise, screens glowing with satellite feeds, logistics dashboards, weapons alerts and data streams, à la every Hollywood movie. Everything connects. Data flows without a hitch. Radios sync. Protocols align.

And yet, there’s tension. One commander, under stress, grows cautious. The other pushes for bold action.

Neither is wrong, but this mismatch could prove fatal. And here’s the thing: the problem isn't the network. It isn't even the code. It's the minds.

This is where the idea of a digital twin enters the conversation.

At its simplest, a digital twin is a high-fidelity virtual replica of something physical: an aircraft engine, a ship, even a whole city.

Fed by streams of real-world data, the twin mirrors how its counterpart behaves. Engineers can stress it, break it, reroute it, all without touching the original. If the jet engine vibrates, the twin vibrates too. If a supply chain falters, the twin reveals the choke points.
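
To make that feedback loop concrete, here’s a deliberately simple sketch of the pattern in code. Everything in it, from the EngineTwin name to the vibration threshold, is invented for illustration rather than drawn from any real engine program; the point is only the loop itself: ingest real-world readings, mirror the state, and ask the twin questions you would never risk asking the original.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a digital twin loop: the twin ingests the same
# telemetry as the physical asset and flags drift before it becomes a fault.
# All names and thresholds are illustrative, not any real engine's spec.

@dataclass
class EngineTwin:
    vibration_limit: float = 0.8              # illustrative threshold only
    history: list = field(default_factory=list)

    def ingest(self, telemetry: dict) -> None:
        """Mirror the latest real-world reading."""
        self.history.append(telemetry)

    def health_check(self) -> str:
        """Ask the twin a question we can't safely ask the real engine."""
        latest = self.history[-1]
        if latest["vibration"] > self.vibration_limit:
            return "inspect before next flight"
        return "nominal"

twin = EngineTwin()
twin.ingest({"vibration": 0.3, "temp_c": 620})   # routine reading
twin.ingest({"vibration": 0.9, "temp_c": 655})   # drift the pilot never feels
print(twin.health_check())                        # -> "inspect before next flight"
```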

Digital twins aren't just laboratory curiosities either, or the dreams of Taylor Swift and Travis Kelce fans. They’re already reshaping how the world works.

Rolls-Royce runs virtual replicas of its jet engines, fed minute-by-minute data from those flying across the globe, so that engineers can spot a fault before a pilot ever feels it.

Singapore built "Virtual Singapore," a full 3D twin of the city, where planners test how floods, fires or new buildings will ripple through streets and communities.

And in the military sphere, NATO's Federated Mission Networking spins up interoperable coalitions with twin-like precision, rehearsing systems and standards before forces ever deploy.

These examples are impressive, but notice the pattern? We mirror machines, cities, and networks. We don't mirror the humans who interpret them, or the minds that fracture under pressure.

That gap matters.

And I’d hazard a guess that every after-action review carries the same refrain, whether literally or on reflection: failures rarely come from radios or servers. They come from commanders misaligned in how they perceive risk, when they trust their machines, and how fatigue reshapes their judgement. We have convinced ourselves that interoperability is a technical puzzle, when it’s more often a psychological one.

Now imagine if we twinned the mind as deliberately as we twin a machine.

A human cognitive-affective twin wouldn't be some sci-fi hologram of consciousness, but a practical model stitched together from science we already know works.

Heart-rate variability and pupil dilation can show when a person is approaching overload.

Decades of behavioural studies chart how trust in automation ebbs and flows.

Military psychology has documented how bias hardens under stress and how ethical strain reshapes choices in real time.

These fragments already exist in labs and journals; they've simply never been assembled into a single mirror of human decision-making under pressure.
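
To show what "assembled into a single mirror" might mean in practice, here’s a rough sketch. The signals are the ones the research actually points to; the CognitiveTwin name, the weights, the 60 ms baseline and the fusion logic are my own illustrative assumptions, not a validated model.

```python
from dataclasses import dataclass

# A deliberately crude sketch of stitching the fragments together.
# The signals (HRV, pupil dilation, automation trust) are real research
# constructs; the scales, weights and fusion are illustrative assumptions.

@dataclass
class CognitiveTwin:
    hrv_ms: float            # heart-rate variability; tends to drop under stress
    pupil_dilation: float    # 0..1 normalised; higher suggests rising load
    automation_trust: float  # 0..1, from behavioural calibration tasks

    def cognitive_load(self) -> float:
        """Fuse physiological signals into a rough 0..1 load estimate."""
        hrv_strain = max(0.0, 1.0 - self.hrv_ms / 60.0)  # 60 ms baseline is an assumption
        return round(min(1.0, 0.5 * hrv_strain + 0.5 * self.pupil_dilation), 2)

    def advisory(self) -> str:
        """What the twin might whisper to a commander's staff."""
        if self.cognitive_load() > 0.7 and self.automation_trust < 0.4:
            return "high load, low trust: expect conservative, manual decisions"
        if self.cognitive_load() > 0.7:
            return "high load: simplify the picture, confirm intent"
        return "within normal bounds"

commander = CognitiveTwin(hrv_ms=22.0, pupil_dilation=0.8, automation_trust=0.3)
print(commander.advisory())  # -> "high load, low trust: expect conservative, manual decisions"
```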

And this is where the bigger shift comes into view. When we talk about interoperability today, we usually mean cables, protocols, software standards. But what really breaks coalitions in practice isn't the plumbing. It's the people. Stress thresholds differ. Risk appetites diverge.

One culture leans toward over-trust in machines, another toward suspicion. Biases push commanders to opposite conclusions even when they're staring at the same data.

The system works, but the minds don't.

Interoperability of minds would mean training, simulating and rehearsing these human variables as rigorously as we do bandwidth and logistics. Stress could be treated like ammunition: finite, consumed unevenly, and measurable.

Trust in automation could be tracked like latency, a metric that tells us when allied teams are falling out of sync. Biases could be deliberately injected into simulations, so commanders practice countering them before they warp decisions.

Even moral stress could be surfaced early, making resilience a collective asset rather than an afterthought.
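
If that sounds abstract, a toy version might look something like this. Every name, number and threshold below is made up; the sketch only shows how "stress as ammunition" and "trust as latency" could become things we actually log, budget and compare across allied teams.

```python
# A toy "stress budget": stress treated like ammunition, finite and consumed
# unevenly, with trust in automation tracked alongside it like latency.
# Events, costs and the drift tolerance are all invented for illustration.

class StressBudget:
    def __init__(self, capacity: float = 100.0):
        self.remaining = capacity
        self.trust_log: list[float] = []

    def spend(self, event: str, cost: float, trust_reading: float) -> None:
        """Log what each event costs, and where automation trust sits afterwards."""
        self.remaining -= cost
        self.trust_log.append(trust_reading)
        print(f"{event}: budget {self.remaining:.0f} remaining, trust {trust_reading:.2f}")

    def out_of_sync(self, allied_trust: float, tolerance: float = 0.25) -> bool:
        """Flag when two teams' trust in the same systems drifts too far apart."""
        return abs(self.trust_log[-1] - allied_trust) > tolerance

commander = StressBudget()
commander.spend("comms outage", cost=15, trust_reading=0.65)
commander.spend("ambiguous sensor track", cost=30, trust_reading=0.40)
print("allied drift:", commander.out_of_sync(allied_trust=0.80))  # -> True
```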

This isn't fantasy. In my mind it’s the logical next step if we take seriously what modern science already tells us about how humans think under fire. And in an era where alliances, not arsenals, are the deciding factor, the coalitions that learn to align minds as well as machines will hold the real advantage.

Which is why this is just a beginning. I’ve been thinking about more of this than can fit into one article and still be interesting enough to read and digest. So, in true digital fashion, more parts will follow! In the next pieces I plan to dive deeper into the idea of a "stress budget" and what it might look like in practice, into how bias rehearsals could sharpen decision-making, and into why moral resilience deserves to be treated as a capability in its own right - though I need to think more on that last one. The merit makes sense to me, though.

Bottom line is… the next digital twin we need is not of hardware, but of ourselves.

I guess the big question is whether we're willing to face the reflection.

All views expressed in this article are by no means representative of the Department or the ADF, or related to any current or future projects. The views and opinions are entirely my own based on publicly available information.
