The Architecture of Trust

Jonathan Gill, March 2026

Trust is not a policy. It is not a compliance checkbox or a brand campaign. It is, as Rachel Botsman puts it, "a confident relationship with the unknown." That definition has stayed with me since I first encountered her work. It captures something that most institutional thinking about trust misses entirely: trust is not about certainty. It is about what we are willing to accept in the absence of certainty.

I have spent over thirty years working in technology and cybersecurity, most of it inside banks. Banks are built on trust. The very word "credit" derives from the Latin credere, to believe. When a customer deposits money, they are not simply storing value. They are making an act of faith that the institution will honour its obligation to return it. That faith is the foundation. Everything else, the products, the interest rates, the mobile apps, sits on top of it.

And yet, the people who build and run these systems rarely talk about trust in these terms. They talk about uptime. They talk about SLAs. They talk about risk appetite. These are useful constructs, but they describe the mechanics, not the meaning. They tell you how the engine works, not why the passenger got in the car.

The Three Ages of Trust

Trust has moved through three broad phases. The first is local trust: small communities where reputation was personal and consequences were immediate. You trusted your neighbour because you knew them, and because the social cost of betrayal was high.

The second phase is institutional trust. As societies scaled beyond village boundaries, we invented mechanisms to bridge the gap between strangers: contracts, courts, insurance, brands, regulators. Banks are a product of this era. We didn't need to know the individual behind the counter. We trusted the institution, and by extension, the system of rules and oversight that governed it.

The third phase, which Botsman calls distributed trust, is where we find ourselves now. Technology has taken trust that was centralised in institutions and pushed it sideways, into networks and platforms. We rate our Uber driver. We review our Airbnb host. We trust a stranger's opinion on TripAdvisor more than a hotel chain's marketing department. The flow of trust has changed direction, from vertical to horizontal.

This is not a story of decline. Trust is not being destroyed; it is being redistributed. Like energy, it changes form. The question is not whether people trust, but where they place that trust, and whether the new repositories deserve it.

Technology as Trust Broker

Here is where it gets interesting, and where it gets dangerous.

Technology platforms have become remarkably effective at manufacturing trust between strangers. The genius of Airbnb is not the listing; it is the rating system, the verified photos, the secure payment, the mutual review. These mechanisms reduce the unknown just enough for someone to hand their house keys to a person they have never met. That is a substantial trust leap, and it works because the platform has engineered the conditions for it.

But the same dynamic creates a fragility that institutions never had. When trust is distributed through technology, it becomes dependent on the technology working. Not just working well. Working perfectly.

Research into human-AI interaction has uncovered something called the "perfect automation schema." When we interact with a system rather than a person, our tolerance for error collapses. A human bank teller who miscounts cash is having a bad day. A banking app that shows the wrong balance is broken. The same error, interpreted through entirely different frames. We extend grace to people. We do not extend it to systems.

This asymmetry has consequences that most technologists underestimate.

When the Machine Stumbles

In 2018, TSB attempted to migrate the records of 5.2 million customers to a new platform. The migration failed. Nearly a million customers could not access their accounts. Some could see other people's account information. The total cost exceeded £300 million, and the CEO eventually resigned.

The technical failure was bad. But what destroyed trust was not the outage itself. It was the response: the slow acknowledgement, the downplaying, the sense that the institution did not fully grasp what it had done to the people who depended on it.

In 2012, RBS suffered a batch processing failure that left customers unable to access their money for days. One customer in a Mexican hospital was threatened with the termination of his life support because the bank could not process a payment. The failure was a software update gone wrong. The human impact was existential.

In 2020, Citibank employees accidentally transferred $900 million to creditors instead of $7.8 million, because of a confusing user interface. A design flaw, a moment of inattention, and nearly a billion dollars moved to the wrong place.

These are not edge cases. They are the predictable consequence of building critical systems without understanding what trust actually requires.

Capability and Character

Botsman's framework for trustworthiness draws a sharp line between capability and character. Capability is the "how": competence, reliability, the ability to deliver on promises. Character is the "why": empathy, integrity, the alignment of motives with the interests of the people being served.

Most organisations, when trust breaks down, focus their response on capability. They announce system upgrades. They publish incident reports. They hire more engineers. These are necessary steps, but they address only half the equation.

What people actually need to see after a failure is character. They need to see that the organisation understands the impact, not just the root cause. They need to believe that the people in charge care about getting it right, not just getting it fixed. The distinction matters. A capability failure met with a character response can actually strengthen trust. A capability failure met with corporate defensiveness, delayed accountability, or performative transparency will accelerate the collapse.

Most trust breakdowns in the world today are character problems, not capability problems. People do not stop trusting their bank because of a single outage. They stop trusting it when the outage reveals that the institution's priorities are not aligned with theirs. When the apology sounds scripted. When the compensation feels calculated rather than genuine. When the CEO appears on television and you can tell they are managing a reputation rather than confronting a failure.

The AI Trust Problem

This dynamic is about to intensify. As organisations move AI systems into decision-making roles, they are asking customers and employees to make trust leaps of a kind we have not seen before. Trusting a human advisor who occasionally gets it wrong is one thing. Trusting an algorithm whose reasoning you cannot see is something else entirely.

When we talk about trust in AI, we need to follow the question "do you trust it?" with "to do what?" Trusting an AI to recommend a playlist is trivial; trusting it to approve a mortgage or flag a transaction as fraudulent is not. Context defines the stakes. Stakes define the trust requirement.

The research on this is sobering. People initially show a degree of appreciation for algorithmic advice, but a single visible error triggers a disproportionate trust collapse. And the path back is narrow: transparency about what went wrong helps, but only if it is genuine, timely, and demonstrates that the organisation understands the human consequences.

But visible errors are only half the problem. The more corrosive threat is the silent failure: the system that appears to be working correctly but is not. An AI model that confidently approves loans it should decline. A fraud detection engine that drifts out of calibration and nobody notices for months. These failures do not trigger alarms. They do not generate incident reports. They erode trust invisibly, and by the time anyone realises, the damage is structural. The hardest trust to repair is the trust people did not know they had lost.
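
To make that concrete, here is a minimal sketch, in Python, of the kind of check that turns a silent failure into a visible one. The class name, window size, and threshold are all illustrative assumptions rather than a reference design: the idea is simply to compare a model's stated confidence against observed outcomes over a rolling window, and to raise an alarm when the gap widens.

```python
from collections import deque


class CalibrationMonitor:
    """Rolling check that a model's confidence still matches reality.

    Keeps the last `window` (predicted probability, actual outcome)
    pairs and flags when average confidence drifts away from the
    observed outcome rate. Names and thresholds are illustrative.
    """

    def __init__(self, window=5000, max_gap=0.05):
        self.samples = deque(maxlen=window)
        self.max_gap = max_gap  # tolerated |avg confidence - outcome rate|

    def record(self, predicted_prob, outcome):
        """outcome is 1 if the prediction turned out to be right, else 0."""
        self.samples.append((predicted_prob, outcome))

    def calibration_gap(self):
        if not self.samples:
            return 0.0
        avg_confidence = sum(p for p, _ in self.samples) / len(self.samples)
        observed_rate = sum(o for _, o in self.samples) / len(self.samples)
        return abs(avg_confidence - observed_rate)

    def has_drifted(self):
        # Only judge once the window is full, then alarm on a sustained
        # gap: a system that is confidently wrong at scale is exactly
        # the failure no outage dashboard will ever show.
        return len(self.samples) == self.samples.maxlen and (
            self.calibration_gap() > self.max_gap
        )
```

The arithmetic is trivial; the design decision is not. Someone has to define what "drifted" means, and who gets paged when it happens, before the model ships, because no incident process fires for a system that is quietly, confidently wrong.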

This is where I work every day. Building frameworks for AI security is, at its core, an exercise in trust architecture. Not just technical controls, but the structures that allow a human being to maintain a confident relationship with a system whose inner workings they will never fully understand.

Trust as a Design Problem

The lesson I keep returning to is this: trust is not a communications problem. It is a design problem.

If you bolt trust onto a system after the fact, through PR campaigns, transparency reports, or regulatory compliance, you are building on sand. Trust needs to be in the architecture from the beginning. In the way systems handle errors. In the way they communicate uncertainty. In the way they give humans meaningful oversight rather than theatrical oversight.
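
As one illustration of what that can mean in practice, here is a minimal sketch, again in Python with invented names and thresholds, of a decision path that treats uncertainty as a first-class output: the system abstains rather than bluffs, and borderline cases are routed to a person with the uncertainty stated plainly.

```python
from dataclasses import dataclass
from enum import Enum


class Route(Enum):
    APPROVE = "approve"
    DECLINE = "decline"
    HUMAN_REVIEW = "human_review"  # meaningful oversight, not theatre


@dataclass
class Decision:
    route: Route
    confidence: float
    explanation: str  # shown to the customer, not buried in a log


def decide(score, approve_at=0.90, decline_at=0.10):
    """Abstain instead of bluffing when the model is unsure.

    `score` is the model's estimated probability that the application
    is good. Anything between the two thresholds goes to a reviewer,
    and the system says so rather than dressing it up as a verdict.
    """
    if score >= approve_at:
        return Decision(Route.APPROVE, score,
                        "Approved automatically with high confidence.")
    if score <= decline_at:
        return Decision(Route.DECLINE, 1 - score,
                        "Declined automatically with high confidence.")
    return Decision(
        Route.HUMAN_REVIEW,
        score,
        "The system is not confident enough to decide; a reviewer will look at this.",
    )
```

The thresholds here are arbitrary; in a real system they would be owned by the people who own the consequences, revisited as the model drifts, and set wide enough that human review happens often enough to remain a genuine skill rather than a rubber stamp.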

In our rush to embrace new technologies, we risk placing too much trust, too easily, in the wrong places. The unintended consequences of platforms and systems are only now playing out, and the question of "platform responsibility" is one of the defining issues of our time.

The answer is not to slow down the technology, but to speed up our understanding of what trust actually requires. Not the institutional kind, with its top-down assurances and regulatory scaffolding. Not the naive distributed kind, with its assumption that algorithms and ratings can substitute for human judgement. Something more honest than either.

A confident relationship with the unknown requires knowing what you do not know, and being willing to say so. It requires building systems that fail gracefully rather than catastrophically. It requires leaders who understand that their response to a failure matters more than the failure itself.

Trust is not a metric. It is not a KPI. It is the thing that makes everything else possible, and the thing most easily destroyed by people who think it is someone else's problem.

This is a personal essay. The views are my own.