There is a comforting story financial institutions tell themselves about surveillance.

In this story, surveillance is a neutral control — technical, objective, largely apolitical. A back-office function designed to spot misconduct, satisfy regulators, and quietly protect market integrity while the real business of finance carries on elsewhere.

It is a tidy story. It is also false.

Surveillance is not neutral. It never has been. It is a political act — one that exposes who holds power inside an institution, which risks are tolerated, and where accountability is ultimately allowed to land.

Once surveillance is understood in those terms, its persistent shortcomings begin to look less like technical failures and more like institutional design choices.

In most large financial institutions, surveillance does not primarily exist to uncover uncomfortable truths. It exists to demonstrate that the organisation is trying. The distinction matters. Systems are built less around how abuse actually occurs and more around what appears reasonable to regulators, auditors, and senior committees operating under uncertainty.

The objective is not insight. It is defensibility.

This is why surveillance frameworks so often feel disconnected from trading reality. They are calibrated to generate activity — alerts, risk assessments, dashboards, metrics — without generating disruption. They flag enough to demonstrate diligence, but rarely enough to force sustained confrontation with the Front Office. Volume is produced; friction is not.

This outcome is not accidental. It is negotiated.

Consider a familiar scene inside a large financial institution.

A surveillance team identifies a recurring pattern on a profitable desk — not a clear breach, but a sequence of behaviours that sits uncomfortably close to the boundary. The data is imperfect but directionally consistent. Analysts escalate the issue internally, framing it cautiously.

The response is procedural. A working group is formed. Questions are raised about data quality, calibration, market context. The desk pushes back: the behaviour is explainable, commercially rational, well within market norms. Legal advises restraint. Senior management asks whether regulators have explicitly flagged this pattern elsewhere.

Eventually, the issue is reclassified. Thresholds are adjusted. Documentation is updated. The scenario remains live, but its sensitivity is reduced. The system still “monitors” the behaviour — just not in a way that generates friction.
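The mechanics of that reclassification are mundane. As a purely illustrative sketch (the scenario, parameter names, and figures below are invented; real calibrations are proprietary), the difference between "monitored" and "monitored without friction" can be two values in a configuration:

```python
# Illustrative sketch only: a hypothetical threshold-based surveillance
# scenario, showing how a small calibration change reduces alert volume
# to zero while the scenario remains nominally "live". All names and
# values here are invented for illustration.

from dataclasses import dataclass

@dataclass
class ScenarioConfig:
    cancel_ratio_threshold: float  # fraction of orders cancelled before fill
    min_order_count: int           # minimum orders in the window to evaluate

def fires(cancel_ratio: float, order_count: int, cfg: ScenarioConfig) -> bool:
    """Return True if a window of activity breaches the calibration."""
    return (order_count >= cfg.min_order_count
            and cancel_ratio >= cfg.cancel_ratio_threshold)

# Activity that sits "uncomfortably close to the boundary":
# (cancel_ratio, order_count) per monitoring window.
windows = [(0.82, 40), (0.78, 55), (0.85, 30)]

before = ScenarioConfig(cancel_ratio_threshold=0.75, min_order_count=25)
after  = ScenarioConfig(cancel_ratio_threshold=0.90, min_order_count=50)

print(sum(fires(r, n, before) for r, n in windows))  # 3 alerts
print(sum(fires(r, n, after)  for r, n in windows))  # 0 alerts, still "monitored"
```

Nothing in the audit trail looks amiss: the scenario exists, the documentation is current, the change is approved. Only the sensitivity has moved.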

No one breaks the rules. No one acts in bad faith. The organisation behaves exactly as it is incentivised to behave.

Front Office desks generate revenue and political capital. Surveillance teams generate cost, constraint, and the possibility of escalation. When those forces collide, compromise is inevitable. Thresholds are softened. Scenarios are narrowed. Known data gaps are documented, discussed, and quietly deferred. Everyone understands where the system looks — and, more importantly, where it does not.

Surveillance becomes a performance.

Many alerts exist for an audience that never trades and never investigates: audit. They fire reliably, close cleanly, and produce statistics that reassure governance forums that something is happening. Their function is not primarily to detect misconduct, but to create an evidentiary trail — proof that the institution can demonstrate process if challenged later.
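What those statistics measure is throughput, not detection. A toy sketch (fields and figures invented) of the numbers a governance pack typically reports makes the point: every metric below can look healthy while detecting nothing at all.

```python
# Illustrative sketch only: the kind of closure statistics reported to
# governance forums. Field names and figures are invented. None of these
# metrics says anything about whether misconduct was actually found.

alerts = [
    {"id": 1, "closed": True,  "days_open": 2, "escalated": False},
    {"id": 2, "closed": True,  "days_open": 1, "escalated": False},
    {"id": 3, "closed": True,  "days_open": 4, "escalated": False},
    {"id": 4, "closed": False, "days_open": 9, "escalated": False},
]

total = len(alerts)
closed = sum(a["closed"] for a in alerts)
escalated = sum(a["escalated"] for a in alerts)
avg_age = sum(a["days_open"] for a in alerts) / total

print(f"Alert volume:    {total}")
print(f"Closure rate:    {closed / total:.0%}")     # 75%: reassuring
print(f"Escalation rate: {escalated / total:.0%}")  # 0%: also reassuring
print(f"Average age:     {avg_age:.1f} days")       # prompt: also reassuring
```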

This is why post-incident reviews so often feel hollow. When misconduct eventually surfaces — through enforcement action, whistleblowers, or external investigation — the same question is asked: why wasn’t this caught?

The honest answer is uncomfortable. Because the system was never designed to see it clearly.

Surveillance teams sit at the sharp end of this contradiction. They are asked to explain failures rooted in organisational choices they did not make. They inherit responsibility without authority, accountability without leverage. When something goes wrong, the failure is rarely framed as one of incentives, governance, or risk appetite. It is framed as a failure of execution.

Vendor selection follows the same logic. Large, established platforms offer something more valuable than detection capability: safety. They are recognisable. They are defensible. They allow risk to be externalised. If something is missed, responsibility can be diffused — attributed to configuration, data quality, or “industry standard” practice.

Innovation, by contrast, is dangerous. A genuinely effective surveillance approach would challenge assumptions, surface uncomfortable patterns, and force trade-offs into the open. It would create tension in institutions that often prize calm over clarity.

Surveillance therefore settles into a negotiated equilibrium — between revenue and risk, cost and credibility, expectation and appetite. What gets monitored reflects not only what matters, but what the organisation is willing to confront.

The cost of this pretence compounds over time. Analysts become cynical. Controls become ritualised. Metrics lose meaning. Institutions slowly forget what effective surveillance even looks like.

And when technology improves — when AI and advanced analytics promise sharper detection — the same constraints quietly shape their deployment. The danger is not that these tools will fail. It is that they will succeed at automating the wrong objective.

More data. More models. Better dashboards. The same avoidance.

Real surveillance would require something far more difficult than better technology. It would require institutions to accept that some profits come with unacceptable risk, that data quality is a governance problem rather than a technical one, and that effective oversight is inherently disruptive.

Most institutions are not ready for that reckoning.

So they continue to monitor activity while missing intent. They optimise for reassurance over truth. And they continue to call it surveillance.
