The next evolution in fraud strategy: human-informed, AI-driven rulesets

by Chris Oakley - Head of Financial Crime Solutions

Fraud strategy has always evolved in waves.

First came manual rules: static, analyst-built defences crafted from experience and gut instinct. Then came analytics, scorecards, models, and thresholds designed to refine those instincts. Now a new era is emerging, one where human expertise and artificial intelligence no longer compete, but collaborate.

The next evolution in fraud prevention is human-informed, AI-driven rulesets: logic that learns from labelled outcomes, generates thousands of potential rule combinations, and empowers fraud subject matter experts (SMEs) to decide which belong in production.

It’s not about replacing human judgement. It’s about amplifying it with machine precision.

A flow diagram showing four stages of rule evolution in fraud prevention. The first box reads “Manual Rules: Static logic, expert-built from experience and intuition.” The second box reads “Data-Driven Rules: Analytics and thresholds refine manual decisioning.” The third box reads “Optimised Rules: Incremental tuning of existing logic to maintain performance.” The fourth box reads “Human + AI Rulesets: Adaptive, explainable, continuously improving decision frameworks.” Each box is connected by a right-pointing arrow, indicating progression.

When “optimisation” hits its ceiling

Traditional rule optimisation has always been a reactive pursuit. Analysts tweak weightings, adjust thresholds, and add new conditions when fraud patterns change. But today’s data velocity, with new scams, social-engineering behaviours, and payment typologies emerging weekly, exposes the limits of manual tuning.

Human teams are exceptional at understanding context: the difference between a genuine customer in a hurry and a mule test transaction. But they cannot evaluate every variable permutation or correlation buried in live data. Conversely, AI systems can analyse billions of data points and test thousands of rule variants, but without human framing they risk producing logic that is operationally unviable or ethically tone-deaf.

Bridging that gap demands a model that combines both: the scale and objectivity of AI, guided by the contextual intelligence of the fraud SME.

The human layer: context, control and credibility

Fraud experts bring three assets no algorithm can replicate:

  1. Context - They understand customer intent, channel behaviour, and the subtleties of regulatory tolerance.
  2. Control - They decide how far to push detection before friction becomes reputational damage.
  3. Credibility - They provide the governance and explainability regulators expect when decisions affect customers’ money.

That human layer ensures every AI suggestion is interpreted through the lens of business objectives, brand promise, and ethical responsibility.

A hybrid rule strategy means analysts are no longer buried in alert queues; they are validating and prioritising intelligent proposals that have already been statistically stress-tested.

The AI layer: discovering patterns humans cannot see

This is where ODE’s capability becomes clear. Rather than acting as a black box model, it functions as a force multiplier for the fraud SME.

ODE learns from labelled outcomes such as confirmed frauds, false positives, and near misses, and then explores thousands of possible rule permutations that could improve separation between good and bad behaviour. Each candidate rule is scored for precision, coverage and explainability.

The result is not a single opaque model. It is a ranked list of human-readable rule options, each expressed in clear logic such as:
“IF Device = New AND Value > £700 AND Location ≠ Usual → Score +8.”

Analysts can instantly see why a rule works and decide whether to deploy it.
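To make that concrete, here is a minimal sketch of how a candidate rule like the one above might be represented and scored for precision and coverage against labelled outcomes. The data structure, field names, and thresholds are illustrative assumptions for this article, not ODE's actual schema or API:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    """A labelled outcome: feature values plus the confirmed label.
    All field names here are hypothetical, not ODE's real schema."""
    new_device: bool
    value_gbp: float
    usual_location: bool
    is_fraud: bool  # labelled outcome fed back from investigations

def candidate_rule(tx: Transaction) -> bool:
    """IF Device = New AND Value > 700 AND Location != Usual."""
    return tx.new_device and tx.value_gbp > 700 and not tx.usual_location

def score_rule(rule, history):
    """Precision: share of the rule's hits that are confirmed fraud.
    Coverage: share of all confirmed fraud that the rule catches."""
    hits = [tx for tx in history if rule(tx)]
    frauds = [tx for tx in history if tx.is_fraud]
    true_hits = [tx for tx in hits if tx.is_fraud]
    precision = len(true_hits) / len(hits) if hits else 0.0
    coverage = len(true_hits) / len(frauds) if frauds else 0.0
    return precision, coverage

history = [
    Transaction(True, 950.0, False, True),   # fraud the rule catches
    Transaction(True, 950.0, False, False),  # false positive
    Transaction(False, 120.0, True, True),   # fraud the rule misses
]
precision, coverage = score_rule(candidate_rule, history)
# precision = 0.5, coverage = 0.5 on this toy history
```

Because every candidate is just explicit boolean logic plus two scores, an analyst can read the rule, see its measured trade-off, and judge whether it belongs in production.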

Biases and legacy assumptions fall away because AI is not anchored to past comfort zones. It surfaces correlations humans might never test.

The synergy: fraud SMEs and AI working together

The true power lies in the collaboration loop:

A circular flow diagram showing five stages of an AI-driven rule generation process. At the top, a purple circle labelled “Data In” states “Transaction data and labelled outcomes feed the ODE engine.” Moving clockwise, the next purple circle is “AI exploration” with text “ODE generates and ranks thousands of potential rule combinations.” The third circle, in red, is “Human evaluation” with text “SMEs review and select high-value candidates.” The fourth circle, also red, is “Deployment” with text “Approved rules move into the live decisioning engine.” The fifth circle, in red, is “Feedback” with text “New outcomes are fed back to retrain and improve the model.” Grey arrows connect the circles in a loop, indicating an iterative process.

This creates a continuous, closed-loop evolution: a living ruleset that adapts as behaviour changes.

Fraud analysts remain firmly in control, deciding what is operationally valid, while AI handles the heavy computation and discovery. The result is precision without opacity, and agility without chaos.
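The loop can be pictured in miniature: the machine enumerates and ranks rule permutations by measured performance, and the human reviews the ranked, readable options. Everything below is a hedged toy sketch, with made-up data and thresholds rather than ODE's real interface:

```python
import itertools

# Toy labelled history: (value_gbp, new_device, is_fraud)
history = [
    (950, True, True), (820, True, False), (120, False, False),
    (1500, True, True), (60, False, False), (700, True, True),
]

def make_rule(threshold, require_new_device):
    """Build one human-readable candidate rule from a permutation."""
    def rule(tx):
        value, new_device, _ = tx
        return value > threshold and (new_device or not require_new_device)
    rule.label = f"IF Value > £{threshold}" + (
        " AND Device = New" if require_new_device else "")
    return rule

def precision(rule):
    hits = [tx for tx in history if rule(tx)]
    return sum(1 for tx in hits if tx[2]) / len(hits) if hits else 0.0

# "AI exploration": enumerate permutations and rank them statistically.
candidates = [make_rule(t, d)
              for t, d in itertools.product([500, 700, 900], [True, False])]
ranked = sorted(candidates, key=precision, reverse=True)

# "Human evaluation": the SME reviews the ranked, readable options
# and approves only what is operationally viable.
for rule in ranked[:3]:
    print(rule.label, round(precision(rule), 2))
```

In a real deployment the candidate space would be far larger and the scoring multi-dimensional, but the division of labour is the same: the machine ranks, the human decides.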

Real-world impact: principles that drive performance

In recent proofs of value and pilot programmes, the most meaningful outcomes have not been about raw percentage gains. They have been about how performance was achieved.

The common thread across every successful deployment is the same set of principles:

  • Precision over volume - Focus shifts from catching everything suspicious to identifying what truly matters. Alerts become smarter, not just fewer.
  • Agility over inertia - Rules are refined in days, not quarters, because SMEs can evaluate AI-generated options instantly.
  • Transparency over opacity - Every decision remains explainable. Analysts can trace the logic back to the source data and rationale.
  • Sustainability over reactivity - Rules evolve continuously through the feedback loop, preventing decay before it impacts customers.

These principles matter more than isolated metrics because they create an enduring advantage: a fraud strategy that improves itself over time.

Operationally, teams experience fewer manual investigations, stronger auditability, and greater confidence in their rule base.

Strategically, institutions move from “tuning rules” to governing an intelligent decision framework that learns alongside them.

Why this matters now

Regulatory and customer-experience pressures are converging. Firms are expected to prove proactive prevention while maintaining frictionless journeys. Fraud teams are expected to justify every decline and explain every decision instantly.

Human-informed AI-driven rulesets deliver that equilibrium.

A diagram showing three red rectangles pointing to a central red circle labelled “Equilibrium.” The top rectangle is labelled “Governance” with text “Every rule is reviewed and approved before use.” The left rectangle is labelled “Explainability” with text “Every rule is written in human-readable form.” The right rectangle is labelled “Adaptability” with text “AI continuously surfaces what humans can validate and trust.” Pink arrows connect each rectangle to the central circle, indicating that these three principles contribute to equilibrium.


This is not futuristic theory. It is what the smartest fraud functions are already deploying: technology that makes analysts faster, sharper, and more consistent.

The strategic shift: from reactive defence to proactive intelligence

For years, fraud management has been about firefighting - reacting to spikes, revising rules, then waiting for the next attack.

The next evolution is proactive: using AI to predict where rules will decay before it happens, using human expertise to validate what is safe to deploy, and building a fraud strategy that learns from itself.
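One simple way to operationalise the decay-prediction idea is to track a deployed rule's rolling precision over recent labelled outcomes and flag it for SME review when performance drops below a floor. The window size and threshold below are illustrative assumptions, not a prescribed configuration:

```python
from collections import deque

class DecayMonitor:
    """Flags a deployed rule for human review when its rolling
    precision over recent labelled hits falls below a floor."""

    def __init__(self, window=100, floor=0.6):
        # Each entry records whether a rule hit was confirmed fraud.
        self.outcomes = deque(maxlen=window)
        self.floor = floor

    def record(self, hit_was_fraud: bool):
        self.outcomes.append(hit_was_fraud)

    def rolling_precision(self):
        if not self.outcomes:
            return None
        return sum(self.outcomes) / len(self.outcomes)

    def needs_review(self):
        # Only flag once a full window of evidence has accumulated.
        p = self.rolling_precision()
        return (p is not None
                and len(self.outcomes) == self.outcomes.maxlen
                and p < self.floor)

monitor = DecayMonitor(window=5, floor=0.6)
for hit in [True, True, False, False, False]:  # rule's recent hits
    monitor.record(hit)
# rolling precision 2/5 = 0.4 over a full window -> flag for review
```

The point is the division of responsibility: the machine watches performance continuously, while the decision to retire, retune, or replace the rule stays with the SME.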

In that world, SMEs are not maintaining the ruleset; they are governing an intelligent decision layer that keeps learning.

It is faster, fairer and more efficient: a living ecosystem rather than a static playbook.

Trust through collaboration

Fraud prevention has always been a trust discipline: trust between customer and institution, and trust between analyst and machine.

The future belongs to teams that can integrate both forms of intelligence: the nuanced judgement of experienced professionals and the computational strength of AI.

ODE was built for exactly this intersection. It does not replace fraud SMEs; it amplifies them, surfacing the best rule logic, stripping out bias, and giving decisioning teams the tools to act faster and with greater confidence.

As fraud becomes more adaptive, so must we.

Human-informed, AI-driven rulesets are not the end of human expertise. They are its next, most powerful expression. 
