Adopting AI to automate financial crime compliance (FCC) operations is now a common aim for compliance leaders at banks and other financial institutions worldwide. To help demystify your organization's move to AI, we sat down with our CTO, Peter Cousins, to discuss three critical considerations for using AI to streamline FCC operations. The Q&A topics include:
- The build vs buy decision
- Usage of GenAI/LLMs
- Fast-tracking scalable, secure AI Agent deployment with the Work.AI platform
You can also watch a video interview with Peter Cousins, WorkFusion’s Chief Technology Officer.
The Build vs Buy Decision
Q: Why should banks invest in WorkFusion AI Agents instead of building their own?
A: Banks should invest in WorkFusion AI Agents because the end-to-end process of building, maintaining, and scaling AI solutions is incredibly complex and costly. Attempting to develop AI agents in-house requires millions of dollars in resources and ongoing investment, whereas purchasing a ready-made solution is far more cost-effective.
Q: What are the risks of building AI agents internally?
A: Every day spent developing an internal AI solution results in missed opportunities for improved compliance and cost reductions. The time commitment—often spanning multiple years—means prolonged inefficiencies and staggering opportunity costs. A short-term attempt, such as a quick experiment with ChatGPT, overlooks the intricate challenges involved.
Q: What are the key considerations when implementing AI solutions?
A: Developing AI agents isn’t just about writing code—it requires secure data integration, seamless connectivity with existing systems, and ensuring nothing breaks when updating or extracting data. Decision-making must be reliable, with proper governance to minimize errors and maximize automation while maintaining full explainability.
Q: How do banks ensure AI accuracy and reliability?
A: AI solutions must undergo rigorous back-testing, statistical quality control (QC), continuous monitoring, and feedback loop optimization. Banks need ongoing model risk management (MRM) reporting, version control, and updates to maintain performance over time. This isn’t a short-term endeavor—it requires long-term commitment.
Usage of GenAI/LLMs
Q: How do WorkFusion AI agents leverage LLMs selectively and responsibly?
A: WorkFusion AI agents use LLMs strategically, weighing the benefits they provide against the risks they introduce. In some cases, banks prefer not to use LLMs at all due to concerns about data security and external data exposure. While protections exist, some financial institutions remain cautious. WorkFusion AI agents therefore function effectively without LLMs, but can also benefit from them when permitted.
Q: In what ways can LLMs enhance WorkFusion’s AI capabilities?
A: The most common application for LLMs is to serve as a productivity accelerator during design and development. For example, instead of manually coding complex rules for AI Agents to follow, users can describe the intended rules in natural language. An LLM can interpret these instructions, generate structured rules, and allow users to approve or refine them—eliminating the “blank page” challenge while maintaining full control. Since no runtime or customer data is involved, this use case carries minimal risk.
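To make this concrete, here is a minimal sketch of that design-time pattern, assuming a generic LLM client (the OpenAI Python SDK here) and an illustrative rule schema. It is not WorkFusion's actual implementation; the model name, prompt, and schema fields are assumptions.

```python
import json
from openai import OpenAI  # any LLM client would work; used here for illustration

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCHEMA_HINT = (
    "Return a JSON object with: name, description, "
    "conditions (a list of {field, operator, value}), and action."
)

def draft_rule_from_text(description: str) -> dict:
    """Ask an LLM to turn a plain-English rule description into a structured draft rule."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "You convert compliance rule descriptions into structured rules. " + SCHEMA_HINT},
            {"role": "user", "content": description},
        ],
    )
    return json.loads(response.choices[0].message.content)

# The drafted rule is only a starting point: a user reviews, edits, and approves it
# before it is ever applied, which keeps full control with the business.
draft = draft_rule_from_text(
    "Escalate any wire transfer over 10,000 USD involving a sanctioned jurisdiction."
)
print(json.dumps(draft, indent=2))
```

Because the LLM only ever sees the rule description, no runtime or customer data leaves the design environment.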
Q: How can LLMs support model training for AI agents?
A: LLMs are more costly and slower than WorkFusion's document extraction technology, but they can speed up training for new models. For long-tail document extraction, an LLM can apply zero-shot labeling to training examples, which humans then verify before they are integrated into model training. This approach significantly reduces costs and accelerates training, without requiring production data.
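The workflow can be summarized in a short sketch. This is illustrative only, with assumed function and type names; the labeling callable and the verification callable stand in for the LLM and the human reviewer respectively.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TrainingExample:
    text: str
    labels: dict          # field name -> value proposed by the LLM
    verified: bool = False

def build_training_set(
    documents: list[str],
    fields: list[str],
    propose_labels: Callable[[str, list[str]], dict],      # zero-shot LLM labeler
    verify: Callable[[TrainingExample], TrainingExample],   # human review step
) -> list[TrainingExample]:
    """Collect human-verified examples for a long-tail document type."""
    training_set = []
    for text in documents:
        proposed = propose_labels(text, fields)             # LLM drafts labels with no prior examples
        example = verify(TrainingExample(text, proposed))   # a person confirms or corrects them
        if example.verified:
            training_set.append(example)                    # only verified examples feed model training
    return training_set
```

The key point is the ordering: the LLM accelerates labeling, but nothing enters model training until a human has verified it.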
Q: Can LLMs assist with decision-making in production environments?
A: Yes. If an AI Agent encounters a document type it has never seen before, an LLM can generate preliminary labels to help human reviewers process it faster. Additionally, in adjudication tasks such as adverse media screening, AI Agents achieve high automation rates without LLMs, but certain complex language constructs can create uncertainty. In those situations, WorkFusion AI can invoke an LLM to analyze a public-domain article, without sending any customer data, and extract the relevant insights. If the AI engine and the LLM agree, the decision is highly accurate; if they disagree, a human reviewer ensures correctness.
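A hedged sketch of that routing logic is below; the decision values and function names are illustrative, not WorkFusion's API.

```python
from typing import Callable, Optional

def adjudicate(engine_decision: str, engine_confident: bool,
               consult_llm: Optional[Callable[[], str]] = None) -> str:
    """Route an adverse-media alert.

    consult_llm, if provided, classifies the public-domain article only;
    no customer data is ever sent to the LLM.
    """
    if engine_confident:
        return "auto_close"               # most alerts are resolved without an LLM
    if consult_llm is None:
        return "human_review"             # the bank has not approved LLM usage
    llm_decision = consult_llm()          # second opinion on the article text
    if llm_decision == engine_decision:
        return "auto_close"               # independent agreement -> high-confidence automation
    return "human_review"                 # conflicting signals -> a person decides

# Example: the engine is uncertain, but the engine and LLM both read the article as "not a match".
print(adjudicate("not_a_match", engine_confident=False,
                 consult_llm=lambda: "not_a_match"))   # auto_close
```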
Q: What impact does selective LLM usage have on automation rates?
A: By combining AI models and LLMs strategically, WorkFusion AI achieves up to 95% straight-through processing, compared to 70% without LLMs. Banks must weigh whether this higher automation rate justifies LLM adoption and decide if they are willing to navigate internal risk management approvals, which can take significant time.
Fast-Tracking Scalable, Secure AI Agent Deployment with the Work.AI Platform
Q: What is the primary purpose of the Work.AI platform?
A: The Work.AI platform is used to build AI Agents, providing a secure, audited, access-controlled, and encrypted environment. It also ensures scalability, allowing banks to process large workloads efficiently.
Q: How does the platform support scalability?
A: The platform manages scaling automatically, whether on-premises or in the cloud. For example, if a bank needs to run language assessment models on 100 servers, the platform orchestrates this seamlessly without requiring business-side intervention.
Q: How do no-code capabilities improve the development and customization of AI Agents?
A: No-code construction tools accelerate development and customization. Since banks operate differently, they need adaptable solutions without requiring custom software releases or costly development projects.
Q: How does the platform enable seamless data integration?
A: The platform can acquire data from virtually any source—files, emails, APIs, databases, data lakes, middleware (legacy or modern), and even public websites. It automatically integrates with APIs without requiring manual coding.
Q: How does the Work.AI platform help process and analyze integrated data?
A: It transforms and structures disparate data sources, making them work together. Advanced analytics—such as link analysis, trend detection, cohort analysis, and rule-based processing—help drive investigative use cases.
Q: How does the platform handle rules and decision-making?
A: Rules can be created using natural language or through an intuitive no-code UI to enforce standards and business practices. These rules guide the decision tree for AI Agents.
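As an illustration of what such rules might look like once structured, the sketch below walks an ordered list of condition/action pairs and falls back to human review when nothing matches. The rule shape here is an assumption for illustration, not the platform's internal representation.

```python
# Illustrative rule shape: each rule pairs a condition with an action,
# evaluated in order, with a safe default when nothing matches.
RULES = [
    {"name": "high_value_wire",
     "when": lambda alert: alert["type"] == "wire" and alert["amount"] > 10_000,
     "then": "escalate"},
    {"name": "whitelisted_counterparty",
     "when": lambda alert: alert["counterparty"] in {"ACME CORP"},
     "then": "auto_close"},
]

def decide(alert: dict, rules=RULES, default="human_review") -> str:
    for rule in rules:
        if rule["when"](alert):   # first matching rule wins
            return rule["then"]
    return default                # no rule applies -> route to a person

print(decide({"type": "wire", "amount": 25_000, "counterparty": "GLOBEX"}))  # escalate
```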
Q: Can human intervention be incorporated into AI decision-making?
A: Yes, human review or approval can be seamlessly integrated at any step via a no-code UI—eliminating the need for web developers.
Q: What roles do machine learning models play in the platform?
A: The platform can automatically train ML models for document processing, information extraction, classification, and decision-making—all without requiring manual coding.
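As a rough illustration of the kind of classifier such a platform might train behind the scenes, here is a toy sketch using scikit-learn and made-up data; it is not WorkFusion's training pipeline.

```python
# Toy document-type classifier: TF-IDF features plus logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

documents = [
    "SWIFT MT103 payment instruction for beneficiary account",
    "Certificate of incorporation for Example Holdings Ltd",
    "Adverse media article: company fined for sanctions breach",
]
labels = ["payment_message", "kyc_document", "adverse_media"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(documents, labels)   # in production this would use many verified examples

print(classifier.predict(["MT103 payment instruction received"]))  # expected: ['payment_message']
```

The platform's point is that users never write this code themselves; training, evaluation, and versioning happen inside the no-code environment.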
Q: What makes this AI platform so powerful?
A: By combining cognitive perception, memory, and AI automation, the platform enables organizations to build fully operational systems without writing a single line of code.
We hope you found these insights helpful as you frame your approach to AI in FCC. To capture additional nuance, you can watch the full video interview with Peter. This interview is the third in our 3 Questions with our CTO series, in which Peter takes a deep dive into WorkFusion's technology and how it helps FCC leaders navigate AI usage in their operations.
For additional insights with a little bit of fun magic added, read the new eBook, De-mystifying AI’s Magic for Financial Crime Compliance.