De-mystifying AI’s Magic for Financial Crime Compliance


Recently, WorkFusion’s Head of Product Marketing, Kyle Hoback, teamed up with our lead graphic designer to shine a light on the 10 myths surrounding AI and its use in FinCrime Compliance (FCC). Download this fantastical version of the paper for the best reading and viewing experience. The insights challenge notions of AI that have gained traction in recent times but fail to reflect the reality many practitioners face today.

10 hidden gems within the AI fantasy lands

While the number three is the most popular and prevalent in classical fantasy stories, AI drums up so many illusions that 10 pieces of lore are conveyed herein. This post won’t spoil what’s in the paper, but here’s a trailer-style preview of just two of the 10 myths.

The first, and one that is sure to give you pause, is the frightening folktale of how “AI will replace humans.”    

“Not to worry,” says Kyle. From his point of view, nothing could be further from the truth. Based on his experiences with WorkFusion customers in Banking and Financial Services, he’s a firm believer that it just won’t happen. He agrees wholeheartedly with the 2023 Harvard Business Review (HBR) article entitled AI Won’t Replace Humans — But Humans With AI Will Replace Humans Without AI.

Consider how powerful a human—armed with AI—can be when performing standard FinCrime compliance processes. These typically require analysis and/or thinking at several steps: one or more call for a human feel for nuanced information, while most of the others involve near-human kinds of analysis that can be automated. An AI Agent can handle the latter and save its human partners a lot of work.

Below is the workflow image of a typical adverse media screening & alert review process where an AI Agent collaborates with humans in the loop. Software performs the first two steps. The AI Agent (named Evan and pictured at step 3) performs the rest of the process, with humans interacting with Evan at step 7. 

[Image: WorkFusion workflow diagram featuring Evan, the AI Agent]

Here’s a detailed description of steps 3 through 8 of the workflow, underscoring the blend of AI and human intelligence: 

Step 3 (inside the shaded box): Evan retrieves all potential adverse media results (articles, reports, data, etc.) from any of the leading tools (e.g. LSEG, Dow Jones, Thomson Reuters, and Google).  

Step 4: Evan reviews the retrieved articles/data/etc., then applies both pre-trained insights and AI-based decisioning. At the end of this step, Evan also writes the rationale behind each decision ‘he’ makes and provides detailed justification in auditable form.  

Step 5: Evan dispositions articles and other pieces of information by identifying mismatches in entity and individual names, location, focus of the article, level of materiality, etc.

Step 6: After eliminating the majority of content that fails to indicate relevant issues, Evan prioritizes for human review (typically for a Level 1 analyst) all the information pieces/articles/news that could possibly match the search target and be material.  

Step 7: Pulled into the process via automatic human-in-the-loop (HITL) functionality, the human analyst makes final decisions on the escalated pieces from Evan within the WorkFusion UI or WorkFusion WorkSpace.

Step 8: Once the human analyst completes their work, Evan produces a final report and delivers it to the location desired by the compliance team. This can be a case management system, an email to designated individuals—whatever place(s) the compliance leaders dictate.  
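The division of labor in steps 4 through 8 can be sketched in simplified form as a pipeline: the agent dismisses clear mismatches with an auditable rationale, escalates possible matches, and the human makes the final call. This is an illustrative sketch only — the class, function, and field names (`MediaHit`, `agent_review`, `disposition`, etc.) are hypothetical, not the actual WorkFusion API.

```python
from dataclasses import dataclass

@dataclass
class MediaHit:
    """One candidate adverse-media result (hypothetical structure)."""
    title: str
    name_match: bool        # does the named entity match the search target?
    material: bool          # is the issue material enough to matter?
    rationale: str = ""     # auditable justification, written at decision time
    disposition: str = "pending"

def agent_review(hits):
    """Steps 4-6: the AI Agent dispositions hits and escalates likely matches."""
    escalated = []
    for hit in hits:
        if not hit.name_match:
            hit.disposition = "dismissed"
            hit.rationale = "Entity name does not match the search target."
        elif not hit.material:
            hit.disposition = "dismissed"
            hit.rationale = "Content is not material to the risk profile."
        else:
            hit.disposition = "escalated"
            hit.rationale = "Possible match with material content; needs human review."
            escalated.append(hit)
    return escalated

def human_review(escalated, decide):
    """Step 7: a human analyst makes the final call on each escalated hit."""
    for hit in escalated:
        hit.disposition = decide(hit)  # e.g. "true_match" or "false_positive"

def final_report(hits):
    """Step 8: an auditable summary, ready to deliver wherever compliance dictates."""
    return [(h.title, h.disposition, h.rationale) for h in hits]
```

Note that only the escalated minority ever reaches the analyst; the dismissals carry their rationale with them, so the whole run remains auditable end to end.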

In the same HBR article, Harvard Business School professor Karim Lakhani talks of the AI-human collaboration and integration. “The places where you can apply it?” he says. “Well, where do you apply thinking?” Lakhani’s question captures the value of WorkFusion AI Agents, applying AI with humans to streamline processes and get more done with AI-human collaboration. 

Want to hear about the second myth? Okay, but just one more.  

Here it is: “You should avoid AI if you’re not a technical person.”  

“Not true,” says the myth-busting paper. As it points out, many of us use AI every day, often without noticing it. Non-technical people use AI all the time, such as when they:

  • Adjust their route on Google Maps due to traffic 

  • Click on a Netflix movie recommendation 

  • Prompt ChatGPT to explain how to make Swiss fondue   

“AI for FCC can be similar,” says Kyle. “You do not need to become a data scientist, machine learning engineer, or even citizen developer to bring significant value to your organization.” He points to WorkFusion customer Valley Bank as a prime example. Their FinCrime compliance leadership team decided to hire a WorkFusion AI Agent to automate sanctions alert adjudication, because they wanted to make faster payments and improve the employee experience. Without needing technical AI expertise, Valley Bank achieved a 65% automation rate for reviews of sanctions hits on a volume of more than 20,000 alerts per month. No one on the team had to become an AI developer to get it done.  

We’ve just scratched the surface of the paper’s 10 myths. Instead of spoiling your trip into the fantastical world of AI myths, delve into the other eight myths by enjoying your own copy today. 



