Sanctions Screening Effectiveness – Benchmarking – Part 2


In a previous article, we introduced SQA Consulting’s sanctions screening effectiveness benchmarking, and where the use of benchmarking is most valuable. Here we will explore in more detail how benchmarking works in practice.

  1. Financial Institutions – Screening Systems – Regular Assurance


Screening Effectiveness vs Efficiency

The above is an example of our Customer Sanctions Screening Effectiveness vs Efficiency Benchmark Graph. We produce a similar graph for Payments Sanctions Screening.

The horizontal x-axis shows efficiency. Here, we screen a benchmark test file of 25,000 random personal and company names. We do not expect these names to generate alerts against sanctions screening lists, so we treat any alerts that are generated as false positives. The more alerts this test pack generates, the less efficient the screening system is.
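As a simple illustration of that efficiency measure (this is a hypothetical sketch, not SQA Consulting's actual tooling, and the figures are invented), the score can be expressed as the percentage of clean benchmark names that pass through screening without alerting:

```python
# Illustrative sketch: an efficiency score from a "clean" benchmark file of
# random names, where every alert raised is a false positive by definition.

def efficiency_score(total_names: int, false_positive_alerts: int) -> float:
    """Return the percentage of clean benchmark names that passed without alerting."""
    if total_names <= 0:
        raise ValueError("total_names must be positive")
    return 100.0 * (total_names - false_positive_alerts) / total_names

# Hypothetical example: 25,000 random names screened, 150 of them wrongly alerted.
print(efficiency_score(25_000, 150))  # 99.4
```

A system scoring close to 100 on this measure generates very few false positives from the random-name pack.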

The vertical y-axis shows fuzzy-matching screening effectiveness. Here, we screen a benchmark test file of over 7,000 personal and company names covering over 50 different types of fuzzy name scenario. These names are all derived from exact names taken from the major regulatory sanctions lists. While we would expect every source exact name to generate a sanctions alert, the screening system will only recognise and generate alerts for a portion of the fuzzy name variants. The more alerts this fuzzy test pack generates, the more effective the screening is at finding name matches.
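The effectiveness measure can be sketched in the same spirit (again a hypothetical illustration with invented scenario names and data, not the actual test pack): score each fuzzy scenario type by the portion of its name variants that raised an alert.

```python
# Illustrative sketch: fuzzy-matching effectiveness as the percentage of
# fuzzy name variants that alerted, broken down per scenario type.

from collections import defaultdict

def effectiveness_by_scenario(results):
    """results: iterable of (scenario_type, alerted) pairs, alerted being a bool.
    Returns {scenario_type: percentage of variants that alerted}."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for scenario, alerted in results:
        totals[scenario] += 1
        if alerted:
            hits[scenario] += 1
    return {s: 100.0 * hits[s] / totals[s] for s in totals}

# Hypothetical screening outcomes for a handful of fuzzy variants.
sample = [
    ("transposed_characters", True),
    ("transposed_characters", False),
    ("missing_middle_name", True),
    ("phonetic_variant", True),
    ("phonetic_variant", True),
]
print(effectiveness_by_scenario(sample))
# {'transposed_characters': 50.0, 'missing_middle_name': 100.0, 'phonetic_variant': 100.0}
```

A per-scenario breakdown like this also shows which kinds of name manipulation the screening handles well and which it misses.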

The ideal position to be in on the above graph is on the top right-hand side, where screening would be both effective and efficient.

Now we come to the benchmarking value of the above graph. We add new fuzzy tests to the test pack over time as new fuzzy testing requirements are identified; however, the efficiency and fuzzy test packs are standardised and remain unchanged, so we consistently process the same test pack against different sanctions screening systems for various financial institutions. Those test results are plotted in the above graph as the blue dots. By running the same test pack for the institution we are conducting assurance testing for, we can plot its result on the same graph, shown above as the red dot.

Now we can assess the test result against the benchmark comparison results and show how the screening system's effectiveness compares with the industry in general. This is valuable information that can be provided to regulators to evidence both that the screening system has been independently tested and that its screening effectiveness has been verified.
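One simple way to express that comparison (a hypothetical sketch with invented scores, not the actual benchmark pool) is as a percentile rank of the institution's result against the previously benchmarked systems:

```python
# Illustrative sketch: positioning one institution's effectiveness result
# (the red dot) against the pool of prior benchmark results (the blue dots)
# as a simple percentile rank.

def percentile_rank(benchmark_scores, institution_score):
    """Percentage of benchmark results at or below the institution's score."""
    if not benchmark_scores:
        raise ValueError("need at least one benchmark result")
    at_or_below = sum(1 for s in benchmark_scores if s <= institution_score)
    return 100.0 * at_or_below / len(benchmark_scores)

# Hypothetical effectiveness scores from previously benchmarked systems.
pool = [62.0, 68.5, 71.0, 74.5, 79.0, 81.5, 85.0, 88.0]
print(percentile_rank(pool, 80.0))  # 62.5
```

A result above the 50th percentile would indicate effectiveness better than the majority of systems tested to date.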

2. Screening Systems – Tuning / Development


Above we have another effectiveness vs efficiency graph, but here we have processed the benchmark test pack through the same screening system multiple times using different screening configuration settings each time, perhaps changing the match threshold each time, or using different matching algorithms.

Typically, screening results will resemble those shown above: a high alert threshold configuration may give the result at the lower right of the graph, where screening is most efficient but also least effective. As the threshold is lowered, the results plot a curve upwards towards the top left, where screening is at its most effective but also most inefficient configuration.

Using the benchmarking test pack in this way provides a valuable insight into how the screening system performs at different configuration settings and can be used to identify and justify the choice of configuration settings for a production screening system.

Typically, we can identify a ‘sweet spot’ where the rate of increase in effectiveness starts to tail off. Above this point, further improvements in effectiveness come with a comparatively larger decrease in efficiency, and we may deem that particular configuration the optimum balance of efficiency and effectiveness. An institution will have its own risk appetite for screening effectiveness, its own tolerance for screening efficiency, and its own preference for where it wishes to sit along this graph, but the value of the graph is that it provides the analytics to inform that decision.
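The sweet-spot idea can be sketched programmatically (a hypothetical illustration with invented sweep data; real configurations would weigh many more factors): walk along the threshold sweep and stop where the effectiveness gained per unit of efficiency lost drops below a chosen trade-off ratio.

```python
# Illustrative sketch: locating a 'sweet spot' on an effectiveness vs
# efficiency curve, where the marginal effectiveness gain per unit of
# efficiency lost falls below a chosen trade-off ratio.

def sweet_spot(points, min_ratio=1.0):
    """points: list of (efficiency, effectiveness) tuples, ordered from the
    highest match threshold (most efficient) to the lowest (most effective).
    Returns the last point before the marginal gain falls below min_ratio."""
    for i in range(1, len(points)):
        gained = points[i][1] - points[i - 1][1]   # effectiveness gained
        lost = points[i - 1][0] - points[i][0]     # efficiency lost
        if lost > 0 and gained / lost < min_ratio:
            return points[i - 1]
    return points[-1]

# Hypothetical sweep: lowering the threshold raises effectiveness but costs efficiency.
sweep = [(99.0, 60.0), (97.0, 72.0), (94.0, 80.0), (88.0, 84.0), (75.0, 86.0)]
print(sweet_spot(sweep))  # (94.0, 80.0)
```

The `min_ratio` parameter is where an institution's risk appetite enters: a lower ratio tolerates more efficiency loss in pursuit of effectiveness.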

In a subsequent article in this series we will describe the benchmarking tests in more detail and how we can obtain yet more useful analysis from these results.

To find out more about how SQA Consulting can assist you with your screening needs, contact us.
