The previous article in this short series explored SQA Consulting’s sanctions screening effectiveness vs efficiency benchmarking graph, shown below. In this instalment, we delve further into the performance assessment of specific fuzzy matching test scenarios.
Screening Effectiveness vs Efficiency
The graph, which plots a screening system’s match effectiveness and efficiency against industry benchmark comparisons, is one of SQA Consulting’s most powerful Anti-Money Laundering (AML) tools. It provides assurance over how well a screening system is performing.
Each point in the graph above is a weighted average of the match results of over 50 different types of personal and company fuzzy name test scenarios. The overall average match effectiveness score is a powerful metric, but an analysis of the detailed results of the individual test scenarios can be just as enlightening.
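To illustrate how such a risk-weighted average might be computed, here is a minimal sketch. The scenario names, weightings, and scores below are hypothetical examples, not SQA Consulting’s actual test scenarios or benchmark values:

```python
# Minimal sketch of a risk-weighted average effectiveness score.
# Scenario names, weightings, and scores are illustrative only.
scenario_results = {
    # scenario: (risk weighting, match effectiveness %)
    "Exact match": (1.0, 100.0),
    "Transposed characters": (0.8, 92.5),
    "Missing given name": (0.6, 85.0),
    "Cross-match": (0.4, 60.0),
}

def weighted_effectiveness(results):
    """Average the per-scenario scores, weighted by risk significance."""
    total_weight = sum(w for w, _ in results.values())
    return sum(w * score for w, score in results.values()) / total_weight

print(round(weighted_effectiveness(scenario_results), 2))  # → 88.93
```

Because high-risk scenarios carry larger weightings, a poor score on one of them pulls the summary metric down more than the same score on a low-risk scenario would.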
Assessing the performance of specific fuzzy matching test scenarios
In the graph above, we see the detailed test results for each specific personal name fuzzy test scenario. Each scenario carries a weighting that reflects its risk significance; these weightings are used when calculating the average weighted score that appears in the earlier summary graph. In this line graph, each test scenario is plotted against the minimum, maximum, and average benchmark scores for that scenario.
The above example shows how this detailed graph is used to focus on specific aspects of screening performance, providing a valuable detailed analysis. In the example, most of the scores are above or close to the average benchmark line, but two scenarios stand out as scoring much worse than average: ‘Cross-match’ and ‘Title in Given Name’.
A Cross-match is where a personal name is presented in a company name field; here, screening has performed poorly at identifying personal names presented in this manner. ‘Title in Given Name’ is where a personal name includes a title within the name field, for example ‘Dr. John Smith’. Once again, the graph above shows a relatively poor score for this scenario.
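To illustrate why these scenarios can trip up a naive matcher, here is a sketch of a pre-processing step that strips a leading title from a name and applies personal name matching to company name fields as well. The title list and function names are hypothetical illustrations, not part of any specific screening product:

```python
import re

# Hypothetical, non-exhaustive list of titles that may precede a given name.
TITLES = {"dr", "mr", "mrs", "ms", "prof", "sir"}

def strip_title(name: str) -> str:
    """Remove a leading title so 'Dr. John Smith' matches 'John Smith'."""
    tokens = name.split()
    if tokens and re.sub(r"\W", "", tokens[0]).lower() in TITLES:
        tokens = tokens[1:]
    return " ".join(tokens)

def screen(field_value: str, watchlist: set) -> bool:
    """Match a field value against a watchlist of personal names.

    Applying this to company name fields as well as personal name
    fields is one way to handle the Cross-match scenario.
    """
    return strip_title(field_value).lower() in {n.lower() for n in watchlist}

watchlist = {"John Smith"}
print(screen("Dr. John Smith", watchlist))  # Title in Given Name → True
print(screen("Acme Trading Ltd", watchlist))  # ordinary company name → False
```

A real screening engine would of course apply fuzzy rather than exact comparison after normalisation; the point of the sketch is that both weak scenarios stem from where and how the name is presented, not from the name itself.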
The power of this detailed graph lies in its ability to identify specific weaknesses in screening, and therefore where remediation effort should be focused. Typically, the actions taken to address a specific weakness are one or both of the following:
1. Tuning the screening system’s configuration settings or matching algorithms, and enhancing list content, to address identified areas of weakness in screening effectiveness. The benchmark tests can be re-run after any tuning changes and fresh benchmark graphs produced, providing a direct measure of whether the enhancements have succeeded.
2. Undertaking a Data Quality Review of an organisation’s own customer data to assess the significance of a specific screening weakness. In the above example, if it was determined that personal names never appear in the organisation’s company name fields, and that its personal name fields never include a title, then from an assurance perspective we could state that the screening system has a known area of weakness but that the risk of it impacting the organisation’s screening is low, because the organisation’s data is not affected by the issue. For this reason, SQA recommends that a Data Quality Review be undertaken alongside a Customer Screening Effectiveness Review: the results of a comprehensive data quality analysis can often inform the assessment of screening effectiveness.
Contact us at SQA Consulting to see how we may assist you in developing the skills needed to implement these strategies.