CBA is set to pilot referring abusive transaction descriptions to NSW Police, with the victim's consent.
The bank said that if its machine-learning model detects abusive descriptions in transactions sent to NSW customers, it will seek the recipient’s consent to report it to police.
CBA’s detection model has been scanning the descriptions of transactions its customers receive since 2021; the bank’s AI Labs team developed the capability after it emerged in 2020 that more than 8000 customers had been harassed through low-value deposits to their accounts.
In a statement, the bank said its models detect close to 400,000 CommBank app and NetBank transactions containing offensive language each year, sent by around 1500 perpetrators.
Flagged transactions are manually reviewed by CBA’s anti-financial abuse team, Next Chapter, which can block messages and suspend perpetrators from digital banking services for three months, or permanently for repeated breaches of the bank’s acceptable use policy.
CBA said that from mid-September, Next Chapter would pilot asking the recipients of abusive messages whether they want the bank to file a police report on their behalf.
“Technology-facilitated abuse continues to be a serious problem, and this collaboration with NSW Police enables us to act – not only in supporting victims but in the prevention of abuse,” CBA group customer advocate Angela Macmillan said.
The other ‘big four’ banks have also turned to advanced analytics and algorithms to detect and block the use of low-value transaction descriptions for harassment, but Macmillan said this was the first time the industry had enabled a more streamlined, seamless path to police intervention.
“This is a first-of-its-kind initiative between the banking industry and law enforcement, and we hope this paves the way for more effective collaboration in the fight against domestic and financial abuse,” Macmillan said.
CBA senior data scientist Dr Anna Leontjeva, who won the 2022 AI in Finance Award for her work on the project, told Digital Nation Australia that perpetrators resort to sending low-value transactions because they “are blocked on all the other platforms, and they tried to find a way to send their messages.”
Next Chapter also delinks victims’ bank accounts from PayID so that perpetrators can no longer use the victim’s email address, mobile number or ABN to send them abusive transactions.
CBA’s AI Labs turned to machine learning in 2021 after an earlier filter system that relied on certain keywords and phrases missed a number of abusive messages.
The bank said perpetrators circumvented the filter by using “symbols instead of letters” and by spelling “words differently”.
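By way of illustration, a naive keyword filter of the kind described might look like the sketch below; the blocklist and example messages are hypothetical, not CBA's actual rules.

```python
# Illustrative only: a simple blocklist filter of the kind CBA reportedly
# replaced. Terms and examples are invented for demonstration.
BLOCKLIST = {"idiot", "hate"}

def keyword_filter(description: str) -> bool:
    """Flag a transaction description if any token matches the blocklist."""
    tokens = description.lower().split()
    return any(token in BLOCKLIST for token in tokens)

print(keyword_filter("I hate you"))   # True  - exact keyword is caught
print(keyword_filter("I h@te you"))   # False - '@' in place of 'a' evades it
print(keyword_filter("I haate you"))  # False - altered spelling evades it
```

As the last two calls show, trivial character substitutions and misspellings defeat exact matching, which is the gap the machine-learning approach was built to close.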
CBA’s AI Labs shed light on how its model detects abusive low-value transaction descriptions in a research paper published on arXiv [pdf] in March this year.
At the transaction level, the bank looks at “specifics” such as dollar amount and frequency, as well as some “simple text” analysis of the free-text field, including the “length of the transaction description, upper/lower/mixed case flags, number of words, length of the longest word in the transaction description, [and] does the message contain special characters/numbers”.
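A minimal sketch of that kind of free-text feature extraction follows; the function and feature names are illustrative, not CBA's actual schema.

```python
import re

def transaction_text_features(description: str) -> dict:
    """Compute simple text features like those described in the paper.
    Feature names here are assumptions for illustration."""
    words = description.split()
    return {
        "desc_length": len(description),
        "is_upper": description.isupper(),
        "is_lower": description.islower(),
        "is_mixed_case": not description.isupper() and not description.islower(),
        "num_words": len(words),
        "longest_word_len": max((len(w) for w in words), default=0),
        "has_special_chars": bool(re.search(r"[^A-Za-z0-9\s]", description)),
        "has_digits": bool(re.search(r"\d", description)),
    }

# Hypothetical abusive-style description with obfuscated spelling
print(transaction_text_features("y0u c@nt hide from ME"))
```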
The bank also uses three trained language models to detect “emotion, toxicity and sentiment” in the descriptions.
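The paper does not name the language models CBA uses; as an assumption for illustration, the same three signals can be approximated with publicly available Hugging Face classifiers, as in the sketch below.

```python
from transformers import pipeline

# Public stand-ins for CBA's proprietary models (an assumption; the bank's
# actual emotion, toxicity and sentiment models are not published).
emotion = pipeline("text-classification",
                   model="j-hartmann/emotion-english-distilroberta-base")
toxicity = pipeline("text-classification", model="unitary/toxic-bert")
sentiment = pipeline("sentiment-analysis")  # default SST-2 DistilBERT model

desc = "you will regret this"  # hypothetical transaction description
scores = {
    "emotion": emotion(desc)[0],    # e.g. {'label': 'anger', 'score': ...}
    "toxicity": toxicity(desc)[0],
    "sentiment": sentiment(desc)[0],
}
print(scores)
```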
From there, it aggregates its findings up to a “relationship” level – between an abuser and a potential victim.
If the abuser has more than one victim, the model flags “two [or more] distinct relationships of high risk”.
The bank also checks whether the potential victim has replied or not.
This is all fed into a random forest model that ultimately classifies each relationship as “highly abusive or non-abusive”.
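A minimal sketch of this final stage is shown below, with invented relationship-level features and toy data; the paper's actual feature set and training data are not public.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features aggregated per sender-recipient relationship:
# [n_transactions, mean_toxicity, max_toxicity,
#  n_flagged_descriptions, recipient_ever_replied]
X_train = np.array([
    [12, 0.81, 0.97,  9, 0],  # labelled highly abusive
    [ 3, 0.05, 0.10,  0, 1],  # labelled non-abusive
    [25, 0.65, 0.99, 14, 0],  # labelled highly abusive
    [ 8, 0.02, 0.07,  0, 1],  # labelled non-abusive
])
y_train = np.array([1, 0, 1, 0])  # 1 = highly abusive, 0 = non-abusive

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score a new relationship on the same invented features
new_relationship = np.array([[15, 0.7, 0.95, 10, 0]])
print(clf.predict_proba(new_relationship))  # [P(non-abusive), P(abusive)]
```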