AI is making financial fraud easier, Treasury Department says

AI is redefining risks to the cybersecurity and stability of financial firms, making it easier for fraudsters to carry out more complex and persistent attacks, according to a report from the Treasury Department.

With the help of large language models (LLMs) and other AI-based tools, threat actors can carry out more targeted phishing and business email compromise attacks, quickly develop new malware or variants of existing malware, and impersonate employees and customers to gain access to accounts and transfer funds to themselves, said the report, which was released Wednesday.

“Artificial intelligence is redefining cybersecurity and fraud in the financial services sector, and the Biden Administration is committed to working with financial institutions to utilize emerging technologies while safeguarding against threats to operational resiliency and financial stability,” said Under Secretary for Domestic Finance Nellie Liang. The report, mandated by an executive order issued last year, draws on in-depth interviews with 42 companies in the financial services and technology sectors.

The report also found a gap between large and small financial institutions in deploying their own AI systems for fraud prevention. While large institutions have the expertise and internal data required to develop and train large models in-house, smaller institutions lack those resources, the report said. However, the Bank Policy Institute (BPI) and American Bankers Association (ABA) are “making efforts to close the fraud information-sharing gap across the banking sector,” according to the report.

Vasu Jakkal, corporate vice president of security, compliance, identity, and management at Microsoft, previously told Quartz that cyberattackers are using AI to become more productive, including by conducting reconnaissance to find vulnerabilities in companies and by improving their coding skills.

Echoing the Treasury Department’s report, Jakkal said cyberattackers are using LLMs to run disinformation campaigns, relying on AI-generated content, including images and videos, to make those campaigns more believable.

“It fundamentally boils down to finding information and directly launching these attacks to strengthen their own positions of influence and get economic advantage,” Jakkal said of the nation-state and financial crime actors targeting companies in cyberattacks.
