A report by Group-IB revealed that hackers had leaked over 100,000 ChatGPT credentials on the dark web, compromising the confidentiality of users' chats. This, together with the widely reported $5,000 fine imposed on US lawyers whose misuse of ChatGPT led them to rely on non-existent cases, is a reminder of the need to consider the risks of AI. We identified some of these issues in our March 2023 Risk Update.

A report by Georgetown University’s Center for Security and Emerging Technology (CSET) identifies further issues. AI’s vulnerabilities may lie not in its algorithms but in the data on which it is trained, which may be subjected to malicious inputs. When something goes wrong, it may be difficult to identify the cause, and how the system works may be opaque even to its developers.

Europol reported that AI can be used to commit fraud, impersonation and social engineering, and cybercrime, and to spread disinformation. It is also an ideal tool for phishing, enabling those with only basic English skills to create fraudulent emails that appear highly realistic and convincing.

A further concern is that, in time, AI-enabled spoofing may undermine the integrity of the biometric client due diligence products used in AML compliance.
