Whenever I start preparing to share another set of tech tips, I always hope it comes at a time free of alarming news about a recent security breach or some other dire cybersecurity issue.
However, like clockwork, another breach happens. So, before diving into the topic of AI and deepfakes, let's address the latest significant data breach.
This time, it was National Public Data, a data aggregator used for background checks, which confirmed its computer systems had been compromised. The hacking group USDoD claims to have stolen the personal records of 2.9 billion people (DeLetter, 2024), including names, addresses, and Social Security numbers. If you want to find out whether your personal information was part of that breach, two sites can help: NPD Breach Check - Pentester.com and Have I Been Pwned: Check if your email has been compromised in a data breach. Now, on to our regularly scheduled program.
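For readers who want to go a step further, Have I Been Pwned also offers a free Pwned Passwords API that uses a k-anonymity scheme: you send only the first five characters of your password's SHA-1 hash, so the full password never leaves your machine. Below is a minimal Python sketch of that check (it assumes the third-party requests package is installed); it is an illustration, not a GreenStone tool:

```python
import hashlib
import requests  # third-party: pip install requests

def breach_count(password: str) -> int:
    """Check a password against Have I Been Pwned's Pwned Passwords API.

    Uses the k-anonymity range endpoint: only the first 5 hex characters
    of the SHA-1 hash are sent, never the password itself. Returns how
    many times the password appears in known breach data.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]

    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()

    # Each response line looks like "HASH_SUFFIX:COUNT"
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = breach_count("password123")  # deliberately weak demo password
    print(f"Seen {hits} times in breaches" if hits else "Not found in known breaches")
```

A hit does not necessarily mean your account was breached, only that the password itself has appeared in leaked data somewhere and should be retired.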
A few years ago, terms like Artificial Intelligence (AI) or Large Language Model (LLM) would have been unfamiliar to many people. Unless, that is, you were a fan of the Terminator movies and equated AI with Skynet, but I digress. Fast-forward a few years, and AI and LLMs are all the rage. It seems like every new product offers some fancy AI feature, and even Apple, Microsoft, and Google have introduced AI into the tools we use every day, changing the way we use technology. While AI is not as scary as Skynet, the rapid advancements have made it easier than ever to create highly convincing fake content, and the really scary part is that these technologies are only in their infancy.
Deepfakes are hyper-realistic videos or audio recordings that mimic real people, right down to their voices. Products like ChatGPT and Microsoft Copilot can generate human-like text, blurring the line between reality and fiction. While these technologies offer numerous benefits, such as quickly summarizing information or explaining complex topics, they also pose significant risks. Threat actors are weaponizing these AI tools to commit fraud, steal financial information, and spread misinformation.
Imagine this scenario: you receive a video call or message from someone who appears and sounds identical to your financial advisor, requesting sensitive information such as your social security number. Similarly, imagine getting an AI-generated email that lacks the typical signs of phishing, like misspellings or grammatical errors, and instructs you to transfer money to a fraudulent account. How would you evaluate and respond to these types of communications? As AI technology continues to evolve, the threats are becoming more prevalent, making it more crucial than ever to safeguard yourself.
How Deepfakes and AI Pose Risks to Your Financial Security
Threat actors are finding new ways to exploit financial services by using deepfakes and AI-generated content to impersonate trusted individuals, manipulate communications, and trick people into making harmful financial decisions. Here is how these technologies are being used to target customers like you:
• Impersonating Financial Representatives - Criminals are using deepfake technology to create highly convincing videos or phone calls that impersonate bank officials, financial advisors, or company executives. These deepfakes can be used to instruct you to transfer money, share sensitive account details, or approve financial transactions. The AI-generated content looks and sounds so real that even the most vigilant person can be deceived. In February, a multinational firm was tricked into paying out $25 million to a threat actor using this type of deepfake technology (Chen, 2024).
• Phishing Emails - Tools like ChatGPT can generate incredibly convincing phishing emails that mimic legitimate financial communications, down to the nuances of a person's tone and writing style. Threat actors use these emails to trick you into clicking malicious links, providing account credentials, or transferring funds to fraudulent accounts. The level of realism in these messages makes the fraud much harder to spot; one simple defense, checking where a link actually points before you click, is illustrated in the sketch after this list. Since 2022, malicious phishing emails have increased by 1,265% and credential phishing by 967% (Violino, 2023).
• Identity Theft Through AI - Deepfakes and AI-generated content can also be used to steal your identity by creating fake videos or audio of you interacting with financial institutions. Hacking organizations have already succeeded in stealing biometric data in some cases. These forged interactions can be used to open new accounts in your name, apply for loans, or authorize fraudulent transactions, all without your knowledge.
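As promised above, here is one concrete habit that blunts even flawlessly written phishing email: check where a link actually points before clicking. The hypothetical Python helper below (standard library only) compares a link's real hostname against a short allow-list; the domain names are placeholders for illustration, not a published list:

```python
from urllib.parse import urlparse

# Hypothetical allow-list for illustration; substitute the real domains you trust.
TRUSTED_DOMAINS = {"greenstonefcs.com", "haveibeenpwned.com"}

def link_looks_trustworthy(url: str) -> bool:
    """Return True only if the URL's actual hostname is a trusted domain
    (or a subdomain of one). The display text of a link can say anything;
    the hostname is what your browser will really visit."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

# Classic phishing tricks: look-alike domains and user@host confusion.
print(link_looks_trustworthy("https://greenstonefcs.com/login"))         # True
print(link_looks_trustworthy("https://greenstonefcs.com.evil.example"))  # False
print(link_looks_trustworthy("http://login@203.0.113.7/greenstone"))     # False
```

The same principle applies without any code: hover over a link (or long-press on mobile) and read the full hostname from right to left before you click.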
Steps to Help Safeguard Against Deepfakes and AI Risks
As these risks continue to advance, it is essential to adopt stronger cybersecurity practices to protect your personal financial information. Here are specific steps you can take to safeguard your accounts and financial assets from AI-enabled fraud:
1. Verify Requests for Financial Information - Always be cautious when receiving requests for sensitive financial information, especially if they come through unexpected channels like email or video calls. GreenStone will never ask you to reply to an email message to update your confidential information or to provide a PIN, account number, Social Security number, username, password, or other similar information. We recommend that customers never respond to any email or call that asks for such information, even if it appears to be from GreenStone or another financial institution. If you are unsure of the authenticity of a communication, contact us to confirm. If you suspect fraud has occurred in connection with your GreenStone accounts, please let us know immediately and we will promptly assist you in resolving the matter.
2. Use Strong Authentication Methods - If you could do one thing today to protect your personal financial information, it would be enabling multi-factor authentication (MFA) everywhere you can, including email, online banking, and financial accounts. Our customer portal, My Access, offers MFA by sending a code via text or phone call to your device to verify it's you. MFA adds an extra layer of security by requiring a second form of identification beyond your password, such as a code sent to your phone or generated by an authenticator app (a sketch after this list shows how those rotating codes are typically produced). MFA can help prevent unauthorized access even if your login credentials are compromised.
3. Stay Skeptical of Unusual or High-Pressure Requests - Threat actors often create a sense of urgency to trick you into making quick decisions. Be wary of any financial communication that pressures you to act immediately, especially if it involves transferring money or disclosing sensitive information. Take the time to double-check and verify the request. Trust but verify!
4. Monitor Your Financial Accounts Regularly - Keep a close eye on your bank statements, credit card transactions, and investment accounts for any unusual activity. Set up account alerts to notify you of suspicious transactions, withdrawals, or changes to your account information. Additionally, placing a freeze on your credit file prohibits consumer reporting agencies from releasing your credit report without your express authorization. For more information, see Protecting Your Information | GreenStone FCS.
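To demystify the rotating codes mentioned in step 2: authenticator apps (and many MFA systems) derive them from a shared secret and the current time using the TOTP algorithm standardized in RFC 6238. The Python sketch below is a minimal illustration of that math; it is not a description of how My Access works internally, and the secret shown is a well-known demo value, not a real key:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Generate a time-based one-time password (RFC 6238), the same kind
    of rotating code an authenticator app displays."""
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // period              # 30-second time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# "JBSWY3DPEHPK3PXP" is a widely used demo secret, not a real account key.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because both sides compute the code from the same secret and the same clock, a stolen password alone is not enough to log in, which is exactly why enabling MFA is the single highest-impact step on this list.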
With the advancement of deepfake and AI technologies, the threats to financial security are increasingly significant. Threat actors are using these tools to generate believable counterfeit communications, impersonate reputable financial officials, and deceive customers into disclosing sensitive data or approving fraudulent activities.
Protecting your personal financial information and assets from the new surge of AI-driven fraud involves staying informed, implementing robust cybersecurity measures, and verifying any suspicious financial requests.