I’m not talking about a pole and hook in the traditional sense. Phishing, however, is much like fishing. Unethical developers put out posts, text messages, or hooks inside children’s games to gather users’ personal data, ultimately designed to capture their dollars. Phishing, now powered by AI, is getting more sophisticated every day. For this reason, we need to sit next to our children and discuss what looks suspicious in their inboxes. Parents and teachers are aware of phishing, but children are even more vulnerable if they are not taught what to look for and do not understand the motives of those who phish.
“Phishing usually occurs when a fraudster impersonates a business or individual and tries to obtain personally identifiable information (PII) like passwords, credit card details, and physical addresses.” (Neuron)
Because AI is making phishing more difficult to identify, it’s important to educate ourselves and then prepare our children to spot it. Phishermen are lurking inside their games, and if children have personal email accounts, phishing messages will appear in their inboxes.
The following article, presented by Neuron, focuses on businesses, but it gives parents and teachers valuable information about what to be alert to and what to share with children so they are prepared when unfamiliar emails show up in their inboxes (Click Here to read the article in full).
We’ve all seen phishing emails asking for personal information out of the blue. Historically, these were easy to spot with their bad formatting and spelling mistakes. But with artificial intelligence, traditional tells are removed, making it harder for even the savviest among us to catch phishing attempts.
If you work in fraud, risk, or trust and safety, you’re likely concerned with:
- Being prepared: You want to make sure you can manage the damaging effects of phishing scams to protect your company and customers.
- Understanding more sophisticated phishing: With AI, phishing campaigns can sound much more credible and innocuous with much less effort on a fraudster’s part. As much as possible, you want to be able to prevent known bad actors from getting onto your platform in the first place.
- Knowing which tools to use: If fraudsters are using AI, you might need more sophisticated tools to protect your customers.
- With AI phishing, bad actors can use LLMs to remove poor grammar, spelling mistakes, and other idiosyncrasies to sound more like a native speaker, luring victims into a false sense of security.
- To protect users and employees, you’ll want everyone to be aware of how sophisticated phishing attempts have become so they can remain vigilant against scams.
- LLMs make it easy to continuously change copy, rendering the old approach of blocking specific strings of words and sentences obsolete. You’ll need a more advanced fraud prevention tool that looks at other signals about the fraudster themselves, such as IP address, geolocation, and user ID.
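For readers curious how that shift works in practice, here is a minimal sketch of screening by sender signals rather than message text. Everything in it is illustrative: the blocklists, country code, user ID, and threshold are made-up placeholders, not part of any real fraud-prevention product.

```python
# Illustrative signal-based screening: instead of matching message
# wording (which an LLM can endlessly rewrite), score signals tied
# to the sender themselves.

# Hypothetical example data; a real system would pull these from
# threat-intelligence feeds and session logs.
BLOCKED_IPS = {"203.0.113.7"}
HIGH_RISK_GEOS = {"XX"}              # placeholder country code
KNOWN_BAD_USER_IDS = {"fraudster_42"}

def risk_score(ip: str, geo: str, user_id: str) -> int:
    """Add one point per risky signal; higher means riskier."""
    score = 0
    if ip in BLOCKED_IPS:
        score += 1
    if geo in HIGH_RISK_GEOS:
        score += 1
    if user_id in KNOWN_BAD_USER_IDS:
        score += 1
    return score

def should_block(ip: str, geo: str, user_id: str, threshold: int = 2) -> bool:
    """Block when enough independent signals look suspicious."""
    return risk_score(ip, geo, user_id) >= threshold

print(should_block("203.0.113.7", "XX", "new_user"))   # two risky signals
print(should_block("198.51.100.9", "US", "new_user"))  # no risky signals
```

The point of the design is that the fraudster can rewrite their copy endlessly, but changing their IP, location, and account history is much harder, so those signals stay useful even against AI-polished messages.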