The golden age of scammers: AI-powered phishing
AI technology is transforming phishing attacks, allowing scammers to send personalized emails at scale. The rise of AI phishing has led to a 1,265% surge in malicious emails. Organizations must implement robust security measures to combat this evolving threat.
AI technology is revolutionizing phishing attacks, enabling scammers to send more convincing and personalized emails at scale. By leveraging generative AI tools like WormGPT, scammers can automate mass campaigns, spoof domains, and access sensitive data with ease. The rise of AI phishing has led to a significant increase in malicious emails, with cybersecurity firm SlashNext reporting a 1,265% surge since 2022. Traditional phishing attacks rely on manual social engineering, while AI-powered attacks use machine learning to personalize messages based on extensive data analysis. To defend against these evolving threats, organizations are advised to implement multi-layered security measures, learn to recognize AI phishing attempts, and prioritize sender reputation. As scammers continue to exploit AI advancements, staying vigilant and proactive about email security is crucial to safeguarding sensitive information and preventing financial losses.
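As one concrete illustration of "prioritize sender reputation", a mail pipeline can inspect the SPF/DKIM/DMARC verdicts that the receiving server records in the Authentication-Results header. The following is only a minimal sketch using Python's standard library; the sample message, domains, and verdicts are made up for illustration, and a real deployment would rely on the mail server's own authentication checks.

```python
import email
import email.policy
import re

# Hypothetical raw message; in practice this would come from your mail store.
RAW_MESSAGE = """\
Authentication-Results: mx.example.com;
 spf=pass smtp.mailfrom=sender.example;
 dkim=pass header.d=sender.example;
 dmarc=fail header.from=sender.example
From: "Billing" <billing@sender.example>
To: you@example.com
Subject: Invoice

Hello.
"""

def auth_verdicts(raw: str) -> dict:
    """Pull spf/dkim/dmarc results out of Authentication-Results headers."""
    msg = email.message_from_string(raw, policy=email.policy.default)
    verdicts = {}
    for header in msg.get_all("Authentication-Results", []):
        for mech, result in re.findall(r"\b(spf|dkim|dmarc)=(\w+)", str(header)):
            verdicts[mech] = result
    return verdicts

results = auth_verdicts(RAW_MESSAGE)
print(results)  # {'spf': 'pass', 'dkim': 'pass', 'dmarc': 'fail'}
if not results or any(r != "pass" for r in results.values()):
    print("sender reputation check failed: flag for review")
```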
Related
Hackers 'jailbreak' powerful AI models in global effort to highlight flaws
Hackers exploit vulnerabilities in AI models from OpenAI, Google, and xAI, sharing harmful content. Ethical hackers challenge AI security, prompting the rise of LLM security start-ups amid global regulatory concerns. Collaboration is key to addressing evolving AI threats.
Bots Compose 42% of Overall Web Traffic; Nearly Two-Thirds Are Malicious
Akamai Technologies reports 42% of web traffic is bots, 65% malicious. Ecommerce faces challenges like data theft, fraud due to web scraper bots. Mitigation strategies and compliance considerations are advised.
'Skeleton Key' attack unlocks the worst of AI, says Microsoft
Microsoft warns of "Skeleton Key" attack exploiting AI models to generate harmful content. Mark Russinovich stresses the need for model-makers to address vulnerabilities. Advanced attacks like BEAST pose significant risks. Microsoft introduces AI security tools.
I Received an AI Email
A blogger, Tim Hårek, received an AI-generated email from Raymond promoting Wisp CMS. Tim found the lack of personalization concerning, leading him to question the ethics of AI-generated mass emails.
Google Researchers Publish Paper About How AI Is Ruining the Internet
Google researchers warn about generative AI's negative impact on the internet, creating fake content blurring authenticity. Misuse includes manipulating human likeness, falsifying evidence, and influencing public opinion for profit. AI integration raises concerns.
- Many are surprised at the slow rollout of AI phishing, despite the technology being available for some time.
- There is a call for better security measures, such as two-factor authentication and more transparent email and browser interfaces (a minimal TOTP sketch follows this list).
- Concerns are raised about the potential for AI to create highly convincing phishing attacks, including deepfakes and voice impersonations.
- Some suggest that phone carriers should block foreign calls to reduce the risk of phone-based phishing.
- There is a recognition that AI phishing will make it harder to rely on traditional heuristics like bad grammar to detect scams.
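To make the two-factor authentication point above concrete, here is a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238 that most authenticator apps implement, using only Python's standard library. The base32 secret shown is a placeholder for illustration, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # counter as big-endian 64-bit int
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Placeholder secret for demonstration only.
print(totp("JBSWY3DPEHPK3PXP"))
```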
The technology for customised text-based attacks at scale has been available at least since Llama was open-sourced. The tech for custom voice and image-based attacks is basically there too with Whisper/Tortoise and Stable Diffusion, though clearly more expensive to render. I'm honestly not sure why social networks aren't being leveraged more to target and spoof individuals, especially elderly people.
Tailored attacks impersonating text or voice messages from close contacts and family members should be fairly common, and yet they're not. Robo-calls that carry out a two-way conversation while convincingly impersonating bank or police officials should be everywhere. Yet the only spam calls I ever receive are from Indian call centres or static messages using decades-old synthesised voice tech.
Always has been.
Tbh the browser/email client makers are complicit in these phishing attempts by hiding the URLs and the actual email addresses.
Put them back!
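The commenter's point is easy to demonstrate: an email's display name and its actual address are separate fields, and a client that shows only the former hides exactly the information a user needs to spot spoofing. A minimal sketch with Python's standard library, using a made-up address for illustration:

```python
from email.utils import parseaddr

# Made-up header value: a reassuring display name in front of an unrelated address.
header_value = '"Your Bank Support" <support@secure-bank-login.example>'

display_name, real_address = parseaddr(header_value)
print(display_name)   # Your Bank Support
print(real_address)   # support@secure-bank-login.example

# A client that shows only the display name hides the mismatch a user would
# need to see. Comparing the real domain against the expected one exposes it.
expected_domain = "bank.example"
actual_domain = real_address.rsplit("@", 1)[-1]
print("suspicious sender" if actual_domain != expected_domain else "domain matches")
```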
https://news.ycombinator.com/item?id=40942307
Imagine old people getting phone calls from frantic children. They won't know real from fake. Add tech like this to SIM forgery, and we will devolve from a high-trust society to a no-trust society.
A common heuristic to look out for is "badly"-written/spoken communication. The "AI vs Actual Indian" comment and Nigerian prince emails stand out for most people, but they still ended up working well enough to become this widespread.
You just need to employ some critical thinking for most external communication now. It is no different from some highly motivated scammers doing it the old-fashioned way. At the end of the day, we are trying to replicate the success of some native-speaking teens (https://news.ycombinator.com/item?id=32959001).
Can someone show me a modern OS that would install software by clicking a link?
Because you know how to do that, and it's so much easier than helping them when they get hacked.