Hackers Exploit Interest in OpenAI’s ChatGPT to Spread Malware, Meta Reveals
As interest in the artificial intelligence-powered tool ChatGPT continues to surge, hackers have begun exploiting that interest to gain access to people’s devices, Facebook owner Meta has revealed in a new security report. The company said its security team has found hackers offering purported ChatGPT-based tools through browser extensions and online app stores, bundled with malware designed to give attackers access to users’ devices.
According to Guy Rosen, Meta’s chief information security officer, scammers have moved quickly to exploit interest in ChatGPT, likening the phenomenon to the earlier surge in cryptocurrency scams. Since March alone, Meta said it has blocked the sharing of more than 1,000 malicious web addresses that claimed to be linked to ChatGPT or related tools.
While some of these tools appear to include working ChatGPT features, they also carry malicious code designed to infect users’ devices. Meta said it has investigated these malware strains and taken action against them, aiming to stop users from being tricked into installing software that merely poses as an AI tool.
The latest wave of malware campaigns shows bad actors latching onto hot-button issues and popular topics, in this case the rising popularity of generative AI, to get people’s attention. As a result, Meta and other security researchers are urging users to exercise caution before downloading software that claims to offer ChatGPT-related features.
#AI #Cybersecurity #Malware #ChatGPT #OpenAI #OnlineSecurity #Cybercrime #CyberAttack #CyberThreat #DataSecurity #Privacy #TechSecurity #Meta #Facebook