Hackers use ChatGPT to spread malware on Facebook, Instagram, and WhatsApp

by admin

Many people fear that AI-powered chatbots such as ChatGPT could one day be used to create malware. For now, the biggest concern is ChatGPT’s popularity: more and more hackers are launching impersonation attacks, passing off malicious tools as ChatGPT to steal information from unsuspecting victims.

Researchers at Meta, Facebook’s parent company, warned on Wednesday, May 3, that operators of malware families including Ducktail and NodeStealer are now posing as ChatGPT and similar tools, targeting Internet users through malicious browser extensions, ads, and various social media platforms. Their goal is to run unauthorized ads from compromised business accounts.

Meta said it has detected and disrupted these malware operations, including previously unreported malware families, and that it has seen hackers adapt quickly in response to its discoveries.

“We know that the malicious groups behind these malware campaigns are extremely persistent, and we expect them to keep trying to devise new methods and tools to survive disruptions on whatever platform they operate,” wrote Duc H. Nguyen and Ryan Victory of Meta in a blog post published on Wednesday. “That’s why our security teams tackle malware, one of the most persistent threats across the Internet, as part of our defense-in-depth approach, making multiple efforts at once.”

Multiple malware groups

Since March, Meta has identified dozens of malware strains that pose as ChatGPT and similar tools to compromise Internet accounts.

Duc H. Nguyen and Ryan Victory added: “In one case, we saw malicious browser extensions available in official web stores that claimed to offer ChatGPT-based tools. The attackers then promoted these malicious extensions on social media and through sponsored search results to trick people into downloading malware. In fact, some of these extensions did include working ChatGPT functionality alongside the malware, presumably to avoid raising suspicion in the official web stores.”

Meta says it has blocked more than 1,000 malicious ChatGPT-themed URLs across its platforms and has shared those URLs with industry partners.

According to TechCrunch, the Vietnam-based Ducktail malware has been targeting Facebook users since 2021. It now impersonates ChatGPT to steal browser cookies and hijack Facebook sessions, giving attackers access to the victim’s account information on the platform, including general account details, location data, and two-factor authentication codes.

NodeStealer malware

In January, Meta researchers discovered the information-stealing malware codenamed NodeStealer. It allows hackers to steal browser cookies to hijack accounts on Facebook, as well as Gmail and Outlook.

“We identified NodeStealer early, within two weeks of its deployment, and took action to disrupt it and help those who may have been targeted recover their accounts,” explained Duc H. Nguyen and Ryan Victory. “As part of this effort, we sent takedown requests to third-party registrars, hosting providers, and application services such as NameCheap, which the attackers relied on to distribute the malware and run their operations. These actions successfully shut down the malware.”

Meta researchers said that they have not noticed any new malware samples from the NodeStealer family since February 27, 2023. However, they continue to monitor for potential future activity.

The threat of generative AI

Researchers from cybersecurity firm BlackFog have also warned about the threat posed by ChatGPT, including the tool’s ability to generate code that can be used for malicious purposes. The company is currently monitoring how generative AI is being used as a lure on social media.

“As BlackFog has shown, ChatGPT and other AI tools can be used very effectively for data mining, including writing the programs needed for it,” explained Darren Williams, CEO and founder of BlackFog, in an email.

“These tools are now being used to create phishing sites and websites designed to steal credentials and install malware on devices,” added Darren Williams. He also warned that ChatGPT-related threats are likely to escalate, so cybersecurity efforts must keep pace with this emerging technology.

“Traditional defensive methods, endpoint detection and response, and antivirus tools have proven extremely ineffective against these modern ransomware variants,” added Darren Williams.

A strong defense will still be necessary, and that will require Internet users to exercise due diligence so they do not fall into the trap of these impersonation campaigns. “The only real way to ensure your data is protected is to focus on new technologies that prevent data leaks in the first place,” continued Darren Williams. “If hackers cannot steal the data, they cannot blackmail the victim and therefore have nothing to gain.”

Translated article from Forbes US – Author: Peter Suciu

<< Also read: ChatGPT: data at the heart of the artificial intelligence process >>
