There's no doubt about it: ChatGPT, the AI chatbot from OpenAI, is extremely popular and has seemingly single-handedly thrust chatbots and AI language models into the mainstream.
But with this popularity come some side effects. For one: ChatGPT accounts are now a prime target for hackers.
In a recently released report, researchers at the cybersecurity firm Group-IB say they have found more than 101,000 compromised ChatGPT login credentials for sale on dark web marketplaces over the past year.
ChatGPT crossed 100 million users in February, just months after it first launched to the public. As the AI chatbot's popularity has grown over the months, so has the number of stolen login credentials for ChatGPT accounts. Group-IB says it found more than 26,800 ChatGPT credentials last month, a peak since it began tracking the data.
Group-IB researchers say the majority of these stolen ChatGPT credentials were harvested by the popular Raccoon malware. Raccoon works like most basic info-stealing malware: it lifts data from a target's computer after the user downloads the software, which is often disguised as an app or file the user actually wants. Raccoon is also easy to use and is sold as a dependable, maintained subscription service, which makes it a popular choice among hackers.
There are several security concerns unique to having a ChatGPT account compromised. For one, OpenAI released a feature a few months ago that saves a user's chat history. Many companies, like Google, warn their employees not to input sensitive information into ChatGPT because that data could be used to train the AI language models; the fact that they need to issue that warning suggests it does happen. A hacker with access to a user's ChatGPT history can see all the sensitive information that has previously been entered into the chatbot.
"Many enterprises are integrating ChatGPT into their operational flow," said Group-IB's Head of Threat Intelligence Dmitry Shestakov in a statement. "Employees enter classified correspondences or use the bot to optimize proprietary code. Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials."
In addition, if a user reuses their password across multiple platforms, a hacker with access to their ChatGPT account could soon access those other accounts as well. And if the target pays for ChatGPT's premium plan, ChatGPT Plus, they may also be unwittingly footing the bill for others to use the paid service.
ChatGPT users should watch for unauthorized access to their accounts and make sure they don't reuse their ChatGPT password on other platforms.