Opened Feb 12, 2025 by Alejandrina Leblanc (@alejandrinaleb)

Decrypt's Art, Fashion, And Entertainment Hub


A hacker said they stole personal details from millions of OpenAI accounts, but researchers are skeptical, and the company is investigating.

OpenAI says it's investigating after a hacker claimed to have stolen login credentials for 20 million of the AI firm's user accounts, and put them up for sale on a dark web forum.

The pseudonymous breacher posted a cryptic message in Russian advertising "more than 20 million access codes to OpenAI accounts," calling it "a goldmine" and offering prospective buyers what they claimed was sample data containing email addresses and passwords. As reported by Gbhackers, the full dataset was being offered "for just a couple of dollars."

"I have more than 20 million gain access to codes for OpenAI accounts," emirking wrote Thursday, according to an equated screenshot. "If you're interested, reach out-this is a goldmine, and Jesus agrees."

If genuine, this would be the third major security incident for the AI company since the release of ChatGPT to the general public. Last year, a hacker gained access to the company's internal Slack system. According to The New York Times, the hacker "stole details about the design of the company's A.I. technologies."

Before that, in 2023, an even simpler bug involving jailbreaking prompts allowed hackers to obtain the private data of OpenAI's paying customers.

This time, however, security researchers aren't even sure a hack happened. Daily Dot reporter Mikael Thalen wrote on X that he found invalid email addresses in the supposed sample data: "No evidence [suggests] this supposed OpenAI breach is legitimate. At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well."

No evidence this supposed OpenAI breach is legitimate.

Contacted every email address from the purported sample of login credentials.

At least 2 addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well. https://t.co/yKpmxKQhsP

- Mikael Thalen (@MikaelThalen) February 6, 2025
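The kind of check Thalen describes is easy to approximate. The sketch below is not from the article; the file name and helper function are hypothetical. It screens a leaked "email:password" sample for addresses that cannot be deliverable, using a syntax check plus an MX-record lookup via the third-party dnspython package. A sample padded with addresses that fail these tests is a strong hint the "breach" is junk.

```python
# Minimal sanity check for a leaked credential sample: is each email address
# well-formed, and does its domain publish MX records at all?
# Requires the third-party "dnspython" package (pip install dnspython).
import re

import dns.exception
import dns.resolver

EMAIL_RE = re.compile(r"^[^@\s]+@([^@\s]+\.[^@\s]+)$")


def looks_deliverable(address: str) -> bool:
    """True if the address is well-formed and its domain publishes MX records."""
    match = EMAIL_RE.match(address)
    if not match:
        return False
    try:
        # Raises NoAnswer/NXDOMAIN (both DNSException subclasses) if no MX exists.
        dns.resolver.resolve(match.group(1), "MX")
        return True
    except dns.exception.DNSException:
        return False


if __name__ == "__main__":
    # "sample.txt" is a hypothetical file with one "email:password" pair per line.
    with open("sample.txt", encoding="utf-8") as fh:
        for line in fh:
            email = line.split(":", 1)[0].strip()
            if email:
                print(f"{email}\t{'plausible' if looks_deliverable(email) else 'INVALID'}")
```

This only proves an address could receive mail; it says nothing about whether the paired password is real, which is why researchers also look at the seller's history and whether the sample overlaps older stealer logs.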

OpenAI takes it 'seriously'

In a statement shared with Decrypt, an OpenAI spokesperson acknowledged the situation while maintaining that the company's systems appeared secure.

"We take these claims seriously," the representative said, including: "We have actually not seen any evidence that this is connected to a compromise of OpenAI systems to date."

The scope of the alleged breach raised concerns because of OpenAI's huge user base. Millions of users worldwide rely on the company's tools, like ChatGPT, for business operations, educational purposes, and content generation. A genuine breach could expose private conversations, business projects, and other sensitive information.

Until there's a final report, some precautionary measures are always advisable:

- Go to the "Settings" tab, log out of all connected devices, and enable two-factor authentication (2FA). This makes it practically impossible for a hacker to access the account, even if the login and password are compromised. (A way to check whether a password has already surfaced in public breaches is sketched after this list.)

- If your bank supports it, create a virtual card number to manage OpenAI subscriptions. This way, it is much easier to spot and prevent fraud.
- Always keep an eye on the conversations saved in the chatbot's memory, and watch out for phishing attempts. OpenAI does not request personal details, and any payment update is always handled through the official OpenAI.com site.
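As a complementary step not mentioned in the article, you can check whether a password you were using has already appeared in public breach corpora. The Have I Been Pwned "range" API supports this with k-anonymity: only the first five characters of the password's SHA-1 hash are sent, so the password itself never leaves your machine. A minimal sketch, assuming the requests package:

```python
# Check a password against the Have I Been Pwned "Pwned Passwords" range API.
# Only the first five hex characters of the SHA-1 hash are transmitted.
# Requires the third-party "requests" package (pip install requests).
import hashlib

import requests


def pwned_count(password: str) -> int:
    """Return how many times the password appears in the HIBP corpus (0 = not found)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Each response line is "HASH_SUFFIX:COUNT".
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    import getpass

    count = pwned_count(getpass.getpass("Password to check: "))
    if count:
        print(f"Seen {count} times in known breaches; change it everywhere it is reused.")
    else:
        print("Not found in the HIBP corpus (which does not guarantee it is safe).")
```

If the count is non-zero, rotate that password everywhere it was reused, in addition to enabling 2FA on the account.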