
Sam Altman says OpenAI is not the ‘moral police’ after ChatGPT news ‘blew up on the erotica point’

  • Sam Altman on Wednesday expanded on his earlier announcement that ChatGPT is getting an erotic update.
  • OpenAI, Altman said, is not the “moral police” and aims not to be “paternalistic” in its policies.
  • The OpenAI CEO said ChatGPT will “prioritize safety” for teenagers and treat adult users “like adults.”

The reaction to Sam Altman’s Tuesday announcement about coming changes to ChatGPT — especially the addition of erotica — caught the OpenAI CEO off guard.

Altman posted on X that the response to the changes “blew up on the erotica point much more than I thought it was going to!”

“It was meant to be just one example of us allowing more user freedom for adults,” he added.

Altman announced on Tuesday that by December, ChatGPT will be getting a spicy update, allowing it to go head-to-head with competitors like Grok at Elon Musk’s xAI on adult-themed generated content, including erotica.

The move, Altman clarified on Wednesday, will not roll back any of the chatbot’s existing policies related to mental health; instead, it aims to give adult users more leeway to use the tool as they wish.

“We are not the elected moral police of the world,” Altman said. “In the same way that society differentiates other appropriate boundaries (R-rated movies, for example), we want to do a similar thing here.”

ChatGPT will continue to “prioritize safety over privacy and freedom for teenagers,” given the “significant protection” that minors need when engaging with the technology, Altman said.

Critics, including former “Shark Tank” star Mark Cuban, worry the planned age restrictions will do little to prevent children from accessing adult content.

“This is going to backfire,” Cuban wrote on X. “Hard. No parent is going to trust that their kids can’t get through your age gating. They will just push their kids to every other LLM.”

Altman also said that ChatGPT will “treat users who are having mental health crises very differently from users who are not.”

The chatbot will not allow “things that will cause harm to others,” he added.

Altman did not specify any examples of potentially harmful content that would be prohibited from being generated. He also did not elaborate on how ChatGPT would determine if a user is having a mental health crisis or the difference in the types of responses it would provide to users in crisis.

OpenAI did not immediately respond to a request for more information.

“Without being paternalistic we will attempt to help users achieve their long-term goals,” Altman said.

Read the original article on Business Insider

