
OpenAI CEO Sam Altman says he’s more troubled by the subtle, everyday decisions ChatGPT makes than by major policy moves

Cryptopolitan · Sep 15, 2025, 20:37

OpenAI chief executive Sam Altman explained how his company makes calls about ChatGPT. He said the tiniest choices about the chatbot’s behavior worry him more than sweeping policy.

In an interview with Tucker Carlson, Altman said he does not “sleep that well at night.” “There’s a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model,” he said.

Altman described suicide as the toughest issue the team faces, pointing to a lawsuit from a family who blamed ChatGPT for their teenage son’s death.

“They probably talked about [suicide], and we probably didn’t save their lives,” he said. “Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice about, hey, you need to get this help.”

Last month, the parents of Adam Raine brought a product-liability and wrongful-death case against OpenAI after their 16-year-old son died by suicide. The complaint said that “ChatGPT actively helped Adam explore suicide methods.” Soon after, OpenAI published a post titled “Helping people when they need it most,” outlining steps to improve responses in “sensitive situations.”

Who sets ChatGPT’s ethical rules?

Altman said the base model is trained on humanity’s shared knowledge, and then OpenAI aligns behavior and decides what the system will not do. “This is a really hard problem. We have a lot of users now, and they come from very different life perspectives… But on the whole, I have been pleasantly surprised with the model’s ability to learn and apply a moral framework,” he said.

He said the company consulted “hundreds of moral philosophers and people who thought about ethics of technology and systems.” He pointed to one boundary the company enforces, which is that the system will not give instructions on creating biological weapons.

“There are clear examples of where society has an interest that is in significant tension with user freedom,” he said, adding that the company “won’t get everything right, and also needs the input of the world.”

What happens to user privacy while using ChatGPT?

When Carlson said generative AI could be used for “totalitarian control,” Altman replied that he has been pushing for an “AI privilege,” under which what someone tells a chatbot would be confidential. “When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information, right?… I think we should have the same concept for AI.”

As mentioned in a Cryptopolitan report, Altman has previously stressed that as AI enters more personal areas, users’ conversations remain without legal safeguards.

He noted that, at present, U.S. authorities can obtain user data from the company with a subpoena.

Asked if the military would use ChatGPT to harm people, Altman did not give a direct answer. “I don’t know the way that people in the military use ChatGPT today… but I suspect there are a lot of people in the military talking to ChatGPT for advice,” he said, later adding he was not sure “exactly how to feel about that.”

OpenAI is one of the AI firms that received a U.S. Department of Defense award worth $200 million.

Carlson predicted that, on its current path, generative AI, and by extension Altman, could accumulate more power than any other person, calling ChatGPT a “religion.”

“What’s happening now is tons of people use ChatGPT and other chatbots, and they’re all more capable. They’re all kind of doing more. They’re all able to achieve more, start new businesses, come up with new knowledge,” Altman said. He also said he thinks AI will eliminate many jobs that exist today, especially in the short term.
