Contributor: Sam Altman's Bad Argument for Allowing Teens to Talk About Suicide on ChatGPT


Last month, the Senate Judiciary Subcommittee on Crime and Counterterrorism held a hearing on what many believe is a mental health crisis among teenagers. Two of the witnesses were parents of children who died by suicide in the past year, and both believed that AI chatbots played a role in their children's deaths. One couple is now suing OpenAI, alleging that ChatGPT told their son about specific methods to end his life and even offered to help him write a suicide note.

On the day of the September Senate hearing, OpenAI co-founder Sam Altman took to the company blog to offer his views on how the company's principles are shaping its response to the crisis. The challenge, he wrote, is balancing OpenAI's twin commitments to safety and freedom.

ChatGPT clearly shouldn't serve as a primary source of therapy for teens showing signs of suicidal ideation, Altman argued in the post. But because the company values user freedom, the solution isn't to hard-code sweeping commands that would prevent the bot from talking about self-harm at all. Why? "If an adult user asks for help writing a fictional story that depicts suicide, the model should support that request." In the same post, Altman promises that age restrictions are coming, but similar efforts to keep young users off social media have so far proved woefully insufficient.

I'm sure it is very difficult to create a massive, open-access software platform that is both safe for my three kids and useful for me. Still, I find Altman's argument here deeply troubling, in no small part because if your first impulse when writing a book about suicide is to ask ChatGPT for help, you probably shouldn't be writing a book about suicide. More importantly, Altman's lofty talk of "freedom" reads as empty moralizing designed to mask a relentless push for rapid growth and big profits.

Of course, that's not what Altman would say. In a recent interview with Tucker Carlson, Altman suggested that he's thinking all of this through very carefully, and that the company's deliberations about which questions its AI should answer (and not answer) are informed by conversations with "like, hundreds of moral philosophers." I contacted OpenAI to see if it could provide a list of these thinkers. It didn't respond. So, since I teach moral philosophy at Boston University, I decided to examine Altman's own words to get a sense of what he means when he talks about freedom.

The political philosopher Montesquieu once wrote that there is no word with as many definitions as liberty. So with the stakes this high, it is imperative that we seek out Altman's own definition. The entrepreneur's writings give us some important, if perhaps unpleasant, clues. Last summer, in a much-discussed post titled "The Gentle Singularity," Altman wrote:

"Society is resilient, creative, and adapts quickly. If we can harness the collective will and wisdom of people, then although we'll get a lot of things wrong and some things will go really wrong, we will learn and adapt quickly and be able to use this technology to get maximum upside and minimal downside. ... The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better."

The OpenAI CEO is painting with very broad brushstrokes here, and sweeping generalizations about "society" tend to crumble quickly. Most notably, it is Altman, the man who cares so much about freedom, who leaves it to the rest of us to define the bounds of "collective alignment." And please, society, start that conversation fast, he says.

Clues from elsewhere in the public record give us a better sense of what Altman actually means. During the Carlson interview, for example, Altman associates freedom with "customization." (He has done the same recently in a chat with the German businessman Mathias Döpfner.) For OpenAI, that means the ability to craft a personalized experience for each user, complete with "the features you want it to have, how you want it to talk to you, and what rules you want it to follow." Not coincidentally, these features are mainly available with the newer GPT models.

And yet Altman is frustrated that users in countries with stricter AI regulations can't get quick access to those new models. In Senate testimony this summer, Altman referred to a "joke" among his team about how OpenAI has "this big new thing that's not available in the EU and a few other countries because they have this long process before the model comes out."

The "long process" Altman is talking about is just rules, at least some of which experts believe "protect fundamental rights, ensure justice and do not undermine democracy." But one thing became increasingly clear as Altman testified. He wants only minimal AI regulation in the United States:

"We need to give adult users as much freedom to use AI as they want and to trust them to be responsible with the tool," Altman said. "I know there's some pressure in other parts of the world, and in the United States, not to do this. But this is a tool, and we need to make it a strong and capable tool. We're certainly going to put some safeguards in place, but I think we should give a lot of freedom."

So there's the word. When you get down to brass tacks, Altman's definition of freedom isn't some lofty philosophical ideal. It's just customization and deregulation. That is the "freedom" Altman balances against the mental health and physical safety of our children. It's why he resists setting limits on what his bots can and can't say. And it's why regulators need to step in and stop him. Because Altman's freedom is not worth risking the lives of our children.

Joshua Pederson is a professor of humanities at Boston University and the author of "Sin Sick: Moral Injury in War and Literature."


