No, Sam Altman, AI Won’t Solve All of Humanity’s Problems

We already knew where OpenAI’s CEO, Sam Altman, stands on artificial intelligence vis-à-vis the human saga: It will be transformative, historic, and overwhelmingly beneficial. He has been nothing but consistent across countless interviews. For some reason, this week he felt it necessary to distill those opinions into a succinct blog post. “The Intelligence Age,” as he calls it, will be a time of abundance. “We can have shared prosperity to a degree that seems unimaginable today; in the future, everyone’s lives can be better than anyone’s life is now,” he writes. “Although it will happen incrementally, astounding triumphs—fixing the climate, establishing a space colony, and the discovery of all of physics—will eventually become commonplace.”

Maybe he published this to dispute a train of thought that dismisses the apparent gains of large language models as something of an illusion. Nuh-uh, he says. We’re getting this big AI bonus because “deep learning works,” as he said in an interview later in the week, mocking those who said that programs like OpenAI’s GPT-4o were simply stupid engines delivering the next token in a queue. “Once it can start to prove unproven mathematical theorems, do we really still want to debate: ‘Oh, but it’s just predicting the next token?’” he said.

No matter what you think of Sam Altman, it’s indisputable that this is his truth: Artificial general intelligence, AI that matches and then exceeds human capabilities, is going to obliterate the problems plaguing humanity and usher in a golden age. I suggest we dub this deus ex machina concept The Strawberry Shortcut, in honor of the codename for OpenAI’s recent breakthrough in artificial reasoning. Like the shortcake, the premise looks appetizing but is less substantial in the eating.

Altman correctly notes that the march of technology has brought what were once luxuries to everyday people—including some unavailable to pharaohs and lords. Charlemagne never enjoyed air-conditioning! Working-class people and even some on public assistance have dishwashers, TVs with giant screens, iPhones, and delivery services that bring pumpkin lattes and pet food to their doors. But Altman is not telling the whole story. Despite all that technology-driven wealth, not everyone is thriving, and many people remain homeless or severely impoverished. To paraphrase William Gibson, paradise is here, it’s just not evenly distributed. That’s not because technology has failed—we have. I suspect the same will be true if AGI arrives, especially since so many jobs will be automated.

Altman isn’t terribly specific about what life will be like when many of our current jobs go the way of 18th-century lamplighters. We did get a hint of his vision in a podcast this week that asked tech luminaries and celebrities to share their Spotify playlists. When explaining why he chose the tune “Underwater” by Rüfüs du Sol, Altman said it was a tribute to Burning Man, which he has attended several times. The festival, he says, “is part of what the post-AGI can look like, where people are just focused on doing stuff for each other, caring for each other and making incredible gifts to get each other.”

Altman is a big fan of universal basic income, which he seems to think will cushion the blow of lost wages. Artificial intelligence might indeed generate the wealth to make such a plan feasible, but there’s little evidence that the people who amass fortunes—or even those who still eke out a modest living—will be inclined to embrace the concept. Altman might have had a great experience at Burning Man, but some kind souls of the Playa seem to be up in arms about a proposal, affecting only people worth over $100 million, to tax some of their unrealized capital gains. It’s a dubious premise that such people—or others who become super rich working at AI companies—will crack open their coffers to fund leisure time for the masses. One of the US’s major political parties can’t stand Medicaid, so one can only imagine how populist demagogues will regard UBI.

I’m also wary of the supposed bonanza that will come when all of our big problems are solved. Let’s concede that AI might indeed crack humanity’s biggest conundrums. We humans would still have to implement those solutions, and that’s where we’ve failed time and again. We don’t need a large language model to tell us that war is hell and we shouldn’t kill each other. Yet wars keep happening.

