OpenAI’s new model, called o1, appears to think and ponder as you use it. But is it thinking? Or pondering? And what does it mean if it is? Would that make it worth the risks, which appear to be both greater and more plausible than ever? How do you balance the risks of destroying humanity against the possibility of improving it? This is the thing about talking about artificial intelligence: it has this nasty penchant for getting all existential on you.

On this episode of The Vergecast, we get all existential about AI. The Verge’s Kylie Robison joins the show to discuss why OpenAI built o1, why it’s launching the way it is, what to make of the folks who are worried about what they’re seeing from the model, and how we should think about this moment in AI as companies pivot toward trying to build “agents” that can do more and more on our behalf. (We recorded this just before Sam Altman published his recent blog post on The Intelligence Age, but it all feels pretty timely.)

Finally, we answer a question on the Vergecast Hotline (call 866-VERGE11, or email [email protected]!) about an issue everybody has: what do you do with all the stuff that accumulates on your devices?

If you want to know more about everything we discuss in this episode, here are some links to get you started, beginning with OpenAI:

And on TikTok / Google / Trump:

And a few tools for cleaning up your devices:
