Welcome to AI This Week, Gizmodo’s weekly roundup where we do a deep dive on what’s been happening in artificial intelligence.
As governments fumble for a regulatory approach to AI, everybody in the tech world seems to have an opinion about what that approach should be, and most of those opinions do not resemble one another. Suffice it to say, this week presented plenty of opportunities for tech nerds to yell at each other online, as two major developments in AI regulation took place, immediately spurring debate.
The first of those big developments was the United Kingdom’s much-hyped artificial intelligence summit, which saw the UK’s prime minister, Rishi Sunak, invite some of the world’s top tech CEOs and leaders to Bletchley Park, home of the UK’s WWII codebreakers, in an effort to suss out the promise and peril of the new technology. The event was marked by a lot of big claims about the dangers of the emergent technology and ended with an agreement surrounding security testing of new software models. The second (arguably bigger) event to happen this week was the unveiling of the Biden administration’s AI executive order, which laid out some modest regulatory initiatives surrounding the new technology in the U.S. Among many other things, the EO also involved a corporate commitment to security testing of software models.
However, some prominent critics have argued that the US and UK’s efforts to wrangle artificial intelligence have been too heavily influenced by a certain strain of corporately backed doomerism, which critics see as a calculated ploy on the part of the tech industry’s most powerful companies. According to this theory, companies like Google, Microsoft, and OpenAI are using AI scaremongering to squelch open-source research into the tech and to make operating too onerous for smaller startups, all while keeping the technology’s development firmly within the confines of their own corporate laboratories. The allegation that keeps coming up is “regulatory capture.”
This conversation exploded out into the open on Monday with the publication of an interview with Andrew Ng, a professor at Stanford University and the founder of Google Brain. “There are definitely large tech companies that would rather not have to try to compete with open source [AI], so they’re creating fear of AI leading to human extinction,” Ng said in the interview. Ng also said that two equally bad ideas had been joined together via doomerist discourse: that “AI could make us go extinct” and that, consequently, “a good way to make AI safer is to impose burdensome licensing requirements” on AI producers.
More criticism swiftly came down the pipe from Yann LeCun, Meta’s top AI scientist and a big proponent of open-source AI research, who got into a fight with another techie on X about how Meta’s competitors were attempting to commandeer the field for themselves. “Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment,” LeCun said, in reference to OpenAI, Google, and Anthropic’s top AI executives. “They are the ones who are attempting to perform a regulatory capture of the AI industry. You, Geoff, and Yoshua are giving ammunition to those who are lobbying for a ban on open AI R&D,” he said.
After Ng and LeCun’s comments circulated, Google DeepMind’s current CEO, Demis Hassabis, was forced to respond. In an interview with CNBC, he said that Google wasn’t trying to achieve “regulatory capture” and said: “I pretty much disagree with most of those comments from Yann.”
Predictably, Sam Altman eventually decided to jump into the fray to let everybody know that no, actually, he’s a great guy and this whole scaring-people-into-submitting-to-his-business-interests thing is really not his style. On Thursday, the OpenAI CEO tweeted:
there are some great parts about the AI EO, but as the govt implements it, it will be important not to slow down innovation by smaller companies/research teams. i am pro-regulation on frontier systems, which is what openai has been calling for, and against regulatory capture.
“So, capture it is then,” one person commented, beneath Altman’s tweet.
Of course, no squabble about AI would be complete without a healthy mouthful from the world’s most opinion-filled internet troll and AI funder, Elon Musk. Musk gave himself the opportunity to provide that mouthful this week by somehow getting the UK’s Sunak to conduct an interview with him, which was later streamed to Musk’s own platform, X. During the conversation, which amounted to Sunak looking like he wanted to take a nap while sleepily asking the billionaire a roster of questions, Musk managed to get in some classic Musk-isms. Musk’s comments weren’t so much thought-provoking or rooted in any sort of serious policy discussion as they were dumb and entertaining—which is more the style of rhetoric he excels at.
Among Musk’s comments was the claim that AI will eventually create what he called “a future of abundance where there is no scarcity of goods and services” and where the average job is basically redundant. However, the billionaire also warned that we should still be worried about some sort of rogue AI-driven “superintelligence,” and that “humanoid robots” that can “chase you into a building or up a tree” were also a potential thing to be worried about.
When the conversation rolled around to regulations, Musk claimed that he “agreed with most” regulations but said, of AI: “I generally think it’s good for government to play a role when public safety is at risk. Really, for the vast majority of software, public safety is not at risk. If an app crashes on your phone or laptop it’s not a massive catastrophe. But when we talk about digital superintelligence—which does pose a risk to the public—then there is a role for government to play.” In other words, when software starts resembling that thing from the most recent Mission: Impossible movie, Musk will probably be comfortable with the government getting involved. Until then…ehhh.
Musk may want regulators to hold off on any sort of serious policies since his own AI company is apparently debuting its technology soon. In a tweet on X on Friday, Musk announced that his startup, xAI, planned to “release its first AI to a select group” on Saturday and that this tech was in some “important respects,” the “best that currently exists.” That’s about as clear as mud, though it’d probably be safe to assume that Musk’s promises are somewhere in the same neighborhood of hyperbole as his original comments about the Tesla bot.
The Interview: Samir Jain on the Biden Administration’s first attempt to tackle AI
This week we spoke with Samir Jain, vice president of policy at the Center for Democracy and Technology, to get his thoughts on the much anticipated executive order from the White House on artificial intelligence. The Biden administration’s EO is being looked at as the first step in a regulatory process that could take years to unfold. Some onlookers praised the Biden administration’s efforts; others weren’t so thrilled. Jain spoke with us about his thoughts on the order as well as his hopes for future regulation. This interview has been edited for brevity and clarity.
I just wanted to get your initial response to Biden’s executive order. Are you pleased with it? Hopeful? Or do you feel like it leaves some stuff out?
Overall we are pleased with the executive order. We think it identifies a lot of key issues, in particular current harms that are happening, and that it really tries to bring together different agencies across the government to address those issues. There’s a lot of work to be done to implement the order and its directives. So, ultimately, I think the judgment as to whether it’s an effective EO or not will turn to a significant degree on how that implementation goes. The question is whether those agencies and other parts of government will carry out those tasks effectively. In terms of setting a direction, in terms of identifying issues and recognizing that the administration can only act within the scope of the authority that it currently has…we were quite pleased with the comprehensive nature of the EO.
One of the things the EO seems like it’s trying to tackle is this idea of long-term harms around AI and some of the more catastrophic ways in which it could be wielded. It seems like the executive order focuses more on long-term harms than on short-term ones. Would you say that’s true?
I’m not sure that’s true. I think you’re characterizing the discussion correctly, in that there’s this idea out there that there’s a dichotomy between “long-term” and “short-term” harms. But I actually think that, in many respects, that’s a false dichotomy. It’s a false dichotomy in the sense that we shouldn’t have to choose one or the other; and, also, in that a lot of the infrastructure and steps that you would take to deal with current harms are also going to help in dealing with whatever long-term harms there may be. So, if for example, we do a good job with promoting and entrenching transparency—in terms of the use and capability of AI systems—that’s going to also help us when we turn to addressing longer-term harms.
With respect to the EO, although there certainly are provisions that deal with long-term harms…there’s actually a lot in the EO—I would go so far as to say the bulk of the EO—that deals with current and existing harms. It’s directing the Secretary of Labor to mitigate potential harms from AI-based tracking of workers; it’s calling on the Department of Housing and Urban Development and the Consumer Financial Protection Bureau to develop guidance around algorithmic tenant screening; it’s directing the Department of Education to figure out some resources and guidance about the safe and non-discriminatory use of AI in education; it’s telling the Health and Human Services Department to look at benefits administration and to make sure that AI doesn’t undermine equitable administration of benefits. I’ll stop there, but that’s all to say that I think it does a lot with respect to protecting against current harms.
More Headlines This Week
- The race to replace your smartphone is being led by Humane’s weird AI pin. Tech companies want to cash in on the AI gold rush, and a lot of them are busy trying to launch algorithm-fueled wearables that will make your smartphone obsolete. At the head of the pack is Humane, a startup founded by two former Apple employees, which is scheduled to unveil its much anticipated AI pin next week. Humane’s pin is actually a tiny projector that you attach to the front of your shirt; the device is equipped with a proprietary large language model powered by GPT-4 and can supposedly answer and make calls for you, read back your emails, and generally act as a communication device and virtual assistant.
- News groups release research pointing to how much news content is used to train AI algorithms. The New York Times reports that the News Media Alliance, a trade group that represents numerous large media outlets (including the Times), has published new research alleging that many large language models are built using copyrighted material from news sites. This is potentially big news, as there’s currently a fight brewing over whether AI companies infringed on the rights of news organizations when they built their algorithms.
- AI-fueled facial recognition is now being used against geese for some reason. In what feels like a weird harbinger of the end times, NPR reports that the surveillance state has come for the waterfowl of the world. That is to say, academics in Vienna recently admitted to writing an AI-fueled facial recognition program designed for geese; the program trawls through databases of known goose faces and seeks to identify individual birds by distinct beak characteristics. Why exactly this is necessary, I’m not sure, but I can’t stop laughing about it.