I might have inadvertently insulted Bret Taylor and Clay Bavor when I interviewed them about their new AI startup last week. Their new company, Sierra, is developing AI-powered agents to “elevate the customer experience” for big companies. Among its original customers are WeightWatchers, Sonos, SiriusXM, and OluKai (a “Hawaiian-inspired” clothing company). Sierra’s eventual market is any company that communicates with its customers, which is a pretty big opportunity. Their plan strikes me as a validation of the widely voiced prediction that 2024 will be the year when the AI models that have bent our minds for the past year will turn into real products. So when I greeted these cofounders, whom I’ve known for years, I remarked that their company seems “very nuts and bolts.”

Was that the wrong thing to say? “I don’t know if that’s a compliment or criticism or just a fact,” says Taylor, who left his job as co-CEO of Salesforce to start Sierra. I assured him I saw it as more of the last. “It’s not like you’re building girlfriends!” I noted.

It’s significant that two of the more visionary leaders in Silicon Valley are building an AI startup not to chase the nerd trophy of superintelligence but to use recent AI advances to futurize nontechnical, mainstream corporations. Their experience puts them toe to toe with better-known industry luminaries; Taylor was a key developer of Google Maps in the aughts and Bavor headed Google’s VR efforts. They are eager to assure me that their hearts are still in moonshot mode. Both feel that conversational AI is an advance on par with the graphical user interface or the smartphone, and will have at least as much of an impact on our lives. Sierra just happens to focus on a specific, enterprise-y aspect of this. “In the future, a company’s AI agent—basically the AI version of that company—will be just as important as their website,” says Taylor. “It’s going to completely change the way companies exist digitally.”

To build its bots in a way that accomplishes that task effectively, pleasingly, and safely, Sierra had to concoct some innovations that will advance AI agent technology in general. And to tackle perhaps the most worrisome issue—hallucinations that might give customers wrong information—Sierra uses several different AI models at once, with one model acting as a “supervisor” to make sure the AI agent isn’t veering into woo-woo territory. When something is about to happen with actual consequences, Sierra invokes its strength-in-numbers approach. “If you chat with the WeightWatchers agent and you write a message, around four or five different large language models are invoked to decide what to do,” says Taylor.
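Taylor doesn’t spell out how those four or five models divide the work, but the “supervisor” idea maps onto a familiar pattern: one model drafts a reply, and a second model checks the draft against company policy before anything reaches the customer. Here is a minimal sketch of that pattern, assuming a generic `call_llm` helper; the function names, prompts, and fallback behavior are placeholders of mine, not Sierra’s implementation.

```python
# Sketch of a draft-then-supervise loop for a customer-service agent.
# Everything here is illustrative; it stubs out the model calls so it runs as-is.
from dataclasses import dataclass


@dataclass
class AgentReply:
    text: str
    approved: bool


def call_llm(system_prompt: str, user_message: str) -> str:
    """Placeholder for a call to any hosted large language model."""
    # A real system would call a model API here; stubbed so the sketch executes.
    return f"[model output for: {user_message[:40]}]"


def handle_customer_message(message: str, company_policies: str) -> AgentReply:
    # 1. A "drafting" model proposes a response grounded in company policy.
    draft = call_llm(
        system_prompt=f"You are a customer-service agent. Follow these policies:\n{company_policies}",
        user_message=message,
    )

    # 2. A separate "supervisor" model checks the draft against the same policies
    #    and flags anything unsupported -- the hallucination guard described above.
    verdict = call_llm(
        system_prompt="Answer APPROVE or REJECT: is this reply fully supported by the policies?",
        user_message=f"Policies:\n{company_policies}\n\nDraft reply:\n{draft}",
    )

    approved = "REJECT" not in verdict.upper()
    if not approved:
        # Fall back to a safe response (or escalate to a human) when the
        # supervisor cannot verify the draft.
        draft = "Let me connect you with a teammate who can confirm that for you."

    return AgentReply(text=draft, approved=approved)


if __name__ == "__main__":
    reply = handle_customer_message(
        "Can I pause my membership for three months?",
        company_policies="Members may pause for up to two months per year.",
    )
    print(reply)
```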

Because of the power, vast knowledge, and uncanny understanding of today’s large language models, these digital agents can grasp the values and procedures of a company as well as a human can—and perhaps even better than some disgruntled worker in a North Dakota boiler room. The training process is more akin to onboarding an employee than feeding rules into a system. What’s more, these bots are capable enough to be given some, um, agency in serving a caller’s needs. “We found that many of our customers had a policy, and then they had another policy behind the policy, which is the one that actually matters,” says Bavor. Sierra’s agents are sophisticated enough to know this—and also smart enough not to spill the beans right away, and to grant customers a special deal only if they push. Sierra’s goal is no less than to shift automated customer interactions from hell to happiness.
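In conversational terms, the “policy behind the policy” is a simple escalation rule: lead with the standard answer, and surface the fallback offer only if the customer pushes back. A toy sketch of that logic, with invented wording and thresholds that are not Sierra’s, might look like this:

```python
# Illustrative only: the agent reveals the retention offer only after pushback.
def respond_to_cancellation(pushback_count: int) -> str:
    if pushback_count == 0:
        # The public-facing policy comes first.
        return "Cancellations take effect at the end of the billing cycle."
    # The policy behind the policy, revealed only if the customer pushes.
    return "I understand. I can offer you two months at half price if you'd like to stay."


print(respond_to_cancellation(0))
print(respond_to_cancellation(1))
```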

This was music to the ears of one of Sierra’s first clients, WeightWatchers. When Taylor and Bavor told CEO Sima Sistani that AI agents could be genuine and relatable, she was intrigued. But the clincher, she told me, was when the cofounders told her that conversational AI could do “empathy at scale.” She was in, and now WeightWatchers is using Sierra-created agents for its customer interactions.

OK, but empathy? The Merriam-Webster dictionary defines it as “the action of understanding, being aware of, being sensitive to, and vicariously experiencing the feelings, thoughts, and experience of another.” I asked Sistani whether it might be a contradiction to say a robot can be empathetic. After a pause in which I could almost hear the gears grinding in her brain, she stammered out an answer. “It’s interesting when you put it that way, but we’re living in 2D worlds. Algorithms are helping us determine the next connection that we see and the relationship that we make. We’ve moved past that as a society.” By “that,” she meant the notion that an interaction with a robot cannot be authentic. Of course IRL is the ideal, she hastens to say, and agents are more of a complement to real life than a substitute. But she won’t back down from the empathy claim.

When I press her for examples, Sistani tells me of one interaction where a WW member said she had to cancel her membership because of hardships. The AI agent love-bombed her: “I’m so sorry to hear that … Those hardships can be so challenging … Let me help you work through this.” And then, like a fairy godmother, the agent helped her explore alternatives. “We’re very clear that it’s a virtual assistant,” says Sistani. “But if we hadn’t been, I don’t think you could tell the difference.”
