Apple Is an AI Company Now

Lots of tiny AI tweaks are quietly taking over the iPhone.


After more than a decade, autocorrect “fails” could be on their way out. Apple’s much-maligned spelling software is getting upgraded by artificial intelligence: Using sophisticated language models, the new autocorrect won’t just check words against a dictionary, but will be able to consider the context of the word in a sentence. In theory, it won’t suggest consolation when you mean consolidation, because it’ll know that those words aren’t interchangeable.
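To make that distinction concrete, here is a toy sketch, written in Swift, of the gap between dictionary-only checking and context-aware correction. The word list, the co-occurrence counts, and every function name are invented for illustration; Apple has not described how its autocorrect actually works.

```swift
import Foundation

// A toy illustration (not Apple's implementation) of dictionary-only
// autocorrect versus context-aware autocorrect. The "language model"
// here is just a table of hypothetical co-occurrence counts.

let dictionary: Set<String> = ["consolation", "consolidation", "prize", "debt"]

// Hypothetical counts of how often each candidate appears near a context word.
let cooccurrence: [String: [String: Int]] = [
    "consolation":   ["prize": 90, "debt": 2],
    "consolidation": ["prize": 3,  "debt": 80],
]

// Dictionary-only check: any valid word passes, regardless of context.
func isValidWord(_ word: String) -> Bool {
    dictionary.contains(word.lowercased())
}

// Context score: how well a candidate fits the surrounding words.
func score(_ candidate: String, context: [String]) -> Int {
    context.reduce(0) { total, word in
        total + (cooccurrence[candidate]?[word.lowercased()] ?? 0)
    }
}

// Context-aware pick: choose the candidate the surrounding words favor.
func bestCandidate(among candidates: [String], context: [String]) -> String? {
    candidates.max { score($0, context: context) < score($1, context: context) }
}

print(isValidWord("consolation"))  // true -- a dictionary alone can't object
print(bestCandidate(among: ["consolation", "consolidation"],
                    context: ["debt"]) ?? "none")  // "consolidation"
```

The point of the sketch is only that the second function has information the first one lacks: it knows which neighbors a word tends to keep.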

The next generation of autocorrect was one of several small updates to the iPhone experience that Apple announced earlier this month. The Photos app will be able to differentiate between your dog and other dogs, automatically recognizing your pup the same way it recognizes people who frequently appear in your pictures. And AirPods will get smarter about adjusting to background noise, based on your listening over time.

All of these features are powered by AI—even if you might not know it from how Apple talks about them. The conference at which it unveiled the updates included zero mentions of AI, now a buzzword for tech companies of all stripes. Instead, Apple used more technical language such as machine learning or transformer language model. The company has been quiet about the technology—so quiet that it has been accused of falling behind. Indeed, whereas ChatGPT can write halfway-decent business proposals, Siri can set your morning alarm and not much else. But Apple is pushing forward with AI in small ways, an incrementalist approach that might nonetheless be where this technology is headed.

Since ChatGPT debuted last fall, tech leaders have not been very subtle about AI’s potential—for good and for evil. Sam Altman, the CEO of OpenAI, tweeted last month that AI “is the most amazing tool yet created.” Microsoft co-founder Bill Gates has called AI “the most important advance in technology since the graphical user interface.” At a Google conference, Alphabet CEO Sundar Pichai said “AI” 27 times in a 15-minute speech. (He’s also been known to say that AI will be “more profound” than fire.)

Apple, meanwhile, isn’t even pretending to talk a big game when it comes to AI. John Gruber, a longtime Apple follower who runs the technology blog Daring Fireball, told me that he doesn’t expect any of the machine-learning features Apple announced this year to significantly alter the iPhone-user experience. They’ll just make it marginally better. “We expect autocorrect to just work,” he told me over email. “We notice when it doesn’t.”

The new autocorrect, which will be available in an iOS update later this year, is something like a less powerful ChatGPT in your pocket. Apple says the software will be better at fine-tuning itself to how we type, as well as at predicting which words and phrases we will use next. When you ask ChatGPT a question, you are accessing the same giant language model stored in the cloud that everyone else is. The much smaller, more personalized language model that will now power autocorrect will live on your iPhone. Apple has not shared many details, and the exact technical approach it is taking is not clear, Tatsunori Hashimoto, a computer scientist at Stanford University, told me. Researchers, including Hashimoto, have been hard at work figuring out how to scale down large language models so that they fit on a mobile device.
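As a rough illustration of what “small, personalized, and living on your phone” could mean in practice, here is a minimal Swift sketch of a next-word predictor that learns only from sentences typed on the device and keeps its counts locally. The bigram-counting approach and all the names are assumptions made for the example, not Apple’s method.

```swift
import Foundation

// A minimal sketch of the on-device, personalized idea: a tiny next-word
// predictor that learns only from text typed on this phone. Purely
// illustrative; Apple hasn't detailed its approach.

struct OnDevicePredictor {
    // Bigram counts: previous word -> (next word -> count), kept in local storage.
    private var counts: [String: [String: Int]] = [:]

    // Learn from the user's own sentences as they type.
    mutating func learn(from sentence: String) {
        let words = sentence.lowercased().split(separator: " ").map(String.init)
        for (prev, next) in zip(words, words.dropFirst()) {
            counts[prev, default: [:]][next, default: 0] += 1
        }
    }

    // Predict the word this particular user most often types next.
    func predictNext(after word: String) -> String? {
        counts[word.lowercased()]?.max { $0.value < $1.value }?.key
    }
}

var model = OnDevicePredictor()
model.learn(from: "running late be there soon")
model.learn(from: "running late sorry")
print(model.predictNext(after: "running") ?? "?")  // "late" -- learned from this user alone
```

Nothing in the sketch leaves the device: the model is just a table of the user’s own habits, which is the privacy argument Apple’s approach leans on.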

Meanwhile, AirPods will now use “Adaptive Audio” to analyze the sound around you and adjust accordingly. For example, your AirPods might automatically lower the volume of your music when you start talking to the barista at a coffee shop, and then raise it when you stop. Apple says it will use machine learning to understand your volume preferences in general and optimize your listening experience.
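A hypothetical sketch of that behavior, again in Swift: playback ducks when conversation is detected, comes back afterward, and the baseline drifts toward the volumes the listener keeps choosing. The thresholds and the update rule are invented for illustration, not taken from Apple.

```swift
import Foundation

// A hypothetical sketch of the Adaptive Audio idea. Not Apple's algorithm.
struct AdaptiveAudio {
    var preferredVolume: Double = 0.7   // learned baseline, 0.0 to 1.0
    var currentVolume: Double = 0.7

    // Called when the earbuds detect nearby conversation starting or ending.
    mutating func conversationDetected(_ active: Bool) {
        currentVolume = active ? preferredVolume * 0.3 : preferredVolume
    }

    // Each manual adjustment nudges the learned preference toward the user's choice.
    mutating func userSetVolume(_ volume: Double) {
        currentVolume = volume
        preferredVolume = 0.8 * preferredVolume + 0.2 * volume
    }
}

var audio = AdaptiveAudio()
audio.conversationDetected(true)    // ordering a coffee: volume drops
print(audio.currentVolume)          // 0.21
audio.conversationDetected(false)   // conversation over: back to the baseline
print(audio.currentVolume)          // 0.7
```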

All of this is deeply Apple, Gruber said: focusing on what a feature does rather than how it does it. “The fact that it’s using AI behind the scenes is no more relevant to users than, say, which programming language they used to create it,” Gruber said. The approach also emphasizes user privacy, which Apple has long prioritized (or at least claimed to prioritize). Because the company is using an “on device” model, it could pose less of a privacy risk than giant, cloud-based models like ChatGPT do. “In a sense, it’s private because the user data doesn’t leave their phone [and] the model that is fine-tuned to that user doesn’t leave their phone,” Mohsen Bayati, an AI expert at Stanford Business School, told me.

Some of the differences between Apple’s approach to AI and that of other tech companies can be explained by their respective business models. The tech giants don’t all make money in the same way. Google and Meta control about half of the digital-ad market, and AI-powered chatbots could become just another way to get us to buy things. Microsoft is less in the ad business, but it hopes that adding chatbot functionality to search could help it chip away at Google. Amazon’s enormous cloud-hosting business stands to gain from the adoption of large language models (they have to live somewhere!). Apple is a luxury brand, in the business, above all, of making your computer and phone enjoyable to use. “So it isn’t surprising that Apple is approaching AI cautiously, with a product-oriented focus,” Gruber said.

Still, the iPhone might be where lots of people first encounter new advances in AI, in part because how chatbots will manifest in our daily lives remains uncertain. ChatGPT was a big hit, garnering 100 million users within two months of its launch, but it’s not clear how many of them are still using it with any regularity. (When asked about current average monthly users, a spokesperson for OpenAI would not share numbers.) Many companies are adding chatbot capabilities of their own—the Instacart app is now using AI to serve up recipes, and Salesforce recently debuted something called “Einstein GPT”—but chatbots continue to have real limitations. They make things up all the time, have biases, and are a copyright nightmare.

Small technical inconveniences are easy to chafe at, but there’s a reason “No, autocorrect, I didn’t mean ‘ducking’” became such a meme. A better autocorrect adds up across billions of phones, tablets, and computers: The majority of smartphones in the U.S. are now iPhones, and the company counts more than 2 billion active devices worldwide. Other tech giants are also using AI to make small upgrades to existing products; Google recently unveiled a feature to draft Gmail replies via chatbot. The scenario of humanlike chatbots taking over everything is not the only way AI can change the world. Lots of tiny tweaks right under our noses can amount to something big. In a sense, they already have: For years, machine learning has served us targeted ads, filtered our social-media feeds, and helped determine our search results.

Like the rest of Silicon Valley, Apple may soon take bigger swings. Daniel Ives (that’s Ives, not Ive), a technology analyst at Wedbush Securities, thinks Apple’s new AI features amount to “just the appetizer before the main entrée.” His team has estimated that the company has spent $8 billion to $10 billion on AI over the past four or five years—roughly the amount Microsoft invested in OpenAI in January—and Apple is reportedly on the hunt for AI talent.

Hey Siri, perhaps your days are numbered.

Caroline Mimbs Nyce is a staff writer at The Atlantic.