Do You Trust OpenAI as Much as You Trust Apple?
Last week, we wrote about how Apple isn’t paying anything to add ChatGPT to iOS 18. As far as we know, OpenAI isn’t paying Apple either — at least not yet. That news might have come as a bit of a shock to some. Let’s explore a little further.
According to Mark Gurman at Bloomberg:
Apple isn’t paying OpenAI as part of the partnership, said the people, who asked not to be identified because the deal terms are private. Instead, Apple believes pushing OpenAI’s brand and technology to hundreds of millions of its devices is of equal or greater value than monetary payments, these people said.
On the other hand, Bloomberg did reveal that Apple intends, in the future, to take a share of revenue from AI providers that monetize chatbots, such as the $20-per-month subscription OpenAI charges for ChatGPT Plus. If you ask GPT-4o, the model behind the subscription version, the relationship could evolve into OpenAI paying Apple for cloud services (like iCloud) and through App Store fees, should OpenAI derive revenue from subscriptions or in-app purchases. That makes sense.
Money aside, OpenAI’s technology will be intertwined with “hundreds of millions” of Apple devices. Apple has achieved a level of consumer trust and loyalty that didn’t happen overnight. It has consistently delivered high-quality products with innovative technology, backed by a steadfast commitment to privacy, security, and customer service. However, Apple isn’t perfect. It takes its share of lumps along the way and will continue to do so.
What about OpenAI and ChatGPT? Although we know AI is no longer science fiction, how big of a risk is Apple taking here? Most of us have no idea, and it’s likely many at Apple have their fingers crossed, too. What if security issues emerge, biases creep into responses, or certain matters and viewpoints are suppressed entirely? Users will be able to turn off access to the chatbot on their devices, which is crucial. However, it’s unlikely that alone is enough to safeguard completely against these uncertainties.
What if public trust in OpenAI erodes over time? Apple has been growing for 48 years. OpenAI is still a baby. Take today’s news, for example. Edward Snowden, the former NSA/CIA employee and contractor (now a citizen of Russia, by the way) who leaked classified NSA surveillance information, warned the world on X: “Do not ever trust” OpenAI. Why? OpenAI recently appointed former US Army general and former head of the NSA, Paul M. Nakasone, to its board.
Nakasone is certainly qualified. But you don’t have to be Edward Snowden for this to give you pause. Why would OpenAI make this move on the heels of announcing its partnership with Apple? Is it bold or benign? I think Snowden’s take is that if the answer is somewhere in the middle, that’s just as bad. You’re either a tool of mass surveillance or you’re not. Suspicion is certainly justified.
For now, we encourage our readers to learn more about OpenAI. We suggest starting by reading more about their Safety Systems team, the Safety Overview, and Safety Standards here. It seems clear that OpenAI’s goals and values align with Apple’s. Hopefully, they will succeed in continuing to make AI that is “beneficial and trustworthy.”
For many users, Apple’s partnership with OpenAI will be their first exposure to AI. Given Apple’s track record, it’s reasonable to feel comfortable following Apple’s lead and trusting its choice of an AI partner. Take some time to do your own homework, too. It will be fascinating to follow.