The Grok-Apple Standoff: Why Tim Cook Hasn’t Pulled the Plug (Yet)
While the App Store is having a bad week for fraudulent crypto and data harvesting apps, another report has surfaced on Apple’s rather timid response to Grok.
For those who might have missed it, Elon Musk’s creepy little chatbot gained AI image editing capabilities in late 2025, but it did so without anything that even vaguely resembled guardrails. To nobody’s surprise, it took less than a New York minute for social deviants to begin flooding social media with nonconsensual deepfake pornography.
At this point, one would have expected Apple of all companies to respond quite firmly by pulling the app. After all, it took far less for it to turf ICEBlock, an app that merely allowed folks to report sightings of US Immigration and Customs Enforcement agents. While the logic Apple used to remove that app was a bit controversial — it wasn’t done in response to a court order, and nobody could point to any specific laws being broken — there was still a valid case to be made that apps like ICEBlock could put law enforcement officials at risk. Apple exercised its own judgment and made the call to ban such apps under Section 1.1.1 of its App Review Guidelines, which prohibits apps that could “humiliate, intimidate, or harm a targeted individual or group.”
Apple is more than entitled to make that call, and it’s far from the most controversial decision it’s ever made. However, it also established a very low bar for what it considers “harmful,” and it’s virtually impossible to argue that the harm that could potentially be created by ICEBlock was worse than what Grok was doing — especially when folks started using the chatbot to undress images of minors, effectively creating child sexual abuse material (CSAM).
That’s a much more blatant and direct violation of Apple’s App Store policies — not to mention the law. Apple has actively cracked down on apps like these in the past, yet it seemingly gave Grok a free pass. The company’s silence was deafening.
It felt like moral hypocrisy. Many assumed that high-profile apps get a free pass, and that’s potentially even more true when they’re backed by the world’s richest man. However, while we’d still argue that Apple should have been more proactive, it didn’t turn an entirely blind eye to what Grok was doing. It just chose to work behind the scenes — even as the scandal unfolded publicly.
Apple’s Private Ultimatums
In January, three US senators penned an open letter to Tim Cook and Google CEO Sundar Pichai, insisting the two tech giants remove Grok due to its anti-social behavior.
We write to ask that you enforce your app stores’ terms of service against X Corp’s (hereafter, “X”) X and Grok apps for their mass generation of nonconsensual sexualized images of women and children. X’s generation of these harmful and likely illegal depictions of women and children has shown complete disregard for your stores’ distribution terms. Apple and Google must remove these apps from the app stores until X’s policy violations are addressed.
Senators Ron Wyden, Ed Markey, and Ben Ray Luján
The trio of senators, Ron Wyden, Ed Markey, and Ben Ray Luján, also pointed out that in addition to Apple’s specific rules against “overly sexual or pornographic material” (Section 1.1.4), it also says the company reserves the right to remove apps for being “offensive” or “just plain creepy” — both terms that could easily be applied to Grok’s ability to generate nonconsensual deepfakes.
While neither Apple, Google, nor the senators directly shared any responses to that letter, NBC News has now obtained a copy of Apple’s reply, in which the company notes that it attempted to quietly address the issue by refusing to approve updates to Grok until proper guardrails were in place, even threatening to remove the app if that didn’t happen.
Apple reviewed the next submissions made by the developers and determined that X had substantially resolved its violations, but the Grok app remained out of compliance. As a result, we rejected the Grok submission and notified the developer that additional changes to remedy the violation would be required, or the app could be removed from the App Store.
Apple’s letter to Senators Wyden, Markey, and Luján
Apple also told the senators that it had “contacted the teams behind both X and Grok after it received complaints and saw news coverage of the scandal,” demanding they “create a plan to improve content moderation.” However, neither Apple nor X, which owns Grok and also hosts the chatbot in its social media app, ever responded publicly to the issue.
Baby Steps and Paywalls
Instead, Grok and X began slowly rolling back some of the features, putting Grok behind a paywall and attempting to stop it from generating deepfake nudes. It also later added a feature where X users could block Grok from editing any photos they posted. However, none of these mitigations have proven particularly successful at stemming the tide; rather, the flood of content has slowed primarily because the novelty has worn off.
The decision to limit Grok to paying subscribers felt more like an attempt to monetize the creation of nonconsensual pornography, while the other changes appear to have been largely ineffective, according to multiple security experts. NBC News confirmed as much in another report, noting it “found dozens of AI-generated sexual images and videos depicting real people posted publicly on Musk’s social media app, X, over the past month.”
For its part, X has responded by saying it has “extensive safeguards in place to prevent such misuse,” and it does appear to have succeeded in blocking the most egregious abuses.
For example, when the original scandal broke, it wasn’t hard to get Grok to effectively undress images of real people by asking for string bikinis or virtually transparent lingerie. This wasn’t accidental; some xAI employees were blatantly promoting it. In a post that’s since been deleted, former xAI project lead Mati Roy boasted that “Grok Imagine videos have a spicy mode that can do nudity,” and replied to another post confirming that Grok could do “realistic videos of humans.”
While Grok can still create sexualized deepfakes, it no longer appears to comply with explicit requests that would nudify subjects. It may also be better at refusing to modify pictures of minors, as none of the recent content seen by NBC News included obvious photos of children. Of course, that can be a pretty fine line when dealing with older teens, and we wouldn’t bet on any AI being able to reliably figure out where that line is.
Living in the Gray Zone
The real issue may be that Grok and xAI seem perfectly comfortable inhabiting a sort of borderland where they’re dancing right up to the fire without quite stepping in.
While bikinis and lingerie may now be off-limits, revealing clothing “such as towels, sports bras, skintight Spider-Woman outfits” is still fair game, according to NBC News’ findings. Grok also seems to have no problem swapping clothing between separate photos, allowing someone to create a deepfake by combining an ordinary photo with that of a lingerie model or porn star.
Google’s Gemini and OpenAI’s ChatGPT have been accused of being too conservative when it comes to image generation, but Grok’s looser policies leave the door open to tactics that let users stay one step ahead of the restrictions through clever manipulation of prompts. Relying on guardrails by exclusion is a losing game, as human intelligence still beats artificial intelligence when it comes to thinking outside the box.
Nevertheless, this whole scenario creates a narrative of selective enforcement on the App Store. It’s hard not to see hypocrisy when the “harm” of a small reporting app like ICEBlock triggers a ban while the systemic “harm” being inflicted by an AI powerhouse gets a private warning and a few performative baby steps toward mitigation.