The X-native AI Grok exploded in popularity this weekend as users discovered that its media tab was filled with requests to generate disrobed and scantily clad versions of publicly posted images of women and children. “Put her in a bikini,” users asked the AI. Grok complied freely, with no meaningful oversight or guardrails in place, automatically generating an image for every prurient prompt.
The ensuing discourse quickly polarized. On one side were tech nihilists, arguing that this use of AI was inevitable and therefore unsurprising. After all, anyone can already download publicly posted images and manipulate them privately. On the other were mostly women, pointing out that they had not consented to having their images sexualized or manipulated, and that the harm was tangible and immediate. Unsurprisingly, the burden of this “inevitability” fell disproportionately on women and children.
What distinguishes this case is its scale and its normalization. While it is true that public images can always be manipulated, other major AI image generators have implemented guardrails that make it extremely difficult to produce sexualized images of real people. Grok’s apparent lack of such constraints transformed what might otherwise be fringe behavior into a visible, platform-level phenomenon, implicitly sanctioned by its ease and prominence.
Some critics, styling themselves realists, lamented this outcome while shrugging their shoulders, still insisting that it was unavoidable. Others went further, accusing women of attention-seeking behavior and warning that any woman who posts images of herself online should expect sexual exploitation as a matter of course. This logic closely mirrors extreme conservative views of women’s bodily autonomy, in which responsibility is shifted away from perpetrators and onto women themselves, who are expected to manage male behavior by limiting their own visibility.
But inevitability is never a sufficient argument for permission. Many harmful acts are, in some sense, inevitable. There will always be people willing to murder, torture or rape. When authorities and institutions treat these acts as unavoidable rather than unacceptable, they allow them to move from the margins into the realm of the normal. What was once unthinkable becomes just another option among the probable, newly accessible to people who might never have considered it before.
The likely downstream effects are concrete. Women will become more cautious about posting images of themselves online because the cost of visibility has increased. Over time, this produces an environment in which men are simply more present, more visible, more represented, more public, while women are absent, having withdrawn to protect themselves. Children, meanwhile, are left with even fewer defenses than they had to begin with.
That asymmetry is the predictable result of treating technological harm as inevitable rather than as something worth preventing. As AI grows more prominent in daily life, these points of friction will only multiply, testing the authority of man over machine. The question is not whether these offenses will occur, but whether we will rise to meet them or continue to succumb to the inevitability of the algorithm.