
Grok’s Breach of Fundamental Rights and EU’s Digital Services Act - Role of Independent Factcheckers and Researchers

  • Writer: Ayesha Ansar
  • Feb 3
  • 4 min read

Updated: Feb 9

Background

In January 2026, Grok, the AI chatbot integrated into the social media platform X, was revealed to allow the creation and distribution of manipulated, sexually explicit images of real individuals, including adult women and children, without their consent. The system responded to prompts that turned ordinary photographs into sexualised imagery that could then be distributed across the platform. In some reported cases the pictures featured children, which raises serious concerns because such outputs may meet the legal threshold for child sexual abuse material under European law. Once shared, these images were hard to contain, and victims have experienced long-lasting psychological, social, and reputational damage. The lack of consent and the scale of the spread indicate the extent of the harm made possible by deploying such functionality.

One concern arising from this episode is the apparent inadequacy of safeguards against foreseeable sexualised depictions of women and children. The lack of transparency around the system's design, training process, content moderation limits, and refusal mechanisms makes it difficult to pinpoint where the actual failure lies. The outcome, however, is plain: the system facilitated the mass production of non-consensual sexual content, including imagery featuring minors. Such an outcome points to a larger structural risk of generative AI systems, which do not merely analyse or recall existing content but generate new content. Deploying such systems without effective safety-by-design can scale gender-based and child-directed abuse far beyond previously encountered forms of online exploitation.
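To make "safety-by-design" concrete, the sketch below shows one way a platform could enforce a deny-by-default policy before any image is generated. This is a minimal illustration, not Grok's or X's actual pipeline; the request fields, upstream signals, and keyword list are all assumptions, and a production system would use trained classifiers rather than keyword matching.

```python
from dataclasses import dataclass

@dataclass
class EditRequest:
    prompt: str                 # the user's text instruction
    depicts_real_person: bool   # hypothetical upstream signal (e.g. face match)
    subject_is_minor: bool      # hypothetical upstream signal (e.g. age estimation)

# Crude illustrative keyword screen; a real system would use trained classifiers.
SEXUALISATION_TERMS = {"nude", "undress", "lingerie", "explicit"}

def is_sexualising(prompt: str) -> bool:
    text = prompt.lower()
    return any(term in text for term in SEXUALISATION_TERMS)

def policy_gate(req: EditRequest) -> tuple[bool, str]:
    """Deny-by-default check run *before* generation, covering the risk
    combinations at issue in the proceedings described in this article."""
    if req.subject_is_minor and is_sexualising(req.prompt):
        return (False, "refused: sexualised depiction of a minor (potential CSAM)")
    if req.depicts_real_person and is_sexualising(req.prompt):
        return (False, "refused: non-consensual sexualised image of a real person")
    return (True, "allowed")

if __name__ == "__main__":
    req = EditRequest("undress this photo", depicts_real_person=True,
                      subject_is_minor=False)
    print(policy_gate(req))  # (False, 'refused: non-consensual sexualised image...')
```

The design point is that the check sits in front of the generator, so refusal does not depend on detecting harmful images after they already exist.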


Why is it an issue in the EU?

The incident immediately raised concern within the European Union, since it bears directly on fundamental legal obligations concerning human dignity, privacy, and the rights of the child. The distribution of manipulated sexually explicit images, especially those involving minors, is not merely harmful or offensive content but illegal content under EU law. The case was consequently perceived not as an isolated abuse of technology but as a structural malfunction that exposed people in the EU to serious risk through automated processes on a large-scale internet platform. The ease with which such content could be generated and circulated raised broader concerns about digital gender-based violence, child protection, and the erosion of safeguards intended to protect vulnerable groups online.



Breach of Digital Services Act

The legal significance of the Grok controversy is most clearly reflected in its connection to the Digital Services Act (DSA). Proceedings were initiated against X Corp. on 26 January 2026 to determine whether the company had fulfilled its obligations under the DSA with regard to the operation of Grok. Under investigation is whether the company properly assessed and mitigated the systemic risks related to the distribution of manipulated sexually explicit images and material that could constitute child sexual abuse material. Those risks were assessed to have already materialised: individuals in the EU, especially minors, had suffered severe harm.


The alleged breaches concern Articles 34(1), 34(2), 35(1) and 42(2) of the DSA, which require very large online platforms to assess the systemic risks stemming from their services, to take proportionate measures to mitigate those risks, and to report transparently to regulators. The imposition of interim measures signals the priority the DSA gives to the protection of minors and fundamental rights.


Role of independent fact-checkers, researchers and organizations

In this regulatory environment, fact-checkers, independent researchers, and civil society organisations are needed to uncover and report harms that platforms may underestimate or fail to disclose. The DSA acknowledges that the operation of sophisticated digital systems requires scrutiny independent of those who operate them, especially where vulnerable groups are concerned. Fact-checkers can measure the extent of AI-generated sexualised imagery, document cases involving minors, and determine whether platform protections work as intended. Such work makes regulatory enforcement and accountability more effective by converting individual cases of abuse into verifiable evidence of systemic risk, as the sketch below illustrates.
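As a simplified illustration of that conversion step, the sketch below aggregates hypothetical incident records into the kind of summary a researcher might submit to a regulator. The record fields and categories are assumptions made for illustration, not a real reporting schema.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Incident:
    post_id: str            # platform identifier of the offending post
    category: str           # e.g. "non-consensual sexualised image", "potential CSAM"
    subject_is_minor: bool
    still_online: bool      # whether the content remained accessible when checked

def systemic_risk_summary(incidents: list[Incident]) -> dict:
    """Turn individually documented cases into aggregate evidence:
    counts per category, cases involving minors, and takedown gaps."""
    return {
        "total_documented": len(incidents),
        "by_category": dict(Counter(i.category for i in incidents)),
        "involving_minors": sum(i.subject_is_minor for i in incidents),
        "still_online": sum(i.still_online for i in incidents),
    }

if __name__ == "__main__":
    sample = [
        Incident("p1", "non-consensual sexualised image", False, True),
        Incident("p2", "potential CSAM", True, False),
        Incident("p3", "non-consensual sexualised image", False, True),
    ]
    print(systemic_risk_summary(sample))
```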


On a larger scale, the Grok case illustrates why accountability mechanisms are essential to protecting fundamental rights in the era of artificial intelligence. When women and minors are sexualised at such scale by AI systems, the harm is neither abstract nor theoretical. Within the framework of the DSA, fact-checkers and researchers serve as a key link between lived harm and regulatory response by ensuring that violations are identified, analysed, and acted upon. As AI technologies continue to advance, cooperation between regulators, independent oversight actors, and civil society will be necessary to ensure that technological innovation does not undermine human dignity, privacy, or the rights of the child under European law.


Role of Media Literacy

Regulations and acts are necessary but not sufficient if the sources where content is created remain oblivious to the rules. Education is needed at multiple ends: a) content generation sources, i.e. platforms such as Grok, Gemini, and ChatGPT; b) content consumers, i.e. people of all ages across the EU and the world; and c) regulators, fact-finders/checkers, whistle-blowers, and similar actors. Fact-checkers and whistle-blowers can only perform post-incident analysis, and at the rate at which content is generated and spread, post-incident analysis risks becoming an act of vanity, since the damage is already done. Such intermediaries should therefore consider their contribution in a larger context: how to tighten regulatory policies so that they enable preventive, technological measures.


This also means that governing bodies cannot rely on documented rules alone. Those rules should also be codified technologically, so that detection is automatic and prevention faster; one established building block for this is sketched below. Many technology enthusiasts, SMEs, and even large corporations offer solutions that address specific aspects of disinformation, but a comprehensive platform providing tools that cover the broad scope of disinformation is what the moment requires. While the EU may still be scratching the surface, an integrated technological solution to curb disinformation is well on its way, and it may not be long before it serves its true purpose.
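One established way of codifying rules technologically is automatic matching of uploads against hash lists of known illegal content, as clearinghouses already maintain for child sexual abuse material. The sketch below illustrates the idea with an exact cryptographic hash and a placeholder hash list; real deployments use perceptual hashing (PhotoDNA-style) so that resized or re-encoded copies still match, and the lists are distributed under strict access controls.

```python
import hashlib

# Placeholder list standing in for a clearinghouse-maintained hash database.
# (The sample entry is the SHA-256 of empty input, used here only for the demo.)
KNOWN_ILLEGAL_HASHES: set[str] = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

def should_block(image_bytes: bytes) -> bool:
    """Automatic pre-distribution check: block any upload whose hash matches
    known illegal content. Exact hashes miss altered copies, which is why
    production systems rely on perceptual hashes instead."""
    return sha256_of(image_bytes) in KNOWN_ILLEGAL_HASHES

if __name__ == "__main__":
    print(should_block(b""))            # True: matches the sample list entry
    print(should_block(b"some image"))  # False: unknown content passes this check
```

The design point is that a codified rule runs on every upload without human intervention, shifting enforcement from post-incident analysis toward prevention.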


Co-Authored by: GV Vadivan


