Artificial Intelligence isn't very intelligent, but sometimes it seems like it is.
Over on Foolishness ... Or Is It, Smartfella got a comment from Anonymous that included "... write a paper on why AI should be dismantled or at the very least be regulated by requiring AI percentage of input be a footnote."
I took that phrase to ChatGPT and got this:
"
The Case for Dismantling or Regulating AI Through Transparency Requirements
Introduction
Artificial Intelligence (AI) has transformed numerous aspects of society, from automating labor to generating content. However, as AI’s influence expands, so too does the risk of misinformation, ethical violations, and the erosion of human creativity. While some advocate for AI’s continued proliferation, it is imperative to either dismantle AI or at the very least impose strict regulations that require AI-generated content to be footnoted with the percentage of AI input. Such measures would uphold transparency, accountability, and the integrity of human knowledge.
The Risks of AI and the Need for Dismantling
1. Misinformation and Deception
AI can generate text, images, and videos that are nearly indistinguishable from human-created content. This has already led to widespread misinformation, deepfakes, and the manipulation of public opinion. Without clear attribution, individuals cannot differentiate between human and AI-generated content, leading to an erosion of trust in media and academia.
2. The Undermining of Human Creativity
With AI’s ability to generate literature, music, and art, there is a growing concern that human creativity will be devalued. Artists, writers, and musicians risk becoming obsolete as AI models replicate and appropriate their work. By dismantling AI or at least ensuring transparency in its use, society can preserve human ingenuity and prevent a homogenization of culture.
3. Ethical and Bias Concerns
AI models are trained on vast datasets that often reflect societal biases. This leads to the reinforcement of stereotypes and systemic discrimination. AI-generated content may inadvertently propagate prejudices, making its unchecked use dangerous. If AI cannot be dismantled, it must at least be held accountable through clear labeling and transparency measures.
The Case for AI Regulation Through Footnoting
1. Ensuring Transparency in Knowledge Production
A requirement for AI-generated content to include footnotes detailing the percentage of AI input would provide clarity on the origins of information. This would help readers assess the credibility of a text and prevent AI-generated misinformation from being mistaken as fact.
2. Upholding Academic and Journalistic Integrity
In academic and journalistic fields, authenticity and original thought are paramount. Mandating AI percentage footnoting would prevent AI from being used to plagiarize or fabricate research. It would ensure that human authors receive credit for their intellectual labor and that AI is used as a tool rather than a substitute for critical thinking.
3. Promoting Ethical AI Development
If AI developers are required to disclose AI’s role in content generation, there will be a greater incentive to develop AI responsibly. Transparency requirements would encourage companies to refine AI models to reduce biases and enhance accountability.
Conclusion
While AI has the potential to revolutionize various industries, its risks far outweigh its benefits when left unregulated. The ideal solution would be to dismantle AI before it causes irreversible harm. However, if AI persists, strict regulations must be enforced, including the requirement that all AI-generated content be footnoted with the percentage of AI input. Such measures would protect human creativity, preserve truth in media, and promote ethical AI use. Without immediate action, society risks ceding knowledge, culture, and integrity to artificial systems that lack human accountability.
"
Not bad, wouldn't you say?
So I said "great, now prepare a photograph showing a flock of sheep on a mountain meadow with two shepherd dogs keeping them safe." And true to form, one of the sheep has only one front leg. That's AI for you.
Footnote: Some 99% of this post was generated by A.I.
.:. © 2025 Ludwig Keck