What, you might ask, is the "Content-Moderation Problem"? According to Chris Stokel-Walker, a British journalist, anyone who spends time on social media knows that "misinformation, abusive language and offensive content" are hard to avoid.
Figuring out how to avoid such content is what Stokel-Walker calls the "content-moderation problem." In his article in the Saturday-Sunday, July 12-13, 2025, edition of The Wall Street Journal, he suggests that Artificial Intelligence (AI) may soon solve the problem:
"In [a] Cato survey, 60% of users said they wanted social-media platforms to provide them with greater choice to pick and choose what they see and what they don’t. Soon this kind of personalized content moderation may become a reality, thanks to generative AI tools. In a paper presented this spring at the ACM Web Conference in Sydney, Australia, researchers Syed Mahbubul Huq and Basem Suleiman created a YouTube filter based on commercially available large language models. They used four AI chatbots to analyze subtitles from 4,098 public YouTube videos across 10 popular genres, including cartoons and reality TV. Each video was assessed on 17 metrics used by the British Board of Film Classification to assign film ratings, including violence, nudity and self-harm.

Two of the chatbots, GPT-4 and Claude 3.5, were able to identify content that human checkers assessed as harmful at least 80% of the time. The system isn’t perfect, and so far it can only assess language in videos, not images. It’s also expensive: “To filter every video [on YouTube] would cost trillions of dollars at today’s prices,” Huq said. But the demonstration model points to a future in which social media users are able to choose exactly what kinds of content they see" (emphasis added).
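Before I give my comment, a brief aside for technically minded readers. The filter the researchers built is, at bottom, a prompt to a large language model asking it to score subtitle text against rating categories. Here is a minimal, hypothetical sketch of that idea in Python, assuming the OpenAI chat API; the category subset, the 0-5 scale, the prompt wording, and the threshold logic are my own illustrative assumptions, not the researchers' actual pipeline.

```python
import json

from openai import OpenAI  # assumes the official OpenAI Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A few of the 17 British Board of Film Classification issues the article mentions.
CATEGORIES = ["violence", "nudity", "self-harm"]

def score_subtitles(subtitles: str) -> dict[str, int]:
    """Ask the model to rate subtitle text from 0 (none) to 5 (severe) per category."""
    prompt = (
        "Rate the following video subtitles from 0 (none) to 5 (severe) on these "
        f"categories: {', '.join(CATEGORIES)}. Reply with only a JSON object "
        "mapping each category to an integer.\n\n"
        f"Subtitles:\n{subtitles}"
    )
    response = client.chat.completions.create(
        model="gpt-4",  # one of the two models the study found most accurate
        messages=[{"role": "user", "content": prompt}],
    )
    # A real filter would validate the model's output instead of trusting it.
    return json.loads(response.choices[0].message.content)

def passes_user_limits(subtitles: str, limits: dict[str, int]) -> bool:
    """Return True if the video's scores stay within this user's chosen limits."""
    scores = score_subtitles(subtitles)
    return all(scores.get(category, 0) <= limit for category, limit in limits.items())

# Example: a user who tolerates mild violence but wants no self-harm content.
if __name__ == "__main__":
    visible = passes_user_limits("...subtitle text here...", {"violence": 2, "self-harm": 0})
    print("show video" if visible else "hide video")
```

Even this toy version makes the article's cost point concrete: filtering a video means at least one model call over its full subtitles, which is why Huq estimates that filtering every video on YouTube would cost trillions of dollars at today's prices.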
Here is my comment. Allowing every person who accesses the Internet to "choose exactly what kinds of content they see" is NOT a good idea. That would NOT be a step forward. That would be one more step away from the "real world," in which different people, and different organizations, and different political actors have very different ideas about what is good, and what is bad, and what ought to be done.
Allowing everyone to insulate themselves from any idea or perspective that is not their own is NOT how we will learn to live together (something that is ever more essential). Insulating ourselves from different perspectives is a prescription for further polarization. It is also a denial of the common world, the world that belongs to everyone. Allowing everyone to "pick and choose what they see and what they don't" is the opposite of helpful.
Let's be honest. Many people are now, essentially, "living" most of their lives online, and what we need to do is to learn how to live together, with all our differences.
So let me repeat myself. Giving ourselves tools that will allow us to eliminate from our notice, automatically, anyone and everyone with whom we think we disagree is NOT A GOOD IDEA!

Your take on Internet content moderation, and your stance that the Internet should be maintained as a more accurate representation of the real world, are fascinating. However, Gary, I am torn by the consequential idea that follows from your theory: should our online classrooms, then, also produce intermittent gunfire noises for students to hear while they sit at their desks at home? How do we mitigate exposure to violence, crime, and hate on the Internet while maintaining your proposed real-world standard?
I am not really following how my comments in my blog posting relate to the idea that our "online classrooms" might be designed to "produce intermittent gunfire noises for students to hear."
My point was that we should not try to insulate ourselves from commentaries with which we might disagree. We should, in other words, be prepared, in "real life," as on the Internet, to be exposed to what other people say, and the same thing would go for exposure to what other people do, which would include things like violence, crime, and hate. I was not intending to suggest that the Internet should be "maintained as a more accurate representation of the real world." Who would decide what the "real world" really is? We each need to do that for ourselves, based on our own observations and experience.
My point was that we should not try to "engineer" what we see on the Internet so that it portrays only what we like. We all need to be exposed to "content" without moderation by anyone. Then we will all, individually, and together as we observe and discuss things, decide what we should do, given what we find when we confront that "unmoderated" content.