
Discuss: What's your worst experience with social media AI moderator bots?

Facebook, Instagram, Threads, Twitter/X, Reddit, Digg, and so many other social media sites are now using AI bots as their mods. This all but removes the human touch from managing and dealing with tricky situations.

Most of the time, these AI moderators follow rigid, pre-programmed rules that can't handle edge cases properly, which can lead to unwarranted bans and suspensions.

What's your worst experience with social media AI moderator bots?
 
I run Facebook groups, some of which are very big, with thousands of members.

FB keeps re-enabling their Admin Assist AI to automatically approve join requests, even though I have disabled it time and time again. Bad actors get through without any of us admins/mods being able to vet them.

That really gets on my nerves, because they'll do something bad, FB will remove it, and then act like my group did something wrong, when it was their artificial unintelligence that let them join in the first place! WTF!
 
I made a post about racial justice with the hashtag #BlackLivesMatter on Facebook, and a bot moderator immediately removed it. I knew it was algorithmic bias, because I have seen similar posts on Facebook that were allowed to fly.
It's a problem with Facebook's moderation system. They automatically flag political content like this.

Gone are the days of real human moderation in social media.


The AI moderation system was implemented back in 2020. It still needs better training, though.
Facebook has stated that its rules are structured to reduce bias and subjectivity so that reviewers can make consistent judgements on each case. [65] In response to growing global pressure from governments and the public to take down violating content quickly, Facebook has invested heavily in automated tools for content moderation. These include image recognition and matching tools to identify and remove objectionable content such as terror-related content; NLP and language matching tools that seek to recognize and learn from patterns in text related to topics such as propaganda and harm; and pattern identification tools, which seek to identify patterns of similar objectionable content on multiple Facebook pages or patterns among individuals who post similar types of objectionable content.

The platform has found that pattern detection is most effective for images, such as resized terror propaganda images, rather than text, as text can be more easily manipulated in order to evade detection and removal, and because text requires greater contextual understanding to evaluate. [66]
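For what it's worth, the image-matching piece described above usually boils down to perceptual hashing: fingerprint a known-bad image so that resized or lightly edited copies still land near the same hash. Here's a minimal Python sketch of the general idea; the average-hash method, the hash database, and the distance threshold are all illustrative assumptions, not Facebook's actual system:

```python
from PIL import Image

def average_hash(path, hash_size=8):
    # Downscale to a tiny grayscale thumbnail and threshold each pixel
    # on the mean, so resizing or light recompression barely moves the bits.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(a, b):
    # Number of differing bits between two hashes.
    return bin(a ^ b).count("1")

# Hypothetical database of hashes of previously removed images.
KNOWN_BAD = {0x3A5F9C0D12E4B768}

def is_known_bad(path, max_distance=10):
    # Flag the upload if it lands within a few bits of any known-bad hash.
    h = average_hash(path)
    return any(hamming(h, bad) <= max_distance for bad in KNOWN_BAD)
```

The point of the fuzzy Hamming-distance match is exactly the "resized propaganda images" case in the quote: an exact cryptographic hash would miss even a one-pixel change, so matching systems compare perceptual fingerprints instead.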

As part of its hybrid approach to content moderation, Facebook engages in several phases of algorithmic and human review in order to identify, assess, and take action against content that potentially violates its Community Standards. Automated tools are typically the first layer of review when identifying violating content on the platform. Depending on the level of complexity and the degree of additional judgment needed, the content may then be relayed to human moderators. [67]
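That triage step is easy to picture in code. A hedged sketch of the general pattern, with made-up thresholds (real platforms tune these per policy area; nothing here is Facebook's actual pipeline):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "remove", "human_review", or "allow"
    reason: str

# Illustrative thresholds only, chosen for the example.
AUTO_REMOVE = 0.95
SEND_TO_HUMAN = 0.60

def triage(violation_score: float) -> Decision:
    # First-layer automated review: act only on high-confidence scores
    # and escalate ambiguous content to human moderators.
    if violation_score >= AUTO_REMOVE:
        return Decision("remove", "high-confidence policy violation")
    if violation_score >= SEND_TO_HUMAN:
        return Decision("human_review", "needs contextual judgment")
    return Decision("allow", "below review threshold")

print(triage(0.72))  # Decision(action='human_review', reason='needs contextual judgment')
```

The design choice worth noticing is the middle band: instead of forcing the classifier to decide everything, ambiguous scores get routed to people, which is the "depending on the level of complexity" caveat in the quoted passage, and arguably the part the complaints earlier in this thread say is missing in practice.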
 