Exposing a Skewed Reality

Despite all their intelligence, human beings have a pretty dismal record when it comes to avoiding mistakes. History has reinforced this dynamic time and again, with each instance pushing us to seek some form of protection. Our eventual answer was to bring regulatory bodies into the fold. Having a well-defined authority over each and every area was a game-changer, as it instantly gave us a safety cushion against our many shortcomings. The stability you would expect from such a development did arrive, but it failed to stick around for long, and technology is largely what caused it to dissipate. The moment technology's layered nature took over the scene, it gave individuals an unprecedented chance to exploit others for their own benefit. If that didn't sound bad enough, the practice soon materialized on such a massive scale that it overwhelmed our governing forces and sent them back to the drawing board. After a lengthy spell in the wilderness, though, the regulatory contingent finally seems ready to make a comeback. This trend has become more and more evident in recent times, and truth be told, a new development involving Meta might just solidify it moving forward.

Meta’s Oversight Board, an independent body set up to review Meta’s content decisions and policies, has delivered a scathing assessment of the company’s cross-check program. To put some context into play, the cross-check program is a measure Meta introduced to double-check moderation decisions involving high-profile accounts. Instead of letting its AI system make the call, as it does in every other case, Meta sends these decisions for human review. But who falls under the program’s protection? It covers figures such as journalists reporting from conflict zones and civic leaders whose statements are particularly newsworthy. According to the Oversight Board, however, the program has effectively created an unfair, and even somewhat harmful, moderation queue for various public figures. To illustrate its biggest downside, the board cited one incident where non-consensual pornography was left up on the platform for a prolonged period solely because of a delayed moderation response.

“While Meta told the board that cross-check aims to advance Meta’s human rights commitments, we found that the program appears more directly structured to satisfy business concerns,” the Oversight Board stated. “The board understands that Meta is a business, but by providing extra protection to certain users selected largely according to business interests, cross-check allows content which would otherwise be removed quickly to remain up for a longer period, potentially causing harm.”

As a result, the board has now handed Meta a total of 32 recommendations. These include hiding posts marked as “high severity” violations while a review is underway, even when they’re posted by business partners; improving content moderation for “expression that is important for human rights”; and adopting a review queue for such content that is separate from the one used for Meta’s business partners. Apart from that, the board also recommended that Meta mark the accounts of users who are part of the cross-check program, and thereby “allow the public to hold privileged users accountable” for whether or not they are following all the rules.

Meta has up to 90 days to respond to these recommendations.
