Meta’s Oversight Board is reviewing a case that focuses on Meta’s ability to permanently disable user accounts. Permanent bans are drastic measures that lock people out of their profiles, memories, and connections with friends, and, in the case of creators and businesses, their ability to market to and communicate with fans and customers.
The organization said this is the first time in its five-year history that permanent account bans have been the focus of its review.
The case under consideration is not that of an everyday user. Instead, it involves a prominent Instagram account that repeatedly violated Meta’s community standards by posting visual threats of violence against female journalists, anti-gay slurs against politicians, content depicting sexual acts, and allegations of misconduct against minorities. Although the account did not accumulate enough strikes to be automatically disabled, Meta decided to permanently ban it.
Although the board’s documents do not name the account in question, its recommendations could affect other users who post abusive, harassing, or threatening content targeting public figures, as well as those whose accounts are permanently banned without a transparent explanation.
Meta itself referred the case to the board; the referral covers five posts made in the year before the account was permanently disabled. The board says it is seeking input on several key issues, including how to handle permanent bans fairly, the effectiveness of current tools for protecting public figures and journalists from repeated abuse and threats of violence, the challenges of identifying off-platform content, whether punitive measures effectively shape online behavior, and best practices for transparent reporting of account enforcement decisions.
The decision to take up the case comes a year after users complained about a slew of bans handed down with little explanation of what went wrong. The issue affects not only Facebook groups but also individual account holders, many of whom believe automated moderation tools are to blame. Those who have been banned also complain that Meta Verified, the company’s paid support offering, has proven useless in these situations.
There is, of course, ongoing debate as to whether the Oversight Board has any real authority to address issues with Meta’s platform.
The board has limited power to effect change at the social networking giant: it cannot force Meta to make broad policy changes or address systemic issues. Notably, the board is not consulted when CEO Mark Zuckerberg makes fundamental changes to the company’s policies, such as last year’s decision to ease hate speech rules. The board can make recommendations or overturn individual content moderation decisions, but those decisions often take time, and it hears relatively few cases compared to the millions of moderation calls Meta makes across its user base.
According to a report released in December, Meta has implemented 75% of the more than 300 recommendations the board has issued, and it consistently complies with the board’s rulings on individual content moderation decisions. Meta also recently asked the board for policy advice on its introduction of Community Notes, a crowd-sourced fact-checking feature.
Once the Oversight Board issues its policy recommendations, Meta must respond within 60 days. The board is also seeking public comment on this case.
Correction: Public comments can be submitted anonymously. This post was updated after publication to reflect this.
