Meta's Oversight Board is taking up a case that focuses on Meta's ability to permanently disable user accounts. A permanent ban is a drastic action, locking people out of their profiles, memories, and friend connections and, in the case of creators and businesses, cutting off their ability to market to and communicate with fans and customers.
This is the first time in the Board's five-year history that a permanent account ban has been the focus of one of its cases, the organization notes.
The case under review does not involve an everyday user, however. It concerns a high-profile Instagram user who repeatedly violated Meta's Community Standards by posting violent visual threats against female journalists, anti-gay slurs targeting politicians, content depicting sexual acts, allegations of misconduct against minorities, and more. The account had not accumulated enough strikes to be automatically disabled, but Meta made the decision to permanently ban it anyway.
The Board's materials do not name the account, but its recommendations could have implications for others who target public figures with abuse, harassment, and threats, as well as for users whose accounts are permanently banned without a transparent explanation.
Meta itself referred this particular case, which involves five posts made in the year before the account was permanently disabled, to the Board. The tech giant said it is seeking input on a number of key issues: how permanent bans can be handled fairly, the effectiveness of its current tools for protecting public figures and journalists from abuse and repeated threats of violence, the challenge of identifying content that does not belong on the platform, whether punitive measures effectively shape online behavior, and best practices for transparent reporting on account enforcement decisions.
The decision to review the case comes after a year of users complaining about mass bans accompanied by little information about what went wrong. The problem has affected Facebook groups as well as individual account holders, many of whom believe automated moderation tools are to blame. In addition, those who have been banned have complained that Meta's paid support offering, Meta Verified, has proved useless in resolving the situation.
Whether the Oversight Board has any real influence in solving problems on Meta's platforms is still up for debate, of course.
The Board has limited scope to effect change at the social networking giant, meaning it cannot force Meta to make broader policy changes or address systemic issues. Notably, the Board was not consulted when CEO Mark Zuckerberg decided to make major changes to company policy, such as his decision last year to relax restrictions on hate speech. The Board can make recommendations and can overturn certain content moderation decisions, but it can be slow to rule. It also takes on a small number of cases compared with the millions of moderation decisions Meta makes across its user base.
According to a report released in December, Meta has implemented 75% of the more than 300 recommendations the Board has issued, and has consistently complied with its content moderation decisions. Meta also recently asked for the Board's opinion on the implementation of its crowdsourced fact-checking feature, Community Notes.
After the Oversight Board issues a policy recommendation, Meta has 60 days to respond. The Board is also soliciting public comments on the case, though these cannot be submitted anonymously.

