Community Requests for Account Suspensions (2 of 3)

[This document will be published in three parts: Part 1 provides background that can be referenced in future conversations, Part 2 deals with a very specific set of account concerns, and Part 3 addresses some long-form questions posted to the moderators on Tŵt.]

Part 2 of 3 Regarding specific account suspension requests

As a preamble, I want to remind anyone reading this that I chose not to block Gab before they came online. In the run-up to their launch there was a lot of conversation about defederating them, and within a couple of minutes of them coming online we had them blocked, but not before they had contravened our Community Guidelines. That’s a principle I cannot waver from. I’ve been around the block a few times, I’ve seen my assumptions proven wrong more than a few times, and I’ve seen people change over time. I had very little doubt about what would happen when they came online, but nonetheless, the way I operate, the way I work, they would be blocked the second they contravened our stated Guidelines. Anything else is, to me, unfair, hypocritical, and a dangerous precedent. I am not the judge of all things; I am merely the current administrator of an Internet service with a stated code of conduct.

###

The first account I want to discuss posts what one user-submitted report labelled “religious spam”. Here is one of the three toots for which we received a report:

Everything we do is motivated by the life, teachings and ministry of Jesus We believe that every human life has equal value and that every person should be empowered to reach their God-given potential. To do this, we all need to belong to flourishing communities

These words were reported as “Racist spammer”.

If I were to put my non-moderator hat on, I would silence this content. I would mute this account. I would take a one-click action to never see this person’s thoughts again. I have no interest in it, and personally I find it disagreeable. I do not enjoy being preached at.

And this, to me, is one of the strongest tools Mastodon users have, one that other services either don’t offer or offer only in weaker form. Mastodon makes no inferences about what “should” be in your feeds. There are no algorithms deciding to “surface” content for you.

You decide. You own your feed. You own your experience. If you don’t like something, you never have to see it again.

Every single toot offers you the option to mute the author, block the author, or report the author. “Mute” hides that author’s content from your feeds. “Block” means they can’t see you or your content either. And “report” is how a member can submit the content to the moderation team for review. These escalating user controls reflect the ideal of Mastodon itself: you have personal control over which individual Mastodon users can enter your feeds, and in cases that damage the very fabric of the community, through hate, malicious attacks or other unwelcome acts, you can request moderator intervention.
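
For the technically curious, these same escalating controls are exposed through Mastodon’s REST API (the mute, block and report endpoints), which is what the web and mobile apps call behind the scenes. The sketch below, in Python, is illustrative only: the instance URL, access token, account ID and status ID are placeholder values, not taken from any of the cases discussed here.

    # A minimal sketch of Mastodon's escalating user controls via its REST API.
    # All values below are placeholders.
    import requests

    INSTANCE = "https://toot.wales"   # any Mastodon instance
    TOKEN = "YOUR_ACCESS_TOKEN"       # an OAuth token with the relevant write scopes
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}

    account_id = "123456"             # the author you want to act on
    status_id = "987654"              # the toot you want to report

    # 1. Mute: hide that author's content from your feeds.
    requests.post(f"{INSTANCE}/api/v1/accounts/{account_id}/mute", headers=HEADERS)

    # 2. Block: they can no longer see you or your content either.
    requests.post(f"{INSTANCE}/api/v1/accounts/{account_id}/block", headers=HEADERS)

    # 3. Report: submit the content to the instance's moderation team.
    requests.post(
        f"{INSTANCE}/api/v1/reports",
        headers=HEADERS,
        data={
            "account_id": account_id,
            "status_ids[]": [status_id],
            "comment": "Reason for the report goes here",
        },
    )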

The moderator team was unsure of what to do here. The account in question received two reports from a single, external server and one from a toot.wales user.

This was our first real test of the Community Guidelines. I reviewed the Community Guidelines, visited every single site the account linked to (roughly twenty of them), and read page after page of religious mission statements. I found zero content that came anywhere near racism of any kind, nor any message of hate or prejudice. Just old-fashioned proselytising: “My religion is the best one, you should convert”.

After ascertaining that the account was not a bot, and that it was curated and hand-entered by a single individual, I could find no good reason, as a moderator, to suspend it. The account is still on the service. Besides me, the account has one other follower. (As the server admin I have to follow every account, as I need to see everything on the network.) And for as long as that content does not contravene our Community Guidelines and the Mastodon Server Covenant, I will defend its right to be on the service.

If another server doesn’t agree with me, or with our Community Guidelines, that doesn’t make either of us right or wrong. This is in fact the very powerful nature of Mastodon. Each server gets to make its own rules, and users can vote with their feet. If you don’t like our rules, defederate from us. Block us. That’s the beauty of the federated network. There is a built-in prevailing force that will help me determine whether what I’ve built provides value. If it doesn’t, toot.wales will shrivel and die, and I’m OK with that. If that turns out to be the case, it will not be the first failed experiment in my life.

###

The second account in question poses a similar issue: some people disagree with the content hosted externally by a particular account. The account is run by a retired journalist who holds, among other honours, a BAFTA Cymru award for journalism, and its content is primarily drawn from the account owner’s RSS feed of articles published on a website self-described as “an investigative news website looking into misdemeanors by organisations and individuals.”

Despite several toot.wales and external users complaining about the content of the account in the public timeline, we received only one series of user-submitted reports, from one user relatively new to our service, who reported three of the account’s toots in quick succession.

Each toot features a link to the source blog, with the image from the lede of that article embedded.

Toot 1 contained what the reporter deemed an upsetting image of a clown.

intentionally upsetting clown image shared without a content warning. some people have a strong phobia of clowns – apparently it's fairly common. to provide a safe environment for people on the fediverse, things like this should be hidden behind a content warning, or even left out entirely since the image does nothing to improve the reporting being shared, but instead detracts from it

We have no community guidance on using Content Warnings on images beyond “nudity, pornography or sexually explicit content, including artistic depictions, gore or extremely graphic violence, including artistic depictions”. This image, while disagreeable or unsettling to some, was not against the rules in place when it was posted.

Toot 2 contained a lede image of Pepe the Frog with a Welsh dragon emblazoned across its face, linking to a story about how life in a Welsh Valleys town is not particularly great under a Covid lockdown. Here is the article in question: https://the-eye.wales/a-rum-cove/

The user-submitted report stated:

this is pepe. this image has very strong ties to the alt right, particularly in america but across the world as well. people who use this are associated with the right wing, up to and including fascists. it's a dogwhistle: an image shared that the “in group”, other members of the alt right, understands as a symbol of the views of the person sharing it. because it can be dismissed as “just an image”, there is plausible deniability that allows distributers of this image a certain safety. this dismissal does align with the evidence, however. this image should be treated like the alt right dogwhistling that it is, i.e. the offending account should be removed.

Here is the Anti-Defamation League’s view on the Pepe the Frog meme:

Though Pepe memes have many defenders, the use of racist and bigoted versions of Pepe memes seems to be increasing, not decreasing. However, because so many Pepe the Frog memes are not bigoted in nature, it is important to examine use of the meme only in context. The mere fact of posting a Pepe meme does not mean that someone is racist or white supremacist. However, if the meme itself is racist or anti-Semitic in nature, or if it appears in a context containing bigoted or offensive language or symbols, then it may have been used for hateful purposes.

https://www.adl.org/education/references/hate-symbols/pepe-the-frog

Having read the article three times to try to divine any hate whatsoever in it, I have to chalk this use of the image up as non-bigoted in nature.

Toot 3 contained the same clown image again, and the linked news article makes reference to a rape trial. The word “rape” occurs four times, and each time it is in the context of “rape trial”; it never appears without the word “trial”. The user-submitted report reads:

the clown image again, and also this post talks about rape, which is another topic that should be hidden behind a content warning to avoid distressing survivors of sexual abuse

The clown image, again, breaks no guideline posted on toot.wales. The article does not “talk about rape”; rather, it references one of the subjects of the investigative piece as someone who “sabotaged a rape trial”. That incident was covered by most mainstream media in the UK, and this BBC article is a good summary of the subject matter in question: https://www.bbc.com/news/uk-wales-politics-51218364

Now, the site in question is, to some, an axe-grinding attempt at satire that deserves little to no attention. But the three toots that were reported do not meet the toot.wales threshold for suspension.

While the moderator team reviewed the reports, several observers weighed in with commentary and questions, which I will address below, but to be clear:

  1. The toots referenced by the reports did not meet the threshold for suspension.
  2. The account in question was asked to mark itself as a bot account, as it appeared to meet the threshold for an uncurated bot account.
  3. After sufficient warning and time had passed, the account was silenced, subject to our Community Guideline 1.b: uncurated bots will be removed from the public timeline.

Of course, any member of any community has to put some trust in the moderators of that community. To preserve that trust, the moderator team has a duty of care to obey the spirit of the law as well as the letter of the law – in this case the law being our self-imposed Community Guidelines.

In particular, the user who submitted the report, although new to the toot.wales instance, has a long history on the wider Mastodon network. In reviewing the context of the report, if either (1) the content had been clearly racist, bigoted or hateful in nature, or (2) the reporting user had been either brand new or an extreme/activist member, then the decision to act or not act might have been easier. But neither of those things was true.

Instead, we have content that some people do not want to see.

And therein lies the rub.

###

Ruth Bader Ginsburg is credited with the truism “You can disagree without being disagreeable”, and that for me is where the power of Mastodon’s user-level actions should be brought to bear. (See https://docs.joinmastodon.org/user/moderating/#blocking-and-muting)

At some point, everyone has to accept a little disagreeability in their life. Maybe the smell of microwave popcorn, maybe an annoying uncle, maybe a post on the Internet you don’t like. But I, as admin of toot.wales, am not appointed to monitor every toot on the service for any potential disagreeability. Instead, my role is to provide a space for Wales and the Welsh, at home and abroad, subject to the guidelines put in place. This is the social contract I’ve made with the users and potential users of the service, and it is spelled out in the terms of service that users agree to when signing up. And, as part of a federated network, I have a secondary duty to the broader Mastodon community, although these users have not agreed to our terms of service.

And as part of this great Mastodon experiment, I get to direct this tiny little piece of it to see which way works best, or doesn’t. It’s alright if I get it wrong. I cannot promise to always be right. What I can promise is that I am invested in the success of independent social media, of Mastodon, and of toot.wales. They embody principles and ideals that I hold dear, and I will continue to carry my version of that forward for as long as it remains valuable.

We currently have open questions about our Community Guidelines. These are the normal growing pains of a community effort. We will find a way to convene the community in a fair, consistent fashion, to make sure the community’s voice is heard and reflected in any changes to the Guidelines, and to establish a stable way to continue that conversation over the coming years.

I am extremely proud to have been a part of growing our nascent community in this space, and will continue to support the effort. I hope you do too, and if not, I hope you find a corner of the Indie Web that suits your needs. That, after all, is the entire point of the exercise.

Next: Part 3 of 3 Response to community questions

Previous: Part 1 of 3 Background and context on Tŵt Cymru moderation goals and policies