Blogiau Tŵt Blogs



from Tŵt Blog

Community Requests for Account Suspensions (3 of 3)

[This document will be published in three parts, such that Part 1 can be referenced in future conversations, Part 2 deals with a very specific set of account concerns, and Part 3 addresses some long form questions posted to the moderators on Tŵt.]

Part 3 of 3 Response to community questions

Several members of our community, as well as users from other Mastodon instances, posed questions and comments around our response to their concerns over one of the accounts, which some people felt deserved stronger moderation than they were perceiving. These questions are preserved and answered below.

I would argue that if both moderators and members of the Tŵt.Cymru community don't appreciate the account, that's reason in itself to get rid of it, even beyond any breaching of community guidelines or lack of content warnings on distressing content. It seems the account is not seen favourably by the community.

Social media communities, like all Internet communities, are subject to the 1% Rule, also known as the 90-9-1 principle. This rule reflects the observation that in any given community roughly 1% of members are vocal, active contributors, 9% respond and participate, and 90% consume passively. The comment above was in response to one moderator, not all moderators, and two members, not all members. Three people stating their views does not necessarily reflect the broader community, and this underscores why moderation should never be overly reactive to a vocal minority.

Relegating moderation discussion to the report facility, while encouraging auditability, is the exact opposite of transparency — it encourages moderators to form a different culture to the users on the site, entrenches separation between moderators and regular users, and is not auditable for people outside of the moderation team.

Staff conversations are not for public consumption. Publicly displaying moderator logs, decision-making, discussion, etc. opens the door to harassment of both the account being considered for moderation and the volunteer staff themselves. In addition, this can easily lead to derailed threads discussing these actions. I do not believe it encourages the formation of a different culture; that is not a de facto outcome of the guideline. Either there is trust between the community and the staff, or there isn’t. If there isn’t, there are larger issues at work than knowing who said what about which account. The transparency will be in the actions of the moderators and the instance itself, not in their private discussions.

A fedi node is part of a community with other fedi nodes. It's very important to see how your neighbours are handling an issue in order to know if you should continue federating with them. Making moderator discussions private means that other fedi nodes, and potential users, will have to go off the whisper network that will ultimately be formed in parallel with your moderation discussion areas.

Tŵt’s neighbours can read our Community Guidelines, observe our adherence to the Mastodon Server Covenant, and explore our timeline. Our social contract is first with users of our service, second to other Mastodon implementations, and third to the broader fediverse that can consume our content and interact with it.

It means that people can state things about your moderation behaviour and you have no immediate retaliation.

There will never be a need for “retaliation”. All observers are free to state their opinion, and act on it accordingly.

And it encourages users to be rash in favour of not interacting or being on an instance that might encourage toxic or otherwise bad behaviour,

The nature of the service is such that some instances will not federate with some other instances. This is a baked-in concept and I have no doubts that some Fediverse participants do not agree 100% with our instance. The same is true for our instance regarding others’. Our block list is public, and is based on our Community Guidelines.

because ultimately there is no way to know what resolution was sought.

There is generally no need for the entire community to know what the resolution of a particular report was. 99% of moderator actions are of little interest to 99% of the Fediverse. There is a network of instance administrators, there is a Discourse, there is a Discord community, and there are public Pull Requests and comments on GitHub. It is not at all hard to interact with staff from most instances, and the platform is still evolving. There is currently quite a bit of work being done on Suspend functionality, for example.

In the rare instances that the wider community wants to know about a particular problem, it will naturally surface, and will be addressed, or not, by the instance. Clearly, in this case, we feel there is a need to explain our thinking. This will not always be true. And I cannot be clear enough about this point: everyone is free to defederate anyone else. Tŵt has defederated with numerous Fediverse participants. I would be a hypocrite if I didn’t agree that anyone else is free to defederate with us if we do not meet their personal needs or in any other manner fail to live up to that instance or person’s understanding of the social contract.

In all cases, we are not stifling discussion about moderator actions. Healthy communities allow for appropriate discussion and appeal of moderator actions. And, when appropriate, as in this document, we will seek to clarify those actions in response to community concerns.

Did you take a break for four hours? Or did you refuse to ban them from the instance? Did you give them a second chance that is conditional on their response or their behaviour? Or something else? How can we know any of those things?

You can ask. When and where appropriate, we will answer. The broader question of whether or not Mastodon should display these audit records in public is one for the Mastodon developer community, and I urge all who are interested to contribute their requirements and suggestions. Open source development requires participation from all users of the software if it is to reflect its users’ needs.

What is a good time to wait for you to respond? What's a good waiting time for choosing to defederate with you? Or for leaving your instance?

We offer no service level agreement. The best waiting time would be exactly as long as anyone deems appropriate. We are a young, small community with a volunteer staff who all have other jobs and commitments. The nature of the Indie Web is that it is self-hosted. We have tried to create some structure around this instance to provide stability and confidence, but no guarantees are made beyond the Mastodon Server Covenant and the language found in our Terms of Service.

How do you protect yourselves against people who will inevitably see this discussion, and form word of mouth that Tŵt is not a safe space for its members or for people who interact with it?

If that assertion proves true in a broad enough portion of the Fediverse, it would signal the fact that Tŵt is not a viable exercise in its current form, and we would likely close our doors.

How can people inside and outside of the community (read: federated with) view the record of actions that have been taken in relation to accounts, and the reasons why they were taken?


How can people inside and outside of the community figure out the difference between moderators having a day off, versus moderators choosing not to act against an account (for whatever reason is deemed appropriate, say an account doesn't meet the criteria)?

They can ask. When and where appropriate, we will answer.

From the user's perspective, is there a tangible difference between the moderators sitting on their hands, versus the moderators deciding that an account does not meet the guidelines?

All communities must form a mutual trust with their host/moderators/conveners etc. Either this trust exists, or it doesn’t. That trust is built and perpetuated by clear, consistent actions based on fair principles of convention. If a given user does not have trust or faith in a given moderator, host, instance, they should probably find a different moderator, host, or instance.

From the user's perspective, is there a way to view past moderator actions, with the reasons why those actions were taken? One of the reasons for this might be, searching for information on someone who behaved inappropriately and harassed people, who has moved to another instance, and is trying to continue it there.

Not that I am aware of. There is a community of instance administrators where these discussions take place, there are well-informed community users who raise issues and post user-submitted reports, and there is the #fediblock hashtag. These are just a few of the ways we all manage to scrape together a federated community using open standards, and as the community grows, no doubt the underlying software will grow, too.

There are over 3,000 Mastodon servers online. Over 3,000 codes of conduct for everyone to figure out. 3,000 instance administrators to commune, share and learn from each other. I have no doubt it is far from perfect. But at their heart, Mastodon and the broader Fediverse are user-centric, user-driven, community-guided. Individual instances may rise and fall in relevance, but continued participation and feedback will help form the necessary community links required for any of this to work.

In summary:

  1. The two accounts in question were not suspended; one of the accounts was silenced
  2. All administrative and moderator activities are free to be discussed, appealed, maligned or endorsed. Questions about these actions should be posed on the Tŵt platform to the Moderator staff, who are listed on our site
  3. All moderator discussions about specific accounts are private. Moderators who cannot demonstrate a consistent, impartial approach will have their moderator privileges revoked
  4. Moderator actions will likely not occur in response to DMs or public timeline discussions. Community members seeking action from moderators should file a report
  5. We are exploring ways to convene the community in a regular, representative manner to review the Community Guidelines

All of us involved in running and maintaining Tŵt are grateful for everyone’s participation in helping craft a bilingual, safe, privacy-focussed social media experience for Wales and the Welsh, at home and abroad.

Previous: Part 2 of 3 Regarding specific account suspension requests


from Tŵt Blog

Community Requests for Account Suspensions (2 of 3)

[This document will be published in three parts, such that Part 1 can be referenced in future conversations, Part 2 deals with a very specific set of account concerns, and Part 3 addresses some long form questions posted to the moderators on Tŵt.]

Part 2 of 3 Regarding specific account suspension requests

As a preamble, I want to remind anyone reading this that I chose not to block Gab before they came online. In the run-up to their launch there was a lot of conversation about defederating them. And within a couple of minutes of them coming online, we had them blocked. But not before they contravened our Community Guidelines. That’s a principle I cannot waver from. I’ve been around the block a few times, I’ve seen my assumptions proven wrong more than a few times, and I’ve seen people change over time. I had very little doubt what was going to happen when they came online, but nonetheless, the way I operate, they would be blocked the second they contravened our stated Guidelines. Anything else, to me, is unfair, hypocritical, and a dangerous precedent. I am not the judge of all things, I am merely the current administrator of an Internet service with a stated code of conduct.


The first account I want to discuss posts what one user-submitted report labelled “religious spam”. Here is one of the three toots for which we received a report:

Everything we do is motivated by the life, teachings and ministry of Jesus We believe that every human life has equal value and that every person should be empowered to reach their God-given potential. To do this, we all need to belong to flourishing communities

These words were reported as “Racist spammer”.

If I were to put my non-moderator hat on, I would silence this content. I would mute this account. I would take a one-click action to never see this person’s thoughts again. I have no interest in it, and I find it disagreeable to me, personally. I do not enjoy being preached at.

And this, to me, is one of the strongest tools Mastodon users have, one that other services either lack or offer only in weaker form. Mastodon makes no inferences about what “should” be in your feeds. No algorithms decide to “surface” content for you.

You decide. You own your feed. You own your experience. If you don’t like something, you never have to see it again.

Every single toot has the user option to mute the author, block the author, or report the author. “Mute” hides that author’s content from your feeds. “Block” means they can’t see you or your content either. And “report” is how a member can submit a report about the content to the moderation team. These escalating user controls reflect the ideal of Mastodon itself; you have personal control over individual Mastodon users’ abilities to enter your feeds, and in cases that damage the very fabric of the community through hate, malicious attacks or other unwelcome acts you can request moderator intervention.
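For the technically curious, these three escalating controls map directly onto Mastodon’s public REST API. The sketch below only builds the requests a client would send; the account and status IDs are illustrative placeholders, not real values.

```python
# Sketch of Mastodon's three escalating user controls as REST API requests.
# Endpoints are from the public Mastodon API; IDs here are placeholders.

def mute_request(account_id: str) -> tuple[str, str, dict]:
    """Mute: hide this author's content from your feeds."""
    return ("POST", f"/api/v1/accounts/{account_id}/mute", {})

def block_request(account_id: str) -> tuple[str, str, dict]:
    """Block: mute, plus prevent the author from seeing you or your content."""
    return ("POST", f"/api/v1/accounts/{account_id}/block", {})

def report_request(account_id: str, status_ids: list[str], comment: str) -> tuple[str, str, dict]:
    """Report: ask the moderation team to review specific toots by this account."""
    return ("POST", "/api/v1/reports", {
        "account_id": account_id,
        "status_ids": status_ids,  # the specific toots being reported
        "comment": comment,        # free-text context for the moderators
    })

# Example: escalate past mute/block to a report on two toots.
method, path, body = report_request("12345", ["111", "222"], "Spam on the public timeline")
```

The report payload is what generates the audit trail the moderation team works from; mute and block take effect purely on the user’s side.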

The moderator team was unsure of what to do here. The account in question received two reports from a single external server and one from a local user.

This was our first real test of the Community Guidelines. I reviewed the Community Guidelines, I visited every single site the account linked to – roughly twenty sites – I read page after page of religious mission and I found zero content that came anywhere near racism of any kind. Neither did I find any message of hate or prejudice. Just old-fashioned proselytising. “My religion is the best one, you should convert”.

After ascertaining that the account was not a bot but was curated and hand-entered by a single individual, I could find no good, moderate reason to suspend the account. And the account is still on the service. Besides me, the account has one other follower. (As the server admin I have to follow every account, as I need to see everything on the network.) And for as long as that content does not contravene our Community Guidelines and the Mastodon Server Covenant, I will defend its right to be on the service.

If another server doesn’t agree with me, or our Community Guidelines, that doesn't make either of us right or wrong. This is in fact the very powerful nature of Mastodon. Each server gets to make their own rules, and the users can vote with their feet. If you don’t like our rules, defederate us. Block us. That’s the beauty of the federated network. There is a prevailing force built in that will help me determine if what I’ve built provides value. If it doesn’t, it will shrivel and die, and I’m OK with that. If that turns out to be the case, this will not be the first failed experiment in my life.


The second account in question poses a similar issue. Some people disagree with the content hosted externally by a particular account. This account is run by a retired journalist who holds, among other awards, a BAFTA Cymru award for journalism, and the content is primarily drawn from the account owner’s RSS feed of articles published on a Web site self-described as “an investigative news website looking into misdemeanors by organisations and individuals.”

Despite several local and external users complaining about the content of the account in the public timeline, we only received one series of user-submitted reports, from one relatively new user to our service, who reported three of the account’s toots in quick succession.

Each toot features a link to the source blog, with the image from the lede of that article embedded.

Toot 1 contained what the reporter deemed an upsetting image of a clown.

intentionally upsetting clown image shared without a content warning. some people have a strong phobia of clowns – apparently it's fairly common. to provide a safe environment for people on the fediverse, things like this should be hidden behind a content warning, or even left out entirely since the image does nothing to improve the reporting being shared, but instead detracts from it

We have no community guidance on using Content Warnings on images other than “nudity, pornography or sexually explicit content, including artistic depictions, gore or extremely graphic violence, including artistic depictions”. This image, while disagreeable or unsettling to some, is not against the rules in place when the image was posted.

Toot 2 contained a lede image of Pepe the Frog with a Welsh dragon emblazoned across its face, linking to a story about how life in a Welsh Valleys town is not particularly great under a Covid lockdown.

The user-submitted report stated:

this is pepe. this image has very strong ties to the alt right, particularly in america but across the world as well. people who use this are associated with the right wing, up to and including fascists. it's a dogwhistle: an image shared that the “in group”, other members of the alt right, understands as a symbol of the views of the person sharing it. because it can be dismissed as “just an image”, there is plausible deniability that allows distributors of this image a certain safety. this dismissal does not align with the evidence, however. this image should be treated like the alt right dogwhistling that it is, i.e. the offending account should be removed.

Here is the Anti-Defamation League’s view on the Pepe the Frog meme:

Though Pepe memes have many defenders, the use of racist and bigoted versions of Pepe memes seems to be increasing, not decreasing. However, because so many Pepe the Frog memes are not bigoted in nature, it is important to examine use of the meme only in context. The mere fact of posting a Pepe meme does not mean that someone is racist or white supremacist. However, if the meme itself is racist or anti-Semitic in nature, or if it appears in a context containing bigoted or offensive language or symbols, then it may have been used for hateful purposes.

Having read the article three times to try and divine any hate whatsoever in it, I have to chalk this image up as non-bigoted in nature.

Toot 3 contains the same clown image again, and the linked news article makes reference to a rape trial. The word “rape” occurs four times, and each time it is in the context of “rape trial”; it never appears without the word “trial”. The user-submitted report reads:

the clown image again, and also this post talks about rape, which is another topic that should be hidden behind a content warning to avoid distressing survivors of sexual abuse

The clown image, again, breaks no posted guideline. The article does not “talk about rape”; instead, it references one of the subjects of the investigative article as someone who “sabotaged a rape trial”. This incident was covered by most mainstream media in the UK, including a BBC summary of the subject matter in question.

Now, the site in question is to some an axe-grinding attempt at satire that deserves little to no attention. But the three toots that were reported do not meet the threshold for suspension.

While the moderator team reviewed the reports, several observers weighed in with commentary and questions, which I will address below, but to be clear:

  1. The toots referenced by the report did not meet the threshold for suspension.
  2. The account in question was asked to mark itself as a bot account, as it appeared to meet the threshold for an uncurated bot account.
  3. After sufficient warning and time had passed, the account was silenced, subject to our Community Guideline 1.b: uncurated bots will be removed from the public timeline.

Of course, any member of any community has to put some trust in the moderators of that community. To preserve that trust, the moderator team has a duty of care to obey the spirit of the law as well as the letter of the law – in this case the law being our self-imposed Community Guidelines.

In particular, the user who submitted the report, although new to the instance, has a long history on the wider Mastodon network. When reviewing the context of the report, if either (1) the content had been clearly racist, bigoted, or hateful in nature, or (2) the user submitting the report had been either brand new or an extreme, activist member, then the decision to act or not act might have been easier. But neither of these things was true.

Instead, we have content that some people do not want to see.

And therein lies the rub.


Ruth Bader Ginsburg is credited with the truism “You can disagree without being disagreeable”, and that, for me, is where the power of Mastodon’s user-level actions should be brought to bear.

At some point, everyone has to accept a little disagreeability in their life. Maybe the smell of microwave popcorn, maybe an annoying uncle, maybe a post on the Internet you don’t like. But I, as admin of Tŵt, am not appointed to monitor every toot on the service for any potential disagreeability. Instead, my role is to provide a space for Wales and the Welsh, at home and abroad, subject to the guidelines put in place. This is the social contract I’ve made with the users and potential users of the service, and it is spelled out in the terms of service that users agree to when signing up. And, as part of a federated network, I have a secondary duty to the broader Mastodon community, although those users have not agreed to our terms of service.

And as part of this great Mastodon experiment, I get to direct this tiny little piece of it to see which way works best, or doesn’t. It’s alright if I get it wrong. I cannot promise to always be right. What I can promise is that I am invested in the success of independent social media, Mastodon, and Tŵt. They embody principles and ideals that I hold dear, and I will continue to advocate for my version of them for as long as that remains valuable.

We currently have questions about our community guidelines. These are the normal growing pains of a community effort. We will find a way to convene the community in a fair, consistent fashion to make sure the community’s voice is heard and reflected in any changes to our Community Guidelines, as well as a stable way to continue that conversation over the coming years.

I am extremely proud to have been a part of growing our nascent community in this space, and will continue to support the effort. I hope you do too, and if not, I hope you find a corner of the Indie Web that suits your needs. That, after all, is the entire point of the exercise.

Next: Part 3 of 3 Response to community questions

Previous: Part 1 of 3 Background and context on Tŵt Cymru moderation goals and policies


from Tŵt Blog

Community Requests for Account Suspensions (1 of 3)

[This document will be published in three parts, such that Part 1 can be referenced in future conversations, Part 2 deals with a very specific set of account concerns, and Part 3 addresses some long form questions posted to the moderators on Tŵt.]

Part 1 of 3: Background and context on Tŵt Cymru moderation goals and policies.

“Tŵt is the community-led microblogging network for Wales and the Welsh, at home and abroad.”

This is the founding statement for Tŵt, an instance of Mastodon. Mastodon is

“a free and open-source self-hosted social networking service. It allows anyone to host their own server node in the network, and its various separately operated user bases are federated across many different servers. Each operating server has its own code of conduct, terms of service, and moderation policies. This differs from centrally hosted social networks by allowing users to choose a specific server which has policies they agree with, or to leave a server that has policies they disagree with, without losing access to Mastodon's social network.”

In recent weeks, one account on our instance of Mastodon has received a report submitted by a fellow user of the service, a community member. Reports are a built-in feature of Mastodon that allow individual members to signal to the server operator that a particular toot or account contravenes the server policies, and this establishes an audit trail for actions taken by moderation staff. Due to the federated nature of the content, a report can be “remote” (made by a user of a different server who is seeking to stop that content from coming into their server) or “local” (made by an account on the same server as the offending account).

In both cases, the members with moderator privileges are then able to review the report and act upon it, with generally four possible outcomes: do nothing, warn the user, silence the user, suspend the user.
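Those four outcomes correspond closely to the action types exposed by Mastodon’s admin API. The sketch below maps them onto the documented `POST /api/v1/admin/accounts/:id/action` endpoint; note that the “warn” mapping (type `none` plus a message) reflects how the moderation UI delivers warnings, and the account ID shown is a placeholder.

```python
# Sketch: the four moderation outcomes expressed via Mastodon's admin API.
# A "do nothing" resolution and a warning both use action type "none";
# a warning additionally carries text that is e-mailed to the affected user.

OUTCOME_TO_ACTION = {
    "do nothing": "none",   # resolve the report, take no action
    "warn": "none",         # "none" plus a message delivers a warning
    "silence": "silence",   # hide from public timelines; followers still see posts
    "suspend": "suspend",   # remove the account from the service
}

def admin_action(account_id: str, outcome: str, warning_text: str = "") -> tuple[str, dict]:
    """Build the admin API path and request body for a moderation outcome."""
    body = {"type": OUTCOME_TO_ACTION[outcome]}
    if warning_text:
        body["text"] = warning_text  # explanatory message sent to the user
    return (f"/api/v1/admin/accounts/{account_id}/action", body)
```

The distinction between silencing and suspending matters later in this document: the account discussed in Part 2 was silenced, not suspended.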

Before I dig into the particular account in question, the report our staff received, and the community responses to our handling of the report, I’d like to lay out some context for why Tŵt exists, why it’s on Mastodon, and why I am enthusiastic about having this sort of problem to deal with.


My name is Jaz-Michael King. I am a technologist. Born in Cardiff, Wales, I moved around as a young man to England and France and then on to the United States, in 1996. I was fortunate to ride the boom of the World Wide Web, and I built a lot of Web sites for a lot of people, including a number of large, successful communities.

Having been homeless at a very young age, I found that the Internet – and the World Wide Web – opened up a world I had never seen or thought I would be privileged to participate in. It gave me opportunities to join communities I had never been exposed to and to learn subject matter I otherwise would not have access to, and I took immense joy in the egalitarian nature of a free and open movement founded on open source software.

As a software developer myself, I contributed where I could to various technologies, and I started a company to empower small businesses to join the Web revolution. I still believe the Internet to be the great equaliser.

Anyone can publish, anyone can speak, anyone can contribute. Having been raised in a repressive, religious environment, the Internet remains to me one of the most powerful forces for equality, sharing, and creativity the world has ever known.

The early Web was enthralling. Market forces of course drove up commercial interest, but a certain freedom, rooted in a classless egalitarianism, thrived. Over time, though, more and more services fell by the wayside, and power, and eyeballs, were slowly concentrated into what we have today: the conglomeration of “Big Tech” that comprises Facebook, Apple, Amazon, Netflix, Google, and Microsoft.

Slowly, much of what was good on the Internet, and especially on the Web, was replaced by mega corporate, centralised services delivered by American companies and their stockholder-serving, profit-driven policies. My personal experience of it all was such that I slowly found myself using the social networking capabilities less and less, and as a lonely, broken-familied, Welshman abroad, this saddened me. The magical thing that I had been a tiny part of creating was being bastardised and turned away from what I believed its purpose was.

The World Wide Web was created by Tim Berners-Lee, someone whom I have long supported. Supporting someone doesn’t mean I agree with every single thing that person says or does, but I support the goal and purpose of the WWW. I’ll let Tim say it in his own words:

When I invented the World Wide Web as an information sharing system in 1989, I aimed to create a neutral space where everyone could create, share, debate, innovate, learn and dream. That’s why I gave my invention away for free, so that anyone, anywhere could access and build on it without permission. My vision was an online space that would give people freedom — and America’s entrepreneurial, optimistic spirit embraced it with enthusiasm. In the early days, there was a wonderful spirit of empowerment of individuals. I could read any blog I liked, and I could write my own blog with links pointing to my favorite things. Anyone could put their small business online. Now that vision is threatened. That choice you have to use the Web for whatever you want could be taken away.

I couldn't agree more. Having dabbled with some of the larger social networks, I became more and more disdainful of their intrusion into my privacy. I tried building small solutions to this problem for myself, but I failed to inspire participation or demonstrate value to my circle of friends, for whom Facebook et al offered far too much ease coupled with a strong social graph of their friends. It’s where most of the people they know, are.

Now, being somewhat of a self-starter I’ve never shied away from trying to fix something myself. And social media was no different. I cannot remember how I first became aware of the Mastodon project, but my spidey senses started tingling immediately. Open source? Check. Use of World Wide Web Consortium standards? Check. Mission statement I can get behind? Check.

Several features made immediate, welcoming sense. A simple mute option means you control what’s in your feed. If you disagree with something, mute it. You have the control, not the company, not the platform. A general mission to provide better tools for managing abusive behaviour, baked in to the product itself, means I can leverage all the good work that reflects that particular corner of the zeitgeist.

Self-hosting based on W3C standards means the service can outlive the founder’s personal energy levels. And of course, portability. The opportunity to build a community that can live or die on members being able to quickly vote with their feet by porting their account to the server of their choosing means that, given a large enough user base, the community should be able to self-manage, empowered by their own freedom to manage their own experience and their own data.

These are the things I myself want in a social media service, and I’m willing to bet my own money there are others like me. So, I started a server.

I was stuck on whether to run a “generalist” instance or something more narrow. I opted to combine free, open, safe social media with a means to encourage and promote the use of the Welsh language online. My nation’s cultural heritage has been under attack for some time now, and the language has a peculiar place in the broader context of the United Kingdom.

A renewed zeal for Welsh language learning and a growing sense of pride in our shared traditions, rekindled by an empowered, devolved national government, meant I could simultaneously participate in the Cymraeg 2050 program, a Welsh Government initiative to reach a million Welsh speakers by 2050. To this end, several translators were hired to produce a complete Welsh translation of the service.

So Tŵt Cymru was born. To ensure that it would not turn into my personal dictatorship, or in any way become something from which profit could be harvested, I incorporated a charitable corporation, invited three people to the board, and established governance that can survive me.

The charity is now funded to independently support the service for quite some time, in terms of paying the hosting bills, domain name fees and such. No individual is paid in any way by the charitable corporation. Our governance is described on our website.

As part of administering a self-hosted instance of the service, I needed to come up with an easily understood, and easily defended, code of conduct. This code of conduct is available for all to see; it is not set in stone, and has been edited once to broaden the scope of “Excessive Advertising” to include “new accounts used solely for advertising”. It can, and likely will, be edited again in the future as the community finds a way to express itself clearly.

In addition, we are signed on to the Mastodon Covenant, the terms of which can be reviewed at – this signals to the wider community that we, as a self-hosted instance of Mastodon, concur and covenant with the founding developer as to the desired nature of the social network service.

Furthermore, to ensure consistent application of our Community Guidelines, the account moderation function has these Community Guidelines baked into the moderation workflow, as in the following example:

Screenshot of the Moderator Actions menu, requiring the moderator to select the Community Guideline being infringed

The expectation is that the moderator can clearly point to the Community Guideline driving the action. This helps ensure consistency in moderation. Currently our audit log holds just under 1,000 records of actions taken by moderators, and these are visible to all staff.

Screenshot of the Moderator Audit Log

This, I believe, forms a firm foundation for a strong community network with a clear roadmap for sustainable growth and continued self-moderation. The moderators currently volunteering their time and energy have demonstrated both their commitment to and agreement with the mission statement of Tŵt, and an ability to moderate, or, more importantly, to be moderate. The term stems from the ability to hold the middle ground, to see both sides, to be reasonable, to not be extreme. That ability is vital in preserving a fair, consistent code of conduct.

This is especially important in the context of Mastodon. I, clearly, do not hold “moderate” views. I believe mega-corporate social media networks are bad for society. This is an extreme view. As I am human, I also have extreme views on certain politics, certain hobbies, certain foods. None of these should be on display from the administrator and founder of this instance if I want to foster a moderate community. Tŵt is “the community-led microblogging network for Wales and the Welsh, at home and abroad”. This is pretty inclusive. I do not, for example, overtly support a particular political party. I have kept my personal views on Welsh independence away from the service. I do not comment on inflammatory content, or content that seeks to elicit a particular response (unless acting in good faith in service of the Community Guidelines).

Which brings me to the issue of being asked to remove accounts from the service, the content of which some people find disagreeable.

Next: Part 2 of 3 Regarding specific account suspension requests


from kolib

How to tighten your face at home

Dermatologists have found the cheapest way to rejuvenate.

It is quite possible to rejuvenate the face without resorting to expensive cosmetic and plastic procedures.

This conclusion was reached by researchers at Northwestern University in Illinois (USA).

American scientists have found that to look younger without the intervention of plastic surgeons, it is enough to do facial exercises for half an hour a day. This conclusion is supported by data from an experiment involving 16 women aged 40 to 65 years.

For 20 weeks, these women performed a set of 32 facial exercises alongside their instructors. For a further 20 weeks, they did the exercises at home, working out for half an hour a day. Throughout, participants were photographed regularly.

Images of the women taken at different points in the experiment were shown to independent dermatologists, who were asked to estimate their age. Initially, the reviewers put the participants' average age at 50.8 years. After the course of facial exercises, the same reviewers estimated that the women were, on average, just over 48 years old.

Perhaps this is an inexpensive and safe way to rejuvenate, said the study's lead author, Murad Alam. According to him, facial exercises develop the subcutaneous muscles, making the face fuller, which gives the effect of youth.

Contact us:

Read more...

from Best iOS Apps Service

6clicks Risk Review for Teams

Equip your team with this risk management tool they can learn in minutes. Simply swipe to add risks relevant to your team and empower board members, executives, and managers with the wisdom of the crowd to uncover risks across your entire organization. 6clicks is free for teams looking for expertly guided risk identification, a first-of-its-kind team assessment, and rich insights. The result is real-time reporting, including your team’s aggregated risk matrix, sent directly to your email.


BOARD AND EXECUTIVE RISK REVIEWS: Use the 6clicks app to streamline risk reviews, including risk identification and assessment, at the board and executive level. Benefit from tapping into the wisdom of relevant experts across the organization, driving awareness, engagement, and real-time, actionable insights.

PROJECT RISK REVIEWS: Now your weekly/monthly project risk reporting is a breeze. Not only is reporting easy, but you can take a more data-driven approach to risk identification and assessment, getting input from your project team and business stakeholders.

TOPIC SPECIFIC RISK REVIEWS: Focus on specific risk domains like pandemics, cybersecurity, or environment to gain awareness and uncover the risks most relevant to your organization. It’s a great way to explore emerging risks or focus on specific areas on a continuous basis.


SMART SWIPE: This intuitive user experience simplifies the entire review process by providing individuals across all teams the opportunity to individually assess the likelihood and impact of each risk they identify as relevant to the team. The result: more informed valuable discussions around risk awareness, likelihood, and impact.

PRE-DEFINED RISK LIBRARIES: You can choose from our expertly defined risk libraries or create your own! Just some of the risk libraries available now are General Business, Cybersecurity, Environment, Pandemic, Startup, and Projects.

EASY TEAM COLLABORATION: The wisdom is in the crowd. With our unique team-driven risk identification and assessment, you and your colleagues can invite each other to participate in reviews and share data to reach a powerful consensus.

RISK ASSESSMENT: Once identified, your team can assess the impact and likelihood of risks.

MASTERFUL ANALYTICS & REPORTING: Share a Risk Matrix worth your time. Our instantly actionable matrix provides insight never before available, helping directors, consultants, executives, managers, and the board make data-driven, accurate decisions.

Goodbye slides, spreadsheets, and painful meetings! Try 6clicks Risk Review for Teams today for free!


from Tŵt Blog

Tŵt is now updated to Mastodon 3.1 which brings several new features.

New Bookmark Button


Bookmarking is a new way for you to favourite something without informing the author. It's a great way to make a quick note of a toot you found helpful. You can review your bookmarks in the new “Bookmarks” menu item.

To remove a bookmark, simply open that toot and click the bookmark icon again.
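For anyone scripting their own client, bookmarks are also exposed through the Mastodon REST API as of 3.1: `POST /api/v1/statuses/:id/bookmark` saves a toot, and `GET /api/v1/bookmarks` lists your saved toots. A minimal Python sketch follows; the instance URL and access token are placeholders you would replace with your own.

```python
import json
import urllib.request

INSTANCE = "https://example.social"  # placeholder: your instance's base URL
TOKEN = "YOUR_ACCESS_TOKEN"          # placeholder: a token from Settings -> Development

def bookmark_url(instance: str, status_id: str) -> str:
    # Endpoint for bookmarking a single status (toot), added in Mastodon 3.1
    return f"{instance}/api/v1/statuses/{status_id}/bookmark"

def bookmark(status_id: str) -> dict:
    # Bookmark a toot; unlike a favourite, the author is not notified
    req = urllib.request.Request(
        bookmark_url(INSTANCE, status_id),
        method="POST",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Swapping `bookmark` for `unbookmark` in the endpoint removes the bookmark again, mirroring the icon toggle in the web interface.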

Download Media

It is now easier to download audio and video files that have been shared on the network, with a dedicated download button.

Follow Request Notifications

Better access to your follow requests: easily authorise or reject from a straightforward view of all incoming follow requests.


Timeline Filters

Hide or remove toots containing a particular word or character string in all of your timelines.

And a whole lot more

Mostly admin functions, but there's plenty in this update to make your life a little easier and a little faster.

If you have any comments, please DM me.


from Tŵt Blog

One of the first problems we ran into when trying to market Tŵt as a friendly alternative social media network was people asking “where's the Tŵt app?”

Of course, our answer was “you can download a bunch of apps, your choice! Freedom!”

To which we heard: “Right. Sounds good. So which one is the Tŵt app?”

The simple truth is when marketing to non-tech crowds, the freedom of Mastodon can introduce some undesirable complexity. “What's an instance?” was one of the first hurdles.

So, we reached out to our two favourite apps, Tusky and Amaroq, both open source, and asked permission to fork so we could provide a branded experience for folks wanting to get on the Mastodon train in a simple manner.

Our belief is this provides a means for onboarding less-savvy members who just want to get on Tŵt and start Tŵting.

Our forks are minimally changed: we added the Tŵt logo and set as the default instance.

Today we're proud to announce the Android (Tusky) fork is live, and we're working hard on the iOS option.

In addition, ConnyDuck (Tusky's maintainer, ) made some changes to Tusky that allow us and others to white-label the app, for which we are incredibly grateful!

So, please, download the app, give it a whirl, and let us know what you think!