Admin
AI Moderation, the Online Safety Act, and Why This Is Now Necessary
Back in January 2025, I posted a warning about the UK Online Safety Act and explained that the legal position for online communities was changing.
At the time, I asked everyone to be careful with what they posted. I explained that forums like this are no longer in the old world where people can post first, someone reports it later, and we remove it after the damage has already been done.
The law now places much more responsibility on online platforms and those who run them. That includes forums.
The Online Safety Act requires online services to take steps to protect users from illegal and harmful content, including abuse, hate speech, racial hatred, religious hatred, hatred based on sexual orientation, threatening behaviour, and other forms of illegal content.
That responsibility ultimately lands on me as the person running this forum.
So I need to be very clear about this:
AI moderation is not being introduced because I want to control ordinary conversation. It is being introduced because I have to protect the forum, the members, and myself legally.
This is a motorhoming forum. It is not here to spread racism, hatred, prejudice, abuse, or extreme political views. It is not here for people to attack other groups of people. It is not here for people to dress up abuse as “banter” and then expect me to carry the legal risk for publishing it.
I fully understand that some people know each other well and may see certain comments as joking or banter. The problem is that once something is posted publicly, it is no longer just between two mates who understand each other. Other members, new members, guests, or regulators may see it very differently.
If two people who know each other exchange a rough joke, they may both understand the context.
But if the same wording is aimed at someone who does not know them, or if it targets race, religion, sexuality, disability, nationality, or another protected group, it may no longer look like banter. It may look like abuse, harassment, hate speech, or unlawful content.
And once it is published on the forum, I am responsible for the systems that allowed it to be published.
That is the part some people need to understand.
This is not just about whether the person posting meant harm. It is also about whether the forum has proper systems in place to prevent illegal or harmful content appearing in the first place.
Previously, moderation worked in the normal forum way. A post would go live, then another member might report it, or a moderator might see it, and then we would deal with it afterwards.
That system is no longer good enough.
By the time a post has been reported and removed, it has already been published. People may already have seen it. The harm may already have been done. Screenshots may already have been taken. And legally, the forum has still allowed that content to appear.
So we have changed the system.
Every post is now checked by AI before it is allowed onto the forum.
The AI has been given the community rules of this site. It has also been told to allow normal conversation, humour, disagreement, and banter where that banter does not appear to be abusive, hateful, threatening, or targeted.
- It is not there to stop people having a laugh.
- It is not there to stop disagreement.
- It is not there to remove personality from the forum.
- It is there to stop posts that may put the forum, the community, or me personally at risk.
The AI moderation is still being trained and adjusted. At first it caught more posts than I wanted it to. That is being improved, and it now holds fewer posts for review as it learns the tone of the community.
But I also want to make something else very clear:
If the AI catches your post, that does not automatically mean you are banned, punished, or silenced.
It means the post has been held for review.
A human moderator will still check it. If the AI has got it wrong, the post can be approved and placed on the forum. Members can also contact me if they think something has been misunderstood.
So this is not a robot replacing all human judgement. It is a safety net that checks posts before they go live, instead of waiting until after the damage has been done.
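For anyone curious how a pre-moderation gate like this works in principle, here is a minimal sketch. The function names and the toy word list are illustrative assumptions, not the forum's actual software; the real rules are far richer than a block list. The key point it shows is that a flagged post is held for a human, never silently deleted:

```python
# Hypothetical sketch of a pre-moderation gate: names and rules are
# illustrative assumptions, not the forum's actual code.

def handle_new_post(post_text, classifier, review_queue, publish):
    """Check a post before it goes live; hold anything flagged."""
    verdict = classifier(post_text)  # "allow" or "hold"
    if verdict == "allow":
        publish(post_text)           # post appears immediately
        return "published"
    # Flagged posts are queued for a human moderator, not deleted.
    review_queue.append(post_text)
    return "held for review"

# Toy stand-in classifier: flags posts containing a blocked word.
BLOCKED = {"slur_example"}  # placeholder block list

def toy_classifier(text):
    return "hold" if BLOCKED & set(text.lower().split()) else "allow"

published, queue = [], []
print(handle_new_post("Lovely campsite near the coast",
                      toy_classifier, queue, published.append))
print(handle_new_post("slur_example aimed at a group",
                      toy_classifier, queue, published.append))
```

The real system replaces the toy classifier with an AI model given the community rules, but the shape is the same: allow, or hold for a human decision.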
I know some members will not like this. I understand that. But I need those members to understand the position I am in.
Many large forums and online communities have already closed or heavily restricted posting because they cannot carry the legal risk, the workload, or the responsibility that now comes with user-generated content.
I do not want to close this forum.
I do not want to remove the character of this community.
I do not want to stop normal members enjoying themselves.
But I also cannot allow a small number of people to use this forum to spread racism, prejudice, hatred, abuse, or twisted political extremism, while expecting me to take the consequences for it.
That is not going to happen.
The overwhelming majority of members have nothing to worry about. If you are posting about motorhoming, travel, campsites, repairs, rallies, humour, general chat, or normal respectful debate, then carry on as normal.
But if you are posting content that attacks people because of race, religion, sexuality, nationality, disability, or other personal characteristics, or if you are posting abusive, hateful, threatening, or deliberately inflammatory material, then expect it to be stopped.
This forum exists for motorhomers.
It exists for the community.
It exists for help, humour, friendship, advice, and shared experience.
It is not a dumping ground for hatred.
So, to summarise:
The AI moderation system is here because the law has changed, the risks have changed, and I have a responsibility to protect the forum. Posts stopped by the AI will still be reviewed by humans. Genuine mistakes can be corrected. But abusive, hateful, racist, or legally risky content will not be allowed through.
I asked everyone back in January 2025 to be careful. Some people have chosen not to be.
So this is now the system we have to use.
Thank you to the members who understand why this is necessary, and thank you to everyone who continues to help keep this community what it is supposed to be: a friendly, useful, motorhoming forum.
The alternative is this: if I do not use AI moderation to stop this material before it appears, then I have to deal with it after it has already been published. That means preserving evidence, removing the content, recording which member posted it, and, where a post appears to involve criminal behaviour such as hate crime, threats, harassment, incitement, or other illegal content, reporting it to the police or the relevant authorities.
I do not want to be in that position, and I am sure most members do not want the forum run that way either.
AI moderation is the better option. It gives posts a chance to be checked before harm is caused, and it means mistakes can still be reviewed by a human moderator. The alternative is far more serious for everyone involved.