
The Importance of Content Moderation Services

February 27, 2019

Whether you run a global platform or a local forum, giving users the ability to upload content always presents risk. After all, it only takes one ill-intentioned user to ruin a community. Flagging systems can aid in the removal of inappropriate content after it’s posted, but what your community really needs is a proactive approach.

This is when the value of content moderation becomes apparent. By implementing such a service, you can protect your brand and your users; however, these services come at a cost. By outsourcing, you can fit this essential investment into your company’s budget.

What Is Content Moderation?

The purpose of content moderation is to prevent certain content from getting published. A system screens new content, such as images, video, or text, for violations. These violations typically consist of inappropriate, graphic, or irrelevant content, and just about every platform out there today uses some sort of moderation. It could be an automated system that reviews comments, or a flagging system that lets users moderate the site themselves. These are worthwhile investments in today’s world, but they are not 100% effective at keeping a community clean. Content can pass through many different stages of moderation. While it’s likely not feasible to set all of them up, understanding each one is important.

Types of Content Moderation

These are the primary types of content moderation.

1. Pre-Moderating

If all content gets placed in a moderation queue before going live, that’s an example of pre-moderating. This ensures no violations ever get published as long as the moderator does their job, but popular websites will find that this leads to delays. It can also cause a backlog of content during peak posting periods.

Nevertheless, if you have a community targeted at a sensitive audience, such as children, this is almost a must. While time-consuming and more involved, this is the most reliable of all methods. So long as your content isn’t time-sensitive, this method can be very effective. Otherwise, there are some downsides to consider.

Since the content gets captured in a queue, it could be hours or days before it goes live, which takes away from the “instant gratification” a poster usually gets from contributing. If the delay is too long, it can also cause forum threads to die from inactivity, so for this method to succeed, moderators have to be quick.
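
To make the workflow concrete, here is a minimal sketch of a pre-moderation queue in Python. The class and method names are illustrative assumptions rather than a reference to any particular platform; the point is simply that nothing goes live until a moderator approves it.

```python
from collections import deque
from dataclasses import dataclass
from typing import Optional


@dataclass
class Submission:
    author: str
    body: str
    status: str = "pending"  # pending -> approved or rejected


class PreModerationQueue:
    """Holds every new submission until a moderator reviews it."""

    def __init__(self) -> None:
        self._queue = deque()
        self.published = []

    def submit(self, submission: Submission) -> None:
        # New content is held in the queue; nothing is published yet.
        self._queue.append(submission)

    def review_next(self, approve: bool) -> Optional[Submission]:
        # A moderator works through submissions in the order they arrived.
        if not self._queue:
            return None
        submission = self._queue.popleft()
        submission.status = "approved" if approve else "rejected"
        if approve:
            self.published.append(submission)
        return submission
```

The delay described above comes down to how quickly a moderator calls review_next: in a busy community, the queue can grow faster than a single moderator can empty it.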

2. Post-Moderating

This form of moderation often results in a better experience for users. With this method, new content publishes right away; however, the live content also enters a queue for a moderator to check as soon as possible. Violations can therefore go live for a short amount of time, but the approach is far more conducive to conversations.

Post-moderating can work well for a lower-risk community, but moderation still has to happen quickly. If not, violations could crowd the community. It’s important to note that the website owner could be liable for certain content, so letting illegal or defamatory content go live makes it your responsibility. Failing to remove it in a timely manner could harm your brand, and it could also result in legal action against your company.

3. Reactive Moderating

Most forums and communities have this system in place and it’s usually combined with other types of moderation too, but if offensive content slips through the cracks, this is the next line of defence. A community flagging system is a good example of reactive moderating.

Generally, this method works for content that breaks site rules. Official moderation should filter offensive and graphic content before users see it, but it’s important to remember that users could potentially abuse flagging systems.

Content could get flagged simply because a user disagrees with it, even if the site allows it, which is why you might require two or three users to flag a post before it enters moderation. Of course, you then run the risk of even more users seeing genuine violations. To balance these issues, most sites remove content after a single flag; a moderator then checks it and either removes it permanently or re-publishes it.

However, a person could also use flagging as a means of silencing a user they disagree with, so if users end up abusing the system, disciplinary action is in order.
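
As a rough illustration of the trade-off above, the Python sketch below hides a post once it collects a configurable number of flags and hands it to a moderator. The threshold of one flag mirrors the single-flag approach most sites take; the field names and the send_to_moderation_queue helper are hypothetical.

```python
FLAG_THRESHOLD = 1  # raise to 2 or 3 to require multiple flags before hiding


def send_to_moderation_queue(post: dict) -> None:
    # Placeholder: a real system would notify a human moderator here.
    print(f"Post {post.get('id')} queued for review")


def flag_post(post: dict, user_id: str) -> None:
    """Record a flag and hide the post once the threshold is reached."""
    flags = post.setdefault("flagged_by", set())
    flags.add(user_id)  # a set stops one user stacking multiple flags
    if len(flags) >= FLAG_THRESHOLD and post.get("visible", True):
        post["visible"] = False            # hidden from the community
        send_to_moderation_queue(post)     # moderator removes or re-publishes
```

Raising the threshold reduces abuse of the flag button, but it also means more users see a genuine violation before it disappears, which is exactly the tension described above.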

4. Distributed Moderating

This form of moderation relies on a group of people voting on whether or not content is acceptable for publication. It can be self-moderation, in which community members vote on content, but most organisations aren’t willing to take that risk, so distributed moderation is usually set up internally. For instance, employees at your company could be responsible for moderating content, or you could hire the job out to freelancers or a trusted Business Process Outsourcing company.

5. Automatic Moderating

Automatic moderating can be powerful in conjunction with human moderators. You could use it as your only type of moderation, but that’s a big risk as the technology is still evolving. Generally, you set up one or more tools, define rules for approving or removing content, and let the tools apply those rules automatically.

The easiest and most effective form of automatic moderation is comment moderation. If you add certain words to a rule list, submissions containing harsh language won’t go through, and the same tools can block comments containing links to prevent spam. The approach also has potential for audio, video, and image moderation, though it will likely be a few years before those tools reach the mass market and become feasible.
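
A minimal sketch of this kind of rule-based comment filter, assuming a hypothetical word list and a simple link check, might look like this in Python:

```python
import re

# Illustrative blocklist; a real deployment maintains a much larger, curated list.
BLOCKED_WORDS = {"badword1", "badword2"}
LINK_PATTERN = re.compile(r"https?://|www\.", re.IGNORECASE)


def auto_moderate(comment: str) -> str:
    """Return 'reject', 'review', or 'approve' for a submitted comment."""
    words = set(re.findall(r"[a-z']+", comment.lower()))
    if words & BLOCKED_WORDS:
        return "reject"   # harsh language never goes live
    if LINK_PATTERN.search(comment):
        return "review"   # links are held for a human to check for spam
    return "approve"
```

For example, auto_moderate("Great post, thanks!") returns "approve", while a comment containing a link drops into the review queue rather than being published automatically.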

What Shouldn’t Go Live?

When paying for moderation services, it’s important that all content is properly filtered. Some things are obvious; for instance, your community probably doesn’t want graphic content of any sort. Other rules will need clear definitions.

  1. Illicit content. This is all content forbidden by laws, customs, and social norms, which can range from offensive language to violent imagery.
  2. Hate content. Moderators should block all discriminatory content, which includes anything that directly attacks a culture, race, religion, sexuality, or specific person. Cyberbullying and trolling also need careful moderation.
  3. Irrelevant content. This is content that isn’t offensive, it’s just off-topic. If you run a forum about exotic birds and a user posts about televisions, that shouldn’t go live because, while it won’t necessarily cause harm, it will be out of place. Failure to filter such content will eventually diminish the quality of the community.
  4. Promotional content. Most communities do not permit sponsored content, affiliate links, or anything that advertises a product, company, or service. Whether relevant to the core discussion or not, this can overrun a community, so consider keeping it in a specific sub-forum or not allowing it at all.

As you dive into content moderation, you’ll run into quite a few conundrums. There will be instances where content doesn’t necessarily violate any of the above rules but still doesn’t fit the community; for instance, a user might be on-topic but relentlessly pessimistic. It’s up to you whether or not their content gets removed, but if they regularly turn conversations into arguments or only come around to make negative posts, removing them might be wise.

For example, you might operate an online weight loss community where a user consistently discourages others. They may not attack anyone directly, but their behaviour doesn’t fit with the overall tone of the community. If other users don’t enjoy this person’s presence, something needs to change before valued users get chased away.

Examples like these make content moderation a very involved process; however, it’s also a very necessary activity. The key is identifying what type of content goes against the brand and community you are trying to build. Once your moderators start to learn your expectations, it will get easier.

Tips for Outsourcing Content Moderation

Once you have established the need for content moderation, it’s time to start utilising it. These tips will help you set your community up appropriately.

Define the Rules

Every company has different expectations of its users, and those expectations will likely evolve with time; however, you need a clear set of rules to start with. These preliminary questions can get you thinking about what you’ll allow.

  • Can users post personal information? Most forums block personal details, including names and even where users live. Employer names are usually off-limits too.
  • Can users gossip? Talking about celebrities and public figures is usually discouraged because it can lead to claims of defamatory content and even lawsuits.
  • Can users argue? Defining the point where a heated debate turns into a conflict is difficult but necessary. Intolerance based on gender, race, or opposing opinions isn’t usually permitted.
  • Can users use explicit language? Set a clear content rating for your community. If children frequent it, keep the rating G or PG for all content.
  • Can users link to other sites? Linking out to other sites can quickly lead to advertising so linking policies should be clearly laid out.

This list will likely get you thinking about aspects you didn’t consider before. Answering these questions will help you round out your current guidelines, giving moderators the detailed knowledge they need to properly filter content for you.

Explain the Consequences

What happens when a rule gets broken? That’s the next question you need to answer after defining what the rules are. Content will typically get removed if it violates a rule, but what happens to the user? You might implement a three-strike policy. For instance, a user will receive a warning for their first two infractions and, if they violate the rules a third time, they could get suspended or banned.

You should also have exceptions: depending on the content, a user might get banned or suspended right away. If you want to permanently ban a user, blocking their IP address is a good next step.
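
A minimal sketch of a three-strike policy with an immediate ban for severe violations, using hypothetical category names, could look like this:

```python
from collections import defaultdict

MAX_STRIKES = 3
SEVERE_VIOLATIONS = {"illegal", "hate"}  # illustrative categories that skip the warnings

strikes = defaultdict(int)
banned_users = set()


def record_violation(user_id: str, category: str) -> str:
    """Apply the three-strike policy, banning immediately for severe cases."""
    if category in SEVERE_VIOLATIONS:
        banned_users.add(user_id)
        return "banned"
    strikes[user_id] += 1
    if strikes[user_id] >= MAX_STRIKES:
        banned_users.add(user_id)
        return "banned"
    return "warning"  # first and second strikes only earn a warning
```

An IP-level block would sit outside a sketch like this, at the web-server or firewall layer, once a permanent ban has been decided.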

Announce the Rules to Every User

Most forums and communities have a pinned post explaining the rules. While it’s common sense not to post illicit content, other nuances can be harder to pick up on. For instance, a user might introduce themselves with their name and city, but if you don’t allow personal information, that would be a violation on their first day. This is when exceptions come in handy because a gentle warning with a link to the rules would be the best approach in this case.

All users should get a quick snapshot of your community’s rules when they register as well. In some cases, a reminder of the rules when they go to post something could be very beneficial. For instance, you might have a pop-up that they have to accept before submitting their content.

Using Outsourcing to Protect Your Community

Outsourcing your content moderation will save money and give you peace of mind. Taskeater offers content moderation services in multiple languages. Our experts check videos, images, posts, and other user-generated content, which lets you focus on building a thriving community.

We use the latest technology to help moderators process content quickly and effectively. When you come to us with your content moderation needs, we’ll get to work right away to assemble a fully managed team on your behalf. We’ll also grow with you as your site expands. Click here to get started.

