I attended a launch party for a new user-generated content startup here in San Francisco three months ago. I asked one of the co-founders how they were going to handle undesirable content. His answer was to make the platform as open as possible and use automated systems to take down rule violations. He figured they would eventually need a part-time person, perhaps an intern, to help review the trickier issues. He even used the dreaded word “outsource” to describe what would happen if the site outgrew this process. His response was not an outlier; this is an unfortunate and common approach to content moderation among budding web companies. Having no plan for consistently and thoroughly enforcing a website’s Terms of Service leads to user confusion and advertiser dissatisfaction. Maintaining a high standard of quality for a large user-generated content site requires a dedicated, trained team of moderators.
The HubPages moderation team is six people strong, which is more than enough to handle our current workload and leaves time for some of us to take on fun side projects, like judging the HubPatron of the Arts contest. We also handle most of the support emails sent through the Contact Us form, and we ensure that at least three moderators are working every day.
Of course, an issue every company has to face is scalability. To review all 1,700 Hubs published every day, our moderation team would have to become larger than the full staff of HubPages. For a scrappy startup, growing a single department to this size is not possible.
To help bring the review process to scale, our engineers have built many wonderful tools. First, they created several automated systems that run around the clock. Our main system is affectionately known as the Maddie Bot, named for Maddie Ruud, the HubPages Community Manager and our first dedicated content moderator. Alongside the Maddie Bot, we have over 40 filters that crawl all content on HubPages as it is created. These filters are fed words from several sources, including moderators who notice a pattern of negative behavior associated with a word or phrase (“umpteenth” is a favorite of article spinners, for example). Anything a filter deems suspicious may generate a warning for the author, and will be flagged for our review if it is published.
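To give a rough idea of how this kind of filter works, here is a minimal sketch in Python. The phrase list, reasons, and function names are all illustrative assumptions, not HubPages' actual rules or code:

```python
import re

# Hypothetical sketch of a keyword filter like the ones described above.
# The phrase list and reasons are illustrative, not HubPages' real rules.
SUSPICIOUS_PHRASES = {
    "umpteenth": "article-spinner vocabulary",
    "click here now": "spam call-to-action",
}

def scan_content(text):
    """Return a list of (phrase, reason) pairs found in the text."""
    hits = []
    lowered = text.lower()
    for phrase, reason in SUSPICIOUS_PHRASES.items():
        # Whole-word matching avoids flagging innocent substrings.
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
            hits.append((phrase, reason))
    return hits

hits = scan_content("For the umpteenth time, click here now!")
# A non-empty result would warn the author, and flag the Hub for
# moderator review if it gets published anyway.
```

In practice a production filter would score matches and weigh context rather than treat every hit equally, but the basic pattern-matching loop is the same.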
Another tool that might look familiar to Hubbers is the Moderation Hopper. Though it shares some DNA with the Hub Hopper, it is much more robust. This hopper is a HubPages moderator’s home base. From it, a moderator is taken to a flagged Hub, where the content is reviewed and a moderation decision is entered into the database. Once that decision is saved, the moderator is automatically taken to the next Hub in the queue. This repeats until all the Hubs have been cleared, moderated, or republished.
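The review-save-advance loop above can be sketched as a simple queue. This is a hypothetical illustration; the class name, decision values, and Hub identifiers are invented, and the real tool is backed by our database rather than in-memory structures:

```python
from collections import deque

# Hypothetical sketch of a hopper-style moderation queue.
class ModerationHopper:
    def __init__(self, flagged_hubs):
        self.queue = deque(flagged_hubs)   # Hubs awaiting review
        self.decisions = {}                # hub id -> decision

    def next_hub(self):
        """The Hub currently in front of the moderator, if any."""
        return self.queue[0] if self.queue else None

    def record_decision(self, decision):
        # Saving a decision advances the moderator to the next Hub.
        hub = self.queue.popleft()
        self.decisions[hub] = decision  # e.g. "cleared" or "moderated"
        return self.next_hub()

hopper = ModerationHopper(["hub-101", "hub-102"])
hopper.record_decision("cleared")    # moves on to hub-102
hopper.record_decision("moderated")  # queue is now empty
```

The point of the design is that the moderator never hunts for the next item: saving a decision is the navigation, which keeps the review loop fast.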
Unfortunately, moderators and our trusty robots can only do so much. Along with our automated systems, we rely on Hubbers who familiarize themselves with the rules and flag content for moderator review. These flags are necessary for ensuring HubPages remains a high-quality option for authors seeking an open publishing platform.