
Trust and Safety Policy

Establishing a company’s trust and safety policies is difficult. The endeavor demands extensive research and the navigation of numerous challenges, and it can feel daunting. Through an in-depth review of each platform’s policies, ActiveFence has gathered insights into the basic Trust and Safety policies many platforms apply to a range of risks, including health and electoral disinformation, marketplace fraud, illegal and violent activities, and child safety.

What Are Common Trust and Safety Policies?

Centralized Artisanal Policy

Centralized Artisanal policy strategies are the trust and safety policies most new platforms use before establishing more elaborate frameworks. In most small businesses, decision-making rests with a handful of individuals, so explicit, formalized strategy is rarely used.

This model gives trust and safety staff a lot of leeway in organizing their procedures, allowing them to tailor their approach to the moderation team’s unique expertise and skills. Some Centralized Artisanal teams spend a lot of time drafting rules in considerable detail, while others adhere to a simpler set of standards and rely on skilled team members to make suitable moderation judgments based on their knowledge of how users behave on the site.

Platforms that rely on Centralized Artisanal models often operate on a smaller scale than those that employ industrial ones. Because of that small scale, the policy is generally written and enforced by generalists instead of specialists. Because their tiny team must cover every abuse domain, they have to be conversant in all policy areas, from pornography and hate speech to impersonation and regulated commodities.

Community Reliant Artisanal Policy

Community Reliant Artisanal policy frameworks are trust and safety policies that rely on rules written by the community and volunteer moderators. Platforms that use this paradigm depend on individual moderators or community action to police behavior, and they typically lack precise, centrally specified norms. This policy-making technique lets platform providers directly engage the public in discussions about what information or conduct is appropriate in their community.

Enforcing policies in the Community Reliant Artisanal model relies on crowdsourced signals such as flagging, downvoting, or blocking. Because the decision about harmful content comes from the community, enforcement can be faster than under centralized policies; a rough sketch of how such threshold-based signals might work follows below.
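As a minimal illustration of community-driven enforcement, the sketch below hides a post once crowdsourced signals cross a threshold, without waiting for a central review. The thresholds, field names, and the apply_community_signals helper are assumptions for illustration, not taken from any particular platform.

```python
# Hypothetical sketch of community-reliant enforcement via crowdsourced signals.
# All thresholds and field names are illustrative assumptions.
from dataclasses import dataclass

FLAG_THRESHOLD = 5      # hide content once this many users flag it
SCORE_THRESHOLD = -10   # hide content if net votes fall this low

@dataclass
class Post:
    post_id: str
    flags: int = 0
    upvotes: int = 0
    downvotes: int = 0
    hidden: bool = False

def apply_community_signals(post: Post) -> Post:
    """Hide a post as soon as community signals cross a threshold,
    without waiting for a centralized trust and safety review."""
    net_score = post.upvotes - post.downvotes
    if post.flags >= FLAG_THRESHOLD or net_score <= SCORE_THRESHOLD:
        post.hidden = True
    return post

# Example: a post with 6 flags is hidden immediately by community action.
post = Post(post_id="abc123", flags=6, upvotes=2, downvotes=4)
print(apply_community_signals(post).hidden)  # True
```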

On the other hand, crowd-sourced moderation can suffer from popularity-contest patterns, in which content is promoted or rendered inaccessible based on how much the community likes or dislikes it rather than whether it is factual or harmful.

Centralized Industrial Policy

The Centralized Industrial approach entails a huge number of reviewers, frequently dispersed across multiple sites, who are guided by a centralized strategy. Thanks to their massive review teams, platforms adopting this methodology are often better placed than sites using an Artisanal approach to provide around-the-clock coverage across many languages and abuse types.

On the other hand, large review teams can be difficult to keep on track, especially if policies must be updated often to reflect changing circumstances or user behavior. Across teams of this size, there will be a broad array of cultural and personal opinions on how abuse should be addressed. As a result, platforms frequently write considerably more complex regulations for these huge review teams, with detailed enforcement guidelines employed regularly to ensure the quality and consistency of reviews at scale.

Platforms that adopt the Centralized Industrial method frequently deal with a vast amount of user-generated material; as a result, these platforms often experiment with automated policy enforcement solutions in addition to their manual review staff. Automated methods are faster than human reviewers and can detect dubious content before it is reported, but they frequently lack nuance and context. As a result, the policy written for machines is typically far simpler than the policy written for humans.
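To make the "simpler policy for machines" point concrete, here is a hedged sketch of an automated triage step that handles only clear-cut cases and defers everything ambiguous to human reviewers. The blocked terms, risk-score thresholds, and the triage function are hypothetical assumptions, not any platform's actual rules.

```python
# Hypothetical sketch: a simplified "machine policy" that triages content
# before human review. Keyword lists and thresholds are illustrative only.

BLOCKED_TERMS = {"scamlink.example", "counterfeit"}  # assumed examples

def triage(text: str, model_risk_score: float) -> str:
    """Route content to one of three outcomes.

    The automated rules are deliberately much simpler than the detailed
    guidelines written for human reviewers: unambiguous cases are handled
    automatically, everything nuanced goes to the manual review queue.
    """
    lowered = text.lower()
    if any(term in lowered for term in BLOCKED_TERMS) or model_risk_score > 0.95:
        return "auto_remove"          # clear-cut violation
    if model_risk_score > 0.60:
        return "human_review_queue"   # nuance needed; defer to reviewers
    return "publish"                  # low risk; no action

print(triage("Buy counterfeit watches here", model_risk_score=0.4))  # auto_remove
```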

Community Reliant Industrial Policy

Community-Reliant Industrial policy frameworks typically operate as a community of communities. The platform provides a broad set of relatively permissive policies that govern the entire site and enforces them in ways that closely resemble a Centralized Industrial approach, such as moderation by trust and safety specialists using enforcement guidelines, strike mechanisms, and other tools. Then, within the bounds established by the platform’s moderation staff, users are free to build spaces with additional, often much stricter, rules to create specific types of environments.

In this paradigm, the platform frequently spends a substantial amount of trust and safety resources “moderating the moderators” to ensure that community members are doing their part in the spaces they manage. If the site’s top-level policies are routinely breached within a space, this often entails penalizing the community’s administrators, e.g., removing a rental listing.
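As a rough sketch of the strike mechanisms this model leans on, the code below tracks platform-policy violations per community and escalates the response. The strike counts, action names, and the StrikeLedger class are illustrative assumptions, not any platform's actual enforcement ladder.

```python
# Hypothetical sketch: a strike mechanism for platform-level rules layered
# over per-community rules. Strike limits and actions are illustrative.
from collections import defaultdict

STRIKE_ACTIONS = {1: "warn_moderators", 2: "restrict_community", 3: "remove_community"}

class StrikeLedger:
    """Track platform-policy violations per community and escalate."""

    def __init__(self) -> None:
        self.strikes: dict[str, int] = defaultdict(int)

    def record_violation(self, community_id: str) -> str:
        # Each confirmed violation of top-level policy adds a strike;
        # repeated breaches escalate toward removing the community.
        self.strikes[community_id] += 1
        count = min(self.strikes[community_id], max(STRIKE_ACTIONS))
        return STRIKE_ACTIONS[count]

ledger = StrikeLedger()
print(ledger.record_violation("example-community"))  # warn_moderators
print(ledger.record_violation("example-community"))  # restrict_community
```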

Conclusion

Trust and safety policies are key to controlling harmful content online. You can adopt any of the above approaches on your platform.