How a Common Taxonomy Empowers Trust & Safety Teams

Trust and Safety teams face the formidable challenge of addressing a fluid set of online threats and risks while constrained by limited resources and support structures.

Given the nascent status of the trust and safety discipline, shared knowledge and best practices are hard to come by. In many organizations, trust and safety efforts are limited to a reactive posture, responding to issues in an ad hoc way as new threats materialize.

This underscores the value of the World Economic Forum’s Typology of Online Harms as an authoritative framework. By methodically delineating the categories of harm, it provides a foundation for the practice to leap forward in maturity and effectiveness:

  • By providing a comprehensive view of the threat landscape, it empowers trust and safety teams to pursue more strategic initiatives.
  • By establishing a common terminology, it enables greater collaboration across organizational functions that currently operate in silos.
  • Last but not least, it facilitates closer collaboration and shared learning among practitioners across organizations and market segments. 

The 3C Framework: Content, Contact, and Conduct Threats

The WEF’s proposed typology categorizes threats into content-, contact-, and conduct-related risks.

Content-related risks: These stem from the substance of user-generated content, such as hate speech, violence, adult content, and misinformation. The content itself violates policies and societal norms.

Contact-related risks: These issues arise in how users interact with each other, such as harassment, stalking, grooming, and radicalization. The contact between the users facilitates harm.

Conduct-related risks: These challenges involve how users behave on a platform, including inauthentic behavior, spam, fraud, or data abuse. The conduct of users enables malicious activity to occur.

Categorizing threats into these three buckets allows Trust and Safety teams to think more systematically about the types of harms they need to prevent and provides a simple vocabulary for distinguishing between different violations and root causes. The clarity these categories provide helps with better priority setting, solution development, and collaboration across the field.
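
As a loose illustration of how a team might operationalize these buckets in a moderation pipeline, the sketch below encodes the 3C categories and maps a few example violation types onto them. The violation labels and the mapping are hypothetical choices for this example, not part of the WEF typology itself.

```python
from enum import Enum

class HarmCategory(Enum):
    """The WEF 3C buckets: where the risk originates."""
    CONTENT = "content"   # the substance of what is posted
    CONTACT = "contact"   # how users interact with each other
    CONDUCT = "conduct"   # how users behave on the platform

# Hypothetical mapping from violation types to their 3C bucket;
# real taxonomies are far more granular than this.
VIOLATION_CATEGORY = {
    "hate_speech": HarmCategory.CONTENT,
    "misinformation": HarmCategory.CONTENT,
    "grooming": HarmCategory.CONTACT,
    "harassment": HarmCategory.CONTACT,
    "spam": HarmCategory.CONDUCT,
    "fraud": HarmCategory.CONDUCT,
}

def category_of(violation: str) -> HarmCategory:
    """Look up the 3C bucket for a flagged violation type."""
    return VIOLATION_CATEGORY[violation]

print(category_of("grooming"))  # HarmCategory.CONTACT
```

Even a simple mapping like this lets reporting, prioritization, and ownership be organized around a shared vocabulary rather than an ever-growing list of one-off violation types.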

The missing piece: Identity Risk

While the WEF's typology offers a robust foundation for understanding a broad spectrum of online risks, there's an elephant in the room: the role of 'Identity Risk.' 

Fake identities play a key role in enabling and amplifying online risks and threats. Considering that fake identities are primarily employed by malicious groups and organizations to exploit online vulnerabilities for coordinated, sustained harmful acts, the urgency of addressing identity risk comes into sharper focus.

This missing dimension—characterized by the systemic use of fake identities to produce and distribute harmful content, or to facilitate damaging contact and conduct—interacts with and exacerbates all other types of online harms delineated by the WEF.

Whether you're a nascent tech company striving to secure your platform, a policy-maker wading through legislative options, or a civil society organization advocating for a safer internet, understanding Identity Risk can add significant nuance to your approach. 

By integrating this new layer into the existing WEF typology, we can move toward a framework that is more detailed, comprehensive, and effective in ensuring the digital world is a safer space for all.

Here are some examples of threats listed in the WEF paper that can be enabled by and amplified with the use of fake identities:

Content Risks

  • Child Sexual Abuse Material (CSAM), Child Sexual Exploitation Material (CSEM), Child Sexual Exploitation and Abuse (CSEA): Organized groups can efficiently distribute CSAM and CSEM through networks of fake identities, making it significantly harder for law enforcement to trace and prosecute those responsible. In CSEA, the same identities are also used in grooming and livestreaming, so the entire chain of abuse becomes difficult to break.
  • Pro-terror Material and Extremist Content: Malicious organizations can use sophisticated networks of fake profiles to disseminate propaganda advocating terrorism, thereby evading detection and diluting individual accountability. By leveraging fake accounts, they can systematically spread extremist ideologies and amplify radicalization efforts within vulnerable communities. The propagation of violent content is hard for authorities to control, given the anonymity and impunity these false personas provide. Fake accounts are also used strategically to incite violence or make threats, complicating efforts to trace the original source of such incitements.
  • Dangerous Physical Behavior: Under a cloak of anonymity, these organizations can encourage dangerous stunts or behaviors, potentially leading to real-world harm, injuries, or even fatalities.

Contact Risks

  • Grooming for Sexual Abuse: Within an organized structure, offenders can create fake identities posing as minors to groom real minors for sexual abuse, making the act far more insidious and challenging to police.
  • Recruitment and Radicalization: Malicious organizations can use fake online personas to methodically recruit and radicalize individuals, offering an illusion of authenticity or authority to lure potential recruits.

Conduct Risks

  • Online Bullying and Harassment: Groups operating behind fake identities can systematically harass, stalk, or threaten victims online to hurt them socially, emotionally, psychologically, or even physically, making it extremely difficult for law enforcement to identify and prosecute the people behind these activities.
  • Technology-facilitated Gender-based Violence: Malicious organizations can deploy fake accounts to commit gender-focused cyberbullying, revenge porn, or harassment, capitalizing on the anonymity these fake identities offer.
  • Catfishing: Malicious individuals create false online identities to manipulate people, often inventing fake backgrounds, careers, and friends to deceive victims into trusting them enough to send money, gifts, or explicit images.

Tackling identity risk: Spotting fake identities at the gate

Pipl enables more accurate identification of bad actors by leveraging its vast repository of online identity data and trust signals to generate risk scores that quantify user trustworthiness. This identity-based approach allows companies to proactively spot fake and fraudulent accounts before they ever cause harm. 

The result is more focused and efficient moderation and enforcement, minimizing false positives so that limited resources go to genuinely risky users. Armed with Pipl's robust risk scoring, organizations can determine the appropriate action for each user, whether that is closer monitoring, manual review, or an outright ban of high-risk accounts.

No less important is the ability to fast-track trusted users based on their Identity Trust score. By removing friction for pre-vetted identities, we can deliver a streamlined experience and avoid turning away good users.
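
To make this kind of score-based routing concrete, here is a minimal, purely illustrative sketch. The thresholds, the Action labels, and the route_user function are assumptions made for this example; they do not represent Pipl's actual API, scoring scale, or decision logic.

```python
from enum import Enum

class Action(Enum):
    FAST_TRACK = "fast_track"        # pre-vetted, low-friction experience
    MONITOR = "monitor"              # allow, but watch more closely
    MANUAL_REVIEW = "manual_review"  # route to a human moderator
    BAN = "ban"                      # block high-risk accounts outright

def route_user(risk_score: float) -> Action:
    """Map a risk score (0-100, higher = riskier) to an enforcement action.

    The cutoffs below are hypothetical; a real deployment would tune them
    against precision/recall targets and review-queue capacity.
    """
    if risk_score < 20:
        return Action.FAST_TRACK
    if risk_score < 60:
        return Action.MONITOR
    if risk_score < 85:
        return Action.MANUAL_REVIEW
    return Action.BAN

print(route_user(42.0))  # Action.MONITOR
```

The design point is simply that one score drives both ends of the funnel: the same signal that flags high-risk accounts for review or banning also clears trusted users for a frictionless path.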

The bottom line is compelling: adopting an identity-centric approach to online risk mitigation lets online platforms get ahead of malicious users, balancing trust and safety with growth through more accurate bad-actor detection and fewer false positives.

To learn more about how Pipl helps Trust and Safety teams, check out our whitepaper, The Trust Paradigm for Online Businesses.