Trust in Online Communities: Opening the Door to Meaningful Interaction

Voicing opinions. Sharing advice. Online public forums, chat rooms and news sites host lively conversations about every imaginable topic. These online communities are also a destination for impersonators, imposters and fraudsters with a wide range of goals. The ultimate objective might be financial gain, but many of these bad actors are first interested in building followers, influencing opinions, destroying reputations or gathering phishing targets for downstream exploitation. 

Fake personas are easy to create in a world of fragmented and stolen identities. After massive data breaches in recent years, digital identity elements are easily available for sale. With an average of 8.4 social network profiles per person, individuals have their own range of—usually legitimate—personas. How do you know who is real? Or trustworthy? It helps to become aware of how abusers, scammers and fraudsters work in online communities.

Impersonation can be something as simple as creating a profile that is partly true but uses a fake phone number, email address or other identifier in order to prevent unwanted contact. Although it's understandable, it's still a form of impersonation. Impersonators on a more serious mission will copy photos, names, descriptions and hashtags from official accounts to create new accounts with names of random people. 

Fishy Tactics

Online impersonators and fraudsters individually—or in collaboration—use their fake profiles with the following tactics to advance their causes.

  • Sock puppeting: A sock puppet is a fake identity created to deceive or to act with apparent independence from its creator. Initially, people created sock puppets to praise their own comments or blog posts. Increasingly, they've been used to bully, harass or intimidate others. For example, when an online stalker creates an account posing as a teenage boy in order to lure young girls, that account is a malicious sock puppet. Malicious sock puppeting can result in legal prosecution for harassment, criminal impersonation and other crimes. 
  • Brigading: Brigading is when people band together online to perform a coordinated action, like manipulating a vote or poll or harassing a member or members of an online community. The goal is to create an artificial impression that the opinions of a few voters actually represent the majority opinion. Brigaders might try to damage the reputation of a product, business or artist. They might aim to drive posts or websites lower in search results or users' feeds and effectively censor them. They'll use retweets, comments and quote retweets on Twitter, and even email, to achieve their goals. 
  • Ratioing: On Twitter, the ratio is the proportion of replies to a tweet compared to the combined number of retweets and likes. A high ratio usually indicates a barrage of negative replies, and coordinated repliers can deliberately "ratio" a target to make them appear widely condemned. 
  • Sealioning: This occurs when a commenter seems to make an effort to engage in sincere debate, usually by asking persistent questions. However, the questions are intended to attack the other commenter's patience or goodwill. The goal is sparking other commenters to lash out and appear impatient or unreasonable. Sealioning is trolling designed to exhaust other participants with no intention of real discourse.
  • Astroturfing: Astroturfing occurs when a concealed group or organization initiates and controls activity intended to create a false impression of a widespread, spontaneous grassroots movement in support of—or in opposition to—something.
  • Fake news: This is "news" composed of false or manipulated information purportedly coming from legitimate sources that is shared to deceive and mislead audiences. The information might be designed specifically to harm the audience or for political, personal or financial gain. 
  • Deep fakes: Technology advances have led to a fast-growing threat—deep fakes. Sock puppets on steroids, deep fakes combine and superimpose existing images and videos onto source images or videos to trick people into believing that a person did or said something that never happened. As one example, images and voices of chief executives have been deep faked to deceive financial controllers into transferring cash. When a face and voice match, it’s difficult for an unsuspecting person to question what they see.
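As a rough illustration of the ratioing tactic described above, the ratio can be computed from a post's public engagement counts. This is a sketch with hypothetical numbers, not an official platform metric; the field names and thresholds are assumptions for illustration only.

```python
def reply_ratio(replies: int, retweets: int, likes: int) -> float:
    """Replies divided by combined retweets and likes.

    A high value suggests a post is drawing far more pushback
    than endorsement (i.e., it is "getting ratioed").
    """
    endorsements = retweets + likes
    if endorsements == 0:
        # All replies, no endorsements: treat as maximally ratioed.
        return float("inf") if replies else 0.0
    return replies / endorsements

# Hypothetical engagement counts for two posts:
print(reply_ratio(replies=900, retweets=50, likes=250))   # 3.0  (heavily ratioed)
print(reply_ratio(replies=40, retweets=500, likes=2000))  # 0.016 (well received)
```

What counts as a "bad" ratio is subjective and context-dependent; the arithmetic above only captures the raw proportion, not the sentiment of the replies.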

Trust as a Strategic Weapon

It's impossible to overvalue trust in online interactions. When impersonators' and fraudsters' tactics are known, online communities can leverage trust to their advantage. It begins with trusted identities, because a trusted identity will interact authentically, whereas non-trusted identities and fake profiles are potential threats likely to be up to no good. Organizations that can verify the identities of their users, at any touchpoint along their journey and over time, can begin to identify trustworthy customers, followers and users. 

With trusted identities, organizations can give their good customers and followers safe, frictionless online experiences. This is especially critical when trust is essential to building accounts and attracting new users. Word travels fast on the internet, and a once-thriving online community can become a virtual ghost town if abusive fake profiles are allowed to run rampant. Trusted identities enable organizations to keep the door to their business open while greatly reducing risk and exposure to scams, abuse and fraud. 

Want to know more? Download the Best Practices for Online Trust & Safety guide, or learn how to automate trust decisioning with Pipl Trust.