
How Businesses Can Stop the Brigade at the Gates

Social media and online communities have created new opportunities for connectedness across the social landscape. But they also introduce new perils. Negative online behavior known as “brigading” shows how bad actors can misuse social platforms—and demonstrates the need for companies to develop strategies to protect their users and their businesses with robust online trust and safety policies.

The Social Landscape

Social media and online communities are perhaps unique in the annals of human communication. Many of these channels offer the chance for literally anyone, anywhere, to weigh in with their thoughts and opinions, unfiltered and unvarnished.

Interactive elements go beyond “likes” to include comments and postings, as well as product reviews and other forms of narrative feedback. Those comments can sway opinion on a brand, a service or a political position. But who is leaving all those comments, and how legitimate are they?

Researchers report that 82% of adults check the reviews on Amazon, which hosts some 250 million of them. But only 17% of Amazon users fully trust those reviews—and with good reason. An estimated 61% of online reviews have been deemed “fake,” and so-called black hat companies offer as much as $10,000 a month for fake reviews.

It’s been widely reported that bots and bad actors piled onto social media in an effort to influence recent elections. Why? Because it works. Studies have shown, for example, that tweets containing falsehoods were 70% more likely to be retweeted than truthful tweets.

The spread of misinformation isn’t solely due to fraudsters and scammers. In a December 2020
survey, for example, 38% of consumers admitted they had at some point unknowingly shared
fake news or information on social media.

Brigades of Fake Profiles

The misuse of social media may have started as a solo operation, with individuals posting fake information in support of some personal agenda. But the bad actors have long since graduated to the big time, with sophisticated schemes on the level of organized crime. Patterns of online abuse and fraud are constantly evolving, and now we're seeing the rise of a type of online subversion known as “brigading.”

In its most basic form, brigading looks something like this: A person posts in an online
community about their love of bananas. Well, the apple lovers aren’t going to stand idly by! They agree among themselves to attack the banana post in the comments: Bananas are slimy, gross and so on. This “brigade” of bad actors creates for itself an outsized voice on the social landscape. Hence, “brigading.”

The notion of brigading was born on the social platform Reddit, where users discovered they could launch coordinated attacks against subreddits, forums dedicated to a specific subject. The brigaders would privately coordinate their attacks, effectively censoring another user’s view.

The use of the term has since been expanded to refer to any coordinated online effort to make something seem more or less desirable or popular. The coordinated abuse can take the form of posts, tweets, comments, email campaigns, online reviews and other actions, perpetrated by a real human or a bot.

Brigading can take different forms on different social media sites:

  • Sock Puppeting: The use of fake accounts to make a user’s position seem more popular
    than it is. This is a means to increase the volume of attacks.
  • Ratioing: If a reply to a tweet draws more likes and retweets than the original
    post, that “ratio” may indicate the original post is unpopular. The brigade drives
    up those numbers deliberately, then touts the “winning” ratio on other platforms to
    elevate its position. (A minimal version of this check appears in the sketch after
    this list.)
  • Quote Retweets: The brigade adds a comment to a retweet as a means of harassing the
    original poster, while also gaining greater visibility for its point of view.
  • Sealioning: In an online discussion, brigade members go after another participant with a barrage of bad-faith questions in order to break up debate and wear down the target.
  • Mass Reporting: Here, a brigade tries to get other users suspended from an online
    platform by reporting their posts en masse.
  • Astroturfing: A marketing technique that can also be used in brigading. The bad actors
    create fake posts that look legitimate in order to sway the discussion.
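
To make the ratioing pattern concrete, here is a minimal sketch in Python. The data structure and the bare engagement comparison are illustrative assumptions, not any platform’s actual API; a serious detector would also weigh signals such as account age, reply timing and network overlap.

```python
from dataclasses import dataclass

# Toy engagement counts; the fields are illustrative, not any
# platform's actual API schema.
@dataclass
class Post:
    likes: int
    retweets: int

    @property
    def engagement(self) -> int:
        # Positive engagement only; replies are tracked separately.
        return self.likes + self.retweets

def is_ratioed(original: Post, reply: Post) -> bool:
    """A reply out-engaging the post it answers is the classic 'ratio' signal."""
    return reply.engagement > original.engagement

# A brigade boosts its own reply far past the targeted tweet.
target = Post(likes=40, retweets=12)
brigade_reply = Post(likes=900, retweets=310)
print(is_ratioed(target, brigade_reply))  # True
```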

In all these efforts, brigading typically will include the use of fake accounts, bogus identities that serve both to hide the bad actors’ true selves and to give them far greater reach.

A single brigader can potentially create dozens of false identities, giving themselves a larger-than-life footprint in the social space. With many brigade members operating under multiple fake accounts, the impact is magnified.
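
Because so much of brigading runs on sockpuppet accounts, one common countermeasure is to look for clusters of accounts that share a telltale signal. The sketch below groups accounts by a shared signup IP address; the records and the single-signal heuristic are toy assumptions, and real systems corroborate many signals before drawing conclusions.

```python
from collections import defaultdict

# Toy account records; a shared signup IP standing in for "same
# operator" is an illustrative assumption, not a production rule.
accounts = [
    {"user": "fruit_fan_01", "signup_ip": "203.0.113.7"},
    {"user": "fruit_fan_02", "signup_ip": "203.0.113.7"},
    {"user": "fruit_fan_03", "signup_ip": "203.0.113.7"},
    {"user": "casual_reader", "signup_ip": "198.51.100.9"},
]

def sockpuppet_clusters(accounts, min_size=3):
    """Group accounts on a shared signal and return suspiciously large clusters."""
    by_ip = defaultdict(list)
    for acct in accounts:
        by_ip[acct["signup_ip"]].append(acct["user"])
    return {ip: users for ip, users in by_ip.items() if len(users) >= min_size}

print(sockpuppet_clusters(accounts))
# {'203.0.113.7': ['fruit_fan_01', 'fruit_fan_02', 'fruit_fan_03']}
```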

What's at Stake?

Online communities, businesses and social media platforms all have much to lose if brigading goes unchecked.

Businesses can suffer vast reputational damage to their brands. Online abuse, fake reviews and bogus complaints are always problematic: They erode trust and can drive away customers. Scale that up with brigading, and the potential harm is magnified considerably.

For social media platforms and online communities, the risk is existential.

If people don’t believe they can trust online experiences—if their natural skepticism is reinforced by coordinated abuse or disinformation efforts, for example—the premise of the platform itself is profoundly undermined. Brigading tears at the very heart of social media: the idea that these online forums are “real” places, communities where people can share and interact safely and authentically.

Online Trust and Safety

To keep brigading in check, social media platforms and others in the online arena need access
to trustworthy, corroborated data about the people who are posting. Organizations can look to tackle this problem with better information, supported by emerging technologies.

Traditional databases are helpful, but they give only a bare-bones snapshot. Emerging tools can delve deeper into a person’s digital footprint, to gain insight into their interests, friends, business associates and opinions. Modern tools can continuously collect, corroborate and connect data from public sources around the world, building a comprehensive profile of an individual to establish identity trust.
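
As a rough illustration of what “collect, corroborate and connect” might mean in practice, the sketch below scores an identity by the weighted sum of independently confirmed signals. The signal names and weights are invented for this example and do not describe any particular vendor’s model, Pipl’s included.

```python
# Invented signal names and weights, for illustration only.
SIGNAL_WEIGHTS = {
    "email_in_public_records": 0.30,
    "phone_matches_name": 0.25,
    "address_history_consistent": 0.25,
    "social_profiles_corroborate": 0.20,
}

def identity_trust_score(signals):
    """Sum the weights of corroborated signals; the result lies in [0.0, 1.0]."""
    return sum(weight for name, weight in SIGNAL_WEIGHTS.items()
               if signals.get(name, False))

new_user = {
    "email_in_public_records": True,
    "phone_matches_name": True,
    "address_history_consistent": False,
    "social_profiles_corroborate": True,
}
print(identity_trust_score(new_user))  # 0.75
```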

Empowered by such tools, leaders in digital trust and safety and risk analysis have the means to drive organizational change. They can raise awareness of the threat posed by brigading and implement trust-based strategies to combat it, from login through downstream interactions. When an organization knows that a user is, in fact, who they claim to be, it becomes much easier to assess trust in that user’s activities throughout their journey.
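
One way to picture those trust-based strategies is as a ladder of thresholds: low-risk actions require little corroboration, high-reach actions require a lot. The actions and cutoffs below are hypothetical, not a recommended policy; the 0.75 score from the previous sketch would clear some bars but not others.

```python
# Hypothetical action thresholds; the numbers are illustrative cutoffs.
ACTION_THRESHOLDS = {
    "read": 0.0,          # anyone can browse
    "comment": 0.4,       # some corroboration before posting
    "post_review": 0.6,   # stronger trust before public reviews
    "bulk_message": 0.8,  # high bar for high-reach actions
}

def allow(action, trust_score):
    """Permit an action only when the user's trust score clears its bar."""
    return trust_score >= ACTION_THRESHOLDS.get(action, 1.0)

print(allow("comment", 0.75))       # True
print(allow("bulk_message", 0.75))  # False
```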

Such an approach would help ensure online interactions are safe and authentic, effectively
stopping the brigaders in their tracks.

Pipl Trust enables companies to automate identity trust decisioning at every digital interaction. When an organization implements Pipl Trust, scammers and bad actors trying to use fake profiles and accounts may never even gain access to the community, forum or marketplace where they’re trying to do reputational damage. Consumers get a seamless experience and an elevated sense of safety, while business leaders are able to implement effective, trust-based controls quickly and easily.

Contact us anytime to schedule a proof of concept or to book a meeting.

Sources:

  • https://www.science.org/content/article/fake-news-spreads-faster-true-news-twitter-thanks-people-not-bots
  • https://www.statista.com/statistics/657111/fake-news-sharing-online/
  • https://review42.com/resources/what-percentage-of-amazon-reviews-are-fake/
  • https://institute.global/policy/social-media-futures-what-brigading