Social Media
We’re building a social+ world, but how will we moderate it?
Social is not just what you do on Facebook anymore; it’s what you do in every single app you use. Think of the experience on Venmo, Strava, Duolingo or even Sephora.
Companies that build social components into their apps and services, known as social+ companies, are thriving because those features let them establish connections with users and enable interactions among them.
Andreessen Horowitz’s D’Arcy Coolican explained the appeal of social+ companies, writing:
“[Social+] can help us find community in everything from video games to music to workouts. Social+ occurs when delight-sparking utility is thoughtfully integrated with that essential human connection. That’s powerful because, ultimately, the more ways we find to connect with each other in authentic and positive ways, the better.”
Social+ will soon permeate all aspects of our lives, accelerating at breakneck pace in the months ahead. I would wager adoption will continue to the point of utility – where every company is a social company. This is very exciting, but only if we plan accordingly. As we’ve seen with social’s influence in the past, it’s amazing … until it’s not.
What’s incredibly additive to the user experience today could become an absolute nightmare if companies adding social features don’t find religion on sound moderation practices and invest the necessary resources in building the right technology and processes from the start.
Learning from Facebook
As the OG social pioneer, Facebook redefined how society functions. In doing so, it endured some very painful lessons. Notably, it must bear the burden of monitoring individual, group and organization posts from 1.93 billion daily active users – all while trying to cultivate a sense of community without censorship and driving platform adoption, engagement and profits. While social+ companies are unlikely to see this kind of volume, at least in the near term, they will still have to contend with the same issues – only without the excuse that such problems were unforeseeable.
If Facebook and its army of developers, moderators and AI technology struggle, what kind of chance do you have if you don’t make moderation and community guidelines a priority from the start?
Let’s look at a few areas where Facebook stumbled on moderation:
- Failing to account for bad user behavior amid rapid growth: In Facebook’s early days, platform moderation wasn’t deemed necessary in what was considered a free, user-driven space. The company was merely a conduit for connection. Facebook failed to recognize the potential for user harm until it was too late to manage effectively. Even with the most advanced software and a workforce of 15,000 employees dedicated solely to reviewing content across 70 languages, content moderation remains an enormous problem that has cost the company users, ad dollars and vast amounts of reputational capital.
- Underestimating the language barrier: While we live in an increasingly global society, connected through online services and networks, documents released to Congress showed that 87% of Facebook’s global budget for identifying misinformation was reserved for the United States. Just 13% went to moderation practices for the rest of the world, even though North Americans represent only 10% of its daily users. To address the gap, Facebook tried to apply AI-based content moderation software in markets where language is incredibly nuanced, and it has not gone well. In Facebook’s largest market (India, with 350 million users), misinformation and calls for violence have proliferated because of this language deficit. It’s even worse with the varied dialects of North Africa and the Middle East. As a result, both human and automated content reviews have mistakenly allowed hate speech to run rampant while removing benign posts for seemingly promoting terrorist activities.
- Getting political: Even the most clear-cut language has become weaponized in the U.S. Deepfakes and disinformation campaigns have become normalized, yet posts that Facebook rightfully removes or flags according to its terms of service draw the ire of users who feel their rights of expression are being violated and their voices suppressed. This has caused significant public backlash, along with a smattering of new legal proceedings. As recently as December 1, a federal judge blocked a Texas law that would have allowed state residents to sue Facebook for damages if their content was removed based on their political beliefs. A similar law in Florida, which attempted to hold Facebook liable for censoring political candidates, news sites and users, was also struck down. These attempts, however, show just how incensed people have become about content moderation practices they don’t like or that they perceive as changing over time to work against them.
- Determining what to do with banned content: There is also the issue of what happens to content once it is removed and whether a company has an ethical responsibility to turn over objectionable content or alert authorities to potentially illegal activity. For example, prosecutors are currently demanding that Facebook hand over data that would help them identify members of a group, the New Mexico Civil Guard, who were involved in a violent incident in which a protester was shot. Facebook claims it can’t help because it deleted records of the group, which had been banned. Tensions continue to flare between law enforcement and social companies over who owns what, reasonable expectations of privacy, and whether companies can release content.
All of these issues should be considered carefully by companies planning to incorporate a social component into their app or service.
The next generation of social apps
Social engagement is key to sales, adoption and much more, but we must not forget that humans are flawed. Trolling, spam, pornography, phishing and money scams are as much a part of the internet as browsers and shopping carts. Left unchecked, they can destroy a community.
Consider: If Facebook and its army of developers, moderators and AI technology struggle, what kind of chance do you have if you don’t make moderation and community guidelines a priority from the start?
Companies must build moderation features – or partner with companies that provide robust solutions – that can scale with the company, especially as services go global. This cannot be overstated. It is fundamental to the long-term success and viability of a platform – and to the future of the social+ movement.
For moderation tools to do their part, however, companies must create clearly defined codes of conduct for communities, ones that minimize the gray areas and that are written clearly and concisely so that all users understand the expectations.
Transparency is vital. Companies should also have a structure in place for how they handle inappropriate conduct – what are the processes for removing posts or blocking users? How long will offenders be locked out of their accounts? Can they appeal?
And then comes the big test – companies must enforce these rules consistently from the beginning. Any time there is ambiguity, or users can point to inconsistent treatment between similar cases, the company loses.
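One way to make that consistency enforceable is to encode the code of conduct as data, so every report is decided by the same published rules rather than by case-by-case judgment. The sketch below is a hypothetical illustration in TypeScript, not any platform’s actual policy; the violation categories, lockout durations and field names are assumptions chosen purely for the example.

```ts
// Hypothetical sketch: a code of conduct expressed as data, so that every
// enforcement decision is driven by the same table. All categories,
// durations and field names are illustrative assumptions.

type Violation = "spam" | "harassment" | "hate_speech" | "scam";

interface PolicyRule {
  violation: Violation;
  removePost: boolean;   // is the offending content taken down?
  lockoutHours: number;  // how long the user is locked out (0 = no lockout)
  appealable: boolean;   // can the user contest the decision?
}

// The published code of conduct, written down as enforcement rules.
const POLICY: PolicyRule[] = [
  { violation: "spam",        removePost: true, lockoutHours: 0,   appealable: true },
  { violation: "harassment",  removePost: true, lockoutHours: 24,  appealable: true },
  { violation: "hate_speech", removePost: true, lockoutHours: 168, appealable: true },
  { violation: "scam",        removePost: true, lockoutHours: 720, appealable: false },
];

// The record produced for every enforcement decision.
interface ModerationAction {
  userId: string;
  postId: string;
  violation: Violation;
  removePost: boolean;
  lockedOutUntil?: Date;
  appealable: boolean;
  decidedAt: Date;
}

// Applying the same rule to every report keeps enforcement consistent
// and gives users a transparent record they can appeal against.
function enforce(userId: string, postId: string, violation: Violation): ModerationAction {
  const rule = POLICY.find((r) => r.violation === violation);
  if (!rule) throw new Error(`No policy rule for violation: ${violation}`);

  const decidedAt = new Date();
  return {
    userId,
    postId,
    violation,
    removePost: rule.removePost,
    lockedOutUntil: rule.lockoutHours > 0
      ? new Date(decidedAt.getTime() + rule.lockoutHours * 3_600_000)
      : undefined,
    appealable: rule.appealable,
    decidedAt,
  };
}

// Example: two identical violations always produce identical outcomes.
console.log(enforce("user-123", "post-456", "harassment"));
```

Driving every decision from a shared rule table is what makes enforcement auditable: when a user asks why they were locked out, the answer is the published rule, not an individual moderator’s judgment, and the appeal path is explicit from day one.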
Organizations must also define their stance on their ethical responsibility when it comes to objectionable content. Companies have to decide for themselves how they will manage user privacy and content, particularly content that could be of interest to law enforcement. This is a messy problem, and the way for social companies to keep their hands clean is to clearly articulate their privacy stance rather than hide from it and trot it out only when a problem arises.
Social models are getting baked into every app from fintech to healthcare to food delivery to make our digital lives more engaging and fun. At the same time, mistakes are unavoidable as companies carve out an entirely new way of communicating with their users and customers.
What’s important now is for social+ companies to learn from pioneers like Facebook in order to create safer, more cooperative online worlds. It just requires some forethought and commitment.