Twitter will suspend repeat offenders posting abusive comments on Periscope live streams
As part of Twitter’s attempted crackdown on abusive behavior across its network, the company announced on Friday afternoon a new policy targeting those who repeatedly harass, threaten, or otherwise make abusive comments during a Periscope broadcaster’s live stream. Twitter says it will begin to more aggressively enforce its Periscope Community Guidelines by reviewing and suspending the accounts of habitual offenders.
The plans were announced via a Periscope blog post and tweet that said everyone should be able to feel safe watching live video.
Currently, Periscope’s comment moderation policy involves group moderation.
That is, when one viewer reports a comment as “abuse” or “spam,” or selects “other reason,” Periscope’s software randomly selects a few other viewers to take a look and decide whether the comment is abuse, spam, or looks okay. The randomness here prevents a person (or persons) from using the reporting feature to shut down conversations. Only if a majority of the randomly selected voters agree the comment is spam or abuse does the commenter get suspended.
However, this suspension only disables their ability to chat during the broadcast itself – it doesn’t prevent them from continuing to watch other live broadcasts and making further abusive remarks in the comments. Though they risk another temporary ban by doing so, they can still disrupt the conversation and make the video creator – and their community – feel threatened or otherwise harassed.
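Periscope hasn’t published how its group moderation works under the hood, but the flow described above can be summarized in a short sketch. The following Python is purely illustrative: the function names, jury size, and majority threshold are assumptions for the sake of the example, not Twitter’s actual implementation.

```python
import random

# Illustrative sketch of a Periscope-style group-moderation flow.
# All names and thresholds here are assumptions, not the real service's internals.

JURY_SIZE = 3  # assumed number of randomly chosen viewers asked to vote


def juror_vote(juror, comment):
    """Placeholder: in a real app, the selected viewer would be prompted
    to label the comment as 'abuse', 'spam', or 'ok'."""
    return "ok"


def moderate_comment(comment, reporter, viewers):
    """Handle a single report: pick a random jury from the other viewers,
    tally their votes, and return True if the commenter should be muted
    for the remainder of this broadcast only."""
    # Exclude the reporter and the comment's author from the jury pool,
    # and sample randomly so no single group can force a suspension.
    pool = [v for v in viewers if v not in (reporter, comment["author"])]
    jury = random.sample(pool, min(JURY_SIZE, len(pool)))

    votes = [juror_vote(j, comment) for j in jury]
    flagged = sum(1 for v in votes if v in ("abuse", "spam"))

    # Only a majority of the random jury triggers the temporary, per-broadcast mute.
    return flagged > len(votes) / 2


if __name__ == "__main__":
    comment = {"author": "user42", "text": "..."}
    viewers = ["user1", "user7", "user42", "user99", "reporter01"]
    if moderate_comment(comment, "reporter01", viewers):
        print("Mute user42's chat for the rest of this broadcast")
```

The key design point the article describes is that the mute is scoped to the single broadcast; under the new policy, accounts that accumulate these suspensions would additionally be reviewed for an account-level suspension.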
Twitter says that accounts that repeatedly get suspended for violating its guidelines will soon be reviewed and suspended. This enhanced enforcement begins on August 10 and is one of several changes the company is making across Periscope and Twitter that are focused on user safety.
To what extent those changes have been working is questionable. Twitter may have policies in place around online harassment and abuse, but its enforcement has been hit-or-miss. Still, ridding its platform of unwanted accounts – including spam, despite the impact on monthly active user numbers – is something the company must do for its long-term health. The fact that so much hate and abuse is seemingly tolerated or overlooked on Twitter has been an issue for some time, and the problem continues today. It could also be one of the factors behind Twitter’s stagnant user growth. After all, who willingly signs up for harassment?
The company is at least attempting to address the problem, most recently by acquiring the anti-abuse technology provider Smyte. Smyte’s transition to Twitter didn’t go smoothly, but its technology could help the company address abuse at greater scale in the future.