Twitter widens its view of bad actors to fight election fiddlers
Twitter has announced more changes to its rules to try to make it harder for people to use its platform to spread politically charged disinformation and thereby erode democratic processes.
In an update on its “elections integrity work” yesterday, the company flagged several new changes to the Twitter Rules which it said are intended to provide “clearer guidance” on behaviors it’s cracking down on.
In the problem area of “spam and fake accounts”, Twitter says it’s responding to feedback that, to date, it’s been too conservative in how it thinks about spammers on its platform, taking into account only “common spam tactics like selling fake goods”. So it’s expanding its net to try to catch more types of “inauthentic activity” — by taking into account more factors when determining whether an account is fake.
“As platform manipulation tactics continue to evolve, we are updating and expanding our rules to better reflect how we identify fake accounts, and what types of inauthentic activity violate our guidelines,” Twitter writes. “We now may remove fake accounts engaged in a variety of emergent, malicious behaviors.”
Some of the factors it says it will now also take into account when making a ‘spammer or not’ judgement are listed below (a rough sketch of how such signals might be weighed follows the list):
- Use of stock or stolen avatar photos
- Use of stolen or copied profile bios
- Use of intentionally misleading profile information, including profile location
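Twitter hasn’t said how it combines these signals internally, but a minimal sketch helps show why they’re useful in aggregate rather than individually. Everything below is an assumption for illustration only: the field names, the weights and the notion of a known stock-photo hash set are made up, not anything Twitter has published.

```python
# Illustrative only: a toy heuristic that weighs the kinds of profile signals
# Twitter describes. Field names, weights and the hash set are assumptions,
# not anything Twitter has published.

STOCK_PHOTO_HASHES = {"d41d8cd9", "9e107d9d"}  # hypothetical known-image fingerprints


def spam_signal_score(account: dict) -> float:
    """Return a rough 0..1 'likely fake' score from profile-level signals."""
    points = 0
    if account.get("avatar_hash") in STOCK_PHOTO_HASHES:
        points += 4  # stock or stolen avatar photo
    if account.get("bio_duplicate_count", 0) > 0:
        points += 3  # bio copied from other accounts
    claimed = account.get("claimed_location")
    inferred = account.get("signup_region")
    if claimed and inferred and claimed != inferred:
        points += 3  # profile location contradicts other account metadata
    return min(points, 10) / 10


# A profile with a copied bio and a mismatched location scores 0.6
print(spam_signal_score({
    "avatar_hash": "abc123",
    "bio_duplicate_count": 3,
    "claimed_location": "US",
    "signup_region": "RU",
}))
```

The point of a weighted combination like this is that no single signal (a stock photo, say) is damning on its own; it’s the pile-up of weak signals that pushes an account over a threshold.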
Kremlin-backed online disinformation agents have been known to use stolen photos for avatars and also to claim accounts are US-based, despite spambots being operated out of Russia. So it’s pretty clear why Twitter is cracking down on fake profile pics and location claims.
Less clear: why it took so long for Twitter’s spam detection systems to take account of these suspicious signals. But, well, progress is still progress.
(Intentionally satirical ‘Twitter fakes’ (aka parody accounts) should not be caught in this net, as Twitter has had a longstanding policy of requiring parody and fan accounts to be directly labeled as such in their Twitter bios.)
Pulling the threads of spambots
In another major-sounding policy change, the company says it’s targeting what it dubs “attributed activity” — so that when/if it “reliably” identifies an entity behind a rule-breaking account it can apply the same penalty actions against any additional accounts associated with that entity, regardless of whether the accounts themselves were breaking its rules or not.
This is potentially a very important change, given that spambot operators often create accounts long before they make active malicious use of them, leaving these spammer-in-waiting accounts entirely dormant, or doing something totally innocuous, sometimes for years before they get deployed for an active spam or disinformation operation.
So if Twitter is able to link an active disinformation campaign with spambots lurking in waiting to carry out the next operation it could successfully disrupt the long term planning of election fiddlers. Which would be great news.
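Twitter hasn’t described how it establishes attribution, but the enforcement logic it outlines amounts to “one flagged account in a group takes down the whole group”. Here’s a minimal sketch of that cascade, assuming a purely hypothetical attribution fingerprint (shared email domain plus signup IP block); none of these fields reflect Twitter’s actual signals.

```python
# Illustrative only: cascading enforcement from one flagged account to every
# account sharing the same (hypothetical) attribution fingerprint.
from collections import defaultdict


def group_by_entity(accounts: list[dict]) -> dict[tuple, list[str]]:
    """Bucket account handles by an assumed attribution key."""
    groups: dict[tuple, list[str]] = defaultdict(list)
    for acct in accounts:
        key = (acct.get("email_domain"), acct.get("signup_ip_block"))
        groups[key].append(acct["handle"])
    return groups


def handles_to_action(accounts: list[dict], flagged: set[str]) -> set[str]:
    """If any handle in a group is flagged, action every handle in that group,
    including dormant accounts that haven't broken any rule themselves."""
    actioned: set[str] = set()
    for handles in group_by_entity(accounts).values():
        if flagged & set(handles):
            actioned.update(handles)
    return actioned
```

The interesting design question is the one Twitter flags itself: how strong the attribution link has to be before the cascade fires.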
Albeit, the devil will be in the detail of whether and/or how Twitter actually enforces this new policy — such as how high a bar it’s setting itself with the word “reliably”.
Obviously there’s a risk that, if “reliably” is defined too loosely, Twitter could shut innocent newbs off its platform by incorrectly connecting them to a previously identified bad actor. Which it clearly won’t want to do.
The hope is that behind the scenes Twitter has got better at spotting patterns of behavior it can reliably associate with spammers — and will thus be able to put this new policy to good use.
There’s certainly good external research being done in this area. For example, recent work by Duo Security has yielded an open source methodology for identifying account automation on Twitter.
The team also dug into botnet architectures — and was able to spot a cryptocurrency scam botnet which Twitter had previously been recommending other users follow. So, again, hopefully it’s been taking close note of such research to underpin this policy change.
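For a flavour of what that kind of research looks at, here’s a toy version of the sort of account-level heuristics bot-hunting methodologies tend to combine: posting volume, follow ratios and suspiciously regular tweet timing. The thresholds and inputs below are assumptions for illustration, not Duo Security’s actual toolkit.

```python
# Illustrative only: toy bot heuristics in the spirit of published bot-hunting
# research; thresholds and inputs are assumptions, not Duo Security's tooling.
import statistics


def looks_automated(tweet_timestamps: list[float],
                    followers: int,
                    following: int,
                    tweets_per_day: float) -> bool:
    """Flag accounts whose behaviour looks machine-driven."""
    if tweets_per_day > 144:  # sustained, very high posting volume
        return True
    if following > 5000 and followers < following * 0.01:
        return True  # mass-following with almost nobody following back
    if len(tweet_timestamps) > 10:
        gaps = [b - a for a, b in zip(tweet_timestamps, tweet_timestamps[1:])]
        if statistics.pstdev(gaps) < 1.0:  # near-perfectly regular posting intervals
            return True
    return False
```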
There’s also more on this front: “We are expanding our enforcement approach to include accounts that deliberately mimic or are intended to replace accounts we have previously suspended for violating our rules,” Twitter also writes.
This additional element is also notable. It essentially means Twitter has given itself a policy allowing it to act against entire malicious ideologies — i.e. against groups of people trying to spread the same sort of disinformation, not just a single identified bad actor connected to a number of accounts.
To use the example of the fake news peddler behind InfoWars, Alex Jones, who Twitter finally permanently banned last month, Twitter’s new policy suggests any attempts by followers of Jones to create ‘in the style of’ copycat InfoWars accounts on its platform, i.e. to try to indirectly return Jones’ disinformation to Twitter, would — or, well, could — face the same enforcement action it has already meted out to Jones’ own accounts.
Though Twitter does have a reputation for inconsistently applying its own policies. So it remains to be seen how it will, in fact, act.
And how enthusiastic it will be about slapping down disinformation ideologies — given its longstanding position as a free speech champion, and in the face of criticism that it is ‘censoring’ certain viewpoints.
Hacked materials
Another change being announced by Twitter now is a clampdown on the distribution of hacked materials via its platform.
Leaking hacked emails of political officials at key moments during an election cycle has been a key tactic for democracy fiddlers in recent years — such as the leak of emails sent by top officials in the Democratic National Committee during the 2016 US presidential election.
Or the last-minute email leak in France during the presidential election last year.
Twitter notes that its rules already prohibit the distribution of hacked material which contains “private information or trade secrets, or could put people in harm’s way” — but says it’s now expanding “the criteria for when we will take action on accounts which claim responsibility for a hack, which includes threats and public incentives to hack specific people and accounts”.
So it seems, generally, to be broadening its policy to cover a wider support ecosystem around election hackers. Twitter’s platform does frequently host hackers — who use anonymous Twitter accounts to crow about their hacks and/or send attack threats at other users…
Presumably Twitter will be shutting that kind of activity down in future.
Though it’s unclear what Twitter’s new policy might mean for a hacktivist group like Anonymous (which is very active on Twitter).
The new policies might also have repercussions against Wikileaks — which was directly involved in the spreading of the DNC leaked emails, for example, yet nonetheless has not previously been penalized by Twitter.
One also wonders how Twitter might respond to a future tweet from, say, US President Trump encouraging the hacking of a political opponent…
Safe to say, this policy could get pretty murky and tricky for Twitter.
“Commentary about a hack or hacked materials, such as news articles discussing a hack, are generally not considered a violation of this policy,” it also writes, giving itself a bit of wiggle room on how it will apply (or not apply) the policy.
Daily spam report decline
In the same blog post, it also gives an update on detection and enforcement actions related to its stated mission of improving “conversational health” and information integrity on its platform — including reiterating the action it took against Iran-based disinformation accounts in August.
It also notes that it removed ~50 accounts that had been misrepresenting themselves as members of various state Republican parties that same month and using Twitter to share “media regarding elections and political issues with misleading or incorrect party affiliation information”.
“We continue to partner closely with the RNC, DNC, and state election institutions to improve how we handle these issues,” it adds.
On the automated detections front — where Twitter announced a fresh squeeze just three months ago — it reports that in the first half of September it challenged an average of 9.4 million accounts per week. (Though it does not specify how many of those challenged accounts turned out to be bona fide spammers, or simply left the challenge unanswered.)
It also reports a continued decline in the average number of spam-related reports from users — down from an average of ~17,000 daily in May, to ~16,000 daily in September.
This summer it introduced a new registration process for developers requesting access to its APIs — intended to prevent the registration of what it describes as “spammy and low quality apps”.
Now it says it’s suspending, on average, ~30,000 applications per month as a result of efforts “to make it more difficult for these kinds of apps to operate in the first place”.
Elsewhere, Twitter also says it’s working on new proprietary systems to identify and remove “ban evaders at speed and scale”, as part of ongoing efforts to improve “proactive enforcements against common policy violations”.
In the blog, the company flags a number of product changes it has made this year, including a recent change it announced two weeks ago which brings back the chronological timeline (via a setting users can toggle).
“We recently updated the timeline personalization setting to allow people to select a strictly reverse-chronological experience, without recommended content and recaps. This ensures you have more control of how you experience what’s happening on our service,” it writes now, saying also this is intended to help people “stay informed”.
Though, given that the chronological timeline is still not the default, with algorithmically surfaced ‘interesting tweets’ instead being what’s most actively pushed at Twitter users, it seems unlikely this change will have a major impact on mitigating disinformation campaigns.
Letting those in the know (i.e. those aware they can change the setting) stay better informed is not how election fiddling will be defeated.
US midterm focus
Twitter also says it’s continuing to roll out new features to show more context around accounts — giving the example of the launch of election labels earlier this year, as a beta for candidates in the 2018 U.S. midterm elections. Though it’s clearly got lots of work to do on that front — given all the other elections constantly taking place in the world.
With an eye on the security of the US midterms, it says it will send election candidates a message prompt to ensure they have two-factor authentication enabled on their account to boost security.
“We are offering electoral institutions increased support via an elections-specific support portal, which is designed to ensure we receive and review critical feedback about emerging issues as quickly as possible. We will continue to expand this program ahead of the elections and will provide information about the feedback we receive in the near future,” it adds, again showing that its candidate support efforts are US-focused.
On the civic engagement front, Twitter says it is also actively encouraging US-based users to vote and to register to vote, as well as aiming to increase access to relevant voter registration info.
“As part of our civic engagement efforts, we are building conversation around the hashtag #BeAVoter with a custom emoji, sending U.S.-based users a prompt in their home timeline with information on how to register to vote, and drawing attention to these conversations and resources through the top US trend,” it writes. “This trend is being promoted by @TwitterGov, which will create even more access to voter registration information, including election reminders and an absentee ballot FAQ.”