2. Utilize virality circuit breakers to automatically flag fast-spreading posts and
trigger a brief halt to algorithmic amplification. Not all high-reach content comes
from users with huge followings – especially now, as platforms increasingly rely almost
entirely on algorithms to determine what to surface, with little friction. Inspired by
the automatic triggers used in financial markets to prevent panic selling at moments of high
volatility, experts have proposed that platforms introduce circuit breakers to automatically
flag posts that are beginning to gain virality, temporarily pause algorithmic boosting, and
present users with an interstitial about the post’s virality and status upon click-through or
reshare. It’s a proposal that’s been lent credence by tech companies themselves, with Meta
reportedly testing the concept and Snap going a step further, as an executive testified that
all posts on its Spotlight platform were speed-bumped and checked before reaching 25
unique viewers. Proper thresholds and processes will of course vary by platform and content
type, so companies should outline their policies and publish pertinent aggregate data in
transparency reports.
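To make the mechanism concrete, here is a minimal, illustrative sketch of how a virality circuit breaker could be wired into a ranking pipeline. The thresholds, names, and data structures are assumptions for illustration, not any platform’s actual system:

```python
from dataclasses import dataclass, field
from time import time
from typing import List, Optional

# Hypothetical thresholds; real values would vary by platform and content type.
VELOCITY_THRESHOLD = 500      # shares per hour that counts as "beginning to go viral"
PAUSE_SECONDS = 2 * 3600      # how long algorithmic boosting stays paused once tripped

@dataclass
class PostState:
    share_timestamps: List[float] = field(default_factory=list)
    boosting_paused_until: float = 0.0
    flagged_for_review: bool = False

def record_share(post: PostState, now: Optional[float] = None) -> None:
    """Register a share and trip the circuit breaker if spread velocity is too high."""
    now = now if now is not None else time()
    post.share_timestamps.append(now)
    shares_last_hour = sum(1 for t in post.share_timestamps if now - t < 3600)
    if shares_last_hour >= VELOCITY_THRESHOLD and not post.flagged_for_review:
        post.flagged_for_review = True                    # flag the post for review
        post.boosting_paused_until = now + PAUSE_SECONDS  # pause algorithmic amplification

def eligible_for_boosting(post: PostState, now: Optional[float] = None) -> bool:
    """Ranking code would consult this before algorithmically amplifying the post."""
    return (now if now is not None else time()) >= post.boosting_paused_until

def interstitial_text(post: PostState) -> Optional[str]:
    """Shown on click-through or reshare while the post is flagged."""
    if post.flagged_for_review:
        return "This post is spreading quickly and is being reviewed."
    return None
```

The key design choice in a sketch like this is that the breaker is content-agnostic: it reacts to spread velocity alone, leaving judgments about the post’s substance to whatever review follows.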
3. Restrict rampant resharing during election season by removing simple share
buttons on posts after multiple levels of sharing. Frictionless resharing is a staple
of social platforms – and a key driver of toxicity. Internal Meta research showed users are
4x more likely to encounter falsehoods in a reshare of a reshare than in the News Feed in
general, and concluded that aggressively limiting these ‘deep reshares’ would be “an effective,
content-agnostic approach to mitigate the harms.” Meta also placed limits – which proved
effective – on how many times WhatsApp messages could be forwarded after mass-sharing
exacerbated unrest in India and Brazil. Platforms should remove share buttons on posts
after multiple levels of reshare, and/or test other mechanisms that enhance reshare friction
in a targeted manner during election season, with careful consideration of the impact on
legitimate advocacy campaigns.
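One way such a depth limit might be expressed in code, assuming a hypothetical reshare-depth cap and election window (both values are placeholders, not drawn from any platform’s policy):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative values only; the depth cap and election window are assumptions.
MAX_RESHARE_DEPTH = 2                                      # a reshare of a reshare is the last easy hop
ELECTION_WINDOW = (date(2024, 10, 1), date(2024, 11, 15))

@dataclass
class Post:
    post_id: str
    reshared_from: Optional["Post"] = None

    @property
    def reshare_depth(self) -> int:
        """Number of reshare hops separating this post from the original."""
        return 0 if self.reshared_from is None else self.reshared_from.reshare_depth + 1

def in_election_window(today: date) -> bool:
    start, end = ELECTION_WINDOW
    return start <= today <= end

def show_share_button(post: Post, today: date) -> bool:
    """Hide the one-click share button once a post is already a 'deep reshare'."""
    if in_election_window(today) and post.reshare_depth >= MAX_RESHARE_DEPTH:
        return False  # users can still quote or link manually, just with added friction
    return True
```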
4. Implement clear strike systems to deter repeat offenses, curtail the outsized
impact of malign actors, and better inform users. This is not meant to dictate
specific content policies; strike systems should be based on violations of platforms’ own
standards. Most platforms already utilize some form of strike system to levy sanctions
on repeat offenders in recognition of the disproportionate harm they drive, but both the
policies and their application are typically vague and/or obscured from users – often
under the argument that it’s impossible to clarify such rules without helping bad actors
game the system. And even approaches that have more explicitly addressed this threat –
like Twitch’s Harmful Misinformation Actor policy, and Twitter’s bygone 5-strike Civic
Integrity policy – have focused chiefly on when to suspend the worst actors. Platforms
should develop and implement transparent strike systems that clearly outline escalating
‘soft interventions’ to limit the impact of repeat offenders, such as restricting resharing,
curtailing algorithmic amplification, and placing posts behind click-through warning
labels with context. This approach steers clear of the false choice between censorship
and inaction, and would demystify enforcement decisions, deter habitual rule-breaking,
and defang the malign actors who pose the greatest election integrity threats. A detailed
example of what such a strike system might look like for a given platform, developed by
Accountable Tech, can be found here.
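A rough sketch of what an escalating, transparent strike ladder could look like in code follows. The thresholds and sanction names are illustrative assumptions, not the detailed Accountable Tech example referenced above:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Hypothetical escalation ladder; each sanction kicks in once the strike count
# reaches its threshold, and sanctions accumulate rather than replace one another.
STRIKE_LADDER: List[Tuple[int, str]] = [
    (1, "warning_label_with_context"),    # click-through label on violating posts
    (2, "no_reshare_button"),             # restrict resharing of the account's posts
    (3, "no_algorithmic_amplification"),  # curtail amplification of the account's posts
    (5, "account_level_reach_limits"),    # broader 'soft' limits short of suspension
]

@dataclass
class Account:
    handle: str
    strikes: int = 0
    enforcement_log: List[Dict] = field(default_factory=list)

def active_interventions(strikes: int) -> List[str]:
    """Sanctions accumulate as strikes mount, escalating without outright removal."""
    return [name for threshold, name in STRIKE_LADDER if strikes >= threshold]

def record_violation(account: Account, violated_policy: str) -> List[str]:
    """Log a strike for a documented policy violation and return the sanctions now in force."""
    account.strikes += 1
    sanctions = active_interventions(account.strikes)
    # Keep an auditable record that can be shown to the user and aggregated
    # into transparency reports.
    account.enforcement_log.append(
        {"policy": violated_policy, "strikes": account.strikes, "sanctions": sanctions}
    )
    return sanctions
```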