Instagram plans to combat cyberbullying with AI

Illustration by Twylamae
Words by Maeve Kerr-Crowley

If you don’t have anything nice to say…

Online bullying isn’t new, but it is more pervasive than ever thanks to our constantly increasing reliance on social media and technology.

Our friends are one click or message away 24-7, but so are our enemies and a whole world of strangers ready to comment “ur ugly go die” the second we post a selfie or sunset pic.

This is particularly hard to navigate for kids and teenagers, who use apps like Instagram and Snapchat more than any generation before them. It's uncharted waters, and the people in charge of regulating these apps often don't even know how to use them.

That's why Instagram introduced an AI bully-watch feature earlier this year, designed to pick up on potentially offensive or harmful comments and prompt would-be commenters to find something nicer to say.

The idea is that even a gentle callout – “Are you sure you want to post this?” – might encourage users to think twice about what they’re typing and the impact it might have.

Now, in order to target bullying at its source, the AI has turned its watchful eye to posts themselves.

As well as comments, captions will be scanned for any offensive content based on similar posts that have been reported for bullying in the past. Users can then choose to edit their captions, learn more about why their post could be harmful, or share the post anyway.
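The flow described above can be sketched in a few lines of code. This is a purely illustrative toy, assuming a keyword stand-in for Instagram's actual trained classifier; the function names and phrases are invented for the example, not part of any real API.

```python
# Hypothetical sketch of the caption-screening flow described above.
# All names here are illustrative assumptions, not Instagram's real system.

OFFENSIVE_PHRASES = {"ur ugly", "go die"}  # stand-in for a model trained on reported posts


def looks_offensive(caption: str) -> bool:
    """Stand-in for a classifier trained on captions previously reported for bullying."""
    text = caption.lower()
    return any(phrase in text for phrase in OFFENSIVE_PHRASES)


def submit_post(caption: str, share_anyway: bool = False) -> str:
    """Return a gentle prompt for flagged captions, unless the user shares anyway."""
    if looks_offensive(caption) and not share_anyway:
        # At this point the user can edit the caption, learn more,
        # or choose to share the post anyway.
        return "Are you sure you want to post this?"
    return "posted"


print(submit_post("ur ugly go die"))  # prompts the user to reconsider
print(submit_post("lovely sunset!"))  # posts normally
```

Note that, as the article says, the prompt is a nudge rather than a block: passing `share_anyway=True` still publishes the post.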

The idea is essentially robots battling trolls in a non-physical realm. Is this the future, or a medieval fairy tale?

You can read more about the new feature here.
