Instagram Is Bringing a New Update That Will Flag Hateful Comments
Instagram is in the final stages of developing a new feature that will make users think twice before posting hateful comments, part of its effort to curb cyberbullying on the platform.
The feature uses AI to notify users when a comment may be harmful or offensive. The user will see the message “Are you sure you want to post this?” and will have the option to delete or edit the comment before anyone else can see it.
Early tests of the feature found that some users were less likely to post harmful comments, Instagram chief Adam Mosseri wrote in a blog post.
You might have noticed that Gmail has a similar feature, which gives you up to 30 seconds to cancel an email after pressing Send.
Other social media platforms, such as Facebook and Twitter, have also taken measures against harmful posts, but there are no hard-and-fast rules about what these platforms are trying to restrict.
Monitoring harmful content on social media is a challenging job. Justin Patchin, co-director of the Cyberbullying Research Center, says he works with various platforms that are trying to solve this problem.
With a huge volume of content published every second, using AI to monitor posts is one way Instagram is attempting to keep up; Facebook and Twitter have tried similar approaches in the past. The main problem is that the algorithms often have a hard time interpreting slang and nuance in different languages.
Instagram’s approach to stopping cyberbullying differs slightly from that of other platforms: it warns users but ultimately lets them decide what to post.
“The transparency here is helpful to those who have wondered why these big social media companies aren’t doing more technologically to address bullying,” Patchin said.
Instagram may be the first big platform to implement such a feature, but a similar concept was created by Trisha Prabhu in 2013. The then-13-year-old built an app called ReThink, which also alerts users when a message may be offensive. The app was widely praised for its innovation, but to be truly effective, this kind of solution needs to be available on platforms with huge traffic.
Patchin says that social media platforms are moving in the right direction and getting closer to preventing harmful or explicit comments.
“Companies have devoted a lot of energy to refining these systems, and they’re getting better every year,” he said. “They do have a responsibility and obligation to lead the way and at least experiment with these kinds of technologies.”
Instagram plans to continue beefing up its safety features and will soon introduce a “restrict” option, which lets users filter content from specific accounts without blocking them. Mosseri wrote in the blog post that the company decided to add the feature after users said they worried that blocking accounts posting offensive comments on their page would lead to retaliation.