Last year, Twitter launched a test of warning prompts on potentially harmful tweet replies. It is now relaunching these prompts in a new format, along with additional explanations of why a reply was flagged.
When a reply contains potentially harmful or offensive language, the prompt asks users to review it and displays three large buttons below: you can choose to tweet the reply as is, edit it, or delete it.
These prompts are similar to Instagram's automated alerts for potentially offensive comments, which that company rolled out in July 2019. While such prompts are designed primarily to curb intentional offense, they have also helped reduce misinterpretation online. A research report published by Facebook last year found that misinterpretation plays a significant role in fueling angst and argument among Facebook users.
This finding suggests that simply prompting users to re-evaluate their language could prevent many online disputes. The approach may be even more effective on Twitter, where the 280-character limit can sometimes lead to unintended messaging.
The prompt is simple, but it gives users a moment to reflect on their actions, which could reduce online aggression. It is one of Twitter's attempts to foster better, more positive interactions across its platform.
The format of this new prompt test differs considerably from the earlier, simpler version. The feature is already set to roll out to iOS users.