Harassment and offensive interactions are common on Twitter,
and the platform has continued to develop ways to limit such experiences for
its users. In a further effort to improve its detection systems,
Twitter is working on a new tool called Reply Filter.
The Reply Filter option works by preventing a user from
seeing replies that contain potentially harmful or violent language. It is assumed to rely on
the same algorithm as the 'offensive reply warnings' that Twitter
released around the 2020 US elections. Those prompts were found to
reduce offensive tweet replies by 30%.
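Twitter has not described how the filter works internally, but conceptually such a feature amounts to scoring each reply for toxicity and hiding those that cross a threshold. The sketch below is purely illustrative, not Twitter's implementation; the `toxicity_score` keyword check is a hypothetical stand-in for whatever trained classifier the platform actually uses.

```python
# Conceptual sketch only: Twitter has not published how Reply Filter works.
# Assumes a hypothetical toxicity scorer returning a value in [0, 1].

from dataclasses import dataclass

@dataclass
class Reply:
    author: str
    text: str

def toxicity_score(text: str) -> float:
    """Hypothetical stand-in for the model behind offensive-reply warnings.
    A real system would use a trained classifier; here we flag a few words."""
    flagged = {"idiot", "stupid", "hate"}
    words = text.lower().split()
    hits = sum(w.strip(".,!?") in flagged for w in words)
    return min(1.0, hits / max(len(words), 1) * 5)

def filter_replies(replies: list[Reply], threshold: float = 0.6) -> list[Reply]:
    """Keep only replies scoring below the threshold, mirroring how a
    reply filter might keep harmful replies out of a user's view."""
    return [r for r in replies if toxicity_score(r.text) < threshold]

if __name__ == "__main__":
    thread = [
        Reply("a", "Great point, thanks for sharing!"),
        Reply("b", "You are an idiot and I hate this."),
    ]
    for r in filter_replies(thread):
        print(f"@{r.author}: {r.text}")  # only the first reply survives
```

In practice the threshold would control how aggressive the filter is; a lower value hides more replies at the cost of occasionally suppressing benign ones.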
Considering the significant impact that reply warnings had on
the way users communicated on Twitter, the current experiment with the Reply
Filter could prove similarly valuable.
Twitter has not released an official statement about rolling out
the Reply Filter; however, the feature has been spotted by app researcher
Jane Manchun Wong, which suggests a rollout may be coming soon.