Artificial intelligence committed to the fight against hate speech
2020 looks promising for managing conflict on the web.
Indeed, researchers have decided to use tools based on artificial intelligence not only to detect hate speech but also to quarantine it, preventing it from reaching as many people as possible.
In this way it should be more effective to combat these violent remarks without being accused of censorship.
This proposal comes from researchers at the University of Cambridge who, after conducting a study on social media, decided to act by applying their expertise to managing the spread of hate speech. Objectively, the forms hate speech takes depend on the country and the region of the world in which the messages are issued, which means that simply blocking keywords or suspending publications might not be truly effective. Furthermore, some multimedia content or graphic descriptions do not need to contain slurs or insults of an ethnic nature to be offensive or hurtful.
Against this background, a linguistics expert and a computer engineer decided to propose a solution in a paper published in the journal Ethics and Information Technology, in which they suggest borrowing techniques from cybersecurity to better target messages that should not appear on the web, without resorting to censorship. Using a database of violent threats and insults, the University of Cambridge researchers are building algorithms that determine a set of elements formally characterizing hate speech.
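The paper's implementation is not detailed in this article, but conceptually this step resembles training a supervised text classifier on a labeled corpus of threatening and non-threatening messages. The minimal sketch below, written with a generic scikit-learn pipeline and a tiny hypothetical dataset, illustrates what scoring a message for hate severity could look like; none of the names or data come from the Cambridge system.

```python
# Hedged sketch: a generic supervised text classifier, not the Cambridge system.
# Assumes a hypothetical labeled corpus of (message, is_hateful) pairs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data; a real system would use a large annotated corpus.
messages = [
    "you people should all disappear",   # labeled hateful (illustrative only)
    "see you at lunch tomorrow",         # labeled benign
]
labels = [1, 0]

# Word n-grams go beyond single keywords, echoing the researchers' point that
# whole sentence structures matter more than isolated words.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3)),
    LogisticRegression(),
)
model.fit(messages, labels)

def hate_severity(text: str) -> float:
    """Return a 0..1 score, later shown to the recipient as a 'barometer'."""
    return float(model.predict_proba([text])[0][1])
```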
Thus, the more references to hatred a message carries, the more readily it will be quarantined. Recipients then receive a warning accompanied by a barometer indicating how severe the message is in terms of hate speech, along with the sender's name and a prompt allowing them either to read the content of the message or to delete it without reading it.
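The quarantine-and-warn workflow described above can be sketched as follows; the threshold, score scale, and prompt wording are assumptions made for illustration, not details from the publication.

```python
# Hedged sketch of the described workflow: hold back a message whose hate
# score crosses a threshold and let the recipient choose to read or delete it.
from dataclasses import dataclass

QUARANTINE_THRESHOLD = 0.5  # assumed cut-off; the article gives no exact value

@dataclass
class Message:
    sender: str
    body: str
    severity: float  # the "barometer" value, e.g. from hate_severity() above

def deliver(msg: Message) -> str:
    """Return the message body, or a note explaining that it was discarded."""
    if msg.severity < QUARANTINE_THRESHOLD:
        return msg.body  # low score: show the message normally
    # High score: warn the recipient and leave the final decision to them.
    answer = input(
        f"Message from {msg.sender} quarantined "
        f"(hate-speech barometer: {msg.severity:.0%}). Read it? [y/N] "
    )
    if answer.strip().lower() == "y":
        return msg.body
    return "(message deleted without being read)"

# Example: a message scored at 0.82 triggers the warning prompt.
print(deliver(Message(sender="anonymous123", body="...", severity=0.82)))
```

As the quoted researcher stresses, the system only flags and holds the message; the recipient keeps the last word on whether to open it.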
In this regard, Tomalin, one of the experts involved in building this algorithm, stated: "Our system will flag when you need to be careful, but the last word is up to you. It doesn't stop people from posting or seeing what they like, but it gives control back to those who are inundated with hate speech." He later added: "Thanks to automated quarantines that provide advice on the strength of hateful content, we can empower those who are the target of the hate speech that poisons our online exchanges."
However, the system currently works with only about 60% accuracy, and several things still have to be put in place to improve on that level. "Identifying individual keywords is not enough; we look at whole sentence structures and beyond. Sociolinguistic information from user profiles and publishing history can all help improve the classification process," argued another expert.