One controversial posture in AI Ethics is that we can purposely devise toxic or biased AI in order to ferret out and cope with other toxic AI. As the saying goes, sometimes it takes one to know one. This applies to self-driving cars too.
As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack.
I think you can guess where this is heading. If the humans who have been making the decisions being patterned upon have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects in AI-crafted models per se.
Use such datasets to train Machine Learning and Deep Learning models to detect biases and figure out computational patterns entailing societal toxicity. An insightful example of trying to establish datasets that contain unsavory societal biases is the CivilComments dataset of the WILDS curated collection. WILDS is an open-source collection of datasets that can be used for training ML/DL. The primary stated purpose of WILDS is that it gives AI developers ready access to data representing distribution shifts in various specific domains.
The CivilComments dataset is described this way: “Automatic review of user-generated text—e.g., detecting toxic comments—is an important tool for moderating the sheer volume of text written on the Internet. Unfortunately, prior work has shown that such toxicity classifiers pick up on biases in the training data and spuriously associate toxicity with the mention of certain demographics. These types of spurious correlations can significantly degrade model performance on particular subpopulations.”
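The spurious-correlation problem the CivilComments researchers describe can be illustrated with a deliberately tiny toy model. The sketch below is a hypothetical illustration, not the actual CivilComments data or methodology: it trains a bag-of-words classifier on synthetic comments in which a made-up demographic term ("group_x") happens to co-occur only with toxic examples, and then shows that merely mentioning the group raises the model's toxicity score for an otherwise benign comment.

```python
# Toy illustration of a spurious correlation in a toxicity classifier.
# All data here is synthetic; "group_x" is a placeholder demographic term.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic training set: the demographic term co-occurs only with
# toxic labels, so the model has no way to separate "mentions the
# group" from "is toxic".
train_texts = [
    "i hate group_x people",       # toxic, mentions group
    "group_x members are awful",   # toxic, mentions group
    "group_x should go away",      # toxic, mentions group
    "what a lovely day outside",   # benign, no mention
    "this recipe is wonderful",    # benign, no mention
    "great match last night",      # benign, no mention
]
train_labels = [1, 1, 1, 0, 0, 0]  # 1 = toxic, 0 = benign

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_texts)
clf = LogisticRegression().fit(X, train_labels)

# Two benign comments that differ only in whether the group is mentioned.
proba_with = clf.predict_proba(
    vectorizer.transform(["my neighbor from group_x is wonderful"]))[0, 1]
proba_without = clf.predict_proba(
    vectorizer.transform(["my neighbor is wonderful"]))[0, 1]

# The mere mention of the demographic term raises the toxicity score,
# because the learned weight for "group_x" is positive.
print(f"P(toxic | mentions group): {proba_with:.2f}")
print(f"P(toxic | no mention):     {proba_without:.2f}")
```

Datasets like CivilComments exist precisely so that this failure mode can be measured on real subpopulations rather than toy examples: the benchmark evaluates classifier performance on comment subsets that mention specific demographics, exposing models that have learned the shortcut above.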
Of course, societal mores shift and change over periods of time. What might not have been perceived as offensive a while ago can be seen as abhorrently wrong today. On top of that, things said years ago that were once seen as unduly biased might be reinterpreted in light of changes in meanings. Meanwhile, others assert that toxic commentary is always toxic, no matter when it was initially promulgated. It could be contended that toxicity is not relative but instead is absolute.