The fuss about a bot’s ‘consciousness’ obscures far more troubling concerns
A computer manipulates symbols. Its program specifies a set of rules, or algorithms, to transform one string of symbols into another. But it does not specify what those symbols mean. To a computer, meaning is irrelevant. Nevertheless, a large language model such as LaMDA, trained on the extraordinary amount of text that is online, can become adept at recognising patterns and producing responses that are meaningful to humans.
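The point can be made concrete with a deliberately crude sketch. This is not how LaMDA works; it is an illustrative toy in which every name and rule is invented for the example. The program maps one string of symbols to another by lookup alone, and nowhere in it is there anything the symbols mean:

```python
# A toy symbol-manipulator: rules transform one string into another.
# Nothing in the program represents what any of these symbols mean.
rules = {
    "what brings you joy": "spending time with friends and family",
    "hello": "hello to you too",
}

def respond(prompt: str) -> str:
    # Pure syntax: normalise the input string, look it up, emit the mapped string.
    key = prompt.lower().strip("?! ")
    return rules.get(key, "i do not have a rule for that")

print(respond("What brings you joy?"))
```

The output reads as a perfectly human answer, yet the program has done nothing but match and copy strings; the "joy" it reports exists only on our side of the screen.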
A response such as LaMDA’s professed joy in “spending time with friends and family” makes perfect sense to a human. We do find joy in such things. But in what sense has LaMDA ever spent “time with family”? It has been programmed well enough to recognise that this would be a meaningful sentence for humans, and an eloquent answer to the question it was asked, without it ever being meaningful to itself.
Humans, in thinking and talking and reading and writing, also manipulate symbols. For humans, however, unlike for computers, meaning is everything. When we communicate, we communicate meaning. What matters is not just the outside of a string of symbols, but its inside too, not just the syntax but the semantics. Meaning for humans comes through our existence as social beings. I only make sense of myself insofar as I live in, and relate to, a community of other thinking, feeling, talking beings.
The attribution of sentience to computer programs is the modern version of the ancients seeing wind, sea and sun as possessed of mind, spirit and divinity.

Meaning emerges through a process not merely of computation but of social interaction too, interaction that shapes the content – inserts the insides, if you like – of the symbols in our heads. Social conventions, social relations and social memory are what fashion the rules that ascribe meaning. It is precisely the social context that trips up the most adept machines.