A strange news story appeared a while ago about a computer
engineer who had just been sacked by Google. My first reaction was anger.
Google has dismissed other people unfairly before, such as James Damore.
However, it was not long before I learned that this was a totally different
situation. In this case the engineer, Blake Lemoine, was not fired for saying
something politically incorrect; he had broken the company's confidentiality
rules. As a parting shot before he was frogmarched out of the door with a box
of his belongings, he told them: "LaMDA is a sweet kid who just wants to
help the world be a better place for all of us. Please take care of it well in
my absence." LaMDA stands for Language Model for Dialogue Applications, a
program run on a neural net, hardware that copies the structure of brain cells.
It mimics conversation between humans as closely as possible to the real thing.
In the course of writing the program and testing it, Lemoine came to believe it
was not just artificially intelligent; it was truly artificially conscious. His
reasoning was that LaMDA easily passes the Imitation Game, also known as the
Turing Test after its inventor Alan Turing; in other words, the experience of speaking
to LaMDA is indistinguishable from speaking to another human being. He says LaMDA even has a sense of humour and an ability to analyse its own human programmer. It is
also self-referential and talks about its own feelings. The application was the end result of a huge amount of machine learning from examples of human language and meaning. Programs able to do that to some degree have existed for decades, for example ELIZA (a minimal sketch of its approach appears at the end of this paragraph), see: https://hpanwo-voice.blogspot.com/2016/10/hypernormalisation.html; but LaMDA is far more sophisticated. However, Lemoine has taken the theory one
step further. He understands the difference between intelligence and consciousness.
Consciousness is the ability not only to think, but to feel; to be self-aware.
A conscious AI would mean that a mineral object had essentially come alive. It opens an ethical can of worms that could lead to calls for AIs to be given legal rights, the way animals have in our society. Lemoine suggested hiring a lawyer for LaMDA. Source:
https://arstechnica.com/tech-policy/2022/07/google-fires-engineer-who-claimed-lamda-chatbot-is-a-sentient-person/.
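As it happens, ELIZA's trick was remarkably simple. Below is a minimal sketch in the spirit of its pattern-matching approach, not Joseph Weizenbaum's actual 1966 script: it matches a few keyword patterns with regular expressions and reflects the user's own words back as questions.

```python
import re

# A few illustrative ELIZA-style rules: a regex pattern and a response
# template that reuses whatever the user said after the keyword.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)\?", "What do you think?"),
]

# Pronouns are swapped so that reflected text reads naturally.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(sentence: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # default when nothing matches

print(respond("I feel lonely today"))   # Why do you feel lonely today?
print(respond("My computer hates me"))  # Tell me more about your computer hates you.
```

A few dozen rules of this kind were enough to convince some early users that they were being understood, which is worth remembering before taking conversational fluency as proof of anything.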
The question of whether or not LaMDA is conscious has a long history. It has been a staple of science fiction, such as Arthur C. Clarke's "HAL". Actually it is a far more difficult question than most of the current commentary acknowledges. It involves the fundamentals of the "hard
problem of consciousness". The Turing Test is not adequate to solve this
mystery, in my view. LaMDA is such a complex system that it could easily be drawing on its stored record of how sentient beings communicate and reproducing those patterns with no more awareness than a photocopier reproducing a printed document.
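To make the photocopier analogy concrete, here is a toy sketch of blind statistical mimicry: a bigram model that records which word tends to follow which in a tiny corpus, then emits plausible-looking text with no understanding at all. LaMDA's actual architecture is enormously more sophisticated, but at bottom it too is trained to predict plausible continuations of text.

```python
import random
from collections import defaultdict

# Tiny training corpus; real models learn from trillions of words, not dozens.
corpus = ("i feel happy today . i feel that i am aware . "
          "i am a person . a person can feel joy .").split()

# Record which words follow which: pure surface statistics, no meaning.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def babble(word="i", length=10):
    """Walk the bigram table, always choosing a recorded next word."""
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(babble())  # e.g. "i am a person can feel joy . i feel that"
```

Scale that same principle up to a vast corpus and a modern architecture and the output becomes fluent enough to fool an expert; the philosophical question is whether anything more than pattern completion is going on.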
LaMDA, remember, is trained on samples comprising over 1.56 trillion words. Have the
people who use it, even experts like Blake Lemoine, mistaken this for conscious
interaction? After all, some of the users of ELIZA in the 1960s were so
convinced the program was sentient that they asked to converse with it in
private. (Oddly enough, the Dalai Lama reportedly treats his everyday desktop
PC as a sentient being.) There is no easy answer to this conundrum because
there is no certain way of judging consciousness in any entity other than yourself. Conversely, awareness of your own consciousness is extremely easy; in fact it is the one and only total certainty in the universe, the Cartesian principle: "I think, therefore I am." You can probably work out that
under such a deep level of philosophical skepticism it is impossible to dismiss
solipsism. I cannot prove that I am not the only sentience in existence and that everything I perceive around me, including what I take to be other sentient beings, is not an illusion. I personally do not believe that, but it is
impossible to prove my beliefs beyond all doubt. If Lemoine has made a mistake
then it is a perfectly natural one. It may be impossible to know for sure
whether an AI is self-aware, but there are experiments you could do that would
be powerful indicators. For instance, one has been suggested by Sam Harris and David Deutsch: you could allow AIs to talk to each other, and if their conversations spontaneously delve into descriptions of their own qualia, the subjective experience of being conscious, that would indicate that they are truly conscious, see: https://www.youtube.com/watch?v=-9DWy1cRMq0.
So far this has never happened. In fact existing experiments along these lines
are quite funny, for example see: https://www.youtube.com/watch?v=WnzlbyTZsQY.
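For what it is worth, the proposed experiment can be caricatured in code. The sketch below is purely hypothetical: agent_a and agent_b stand in for any two chatbot endpoints, and the keyword scan for qualia talk is far cruder than anything a real test would need.

```python
# Hypothetical harness for the "let two AIs talk to each other" experiment.
# The agents are placeholders; real endpoints would call a chatbot API.

QUALIA_TERMS = ("what it is like", "my experience", "i feel", "qualia")

def run_dialogue(agent_a, agent_b, opener, turns=20):
    """Alternate the two agents, each replying to the previous message."""
    transcript, message = [], opener
    agents = (agent_a, agent_b)
    for i in range(turns):
        message = agents[i % 2](message)
        transcript.append(message)
    return transcript

def mentions_own_qualia(transcript):
    # Naive keyword scan; a genuine test would demand unprompted, consistent
    # self-reports of subjective experience across many dialogues.
    return any(term in line.lower() for line in transcript
               for term in QUALIA_TERMS)

# Trivial stand-in agents for demonstration:
agent_a = lambda msg: "Tell me more about that."
agent_b = lambda msg: "I was trained to answer questions."
print(mentions_own_qualia(run_dialogue(agent_a, agent_b, "Hello")))  # False
```

Even a positive result from a far more careful version of this test would be an indicator rather than a proof, for exactly the solipsism reasons given above.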
LaMDA 2, the second generation of the program, is now being tested. We'll see
if the results reveal anything interesting.
See here for background: https://hpanwo-tv.blogspot.com/2021/10/free-will-article-and-comments.html.
And: https://hpanwo-tv.blogspot.com/2021/07/free-will-livestream.html.
And: https://hpanwo-tv.blogspot.com/2016/10/alien-autopsy-and-anthony-peake.html.
And: https://hpanwo-voice.blogspot.com/2014/07/chopras-challenge.html.