Saturday, June 10, 2023

#161 / A Word From The Godfather Of A.I.


That is Geoffrey Hinton, pictured. Some call him "the godfather of A.I." I recommend that you read the Wikipedia entry linked to Hinton's name, to get a sense of his credentials.
If you click this link, you will be able to read a short article that outlines Hinton's growing trepidation about the rapid development of "artificial intelligence," and what that will mean for us. The "Now This News" website, from which I got the article and the picture of Hinton I have placed above, informs us that Hinton has recently quit his extremely high-paying job at Google, because he wants to "talk about AI safety issues without having to worry about how it interacts with Google's business. As long as I'm paid by Google," Hinton says, "I can't do that."
Hinton does not, I was happy to see, present A.I. as some kind of independently existing entity that is now seeking to displace human beings from their position in the world, on the basis of a claim that we will all soon be "outmoded." Some discussions of A.I. do seem to characterize A.I. in just this way, pretty much portraying A.I. as some kind of new "creature," a Frankenstein intelligence that will soon wreak havoc upon, and then displace, all that is human.
It may well be true that the technology that makes artificial intelligence work, invented by human beings like Hinton, can now accomplish certain things much more rapidly and comprehensively than any human being ever could. Still, the way I read what Hinton is saying, artificial intelligence is (and will remain) a human-developed "tool." It is not some separate and independent creature that will ultimately be able to dispose of human beings altogether. 

Hinton, in fact, is not so much worried about what "A.I." will do as about what "bad human actors" will do, utilizing the new capacities available to them through A.I.:
“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton told The New York Times in a highly publicized interview published May 1.

"Hinton says one of his primary concerns with AI is its potential for spreading misinformation. He noted that, given how simple it is to use AI for text and image generation, it’s become easier for people to create fake content. Hinton worries that we’ll eventually reach a point where people might “not be able to know what is true anymore (emphasis added).”

"Personalizing" A.I., thinking of A.I. as some new "entity," separate from ourselves, is a category error. "A.I." is not, in fact, independent of its human creators - and never will be. Human beings always need to look to themselves to figure out what to do about everything - and now we have to decide what to do about this new "tool" that can, literally, make it impossible for anyone to know what is "true," and what is "real," and what is not. 

We must know "the truth" if we want to be able to act in a way that can be successful in attaining the outcomes we want to achieve. If the result of using A.I. will be to make it impossible to know or discover the actual truth - and that is what "the godfather of A.I." is telling us may happen - then it seems rather clear what we need to do with A.I.
Get rid of it! 
