Sunday, October 29, 2023

#302 / Everything Goes Boom?


Vanity Fair ran an article in its October 2023 issue titled, "Artificial Intelligence May Be Humanity’s Most Ingenious Invention—And Its Last?" The article was written by Nick Bilton. As you might guess from that title, Bilton's article is not exactly "upbeat." Here are the last couple of paragraphs:

Numerous government studies published over the past 78 years, since the first atomic bomb was detonated in New Mexico, have estimated that a full-scale nuclear war would kill hundreds of millions of people, and the subsequent nuclear winter, a theorized period of prolonged cold and darkness caused by the fallout from the blasts, could kill hundreds of millions more. At most, a few billion people might die, but there is no scenario where our entire species would disappear. The same is true for biological weapons and chemical warfare, which could kill thousands of people. Guns, bombs, lasers, disease, and famine.

Artificial intelligence, however, is arguably the first technology that could wipe out everyone on the planet. Do your own math: Do you really think we’re going to make it another 6,000 years? Another 200 generations? As Kedrosky put it, if we continue unmitigated across this razor blade, the odds are simply inevitable: “Given enough time, and enough AI coin flips, eventually everything goes boom.”

The question circulating around Silicon Valley isn’t if such a scenario is worth it, even with a 1 percent chance of annihilation, but rather, if it is really such a bad thing if we build a machine that changes human life as we know it.

"Changing human life as we know it," I'd argue, is something a little bit different from "everything goes boom." However, setting aside that rather contradictory ending to Bilton's article, should human beings be designing and deploying a technology that could "supersede" the human race, and make it, essentially, superfluous? One of the claims made in the article is that in creating Artificial Intelligence, “We’re creating God.” 

Good idea? Or a not-so-good idea? It's Sunday, so let me remind you of a Bible story I have written about before. It involves a Golden Calf. You may or may not have paid much attention to that story, but you probably know about it. In short, the Creator of the universe is not happy when human beings create things themselves, and then worship their own creations. Penalties apply!

To return to my beloved "Two Worlds Hypothesis," World #1 is the "World That God Created," and we ultimately live in that world. Picture Earth from space. As far as we know, that's where we are, and while we can study the stars, and contemplate the idea that there are "many worlds" like ours (completely unproven at this point, of course), whether or not such worlds, or multiple "universes," exist is rather beside the point, in terms of the practicalities. In any "ultimate" sense, we exist in that "World That God Created." Planet Earth is that world. That is the world upon which we ultimately rely for everything. We are supposed to pay attention to this fact, and when we don't - when we fail to remember that the "World That God Made," the "World of Nature," what some people might call "the environment," is all we ultimately have - we experience bad results.

What the story of the Golden Calf is meant to remind us of is this: when we start thinking that "our" world, the "Human World," is more important than the "World That God Made," we will be punished for our failure to accord primacy to the Creator, and to the Creation.

Such, it seems to me, is what we are talking about when we talk about "Artificial Intelligence," and the kind of world that it portends.

To understand the implications of Artificial Intelligence, I'd keep focused on that word: "Artificial."

Artificial? Not good! We are picking a loser, there! As the purveyors of what many people think of as their favorite beverage would put it, our best choice will always be the "Real Thing."
