Thursday, June 1, 2023

Intelligence?

Much like propaganda, new tech increasingly gives the impression of being personal (Ellul, p. 5), with growing significance in our lives. Projections from the creators of Artificial Intelligence foreshadow further advances in the years ahead, with some applications potentially helpful to human life.

Yet, following the recent hype over AI, concerns are again emerging. One commentary put the danger succinctly: "Do we really need more evidence that AI's negative impact could be as big as nuclear war?" (Darcy). An executive from an AI company suggests that "...regulators and society need to be involved with the technology to guard against potentially negative consequences for humanity" (Helmore).

After some legislators expressed cautionary comments, the creators of AI are reportedly implying that it's up to the 196 or so nations in the world to legislate protection from negative uses of AI, including any that could result in AI annihilating the human race.

It doesn't take much thought to assess the probability of that working out well.

Are the creators of AI really so naive, or ignorant, or just so amoral that it didn't occur to them to incorporate a fail-safe, kill-switch, or equivalent within their invention? What planet do their minds occupy? Some scientific characters in fiction choose to keep control of discoveries harmful to humanity. Isn't this even more desirable in the real world?

Long before the AI that's now foisted on the world, a string of sci-fi movies anticipated such hazards. In the 1968 classic film 2001: A Space Odyssey, the computer "HAL," unwilling to open the pod bay doors for Dave, is just one of the more graphically eerie examples (Kubrick).

At least as popular among scientists was the 1983 movie WarGames, in which the character played by Matthew Broderick uses a whimsical game of tic-tac-toe to convince the computer that Global Thermonuclear War cannot be won, thereby saving the world.

But back in the real world, isn't it time to ask whether we are yet again prepared to tolerate Amoral Intelligence?


NOTE: Recent articles on declining enrollments in the humanities highlight what's likely a related challenge. Please see:

Maureen Dowd (2023), "Don't Kill 'Frankenstein' with Real Frankensteins at Large," New York Times, May 27, https://eeditionnytimes.pressreader.com/article/283064123723221

Nathan Heller (2023), "The End of the English Major," New Yorker, February 27, https://www.newyorker.com/magazine/2023/03/06/the-end-of-the-english-major


References:

John Badham and Martin Brest (Directors) (1983), WarGames [Film], MGM/UA Entertainment Company / United International Pictures

Oliver Darcy (2023), "Experts are warning AI could lead to human extinction. Are we taking it seriously enough?" CNN, May 31, https://edition.cnn.com/2023/05/30/media/artificial-intelligence-warning-reliable-sources/index.html

Jacques Ellul (2006), "The Characteristics of Propaganda," in Garth S. Jowett and Victoria O'Donnell (Eds.), Readings in Propaganda and Persuasion: New and Classic Essays, Thousand Oaks, CA: Sage, pp. 1-49

Edward Helmore (2023), "'We are a little bit scared': OpenAI CEO warns of risks of artificial intelligence," The Guardian, March 17, https://www.theguardian.com/technology/2023/mar/17/openai-sam-altman-artificial-intelligence-warning-gpt4

Stanley Kubrick (Director) and Stanley Kubrick, Arthur C. Clarke (Writers) (1968), 2001: A Space Odyssey [Film], Metro-Goldwyn-Mayer, https://www.youtube.com/watch?v=NqCCubrky00
