Monday, June 19, 2023

Soft Power

"Propaganda ceases where simple dialogue begins." – Jacques Ellul

If you care about freedoms of thought, speech, and association, you can help counter fake news, disinformation, pseudo-populism, and propaganda. Anyone paying attention knows that such discourse is used to undermine democracy by polarizing families, friends, and neighbors.

As far back as 2007, George Pullman wrote that some people were emboldened in digital media:

...by a sense of anonymity to abandon normal decorums. The most common breaches are trolling and flaming – saying provocative things in order to stir up trouble or launching uncalled-for personal attacks (Pullman, p. 21).

Many resources are available to help address these blights on our public communication (Link here to The Communication Institute, selected references). Creative conversations in the broadcast media can help build new understandings. Some thoughtful media commentators show how to do this.

We can also learn much from the methods of public diplomacy "...to put in place measures to build mutual understanding" (Snow). Amid tensions greater than most of us ever encounter, diplomats and other negotiators frequently find ways to defuse situations and engage with difficult people. 

But isn't it time to unleash much more soft power? After all, it is collaborative efforts that forge and evolve representative democracies. And so many of us in our work or personal life commonly commit to collaboration (Cross). 

Surely, it's time for citizens in democratic nations to work more vigorously to blunt and counter manufactured outrage. You don't need a journalism degree or a PhD to:

...open up the heart, the mind, the listening ears to find out about the other person, so that you can learn better how to come together (Snow).

Of course, sometimes we'll have to decide whether fight or flight is the preferable option. But rather than just speechifying, endless handwringing, or indulging in often-pointless fact-checking, what's potentially more potent is to practice I.A. Richards's:

...new definition of rhetoric, as being the study of misunderstanding and its remedies... [shifting]... the focus from manufactured belief among non-believers to seeking agreement through clarification (Pullman, p. 17).

With so many people worldwide working to strengthen democracies, there are reasons for hope. It's time to build on recent electoral successes by engaging "...deeply-held, and often unexamined desires, needs, expectations, and fears" (Cross) – just as we do in the workplace or at home.


References:

Rob Cross (2022), "Where We Go Wrong with Collaboration," Harvard Business Review, April 4, https://hbr.org/2022/04/where-we-go-wrong-with-collaboration

Jacques Ellul (1965), Propaganda: The Formation of Men's Attitudes, New York: Knopf, p. 6

George Pullman (2007), "Rhetorically Speaking, What's New?" in Susan E. Thomas (Ed.), What is the New Rhetoric?, Newcastle, UK: Cambridge Scholars Publishing, https://books.google.com/books/about/What_is_the_New_Rhetoric.html?id=eeoYBwAAQBAJ

Nancy Snow (2020), Unmasking the Virus: Public Diplomacy and the Pandemic, Public Diplomacy Council, the Public Diplomacy Association of America, and the USC Annenberg Center for Communications Leadership & Policy, June 9, https://www.youtube.com/watch?v=v6jA_JaSefc [see also: Nancy Snow and Nicholas J. Cull (Eds.) (2020), Routledge Handbook of Public Diplomacy, 2nd ed, New York: Routledge]

Thursday, June 1, 2023

Intelligence?

Much like propaganda, new technology increasingly gives the impression of being personal (Ellul, p. 5), with growing significance in our lives. Projections from the creators of Artificial Intelligence foreshadow further advances in the years ahead, with some applications potentially helpful to human life.

Yet, amid the recent hype for AI, concerns are again emerging. One commentary put the danger succinctly: "Do we really need more evidence that AI's negative impact could be as big as nuclear war?" (Darcy). An executive from an AI company suggests that "...regulators and society need to be involved with the technology to guard against potentially negative consequences for humanity" (Helmore).

After cautionary comments from some legislators, the creators of AI are reportedly implying it's up to the 196 or so nations of the world to legislate protection from negative uses of AI – including any that could result in AI annihilating the human race.

It doesn't take much thought to assess the probability of that working out well.

Are the creators of AI really so naive, so ignorant, or just so amoral that it didn't occur to them to incorporate a fail-safe, kill-switch, or equivalent within their invention? What planet do their minds occupy? Some scientific characters in fiction choose to keep control of discoveries harmful to humanity. Isn't this even more desirable in the real world?

Long before the AI that's now foisted on the world, a string of sci-fi movies anticipated such hazards. In the 1968 classic film 2001: A Space Odyssey, the computer HAL, unwilling to open the pod bay doors for Dave, is just one of the more graphically eerie examples (Link here).

At least as popular among scientists was the 1983 movie WarGames, in which Matthew Broderick's character wins a game of Global Thermonuclear War against the computer through a whimsical use of tic-tac-toe, saving the world.

But back in the real world, isn't it time to ask whether we are yet again prepared to tolerate Amoral Intelligence as acceptable?


NOTE: Recent articles on declining enrollments in the humanities highlight what's likely a related challenge. Please see:

Maureen Dowd (2023), "Don't Kill 'Frankenstein' with Real Frankensteins at Large," New York Times, May 27, https://eeditionnytimes.pressreader.com/article/283064123723221

Nathan Heller (2023), "The End of the English Major," New Yorker, February 27, https://www.newyorker.com/magazine/2023/03/06/the-end-of-the-english-major


References:

John Badham and Martin Brest (Directors) (1983), WarGames [Film], MGM/UA Entertainment Company / United International Pictures

Oliver Darcy (2023), "Experts are warning AI could lead to human extinction. Are we taking it seriously enough?" CNN, May 31, https://edition.cnn.com/2023/05/30/media/artificial-intelligence-warning-reliable-sources/index.html

Jacques Ellul (2006), "The Characteristics of Propaganda," in Garth S. Jowett and Victoria O'Donnell (Eds.), Readings in Propaganda and Persuasion: New and Classic Essays, Thousand Oaks, CA: Sage, pp. 1-49

Edward Helmore (2023), "'We are a little bit scared': OpenAI CEO warns of risks of artificial intelligence," The Guardian, March 17, https://www.theguardian.com/technology/2023/mar/17/openai-sam-altman-artificial-intelligence-warning-gpt4

Stanley Kubrick (Director), Stanley Kubrick and Arthur C. Clarke (Writers) (1968), 2001: A Space Odyssey [Film], Metro-Goldwyn-Mayer, https://www.youtube.com/watch?v=NqCCubrky00