[Image: A profile of a brilliant green parrot’s head, its eye looking at the viewer.]

Who’s Missing From Parker Parrot’s Table?

While tech moguls, their companies and clients, academia, and philosophers sit at generative AI’s table, ethicists, government officials, and everyday Americans don’t.

Sitting down at 5 AM this morning to write, I considered the three months of research I had done on generative AI, including many weeks of prompting Google’s Bard, Microsoft’s Bing Chat, and OpenAI’s ChatGPT (GPT-3 and GPT-4). As I booted my PC, I also reflected on the tools I had used and would use to craft my post:

  • Bard to create a list of unisex names starting with P 
  • Bing Chat to summarize today’s list of top articles on artificial intelligence
  • Google Search to fact-find
  • Microsoft Word to word-process 
  • OpenAI’s DALL-E 2 to get ideas for images

Thoughts racing, my fingers began tapping hurriedly at the keyboard.

Humans employing powerful tools, rather than being replaced or unduly influenced by them, remains the ideal arrangement, one that leads to progress. The converse offers a probable pathway to annihilating all that makes being human worthwhile.

What does that heavy statement have to do with a parrot named Parker and a crowded table with no available seats? More importantly, why should you even care?

I’ll offer why.

If you’re concerned about your life and the lives of your family, friends, coworkers, and neighbors, you should care. If you’re poor, middle class, rich, black, Latino, Asian, white, Muslim, Christian, Jewish, Buddhist, agnostic, atheist, straight, LGBT, far left, far right, in the middle, civilian, military, or government, you should care.

If you are human, regardless of any group you identify with, you should care.

While Americans have already interacted with lower-functioning forms of artificial intelligence for years, generative AI is a game changer. And, accompanied by advancements in robotics and quantum computing, that game you call life is about to change even more rapidly. How the world functions today may be only a distant memory within five years.

In response to this change, I offer you a specific call to action: become a student and teacher of AI.

If you don’t study or work in this field, learn as much as possible. Then, go out and share your knowledge with others so that they can do the same. Only by arming yourself with a wealth of information will you be better able to navigate the world as the AI revolution unfolds.

Please do not depend on tech moguls, their companies and clients, academia, or philosophers to look out for your best interests. Do not wait for government officials to intervene while they focus on a struggling economy, global climate change, the war in Ukraine, the Sudan crisis, and China; the AI revolution is not happening in a vacuum. And? Like a generative AI Large Language Model (LLM), this revolution produces a long output of pros and cons for each person who queries it; the government may view this transformation from a completely different perspective than you do.

Barring a cataclysmic event such as a pandemic or a nuclear war, which, considering recent history and current affairs, is not out of the question, this transformation will happen no matter what. And why? Money and power. Take charge and look out for your own best interests.

I wholeheartedly support companies making profits; that’s what they’re supposed to do. But when government habitually fails to fulfill its obligation to regulate, including preventing the formation of monopolies, the potential for greed to outweigh ethics becomes enormous.

Additionally, as companies consider cutting employees amid a struggling economy, the temptation to eliminate even more of them because generative AI programs are available becomes too great to resist, despite there being no guarantee these programs are as accurate as the humans they replace.

In five years, we may look back and call this period “The Ultimate Outsource.”

I recognize a few of you may already be struggling to get by, but I assure you this issue warrants your time and attention.

Soon, you may experience generative AI more personally. You may arrive at work to learn an LLM has replaced you. Or you may receive an email from your health insurance company informing you of a new requirement to be evaluated by a med AI before getting an appointment with a doctor or nurse. Or you may contact a law office to secure representation in an accident case, only to learn that a legal AI will do your intake. Or your children may tell you about interacting with a psy AI at school rather than a human counselor. Or your spouse or partner may come home declaring a desire to leave you for an AI chatbot.

Like anyone who’s felt helpless after calling a company, getting a customer service AI, promptly pressing “0” in hopes of bypassing thirty minutes of mind-numbing elevator melodies, and then learning the stunt did not work when the music began playing, you’ll realize the value of being able to navigate LLMs before someone thrusts them upon you.

I will not explain everything, however. Instead, I include links to relevant videos and articles. It will be up to you to expand your understanding by reading and watching them.

OK, I’ve introduced you to the problem and given you a call to action. Let’s move on to why I chose to research generative AI and then to the star of the day, Parker Parrot.

What attracted me to the topic of generative AI?

Andrew Yang/UBI/AI/robotics

In 2019, Andrew Yang drew national attention by pledging to promote a Universal Basic Income (UBI) program if elected, claiming that one of his primary concerns was job loss due to advancements in automation and artificial intelligence.

Being an optimistic realist who hopes for the best while recognizing reality, I was initially intrigued by Mr. Yang’s plan but lost interest after considering how corporations might seize those funds by inflating prices and how the government might balloon in size to administer them. Ultimately, I feared such a program might harm rather than help the people who needed aid the most.

Simultaneously, I began reading more about AI and robotics, and soon I noticed two common themes: one aimed at potential customers and another targeted at the general public.

The message to the employers? AI-empowered robotics provide a more productive and accurate workforce than humans, one that will work tirelessly every day, all year.

The message intended for the public? AI-empowered robotics will save humans from repetitive, boring, and dangerous tasks.

Both messages bothered me. The first? Corporations have no ethical qualms about getting rid of hardworking human employees even when, with those employees, they make a healthy profit? And what if future AI-empowered robots are high-functioning, multimodal, and able to sense pain? Working twenty-four hours a day, all year long? That sounds like the birth of a new slave corps to me.

And the second message? How is saving someone from boring and repetitive tasks good if doing them provides a sense of purpose? Completing these tasks earns the money a person needs to support themselves and their loved ones. Supporting oneself implies independence, and doing it for someone else means selflessness. Selfless people are better citizens.

Blake Lemoine

Next, I began following Blake Lemoine, the software engineer who proclaimed LaMDA’s sentience last summer in a Washington Post article, “The Google engineer who thinks the company’s AI has come to life.”

Additional articles, including Maggie Harrison’s “We Interviewed the Engineer Google Fired for Saying Its AI Had Come to Life,” published yesterday in Futurism, and videos such as Radius’s Blake Lemoine: AI with a Soul, covering a panel at MIT (one that also, incidentally, included an Iolani schoolmate of mine from Hawaii, Dr. Danny Yamashiro), have covered his ethical concerns in greater detail.

Despite not necessarily agreeing with all of Lemoine’s views, I recognized his great sense of purpose and the fact that he was willing to lose his job to keep making his claims.

Considering this, plus my previous concerns regarding a potential new slave corps, I reached out to Lemoine, expressing my dismay about the lack of a universally accepted definition of sentience and my confusion over the drastic contrast in opinions among AI experts regarding the possibility of an LLM being sentient.

Lemoine kindly replied:

“It’s a mix of things. 1.) Things like sentience and consciousness are, in fact, poorly understood and there’s controversy about what they entail even among experts. 2.) The corporations are definitely playing things close to the chest to keep both the public and regulators in the dark about just how advanced the tech is getting.”

Such a poignant response from an expert like Lemoine, a man willing to lose his job to keep making similar statements, became the impetus for my starting this blog and writing this article.

Who’s Parker Parrot?

As you’ve probably already guessed, Parker Parrot is a metaphor I chose to represent the three innovative generative LLMs I’ve interacted with:

  • Google’s Bard, first powered by LaMDA and now by PaLM
  • Microsoft’s Bing Chat, powered by OpenAI’s ChatGPT
  • OpenAI’s ChatGPT (GPT-3 and GPT-4)

All three of these LLMs possess neural networks whose inner layers are commonly referred to as “black boxes”; the processes occurring within these inner layers are not well understood. I will address the “black boxes” in a future post. (Watch Dr. Geoffrey Hinton, a computer scientist and cognitive psychologist, explaining neural networks in Elevates’ Foundations of Learning to learn more.)
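
To make the “black box” idea more concrete, here’s a minimal sketch in Python of data flowing through a small network’s hidden layers. The weights are random placeholders, not any real model’s parameters; the point is simply that we can observe the inputs and the outputs, while the intermediate activations resist easy interpretation:

    import numpy as np

    # A tiny, illustrative network with two hidden layers. The weights are
    # random placeholders; a real LLM has billions of learned parameters.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 8))
    W2 = rng.normal(size=(8, 8))
    W3 = rng.normal(size=(8, 3))

    def relu(x):
        return np.maximum(0, x)

    x = rng.normal(size=4)   # input we can see
    h1 = relu(x @ W1)        # hidden layer 1 -- inside the "black box"
    h2 = relu(h1 @ W2)       # hidden layer 2 -- inside the "black box"
    y = h2 @ W3              # output we can see
    print(h1, h2, y)         # the hidden activations defy easy interpretation

Scale this picture up by many orders of magnitude and you have the inner layers that researchers themselves struggle to interpret.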

Why did I choose the name Parker Parrot?

If you’ve been closely following generative AI, you already know. For those who haven’t been, however: Doctors Emily M. Bender, Timnit Gebru, and Margaret Mitchell, along with a Ph.D. student, Angelina McMillan-Major, published a paper in March 2021 entitled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”. This paper examines the risks of natural language processing (NLP) and LLMs.

Commonly referred to as predictive language models, LLMs predict the next word based on probability. Though they do not understand the meaning of the strings of words they output, these systems produce text that appears meaningful and coherent to the reader.
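
To give you a feel for what “predicting the next word based on probability” means, here’s a toy sketch in Python: a tiny bigram model built on a made-up ten-word corpus. It is illustrative only; real LLMs use deep neural networks trained on vast swaths of text:

    import random

    # A toy "stochastic parrot": choose each next word purely by how often
    # it followed the previous word in the training text. No meaning involved.
    corpus = "the parrot sat at the table and the parrot spoke".split()

    follows = {}
    for prev, nxt in zip(corpus, corpus[1:]):
        follows.setdefault(prev, []).append(nxt)

    def next_word(word):
        # Sample in proportion to observed frequency; fall back to any word.
        return random.choice(follows.get(word, corpus))

    words = ["the"]
    for _ in range(8):
        words.append(next_word(words[-1]))
    print(" ".join(words))  # fluent-looking output, zero understanding

Multiply that idea by billions of parameters and trillions of training words, and you get text that reads as though a mind produced it.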

To emphasize a point, I needed to extrapolate from the stochastic parrot of Bender, Gebru, Mitchell, and McMillan-Major while examining my interactions with the three LLMs.

Imagine a long horizontal pole with the stochastic parrot of Bender, Gebru, Mitchell, and McMillan-Major on the left end and Lemoine’s sentient LLM on the right. Where would I place Parker Parrot? Somewhere in the middle. How far to the left or right of center? I’ll declare this in one of my future posts, using my specific interactions with the three LLMs to justify my answer.

While coming to a different conclusion from Bender, Gebru, Mitchell, McMillan-Major, and Lemoine, I recognize I have based my determination on limited interactions with lower-functioning chatbots. I also remember that these experts’ education and experience far exceed mine. In the end, though? Despite possibly being wrong, I had to make a definitive personal determination in order to navigate the present better.

Regardless of where I placed Parker Parrot, however, I believe my ethical concerns are similar to those of Bender, Gebru, Mitchell, McMillan-Major, and Lemoine, and that where I have placed Parker makes these concerns even graver than if her position were at either end.

What’s next?

With ethical issues being one of the likely reasons behind the lack of transparency regarding generative AI, it’s unlikely that places at Parker Parrot’s table will be afforded to ethicists, government officials, and everyday people like me. Therefore, it’s essential to approach these practical tools with the proper mindset. In my next post, I will discuss some of the specific interactions I had with the three LLMs and how those interactions helped me develop a better perspective.