AI Mirage: Disentangling Fear and Perception in the Creative Industry

We all live within our own bubbles, surrounded by like-minded individuals and information streams we curate. When we step out of these bubbles, we often face realities vastly different from our own. This is what I felt in a room full of creative people and top automotive designers at a Car Design News event, where no amount of prosecco could hide the overwhelming sentiment: a shared fear within the design community about AI's impact on the profession.

Returning home, I wondered why academics seem excited about machine learning (ML) capabilities, while the creative industry appears anxious and uneasy. Is it fear of the unknown or perhaps because AI feels so human-like?

- Ah, distinctly I remember it was in the bleak December;

Machine learning has progressed rapidly in recent years, but it wasn't until the end of 2022 that AI became a buzzword in the design world. Innovations like DALL·E, GPT, Midjourney, and Stable Diffusion brought ML algorithms that take a text prompt and execute it into a finished image or text. The results are nothing short of magical, and they look human.

girl in a field wearing a hat and long dress, in the style of stormy seascapes, dark beige and light aquamarine, 32k

- “Who are you?” said the Caterpillar. This was not an encouraging opening for a conversation. Alice replied rather shyly, “I—I hardly know, Sir, just at present—at least I know who I was when I got up this morning, but I think I must have been changed several times since then.”

But let’s quickly get to the basics.

What is intelligence? Intelligence is a multifaceted and elusive concept that has long challenged psychologists, philosophers, and computer scientists. There is no generally agreed-upon definition of intelligence, but one broadly accepted aspect is that intelligence is not limited to a specific domain or task; rather, it encompasses a broad range of cognitive skills and abilities.

What is Artificial Intelligence?

"narrow AI” - a specialised Neural Network trained to perform a specific task. This has been around for many years, like a Convolutional Neural Networks for image recognition. What is a computer program able to identify what's looks like a cat and what's not.

AGI, or Artificial General Intelligence, refers to systems that demonstrate broad capabilities of intelligence, including reasoning, planning, and the ability to learn from experience, with these capabilities at or above human level (Bubeck et al., 2023).

Are we there yet? The widely acknowledged paper "Sparks of Artificial General Intelligence: Early Experiments with GPT-4" was largely misinterpreted by journalists, who spun it into a "Judgement Day is coming" narrative.

However, what the authors actually said in that paper was: "Moving beyond a focus on the automation of tasks and the potential for various dimensions of human intellect and resourcefulness to be performed by machines, we see promising possibilities ahead for extending human intellect and abilities with new kinds of human-AI interaction and collaboration. We expect rich opportunities for innovation and transformation of occupations with creative uses of AI technologies to support human agency and creativity and to enhance and extend human capabilities."

Finally, there is SAI - Strong Artificial Intelligence: a system that would reproduce the mental abilities, thought processes, and functions of the human brain. Today this is a philosophical rather than a practical notion. We are definitely not even close to being there.

The Parrot Effect:

Interestingly, we tend to perceive creatures like parrots as smarter than ravens or crows, simply because they mimic us. We apply the same logic to AI systems like ChatGPT, which can replicate specific writing styles, or Midjourney, which emulates a watercolor painting.

Light watercolor, Greek small town, bright, teal color tones, white background, dreamy, in the style of

The "humanising" of ML related terminology very often leads to its misunderstanding.

The process of ML "learning" has nothing in common with the human learning process. When a human receives new information and an explanation of its origin, they use cognition to build a logical and self-consistent understanding of the phenomenon. In the case of AI, "learning" is an algorithm that fine-tunes the parameters of a complex system. I will not describe the evolution of neural networks from GANs (Generative Adversarial Networks; Mirza and Osindero, 2014) to LLMs (Large Language Models; Melis et al., 2017). I will only mention that today we have ChatGPT-4 as the fittest survivor of that evolutionary process.
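To illustrate what that "fine-tuning" amounts to, here is a minimal sketch in plain Python: a single made-up parameter is nudged, step by step, in whatever direction reduces a measured error. The data and learning rate are invented for illustration; real networks do this over billions of parameters, but nothing resembling comprehension enters the loop.

```python
# A minimal sketch of what ML "learning" actually is: repeatedly nudging
# numeric parameters to reduce a measured error. No understanding involved.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # pairs (x, y) roughly following y = 2x
w = 0.0                                       # the single "parameter" being tuned
learning_rate = 0.01

for step in range(1000):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad                 # the whole of "learning": adjust w downhill

print(round(w, 2))  # ends up near 2.0, the slope that minimizes the error
```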

GPT (Generative Pre-trained Transformer) can predict the next word in your text. How? Leo Tolstoy's "War and Peace" contains a little more than 500,000 words. GPT-3 was trained on roughly several hundred thousand times as many words.
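As a toy illustration of the task itself, the sketch below "predicts" the next word by simply counting which word most often follows which in a tiny invented corpus. GPT does something vastly more sophisticated, a transformer trained on billions of words, but the underlying task of picking the likeliest continuation is the same.

```python
# A toy next-word predictor: count which word follows which in a tiny corpus,
# then return the most frequent continuation. Purely illustrative.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1      # tally what tends to come after each word

def predict_next(word):
    return follows[word].most_common(1)[0][0]  # most frequent continuation

print(predict_next("the"))  # -> "cat" (it followed "the" twice; "mat" and "fish" only once)
```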

GPT's accuracy keeps improving as the number of model parameters and the amount of training data grow (Brown et al., 2020). But will that make GPT more intelligent?

Me and my presumably intelligent friends. 

My kitchen aid can prepare bread dough far more quickly than I can. An engineering calculator is able to compute numbers at a speed I cannot match. Adobe Illustrator can create precise curves within seconds, a skill that takes me years to master.

However, does this make the kitchen aid, calculator, or Adobe Illustrator intelligent? I believe not.

A Midsummer Night's Dream.

https://www.instagram.com/p/CqL06v9vK5P/

Our brain serves as the primary instrument for our survival, functioning as a machine that continuously minimizes errors or "mistakes." Each mistake could potentially be life-threatening.

Our brain constantly constructs predictive models of our surroundings, which is why our cognitive processes can be described as controlled hallucinations. Our brain interprets signals from our senses to form these predictions (Seth, 2021).

GPT also engages in a form of hallucination, which scientists call "daydreaming." For instance, when asked to generate a research article, GPT often produces false citations, fabricating justifications for its assumptions and presenting them as facts. This uncontrolled hallucination is akin to the behavior of someone who is unable to distinguish reality from illusion.

Such individuals are often considered mentally ill.

Close-up of Viking King emerging from wet black mud

When we think about the future or reminisce about the past, we engage in a form of hallucination, as we cannot accurately replicate the exact sequence of events and must attempt to recreate them. GPT, however, is incapable of doing this.

It demonstrates no intelligence, merely making accurate predictions about the next word in a sentence (Hofstadter, 2022). And that's all it can do.

Will this heartless, unintelligent thing take your job?

It depends. In the next article, I will explain why.

NB: several parts of this text were edited with the help of ChatGPT-4.

PS: I'm still human. 

References:

Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y.T., Li, Y., Lundberg, S. and Nori, H., 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712.

Mirza, M., & Osindero, S. (2014). Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784.

Melis, G., Dyer, C., & Blunsom, P. (2017). On the state of the art of evaluation in neural language models. arXiv preprint arXiv:1707.05589.

Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. Advances in neural information processing systems, 33, 1877-1901.

Seth, A. (2021). Being you: A new science of consciousness. Penguin.

Hofstadter, D. (2022). Artificial neural networks today are not conscious, according to Douglas Hofstadter. The Economist. https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-today-are-not-conscious-according-to-douglas-hofstadter
