How Machines Learn: Understand AI by Understanding Your Mind

Strategy and Marketing Lead
Jan 26, 2024

One of the essential areas for professional development in business in the next decade and beyond will be supporting our collective understanding and adoption of artificial intelligence and machine learning. From the C-suite to the technology team to the employees who work at the front desk – each level of an organization will have the opportunity to benefit from the adoption of AI tools in their work to increase productivity and efficiency. However, there hasn’t been this much fear around introducing new technology into the business environment since computers first arrived in office buildings.

Here, we offer the start of a conversation about how we might unpack our fears and approach the most potent technological transformation of our lifetimes.

Metacognition means thinking about our own thinking and learning. Reflecting on how we humans think and learn is one of the best ways to understand how a computer "thinks" and learns. Comparing and contrasting what we know about human learning and machine learning can help us understand the current state of Generative AI and where things might go next. It can also help us ask better questions and consider possible applications for AI in our work.

When we think about Artificial Intelligence and the training of Generative AI, we generally talk about four categories: Machine Learning, Neural Networks, Natural Language Processing, and Robotics.

The father of artificial intelligence, Alan Turing, once speculated that the human mind may be a computing machine. Is the mind like a computer? In some ways, yes, and in many ways, no. 

Some of these AI domains mirror human cognition better than others. Understanding how the human mind works is a great starting point for understanding how AI works and will work in the future. 

There is so much about the human mind that we don’t know. The field of neuroscience has changed rapidly over the last 50 years, as has the field of Artificial Intelligence. As we continue to develop our understanding of how the brain works, so too will it influence the design and training of our AI systems. 

The Pattern Seeking Animal and Machine Learning

Machine learning is at the core of Generative AI – a departure from traditional programming methods. Machine learning is based on pattern recognition. Human beings are also pattern-seeking prediction machines. We take in all our individual experiences, attempt to discern patterns, and then apply those learned patterns as we move through the world to anticipate what we think will happen next. 

Pattern recognition is so strongly associated with human intelligence that we lean on it to determine IQ. IQ tests most often measure how well you can detect and discern patterns and then deduce what will come next. 

AI employs machine learning algorithms to detect patterns from vast datasets, refining its performance iteratively through trial and error.
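The idea of refining performance iteratively through trial and error can be made concrete with a toy sketch. The code below (invented for illustration, not any production system) guesses a weight, measures how wrong that guess is on example data, and nudges the weight to reduce the error, eventually recovering the hidden pattern.

```python
# A minimal sketch of learning a pattern by trial and error.
# The "model" is a single weight w in the guess y = w * x.

def learn_pattern(examples, steps=1000, lr=0.01):
    """Fit y = w * x by repeatedly shrinking the squared error."""
    w = 0.0  # start with no knowledge of the pattern
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y   # how wrong is the current guess?
            w -= lr * error * x # nudge the weight to be less wrong
    return w

# The hidden pattern in this data is y = 2x.
examples = [(1, 2), (2, 4), (3, 6), (4, 8)]
w = learn_pattern(examples)
print(round(w, 2))  # converges to 2.0
```

The loop never sees the rule "multiply by two"; it discovers it purely from the examples, which is the essential departure from traditional programming described above.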

Like Artificial Intelligence, humans depend on the quality of the information they take in. Fed bad inputs or incorrect information, we begin to learn inaccurate or biased patterns (aka garbage in, garbage out).

One of the great hopes for AI is that it can make decisions less biased and more objective than humans can. This idea has merit. Take memory, for example. Humans will add things to their memories that didn't exist there before, whereas a computer will not spontaneously rewrite what it has stored. But the aim of objectivity depends on the quality of the information the AI is trained on – just like the human mind is only as good as the quality of the information it has been trained on.
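"Garbage in, garbage out" is easy to demonstrate with a toy example (the numbers below are invented for illustration): the same simple learner gives very different answers depending on what it was fed.

```python
# A minimal sketch of "garbage in, garbage out": a trivial "model"
# that learns the typical value from its training data.

def estimate_average(observations):
    """Learn the typical value seen during training."""
    return sum(observations) / len(observations)

clean_data = [10, 11, 9, 10]
garbage_data = [10, 11, 9, 1000]  # one corrupted input

print(estimate_average(clean_data))    # 10.0
print(estimate_average(garbage_data))  # 257.5
```

The algorithm itself is identical in both cases; only the inputs changed. The same principle scales up: a model trained on skewed or incorrect data will faithfully reproduce those flaws.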

Neural Networks

In the human mind, some of the brain’s structure is innate. Meaning, we don’t come into the world as a blank slate. There are parts of our mind that are already there, organized in advance of experience. For example, the ability to acquire our first language is innate. No one has to teach us how to speak. We just do it. 

Reading, on the other hand, is a learned skill built on neural networks that develop in the mind with use. These networks grow stronger with use and weaker without it. This is the idea of neuroplasticity – meaning we can change the organization of the mind through experience.

For example, learning to read (which is fairly recent in human evolution) literally changes the organization of the brain. This new skill creates a new region in the brain known as the letterbox in the left hemisphere and shifts the processing of faces toward the brain's right hemisphere.

In contrast, AI's neural networks are meticulously crafted, disciplined, and honed through extensive datasets. While AI can emulate learning through the adjustment of connection weights, the innate self-evolving nature of the human brain currently eludes AI’s grasp.
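"Learning through the adjustment of connection weights" can be seen in miniature below. This toy sketch (a single artificial neuron, invented for illustration and nothing like a production network) strengthens or weakens each connection based on its mistakes until it has learned the logical OR function.

```python
# A single artificial neuron learning OR by adjusting its weights.

def step(x):
    """Fire (1) if the weighted input is positive, else stay silent (0)."""
    return 1 if x > 0 else 0

def train_neuron(examples, epochs=20, lr=0.1):
    w1 = w2 = bias = 0.0  # all connections start at zero strength
    for _ in range(epochs):
        for (a, b), target in examples:
            out = step(w1 * a + w2 * b + bias)
            err = target - out
            # strengthen or weaken each connection based on the error
            w1 += lr * err * a
            w2 += lr * err * b
            bias += lr * err
    return w1, w2, bias

# Truth table for logical OR.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, bias = train_neuron(examples)
for (a, b), target in examples:
    print((a, b), step(w1 * a + w2 * b + bias))
```

Nothing here resembles the self-organizing, experience-driven rewiring of a human brain; the "learning" is just arithmetic on weights, repeated over a dataset, which is the contrast the paragraph above draws.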

Natural Language Processing: Innate Mastery vs. Deliberate Learning

The acquisition of human language, a remarkable feat, unfolds organically during early childhood. This innate ability allows humans to grasp linguistic nuances effortlessly.

AI's training for natural language processing (NLP) involves a deliberate process of training models on massive datasets, which mirrors how a human might try to learn a second language in adulthood. 

The seamless acquisition of our first human language is a stark contrast to the meticulous training required to instill a new language in AI.
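The "deliberate training on datasets" behind NLP can be sketched with a toy next-word predictor (a classic bigram model; the corpus below is invented for illustration). Real language models are vastly larger, but the principle of extracting patterns from text is the same.

```python
# A toy bigram model: count which word follows which in the training
# text, then predict the most common continuation.

from collections import Counter, defaultdict

def train_bigrams(text):
    """'Train' on a corpus by counting word-to-next-word transitions."""
    counts = defaultdict(Counter)
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Predict the word most often seen after the given word."""
    if word not in counts:
        return None  # the model knows nothing it wasn't trained on
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat sat on the hat the cat ran"
model = train_bigrams(corpus)
print(predict_next(model, "cat"))  # "sat"
```

Unlike a child acquiring a first language, the model has no innate scaffolding at all: every scrap of its "knowledge" must be counted out of the training text, and it is helpless on any word it never saw.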

Robotics: Embodied Intelligence vs. Programmed Adaptability

One only has to observe the difficulty my Roomba has navigating a home filled with children's toys to recognize that robotics is perhaps the arena where the most prominent gap exists between human ability and artificial intelligence.

Humans adapt to the physical world through sensory experiences, emotions, and consciousness. We organically integrate with our environment.

Conversely, AI-driven robots lean on programmed adaptability. While they glean insights from real-world experiences, the innate understanding characterizing human cognition is still beyond our complete knowledge and, thus, our ability to translate into a machine. 

Let's look at some more examples.

The Next Frontier: Perception 

Currently, most generative AI systems like ChatGPT and Bard are focused on text, trained primarily on English-language data.

Human perception is guided by the sensory nervous system, allowing us to experience the world through sight, sound, touch, taste, and smell. 

For humans, visual processing is our most cognitively demanding sense. 

According to James Clear, the bestselling author of Atomic Habits, there are subconscious ways in which we sense stimuli, such as detecting a drop in temperature before a storm. The mind's receptors can pick up on various internal cues, like salt levels in the blood or the need for hydration when thirsty. Clear notes that vision is the most powerful among these sensory abilities, with approximately 10 million of our 11 million sensory receptors dedicated to it. Some experts even suggest half of the brain's resources are devoted to vision.

And our perception leads to experience. Perception seems straightforward on the surface, but it is incredibly complex. We still don’t understand how it works all that well. This is why security measures can thwart most algorithms by simply asking humans to select the image boxes of sidewalks or motorcycles. 

Take this example from the Yale psychologist Paul Bloom:

“Among other things, a good perceiver has to recover information about fundamentally static properties. But perception is in constant flux. Imagine watching a dog as it runs past you down the street. The image of the dog gets smaller and smaller on your retina as it retreats, but you shouldn’t conclude that the dog is shrinking. As the dog turns a corner, you move from an image of its tail and back legs to the full dog in profile–a radical change in shape–but you shouldn’t conclude that the dog itself is changing shape. When the dog darts under the shade of a tree, there is less light reflecting from its body, but you shouldn’t see the dog as changing color. Somehow, the brain takes shifting information and calculates that there is a singular, unchanging being that is moving through space and time. (To make this problem even harder, we need to be able to perceive objects in the world that actually do change, as when we watch someone blow up a balloon, twist clay into a statue, or paint a canvas.)

If we knew how the mind did all this, we could program computers to do the same, but we don’t, and so we can’t.” 

The next step in AI is what Google is attempting with their Gemini model: multimodality, with capabilities that haven't existed in computers before. The idea is that Gemini will be able to understand the world around us in the way that we do, including text, code, audio, image, and video.

Up until now, the conventional method for developing multimodal models involved training distinct components for various sensory inputs and then combining them to approximate certain functionalities. While these models may excel in tasks such as image description, they often struggle with more abstract and intricate forms of reasoning.

Gemini, on the other hand, was built to possess inherent multimodal capabilities right from the outset. Google pre-trained it to handle diverse modalities and then fine-tuned it using additional multimodal data to enhance its overall performance. This unique approach allows Gemini to comprehend and engage in reasoning across a wide spectrum of inputs, surpassing the capabilities of existing multimodal models.

If your organization wants help with professional development or management related to Artificial Intelligence, we are here to help! Let our technology consultants train your team to better understand AI and move past the fear of adopting it into their work. 
