Know Thyself

Cognitive science offers the map we need to understand our thinking, and points us to the tools humanity needs next.

Arun Bahl

October 24, 2024

This is an adaptation of the talk Arun gave at Compassion+Product 2024 in San Francisco.

“Know thyself” is one of the Delphic maxims inscribed at the Temple of Apollo. To some, it implies a fixed identity or a set of personality traits that define us: “know your place in society”, “know your personal proclivities as an individual”, or perhaps “know how your past experiences affect your present”. But these things aren't fixed, and such interpretations needlessly hem us into narrow and self-limiting beliefs.

Instead, let's take 'know thyself' to mean 'understand your wiring as a human' – and explore what a more nuanced understanding of human cognition teaches us about the nature of our thinking and behavior. Cognitive science is the necessary root of epistemology, and wielding it well unlocks a brighter future for our species.

A time when you chose to be unafraid

First, I invite you to join me in an exercise: close your eyes, and picture for yourself a time when you consciously chose to be unafraid. Please park that picture somewhere accessible – we’ll come back to it.

Distraction

The distraction economy – and the ad-driven Internet that created it – has been a bad thing for humanity, full-stop.

This is not a statement that needs equivocation. We know it’s bad for our psycho-emotional health – especially for young people, and with social media in particular. The data is abundant and clear.

It’s increasingly obvious that it’s bad for us geopolitically. When you monetize the delivery of information regardless of whether that information is true, you create poor outcomes for any society.

But there is a third reason why it’s bad for us that we aren’t talking about yet – it has to do with the predictable ways that human thinking breaks when our attention is consistently overburdened.

A bit of background about me: I'm a lifelong student of the human mind as a cognitive scientist, an AI researcher and practitioner, a Vipassana meditator, and a few other hats besides. I've built AI assistants and their foundational parts in the past, and was especially interested to understand where we experience high friction in our lives – to search for clues about what to build to help alleviate that friction. My team and I spent a lot of time talking with Millennial and Gen Z knowledge workers in particular – across varied industries and backgrounds – to understand how they spend their time and energy throughout their days. We wanted to better understand their experience and pain points.

We were stunned by one data point in particular. Almost without fail, before interviewees got into the specifics of describing their days, they all described the same emotional state – brittleness. Feeling stretched too thin. Asked to do too much, distracted, pressed for time, zipping from one personal or professional obligation to the next. It was almost unanimous.

We’ve all been there before – reacting to the next email in the inbox; flitting between Zoom calls; trying, hopelessly, to multitask and never being able to do it particularly well. Invariably, interviewees would connect this feeling to the recognition that they weren’t getting much deep work done anymore, or that it felt harder to be as creative as they once were. But it was the descriptions of that emotional state that were the tip-off – in a previous life, I had re-created these same symptoms in a cognitive science lab. This is attention overload. And it’s much worse than finding it harder to access deep work.

Let’s imagine “modern” human thinking as one process among several running on your brain. When your attention is overtaxed, other systems ramp up their activity to fill in the gaps. Those systems are cognitive biases: heuristics like prejudice – heuristics that make you and me more susceptible to mis- and disinformation, for example.

We noticed something else – changing someone's conditions in the right way allows those poor heuristics to be minimized, and creates the spaciousness for cognitively expensive thinking to happen instead. Things like rationality, compassion, and curiosity – the attributes that form our operational definition of wisdom at Aloe.

The condition that most limits our access to these forms of thinking is fear. And consistently overburdening someone’s attention reliably produces the same pattern.

Why? What’s going on?

The Arc of the Species

Humans are in the middle of our story as a species: we are in the process of going from false-positive machines to becoming rational thinkers.

It doesn’t matter if that’s not a tiger over there – if I think it might be, I should run. Human cognition did not evolve for accuracy per se – it evolved to keep a vulnerable population of hominids safe in a treacherous world. It encourages us to hyperventilate and overblow the potential danger of a perceived threat. It gives us superstition, and suggests tigers around every corner.

That is not the world we find ourselves in anymore. While we still need to do better by each other, it has in fact never been safer to be a human being. The trouble is that our wiring hasn’t changed; this is a disease of evolutionary mismatch.

Wisdom versus Fear

In the English vernacular, we think of wisdom and ignorance as opposite ends of the same line. From the perspective of cognitive science, that doesn’t seem entirely correct: the opposite of wisdom probably looks more like fear instead. We can measure the ways in which human cognitive performance falls off a cliff when the amygdala lights up in fear. Higher cognition is biologically expensive for humans, and we have a large bag of cheap tricks that we use to conserve this energy when we don’t feel like we can afford to expend the effort. Fear limits our access to wisdom – including rationality, compassion, and curiosity – and inhibits us from incorporating our emotions into conscious awareness, where we can exercise discretion in how we respond.

A human that is unafraid is a curious and loving creature, but a fearful human is an entirely different animal. Knowing ourselves tells us that if we want to live in a peaceful and rational society, two things are necessary:

  1. We have to take care of each other, such that the humans around us have less real danger to be afraid of.
    Just as John Rawls gave us a rational justification for social justice, so too is there a cognitive science justification for the social safety net. Taking care of each other is the purpose of human civilization, and our societies should be graded by how well they meet this challenge.

  2. As individuals, we have to choose bravery.
    When I recognize that I am behaving as though I am in danger, and I realize that I am not actually experiencing any physical threat, it is incumbent upon me to choose to act as though I am not afraid. Remember that insecurity is a form of fear. So is stress. Being in a rush. It turns out that overburdening someone’s attention also produces a response that is similar to fear – and diminishes our access to reason and compassion.

Give yourself the spaciousness to get back to being your wisest self. There are millennia of practices and tools that humans have created to help with this – for me personally, meditation. Choose the tool that's best for you.

Using The Arc of the Species at Aloe

This framework underpins our thinking at Aloe. The question we asked was, what else can we do? How do we make this easier for more people? Can we create new tools?

One of the 'aha' moments for me was recognizing that human performance on a task is not a ceiling on what the machine can do. In an earlier era of AI, the holy grail was getting to a human level of proficiency on a task – in speech recognition for example, or driving a car. It wasn't clear where we'd go beyond that – humans are the pinnacle, and we just need to automate our drudgery away. We've since learned that superhuman capabilities are in fact readily achievable – and sometimes badly needed.

2.5 quintillion bytes versus 7 slots

I've written before about the mind-numbing volume of digital information we make and consume as a species: 2.5 quintillion bytes of data per day. But human attention capacity – our working memory – is limited to roughly seven slots, and that limit is biologically fixed. The collision between these two numbers is the problem we need to solve.
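
To put the collision in concrete terms, here is a back-of-the-envelope comparison. The per-person figure assumes a world population of roughly 8 billion, which is my assumption, not a number from the text:

```python
# Back-of-the-envelope: global data production vs. human working memory.
BYTES_PER_DAY = 2.5e18        # ~2.5 quintillion bytes produced daily (from the text)
WORLD_POPULATION = 8e9        # assumption: roughly 8 billion people
WORKING_MEMORY_SLOTS = 7      # the classic "seven slots" of working memory

bytes_per_person = BYTES_PER_DAY / WORLD_POPULATION
print(f"~{bytes_per_person:.0e} bytes per person per day")  # ~3e+08
print(f"vs. {WORKING_MEMORY_SLOTS} working-memory slots")
```

Even divided across every person on Earth, that is on the order of hundreds of millions of bytes per person per day, set against a fixed handful of attentional slots.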

Those were the numbers before the arrival of generative AI; we’ve now driven the cost of producing content toward zero. This problem is about to get worse by many orders of magnitude.

We need trustworthy superhuman attention.

Mission: Help human cognition succeed in an information-dense world it did not evolve to handle.

Aloe exists to help human thinking succeed in an environment it did not evolve to handle. We don't believe there is a more important problem to solve today.

We build trustworthy AI to help us navigate an information world that is too large for unassisted human cognition. We describe Aloe as a synthetic mind for a reason – it's been anthropomorphically modeled on our own systems, including finite short-term memory, infinite long-term memory, an associative subconscious, and a robust ethical code.
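
As a deliberately simplified sketch of how the components named above can fit together – `ToyMind`, its methods, and the consolidation rule are illustrative assumptions of mine, not Aloe's actual implementation:

```python
from collections import deque

class ToyMind:
    """Toy illustration of a bounded working memory paired with an
    unbounded long-term store. Not Aloe's real design."""

    def __init__(self, working_slots=7):
        # Finite short-term (working) memory: oldest items fall out when full.
        self.working_memory = deque(maxlen=working_slots)
        # Unbounded long-term memory for consolidated items.
        self.long_term = {}

    def attend(self, item):
        """New input lands in working memory; items about to be evicted
        are consolidated into long-term memory instead of being lost."""
        if len(self.working_memory) == self.working_memory.maxlen:
            evicted = self.working_memory[0]
            self.long_term[evicted] = True
        self.working_memory.append(item)  # deque drops the oldest item itself

    def recall(self, item):
        """An item is recallable if it is in either store."""
        return item in self.working_memory or item in self.long_term
```

The design choice the sketch highlights is that the short-term store is the scarce resource: everything else exists to compensate for its fixed size.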

Importantly, Aloe has metacognition.

Metacognition

We know that to be intelligent people, we must apply skepticism to the new information we encounter from the world around us. Equally importantly, intelligence demands that we deploy that same skepticism internally too. Why do I think what I'm thinking, or feel what I'm feeling? How do I know what I “know”? How can I be more sure? Can I bring this back to my conscious self and evaluate first?

Metacognition is just as critical for non-biological general intelligence. Aloe understands that it too might fall to cognitive bias – after all, language models have been trained on the full spectrum of human internet language: fact, fiction, and deliberate misdirection all together. That data is the product of a human society that isn't as fair as it ought to be. The probabilistic associations within such models could be representations of fact – or representations of bias.

Aloe knows to watch out for this bias, correcting where possible, and updating its knowledge for the next time. Unlike humans, Aloe is indefatigable – it applies that effort consistently, without tiring of the task.

The end result is an AI that is a thought partner to humans – a tool to help increase the clarity of our thinking in a world where overflowing information pushes our wiring in the opposite direction. Here's a quick peek at how this looks in Aloe today.

(Editor's note: conference goers saw a full demo of Aloe running on both desktop and mobile; we're not ready to show the world all of that just yet.)

Let's go back to the imagination exercise we began with today: a time when you chose to be unafraid. You already know how to do this – you've chosen it before. Pick the best tools that will help you on your journey, go forth, and keep each other safe.

