Know Thyself

Arun Bahl
October 24, 2024
This is an adaptation of the talk Arun gave at Compassion+Product 2024 in San Francisco.
“Know thyself” is one of the Delphic maxims inscribed at the Temple of Apollo. To some, it implies a fixed identity or a set of personality traits that define us: “know your place in society”, “know your personal proclivities as an individual”, or perhaps “know how your past experiences affect your present”. But these things aren't fixed, and such interpretations needlessly hem us into narrow and self-limiting beliefs.
Instead, let's take 'know thyself' to mean 'understand your wiring as a human' – and explore what a more nuanced understanding of human cognition teaches us about the nature of our thinking and behavior. Cognitive science is the necessary root of epistemology, and wielding it well unlocks a brighter future for our species.

First, I invite you to join me in an exercise: close your eyes, and picture for yourself a time when you consciously chose to be unafraid. Please park that picture somewhere accessible – we’ll come back to it.
Distraction

The distraction economy – and the ad-driven Internet that created it – has been a bad thing for humanity, full stop.
This is not a claim that needs qualification. We know it’s bad for our psycho-emotional health – especially for young people, and with social media in particular. The data is abundant and clear. It’s increasingly obvious that it’s bad for us geopolitically: when we monetize the delivery of information regardless of whether that information is true, we create poor outcomes for any society.
But there is a third reason why it’s bad for us, one we aren’t talking about yet: human thinking breaks in predictable ways when our attention is consistently overburdened.
A bit of background about me: I'm a lifelong student of the human mind – as a cognitive scientist, an AI researcher and practitioner, a Vipassana meditator, and a wearer of a few other hats besides. I've built AI assistants and their foundational parts, and I was especially interested in understanding where we experience high friction in our lives – searching for clues about what to build to help alleviate that friction. My team and I spent a lot of time talking with Millennial and Gen-Z knowledge workers in particular – across varied industries and backgrounds – to understand how they spend their time and energy throughout their days. We wanted to better understand their experience and pain points.
We were stunned by one data point in particular. Almost without fail, before interviewees got into the specifics of describing their activities, they all described the same emotional state – brittleness. Feeling stretched too thin, asked to do too much, distracted, pressed for time, and zipping from one personal or professional obligation to the next. It was almost unanimous.
We’ve all been there – reacting to the next email in the inbox; flitting between Zoom calls; trying, hopelessly, to multitask and never doing it particularly well. Invariably, interviewees would connect this feeling to the recognition that they aren't getting much deep work done anymore, or that it feels harder to be as creative as they once were. But it was the consistent and familiar description of that emotional state that was the tip-off – in a previous life, I had re-created these same symptoms in a cognitive science lab. This is attention overload. And it’s much worse than finding it harder to power through your to-do list.
Let’s imagine “modern” human thinking as one process among several running on your brain. When your attention is overtaxed, other systems ramp up their activity to fill in the gaps. Those systems are cognitive biases: cheap heuristics like prejudice – heuristics that make you and me more susceptible to mis- and disinformation, for example.

We noticed something else: changing someone's conditions in the right way minimizes those poor heuristics and creates the spaciousness for cognitively expensive thinking to happen instead. Things like rationality, compassion, and curiosity – the attributes that taken together constitute our operational definition of wisdom at Aloe. The environment that most limits our access to this kind of thinking is fear, and consistently overburdening someone's attention reliably produces the same pattern.
Why? What’s going on?
The Arc of the Species

Humans are in the middle of our story as a species: we are in the process of going from false-positive machines to becoming rational thinkers.
It doesn’t matter if that’s not a tiger over there – if I think it might be, I should run. Human cognition did not evolve for accuracy per se – it evolved to keep a vulnerable population of hominids safe in a treacherous natural world. It encourages us to hyperventilate and to exaggerate the potential danger of a perceived threat. It's prone to superstition, suggesting tigers around every corner.
That is not the world we find ourselves in anymore. While we still need to do better by each other, it has in fact never been safer to be a human being. The trouble is that our wiring hasn’t changed; this is a disease of evolutionary mismatch.
Wisdom versus Fear

In the English vernacular, we think of wisdom and ignorance as opposite ends of the same line. From the perspective of cognitive science, that doesn’t seem entirely correct: the opposite of wisdom probably looks more like fear instead. We can measure the ways in which human cognitive performance falls off a cliff when the amygdala lights up in fear. Higher cognition is biologically expensive for humans, and we have a large bag of cheap tricks that we use to conserve this energy when we don't feel like we can afford to expend the effort. Fear limits our access to wisdom – including rationality, compassion, and curiosity – and inhibits our ability to bring our emotions into conscious awareness, where we can exercise discretion in how we choose to respond.
A human that is unafraid is a curious and loving creature. A fearful human is an entirely different animal.
Knowing ourselves and our wiring shows us that if we want to live in a peaceful and rational society, two things are necessary:
First, we have an obligation to take care of each other, such that the humans around us have less real danger to be afraid of.
Just as John Rawls gave us a rational justification for social justice, so too is there a cognitive science justification for the social safety net. It is the purpose of human civilization, and our societies should be graded by how well they meet this challenge.
Second, as individuals, we must choose bravery.
When I recognize that I am behaving as though I am in danger, and I realize that I am not actually experiencing any physical threat, it is incumbent upon me to choose to act as though I am not afraid. Remember that insecurity is a form of fear. So is stress. So is being in a rush. It turns out that overburdening someone’s attention also produces a response similar to fear – and diminishes our access to reason and compassion.
Give yourself the spaciousness to get back to being your wisest self. There are millennia of practices and tools that humans have created to help with this – for me personally, meditation. Choose the tool that's best for you.
Using The Arc of the Species at Aloe
This framework underpins our thinking at Aloe. The question we asked was, what else can we do? How do we make this easier for more people? Can we create new tools to increase our odds of success?
One of the 'aha' moments for me was recognizing that human performance on a task is not a ceiling on what the machine can do. In an earlier era of AI, the holy grail was reaching human-level proficiency on a task – in speech recognition, for example, or driving a car. It wasn't clear where we'd go beyond that – as though humans were the pinnacle, and we just needed to automate our drudgery away. We've since learned that superhuman capabilities are in fact readily achievable – and sometimes badly needed.

I've written before about the mind-numbing volume of digital information we make and consume as a species: 2.5 quintillion bytes of data per day. But human attention capacity – our working memory – is limited to roughly seven slots, and is biologically fixed. The collision between these two numbers is the problem we need to solve.
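To make that collision concrete, here's a back-of-envelope sketch. The daily-data figure is the one cited above; the world-population estimate and the assumption that a working-memory "slot" holds on the order of a sentence (~100 bytes) are rough illustrative assumptions, not measurements:

```python
# Back-of-envelope: global information production vs. human working memory.
DAILY_BYTES = 2.5e18           # ~2.5 quintillion bytes produced per day (cited above)
WORLD_POPULATION = 8.0e9       # rough assumption: ~8 billion people
WORKING_MEMORY_SLOTS = 7       # the classic "seven slots" estimate
BYTES_PER_SLOT = 100           # illustrative assumption: one slot ~ one short sentence

# Even split evenly, each person's daily share is hundreds of megabytes.
per_capita_bytes = DAILY_BYTES / WORLD_POPULATION
print(f"Per person, per day: ~{per_capita_bytes / 1e6:.0f} MB")

# Attention at any instant spans only a few hundred bytes of that flood.
attention_span_bytes = WORKING_MEMORY_SLOTS * BYTES_PER_SLOT
print(f"Daily per-capita data vs. attention span: "
      f"~{per_capita_bytes / attention_span_bytes:.0e}x")
```

However you tune the assumptions, the ratio stays astronomical – which is the point of the comparison.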
Those were the numbers before the arrival of generative AI; we’ve now driven the cost of producing content toward zero. This problem is about to get worse by many orders of magnitude.
We need trustworthy superhuman attention.

Aloe exists to help human thinking succeed in an environment it did not evolve to handle. We don't believe there is a more important problem to solve today.
We build trustworthy AI to help us navigate an information world that is too large for unassisted human cognition. We describe Aloe as a synthetic mind for a reason – it's been anthropomorphically modeled on our own systems, including finite short-term memory, infinite long-term memory, an associative subconscious, and a robust ethical code.
Importantly, Aloe has metacognition.
Metacognition

We know that to be intelligent people, we must apply skepticism to the information we encounter in the world around us. Equally importantly, intelligence demands that we use that same skepticism internally, too. Why do I think what I'm thinking, or feel what I'm feeling? How do I know what I “know”, and can I be more sure? Can I bring this into my conscious experience to evaluate it, and use that to make a better decision?
Metacognition is just as crucial for artificial general intelligence. Aloe understands that it too is susceptible to its own form of cognitive bias – after all, autoregressive language models have been trained on the full spectrum of human internet language: fact, fiction, and deliberate misdirection all together. That data is a mirror to a human society that isn't as fair as it ought to be. The probabilistic associations within such models could be representations of fact – or representations of bias.
Aloe is an AI built to understand and watch out for the flaws that might be present in its own thinking – to not assume those probabilistic associations are representative of fact, to use reason and tools to correct itself when possible, and to update its knowledge and mental models based on what it has learned, in preparation for the next time. Unlike humans, Aloe is indefatigable – it applies this vigilance consistently, without tiring. And Aloe will continue to improve as we bring additional forms of reasoning online for it to employ.
The end result is an AI that can think critically – a tool that can be a trustworthy thought partner to humans, and help increase the clarity of our thinking in a world where overflowing information pushes our wiring in the opposite direction. Here's a quick peek at how this looks in Aloe today.
(Editor's note: conference goers saw a full demo of Aloe running on both desktop and mobile; we're not ready to show the world all of that just yet.)
Let's go back to the imagination exercise we began with today: a time when you chose to be unafraid. You already know how to do this – you've chosen it before. Pick the best tools that will help you on your journey, go forth, and keep each other safe.