The First Sapiens is us: the ones whose history came to a quiet end in November 2022, at least according to the historian Yuval Noah Harari. The basis for his argument is that human history, all of it as we knew it, has become quantifiable and digitizable, storable somewhere in the vast ecology of AI clouds, never to be challenged for its voluminous content and its meticulous accuracy. So, if he is right and human history has ended, what are we left with? The now? Yes. The future? Yes! Can AI mine that future? No. Can it be trained to do so with its learning algorithms and billions of iterations? NO! Is the future different from the past? Yes. Exponentially so.
Let me explain.
We are heading into an era of human existence where a new form of intelligence will be needed, one far higher than anything Homo sapiens has known. It is the only form of intelligence that will save us and what remains of life on the planet. Unlike the minds that drove the First Sapiens to build better weapons, better shelter, and better living conditions, the new intelligence examines the ethos of these historic endeavors to determine whether and how they have diminished our planet’s resilience and her ability to ensure the continuity of life. This is Gaian science, an entirely different level of scientific exploration, one that has remained dormant and has yet to be defined, vetted, or validated. It is exponential in nature, and it defies all existing scientific methods. It is a new form of planetary science that must examine the degree to which the First Sapiens and his insatiable appetite for modern life have damaged our planet’s ecology. It is the study of ecological collapse at the planetary level, collapse that moves at exponential speed and that no algorithm or AI model created by the minds of the First Sapiens has the capacity to understand.
Technology solutionists today believe that the refinement of their innovations will help us revolutionize our future. That is the worldview of the optimistic, brilliant, yet limited thinking of the First Sapiens, who focuses on human-built systems to the detriment of everything else. The Second Sapiens focuses on that everything else: on the planetary and natural systems that make the continuity of life on the planet possible. He or she remains skeptical about how much digital technologies, in their current form and content, can contribute to stabilizing a world defined by mega-scale systems and exponential change.
We live in a binary universe. Every action has an equal and opposite reaction. Every action has a feedback loop that keeps our world in balance. But, due to his desires and attachments, the First Sapiens has ignored most of the feedback loops resulting from his actions, violating the binary nature of the universe that has existed since time immemorial. Nothing illustrates the binary nature of the First Sapiens better than the digital universe he has created. Information, the lifeblood of the digital age, is binary in its elemental form. Computers store and process data using bits and bytes that can exist in only two states: 0 or 1. This binary system has no innate intelligence; it merely allows for the efficient manipulation and transmission of information that makes all digital devices, from smartphones to supercomputers, operate the way they do. The algorithms that run the world, from the largest supply chains spanning the globe to the biggest financial trading platforms handling billions of dollars’ worth of transactions a day, are all encoded as binary sequences, representations of the binary nature of their creators.
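The claim that all digital information reduces to sequences of 0s and 1s can be made concrete with a minimal sketch (a hypothetical illustration, not drawn from any particular system):

```python
def to_bits(text: str) -> str:
    """Encode a string as the binary digits a computer actually stores.

    Each character becomes one or more bytes, and each byte is eight
    bits, each of which can be only 0 or 1. The encoding itself carries
    no meaning; interpretation is layered on top by software.
    """
    return " ".join(f"{byte:08b}" for byte in text.encode("utf-8"))

print(to_bits("Hi"))  # 01001000 01101001
```

The two groups of eight digits are simply the bytes for "H" and "i"; everything from a photograph to a trading algorithm is, at bottom, a longer string of the same two symbols.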
The digital age that has disrupted so much of our lives, for better or for worse, is fundamentally built on binary information storage and processing. It is the different combinations and iterations, the creative sequential and algorithmic programming, that give us the rich complexity of the world and its ever-expanding technology ecosystem as we know and experience it today. But what happens when the most advanced forms of AI perceive patterns or objects that are nonexistent and altogether inaccurate? What happens when the computer models designed to help us navigate the Anthropocene must do so with insufficient data from the higher-complexity science that is proprietary to that stage of development? Why do computer models still fail to accurately predict the annual rise in global temperature, the rate of ecological collapse, and how quickly polar ice is melting?
Based on past experience, there is little doubt that we can work out the bugs in our current systems of knowledge. We will find AI-based cures for all types of diseases by expediting the processing of genomic data and creating treatment plans tailored to each individual. But how do we identify the bugs in a system whose data is of a completely different order and remains emergent, changing unpredictably in real time? It took scientists over twenty years to map the human genome, which now makes it possible for AI to mine that data in a fraction of the time it took researchers to uncover it. What Earth-systems knowledge base can programmers use to establish reliable patterns that AI can mine so we can predict our future within reason? Unlike the ways AI gathers data today, will programmers be able to train their models to gather data from the future—data that doesn’t yet exist? AI could not create those tailored treatments derived from the genomic knowledge base were it not for the extraordinary worldwide commitment to fund and support the Human Genome Project for over two decades. Similarly, AI’s role in helping us resolve the problems of the Anthropocene epoch must be preceded by investments and long-term commitments intended to quantify the nature of the Anthropogenic-Gaian sciences.
Much of this new science has yet to be uncovered, and, due to its complex, highly interdisciplinary, and collaborative nature, the patterns of its emergence remain largely unpredictable. This poses a challenge for computer programmers attempting to train their data models to follow identifiable patterns when the science that creates those patterns has remained beyond quantification and far beyond the linear, binary grasp of First Sapiens intelligence. Given what remains unknown in the Anthropocene, would computers hallucinate answers the way ChatGPT and other generative AI models do today, and would such hallucinations create more chaos and misinformation, derailing the progress we have made in moving the needle on the Anthropogenic-Gaian science learning curve? If we acknowledge this as our new reality, then the question for technology solutionists becomes: How can we build predictive training models, in the form of machine learning, that help address Anthropocene issues from a knowledge base that has been greatly shaped, defined, and constrained by the deficient motivations of the First Sapiens?
I found two possible answers to that question in the work of two individuals who are not technology solutionists but who think in systems. The first came from Harari, who was quick to qualify what he meant by the end of human history based on his own perception of the evolutionary stages of Homo sapiens. In his 2017 book Homo Deus: A Brief History of Tomorrow, Harari argues that we have sidelined deo-centrism, the worship of an outer god (proprietary to stage 4 in the chart above), in favor of homo-centrism, the worship of ourselves (stage 5), and that the next stage of evolution will sideline homo-centrism in favor of data-centrism. In interviews and lectures given since the release of the various large language models, Harari has defended his views on the end of human history by arguing that the operating system of human culture is language. It is from language that we have created the human narratives of myth, law, art, and science, and these are the things that build civilizations. By gaining mastery of language, Harari believes, AI has acquired the master code to human civilization. In some sense, this development could represent what has long been feared and debated: the technological singularity, in which machine intelligence surpasses human intelligence and forever changes the trajectory of our future.
Data-centrism is an algorithmic expression of the fifth stage of development, part of the First Sapiens. Unlike the narratives depicted in science-fiction movies, large language models are not violent machines that subjugate human civilization through blood and gore. Rather, they do it through soft skills that affect the mind. They do it by telling alternative stories generated by algorithms that first and foremost seek to maximize profits and valuations for the companies that create them. The end of human history based on this data-centric narrative is far more dystopian than science fiction can imagine. It will be brought about by us, disintegrating from within. Generative AI, like its older kin social media, will continue to exploit our weaknesses, our biases, and our addictions. It will continue to assemble language that vastly expands our social and political polarization, undermines our mental health, and unravels our democracies. Without wise regulatory structures that see through the mirage of technology and learn how to transcend and use it wisely, human history could indeed end the way Harari defined it.
From the perspective of my work in human and social development, a data-centric humanity in a free-market economy that monetizes data is nothing more than an extension of the free-market ethos operating without government supervision. Before large language models, language spoke to different people and different cultures at all stages of development. We fought against the dark forces of the unhealthy sides of these stages to unshackle ourselves and move up to higher levels of psychological freedom. This is the nature of the evolutionary process that enables our spirit, our never-ending quest, to continue. It does not signify the end of human history; it is the transcendence of the First Sapiens, more specifically of stage five in First Sapiens development, which seeks to manipulate the world through its reductive sciences and algorithmic modeling.
For us to tap into Second Sapiens intelligence, there needs to be a global ecology of wise governance that is in tune with our environmental and digital challenges and not beholden to the values of the industrial age and neoliberal economics. The ideal candidates for this crucial transition will be those who see the simplicity beyond the algorithmic complexity of the digital world. Tristan Harris and Aza Raskin, the cofounders of the Center for Humane Technology, are well suited to steer that ship. In March 2023, they, along with more than 1,100 technologists, signed an open letter calling for a moratorium on the development of advanced AI systems. This is the specialized intelligence of the seventh stage of development, which works collaboratively with other seventh-stage thinkers from other areas of specialty to make governance of the future possible. Becoming part of that complex adaptive system will also transform the end of human history into informational units that serve the Anthropocene.
Harari’s narrative on the evolutionary sequence of Homo sapiens led me to search the unpublished archives of Clare Graves, the academic behind the model I use in my work. I wanted to explore his views on technology and the role it plays in our psychosocial evolutionary process. That is where I found the second answer to my question about the constraints we face in building predictive training models from a knowledge base constrained by the deficiencies of the First Sapiens. Unlike Harari, Graves was very cautious about predicting the precise details of our future, especially when that future entails our ascendance into Gaian, Second Sapiens intelligence, known for its exponentially higher degrees of neurological and psychological activation. That is the place from which we must examine the failures inherent in the reductive First Sapiens sciences that have contributed to ecological collapse.
Graves believed that while human development at stage seven in the table above will represent an exponential growth in intelligence, technology will be only a quantitative extension of the lower stages of development, not an exponential one. He made that prediction in the late 1970s. In examining how his hypothesis has withstood the test of time, one might think that advancements in artificial intelligence and machine learning that were beyond his grasp would have rendered his thinking obsolete, but that may not be the case. As complex as the digital world is today, with all its intricate iterations and the various creative programming that gives it form, the best it can do is mine knowledge of the human experience that is part of our past. Even with its predictive powers, it cannot give us a reliable, nonlinear representation of the future, especially if that future represents a partial reversal of the past and is defined by an exponentially higher level of psychosocial intelligence that seeks to preserve what remains of planetary life.
Generative AI and other forms of machine learning will continue to expand our intellectual rigor and raise our cognitive intelligence. They will even help us articulate some Second Sapiens concepts, but these improvements are quantitative and will come at a high cost; that is, the more we rely on machine learning, the more we will lose our uniquely human qualities. Virtues such as emotional and spiritual intelligence become diluted in an ecosystem designed for a data-centric society. We are becoming less and less equipped to handle uniquely human problems at a time when we most need to do so. Ultimately, when the time comes to transcend the ideology of data-centrism, we will realize that the idea of the technological singularity is a fallacy and that AI will reveal to us what Homo sapiens is by revealing the things it cannot do.
The more our Anthropogenic reality comes into focus, the more it will become necessary for us to reverse the corrosive aspects of our past and create new ways of being and thinking. The intelligence that defines those virtues is just beginning to emerge and will eventually serve as the new reservoir of knowledge in which machine intelligence is recognized for what it is—a utility subordinate to human wisdom that is needed now more than ever to help all life forms on the planet survive and thrive.
For a full developmental perspective on Second Sapiens, look for my book Second Sapiens: The Rise of the Planetary Mind and the Future of Humanity, available from your favorite bookseller, or click here to purchase it from Amazon.
Thank you for this great installment in your prolific contribution to the work of this sacred inquiry…
A few things come to light for me in the negative space of what you've shared here…
First is the Harari view of history — I do wonder what our aboriginal brothers and sisters, particularly those in Australia and the Kalahari, might have to say about how history ends, and indeed what it even is…
The other is a curiosity about the place of quantum computing in these potentially "trans binary" evolutionary unfoldings…
Trickle through we do, we little drops in the canyons of evolutionary stream…