Life Touches Life: People, Purpose, and the Future of Work
Editor’s note: Enjoy this excerpt of an article written by ILFI founder and board member Jason F. McLennan. You can find a link to the entire article at the bottom of the page.
Just because a machine can do a job better than a person, should we let it? People have been asking this question since the term “Artificial Intelligence” (A.I.) was coined by a handful of computer scientists in their proposal for a 1956 conference at Dartmouth. In 1976, computer scientist and MIT professor Joseph Weizenbaum published Computer Power and Human Reason, which focused a cautionary lens on the relatively new science and its potential negative impacts on society. Weizenbaum called for a societal consensus that machines not replace humans in work that benefits from empathy—a capacity that a computer would be unable to simulate. Drawing on the work of contemporaries who examined specific instances in which A.I. would be inappropriate, Weizenbaum named customer service representatives, therapists, eldercare workers, soldiers, judges, and police officers as roles that ought to be filled by humans. This critique was in turn attacked for its vagueness and deemed dangerous for threatening to slow the rate of innovation and usher in authoritarian government control.1 That debate continues in the background even today, but it has been overshadowed by the relentless march of technological innovation, which has thus far outpaced the ethical conversation. As A.I., supercomputers, and other technological advancements cross threshold after threshold, it is time that we, as a society, answer a fundamental question: What are people for?2
Technological innovation has already demonstrated its ability to replace human labor. Brute-force tasks, previously performed by humans and other animals, were the first to be automated with inventions like the steam engine, the steam shovel, and mechanized farming equipment. Repetitive tasks, the work of manufacturing the stuff of life, were the next to go. Then technology replaced humans in simple communicative roles—automated help desks being a prime example. Right now, we are watching technology take over far more complex functions, such as those of the cashier or the bank teller. And while we may have difficulty comprehending how technology could possibly replace more creative, complex work, A.I. promises to do just that. In a not-so-distant future where machines can replace humans at nearly every task, is there a line we should not cross? Have we already crossed it? Are there things that should be sacred? Is there work we should reserve for ourselves, even if robots and computers can do it more accurately, faster, or cheaper? What is our purpose, if not to have a purpose?
The idea of exercising more discernment in the adoption of new technologies is one I have found compelling for some time. The clarion call of neoliberalism has been for the deregulation and privatization of everything. Guardrails formerly in place to check the excesses of the free market have been systematically dismantled year after year in the United States, and state involvement has been decried as an impediment to innovation or, worse, branded as “socialist overreach.” In neoliberal theory, the free market allows for unbounded prosperity for all—a rising tide that lifts all boats—but in effect it concentrates wealth in the hands of a few and widens the socio-economic gap between the haves and the have-nots. This wealth concentration is, in fact, by design. In a 2014 article called “Ecological Ordnung,” I put forward the simple premise that technology for its own sake (or for the enrichment of a few) is not a reason to use it, and that any invention or new technology should pass a societal and ecological screening based on a democratic and egalitarian review of its impact on human and planetary health.
This article, in essence, picks up where that one left off by asking the follow-up questions: “How do we better regulate rapid technological and A.I. adoption without stifling innovation and progress? Are there ways we can use the arc of technological progress to wean humanity off work that we should not do, while making us more adaptable and better at the work we should do? How can we ensure technologies are, on a net basis, benefiting the whole—not just people, but the entirety of life on this planet—and not the few at the expense of the whole? Who should arbitrate this conversation, and by what criteria?” We should ask ourselves why our politicians and business leaders are so silent on these questions even though the answers are often self-evident.
While there is no shortage of dystopian predictions of A.I. gone horribly wrong, it is not all gloom and doom if we put up suitable, rigorous guardrails and take more care with our designs. These are our technologies; they should work for us, not diminish us. They should not strip us of meaningful work and a viable future simply for the benefit of those who hold a patent or own enough shares, and certainly not simply because they exist. The widespread adoption of technology must be more critically assessed going forward or, as we are now seeing, it will continue unchecked at great and perhaps terminal peril to humanity and most higher-order life on this planet. One person’s invention should not undermine entire livelihoods, human dignity, or ecological and sociological health. This critical assessment is the work of democracy. Therefore, to get at the question “What are people for?” from a holistic perspective, we must simultaneously ask ourselves, “What is technology for?”
Discernment and the Myth of Inevitability
We no longer exercise much discernment with regard to technology in the United States. And while other countries have shown somewhat better judgment, this is a global issue. In fact, a hallmark of our contemporary society is a total lack of discernment in many regards—particularly when there is money to be made. When conversations around A.I. and technology turn to questions of impact, we tend to shrug our shoulders and treat the future as inevitable rather than designable. Screening for social and environmental good is quickly labeled socialist and regressive. We delude ourselves with aphorisms like “All progress is good progress,” “The market will sort it out,” or “Automation achieves better results, and people will always find other work.” But when societal and ecological fabrics are as frayed as they are now, it becomes clear that progress for progress’ sake does not inevitably lead to good. Our societal track record since World War II has been abysmal. With all major living systems across the globe in decline and the effects of runaway climate change ever more evident, it is clear that the market’s only inclination is to make money; it certainly does not sort anything out for the betterment of nature—which, we perennially forget, includes ourselves. Nor does it magically level the playing field for the poor and middle classes of the world, who are given as little thought as other species. Left to their own devices, market forces without rules lead to degeneration, the lowest common denominator, massive inequities, and even death. In the context of work, automation, and better economic results, we need to define what we mean by “better.” If our only metric is GDP or stock market gains, our economic betterment portends our decline.
The language of inevitability, at the heart of neoliberal policies, casts us as puppets in the hands of fate, carrying out the work of our own demise like brainwashed suicide bombers. We forget that we have agency in democracies, that we build legacies, and that we can create meaningful change at massive scales when we wake up and act together for the benefit of the whole. We can apply the same ingenuity that invents dazzling new technologies to thoughtfully considering their limitations, their externalities, and the precautions that should govern their use. Then, as a society, we can design what is best for us as a collective whole. The idea that technological “progress,” for good or ill, is inevitable is merely a human idea. We can introduce other ideas that change the timbre of this discussion and the trajectory on which we find ourselves as a global society.
In a democratic, free society we have the ability and the responsibility to say how we want things to be for all of us, including our children and our grandchildren. We must recognize that capitalism and democracy are not one and the same, and that capitalism is but one tool in our democratic toolbox for creating the societies and world we want. It is possible to have vibrant capitalist systems within a functioning, active democracy that safeguards the world.3 Why, then, are we not asking ourselves, “How do we want to live? What role do we want technology to play in our world? What do we want our societies to look like? What are people for?”
Read Jason’s entire article in the Spring 2021 issue of Love + Regeneration, here.
1 https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence
2 Wendell Berry wrote a wonderful book of essays by this title. It has always captured my interest as a provocative question to consider in the context of work, play, and human activity.
3 Let’s not forget that capitalism also exists in fascist states, monarchies, and communist countries like China. Capitalism does not equal democracy, any more than putting economic and technological guardrails in place makes a country “communist,” the scare tactic so often deployed by neoliberal conservatives.