Ours is a volatile period in which technology has spun out of control, raising the specter of technological determinism. Coined by sociologist and economist Thorstein Veblen in the early 1900s, technological determinism is the notion that autonomous technology determines the nature of societies. Jacques Ellul, the longtime French professor of history and sociology, argued similarly, warning that technology and its relentless pursuit of efficiency, or what he labeled “technique,” threatened human freedom and individuality. He remarked that technology shapes our thinking, reducing everything to problems solvable by technological means (the technological fix), potentially leading to totalitarian control. He reasoned that prioritizing efficiency could erode human values and make individuals subservient to technological systems. To this day, I sometimes experience the dependence on technology Ellul warned against when I pay my bills: instinctively, I reach for my calculator, only to realize it is a simple addition or subtraction I could easily do in my head. It is necessary to point out that Ellul was neither a technophobe nor a Luddite. Rudi Volti, professor emeritus of sociology at Pitzer College, California, reasoned that if technological determinism is true, then “we have become the servant of technology instead of its master.” At the alarming rate at which new technologies are unleashed on society with minimal technology assessment, I fear that technology may well have developed a mind of its own and that our ability to tame it has waned considerably.
Francois Hetman warned long ago that “uncontrolled introduction of new technologies is not an unmitigated blessing.” “The accumulation of technology,” he wrote, “seems to produce a rapidly extending array of negative side effects.” He added that “there is a growing awareness of disruptive impacts of technology which tend to be identified with causes of individual dissatisfaction and social disillusionment.” The effects of technology are not limited to those we expect; they also include serious unintended, delayed, and indirect harms.
Energy technologies and Artificial Intelligence are no exceptions. The first commercial nuclear power plant in the United States began operating in 1957, barely 15 years after physicist Enrico Fermi, an Italian immigrant to the United States, demonstrated a self-sustaining nuclear chain reaction involving fissile uranium-235 in his laboratory under the bleachers of Stagg Field at the University of Chicago on December 2, 1942.
Inadequate assessment of this technology, an assessment almost entirely limited to short-term economic gains, leaves us today with over 90,000 metric tons of highly radioactive irradiated waste, with an estimated 2,200 metric tons added annually by the 94 operational nuclear power reactors in the United States, all of two light water designs: 63 pressurized water reactors and 31 boiling water reactors. The nuclear energy industry has stagnated for years because the building of an underground repository to store irradiated fuel in the United States has been politicized. A 2009 Government Accountability Office (GAO) report projected that the United States will have about 153,000 metric tons of nuclear waste by 2055, as the establishment of an underground repository to bury this waste remains mired in politics. The world must also contend with nuclear accidents and the radioactive releases that kill and sicken people.
We are transitioning to the renewable energy technologies of solar and wind power in the hope of slowing climate change, which has been accelerated largely by fossil fuel combustion. Turning to renewable energy technologies is a step in the right direction, but we must not neglect a thorough assessment of these technologies. Even though renewable energy technologies are without doubt better for human health, animal species, and the environment than fossil fuels, their production and use may have unintended, indirect, and delayed consequences. For example, according to the U.S. Fish and Wildlife Service, wind farms are hazardous to bats and migrating birds, killing an estimated 888,000 bats and 573,000 birds in the United States. What else do wind farms and solar gardens do besides generating electricity and being environmentally friendlier than burning fossil fuels?
We need to ask the same question about Artificial Intelligence (AI), the state-of-the-art technology now spreading like wildfire through societies. I grade student papers written for students at the last minute by ChatGPT, and in my observation those papers lack the flesh and blood of the genuine article. I paraphrase Lewis Mumford, a historian, sociologist, and philosopher of technology, in wondering whether, by acquiescing to Artificial Intelligence, we are deliberately anesthetizing ourselves to the normal feelings, emotions, anxieties, and hopes that alone could bring us to our senses.
As promising as AI is, it carries potentially serious short-term harms. Geoffrey Hinton, a British-Canadian Professor Emeritus at the University of Toronto, warns that AI “has already created divisive echo chambers by offering people content that makes them indignant. It’s already being used by authoritarian governments for massive surveillance, and by cyber criminals for phishing attacks. In the near future, AI may be used to create terrible new viruses and horrendous lethal weapons that decide for themselves who to kill or maim.”
This is why technology assessment must be an essential part of the process of technological development and innovation. Technology assessment is the attempt, through objective studies, to discover what else a technology might do beyond its desired primary purpose, and what secondary and tertiary consequences could occur, before that technology is let loose in society. Joseph Coates defined technology assessment as “the systematic study of the effects on society that may occur when a technology is introduced, extended, or modified, with special emphasis on the impacts that are unintended, indirect, and delayed.” Technology assessment is not about putting limitations on the freedom to conduct research. It is important to note, for the benefit of skeptics, that the focus of technology assessment is not discovery but diffusion, not invention but application.
For reasons of ethics and impartiality, meaningful and objective technology assessment should not be left to the businesspeople, scientists, engineers, or administrators who push a technology. A multidisciplinary body is needed for objective and effective technology assessment. This was the case with the nonpartisan, highly relevant Congressional Office of Technology Assessment while it existed. Under pressure primarily from the corporate community, however, the agency was defunded and shuttered in 1995 during the Contract-with-America 104th Congress. Not only does technology assessment contribute enormously to informed policy formulation, it also offers the opportunity to detect, correct, or mitigate any undesirable consequences of a technology before its deployment in society.
I strongly believe that anthropogenic activities have exacerbated climate change. I believe that acquiescence to Artificial Intelligence puts us on a slippery slope, and that AI must not be given the liberty to run roughshod over society, its values, and its morals. I also believe that technology assessment must precede the introduction of energy and Artificial Intelligence technologies in a world in which technology has grown complex and beyond human scale. Such a concerted effort is needed to detect and eliminate their harmful, unintended effects. It is in our best interest not to become victims of our own creation.
Anthony Akubue, Professor of Environmental & Technological Studies, St. Cloud, MN, Tuesday, October 7, 2025



