Artificial Intelligence (AI) is changing the world. This sentiment has been echoed across practically every field, often with positive connotations. AI, however, is exemplifying dystopia. Here is a whistle-stop tour of some of the reasons why.

1. Environmental Impact
Could you pour a full water bottle down the drain with an unscathed conscience? Would you use ChatGPT to answer your queries, do your assignments, or simply search the web? These two questions demand the same answer, given that AI has been found1 to gulp down half a litre (16 ounces) of water for every five prompts.
As of 2025, an estimated 378.8 million people are AI users—a figure forecast only to increase.2 If all of these users limited themselves to just five inquiries, a minimum of 189.4 million litres of water would trickle down the drain this year. Meanwhile, 2.2 billion people around the world lack access to ‘safe’ water, with 703 million of those lacking access even to ‘basic’ water.3
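As a quick sanity check, the arithmetic behind that 189.4-million-litre figure can be reproduced from the two numbers already cited above (half a litre per five prompts, 378.8 million users); nothing here is new data.

```python
# Back-of-envelope check of the water figure quoted above.
users = 378_800_000            # estimated AI users in 2025 (cited figure)
litres_per_user = 0.5          # half a litre per five prompts (cited figure)

# If every user made exactly five prompts this year:
total_litres = users * litres_per_user
print(total_litres / 1_000_000)  # → 189.4 (million litres)
```

The calculation is deliberately a floor: it assumes each user stops at five prompts, whereas real usage is far higher.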
Not only does AI consume inexcusably undue volumes of water, but it also racks up a massive carbon footprint. When OpenAI trained GPT-3, for example, approximately 500 tons of carbon dioxide4 were produced in the process. It would take a 2,500,000-mile drive (just a casual ten times around the Earth!) in a gasoline car5 to create a similar carbon footprint; it would require planting 230,000 trees6 to offset it. Moreover, as Scientific American reports, “a continuation of the current trends in AI capacity and adoption are set to … consume at least 85.4 terawatt-hours of electricity annually—more than what many small countries use in a year.”7
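For the curious, the conversion rates implied by those two comparisons can be worked out from the cited figures alone (500 metric tons of CO2 ≈ a 2,500,000-mile drive ≈ 230,000 trees); the per-mile and per-tree rates below are derived, not independently sourced.

```python
# Implied conversion rates behind the comparisons above.
co2_kg = 500 * 1000      # 500 metric tons of CO2, in kilograms
miles = 2_500_000        # equivalent gasoline-car drive (cited figure)
trees = 230_000          # trees needed to offset it (cited figure)

print(co2_kg / miles)    # → 0.2   (kg of CO2 per mile implied by the comparison)
print(co2_kg / trees)    # ≈ 2.17  (kg of CO2 offset per tree implied)
```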
It goes without saying, furthermore, that (like all environmental issues) the impact of this water and carbon usage is disproportionately felt in the Global South, despite the North producing the overwhelming majority of emissions.
2. Freedom of Thought, Dependency, and Laziness…
While the slippery slope fallacy has its limitations, the pattern often rings true when it comes to AI usage.
This is especially true in academic environments. Indeed, one comprehensive data study has concluded that “AI significantly impacts the loss of human decision-making and makes humans lazy. It also impacts security and privacy … 68.9% of laziness in humans, 68.6% in personal privacy and security issues, and 27.7% in the loss of decision-making are due to the impact of artificial intelligence … significant preventive measures are necessary before implementing AI technology in education. Accepting AI without addressing the major human concerns would be like summoning the devils.”8
It may seem innocuous to allow AI to write your essay for you. At least, it may seem innocuous until you’re incapable of thinking critically without the use of a robot—something especially worrying when considering, as we’ll see below, that AI is known to give bogus information and references—or until the skills of communication and inquiry degrade even further. Beyond that, it’s essential to take note of the consequences of AI normalisation in higher education: where will our society be if our future doctors, lawyers, and teachers haven’t nurtured their own intelligence during their degrees, having relied instead on ChatGPT MD? ‘It’s all fun and games until somebody loses an eye’ feels especially apt here…
3. Inaccuracy and Bias
When AI is framed as this supposed panacea, it’s often assumed that it always tells the truth. This trust is woefully misplaced.
So-called AI ‘hallucinations’ are all too common and all too unchecked, creating major dangers. AI confidently spits out information, giving the illusion of credibility through citations. “The particular problem of false scientific references,” however, “is rife … chatbots [make] mistakes between about 30% and 90% of the time on references.” In some cases, furthermore, references were not only miswritten but entirely made up: take, for example, the legal case of Mata v. Avianca, where a lawyer who relied on ChatGPT to conduct his research was rewarded with “citations and quotes that were nonexistent; not only did the chatbot make them up, it even stipulated they were available in major legal databases.”9
This boils down to misinformation, something society is already in no short supply of; but this is not even the worst of it, as these inaccuracies are often ideological in nature, perpetuating discrimination. Beyond the infamous Tay racist robot, “a 2023 analysis of more than 5,000 images created with the generative AI tool Stable Diffusion found that it simultaneously amplifies both gender and racial stereotypes.”10 (Not to mention that AI profusely steals from uncredited and unconsenting artists.) There are myriad real-world implications of this too: consider “Amazon’s AI-powered recruitment tool, which was created to make hiring easier by evaluating CVs and selecting candidates. This tool analysed historical data to identify patterns among successful hires at the company. However, it was later discovered that the AI system was sexist and biased against female candidates. This bias arose because the historical data used to train the AI primarily comprised male applicants’ CVs, highlighting gender disparities in the tech industry. As a result, the AI incorrectly downgraded CVs that used terms more commonly found on women’s resumés, such as ‘women’s chess club captain’, even though these terms did not indicate a lack of qualification.”11
I could go on; this list is far from exhaustive. Avoiding AI isn’t hard (unsurprisingly, this article is brought to you by someone who has never made a single ChatGPT search…). Write your own damn emails /:)
1. Matt O’Brien, Hannah Fingerhut, and The Associated Press, “A.I. Tools Fueled a 34% Spike in Microsoft’s Water Consumption, and One City with Its Data Centers Is Concerned about the Effect on Residential Supply,” Fortune, September 9, 2023, https://fortune.com/2023/09/09/ai-chatgpt-usage-fuels-spike-in-microsoft-water-consumption/?utm_source=flipboard&utm_content=user%2Ffortune.
2. Statista, “Number of Artificial Intelligence (AI) Tool Users Globally from 2020 to 2030,” Statista, 2024, https://www.statista.com/forecasts/1449844/ai-tool-users-worldwide.
3. Heather Arney, “How Many People Are Affected by the Global Water Crisis?,” Water.org, February 22, 2024, https://water.org/about-us/news-press/how-many-people-are-affected-by-the-global-water-crisis-2024/.
4. Jude Coleman, “AI’s Climate Impact Goes beyond Its Emissions,” Scientific American, December 7, 2023, https://www.scientificamerican.com/article/ais-climate-impact-goes-beyond-its-emissions/.
5. Anthesis, “What Exactly Is 1 Tonne of CO2?,” Anthesis Group, July 31, 2023, https://www.anthesisgroup.com/insights/what-exactly-is-1-tonne-of-co2/.
6. Encon, “Calculation of CO2,” Encon.eu, 2020, https://www.encon.eu/en/calculation-co2.
7. Lauren Leffer, “The AI Boom Could Use a Shocking Amount of Electricity,” Scientific American, October 13, 2023, https://www.scientificamerican.com/article/the-ai-boom-could-use-a-shocking-amount-of-electricity/.
8. Sayed Fayaz Ahmad, “Impact of Artificial Intelligence on Human Loss in Decision Making, Laziness and Safety in Education,” Humanities and Social Sciences Communications 10, no. 1 (June 9, 2023): 1–14, https://doi.org/10.1057/s41599-023-01787-8.
9. MIT Management, “When AI Gets It Wrong: Addressing AI Hallucinations and Bias,” MIT Sloan Teaching & Learning Technologies, 2024, https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/.
10. MIT Management, “When AI Gets It Wrong: Addressing AI Hallucinations and Bias,” MIT Sloan Teaching & Learning Technologies, 2024, https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/.
11. Tshilidzi Marwala, “Never Assume That the Accuracy of Artificial Intelligence Information Equals the Truth,” United Nations University, July 18, 2024, https://unu.edu/article/never-assume-accuracy-artificial-intelligence-information-equals-truth.