
AI and Energy: A Double-Edged Sword in the Climate Fight

There are two major existential threats facing humanity today: climate change and artificial intelligence (AI). Each poses immense risks on its own. But together, they form a self-reinforcing cycle. A vicious circle.


AI contributes to the climate crisis, but can it also help mitigate it? This remains an open question. While some researchers and companies suggest that AI can accelerate climate solutions, such as optimising energy systems or designing new materials, the overall impact is still uncertain and requires careful evaluation. Understanding this dual role is essential if we are to shape a sustainable future.


Let’s focus on AI’s contribution to the climate crisis in this article.


The rapid diffusion of AI technologies has triggered an explosion in energy consumption. Every chatbot interaction, image generation, or language query is powered by data centers that rely on massive computational infrastructure. Training large language models like GPT-4 or Gemini requires tens of thousands of high-performance Graphics Processing Units (GPUs) operating for weeks or months. The energy demands do not stop there. Once deployed, these models continue to consume electricity with every use.


Recent estimates suggest that AI-related data centers already account for around 3% of total global greenhouse gas emissions, an amount comparable to that of the entire aviation industry. This figure is rising.


Despite public commitments to climate goals, major technology companies are struggling to meet their own targets. Microsoft and Google, for instance, have pledged to achieve net zero emissions. Yet, their recent performance tells a different story.


In 2023, Microsoft’s carbon emissions increased by 29% compared to 2020. For Google, the rise was even more dramatic: 48% relative to its 2019 baseline. The main driver behind these increases? Expansion of AI infrastructure.


So far, no major tech company has adopted standardised methodologies for measuring the emissions of individual AI models. Tools such as the “AI Energy Score” exist, but remain little used at scale. There is a troubling lack of transparency.


Not all AI models consume the same amount of energy. Parameters such as model size, training frequency, inference efficiency, and data processing techniques all influence total emissions. Without standardised measurement, it is impossible for regulators or users to compare the environmental impact of different AI products. This opacity makes it difficult to enforce accountability or incentivise low-carbon AI development.
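To make this concrete, here is a minimal back-of-the-envelope sketch of how training emissions are typically estimated: the energy drawn by the accelerators, multiplied by the facility overhead (PUE) and the carbon intensity of the local grid. All figures in the example are illustrative assumptions, not measurements of any real model.

```python
# Back-of-the-envelope estimate of training emissions (illustrative assumptions only).

def training_emissions_tco2e(
    num_gpus: int,             # number of accelerators used for training
    gpu_power_kw: float,       # average power draw per accelerator, in kW
    hours: float,              # wall-clock training time, in hours
    pue: float,                # data center Power Usage Effectiveness (facility overhead)
    grid_gco2_per_kwh: float,  # carbon intensity of the local grid, in gCO2e per kWh
) -> float:
    """Return estimated training emissions in tonnes of CO2-equivalent."""
    energy_kwh = num_gpus * gpu_power_kw * hours * pue   # IT load times facility overhead
    grams_co2e = energy_kwh * grid_gco2_per_kwh          # energy times grid carbon intensity
    return grams_co2e / 1_000_000                        # grams -> tonnes

# Hypothetical run: 10,000 GPUs drawing 0.7 kW each for 60 days (1,440 hours),
# in a facility with PUE 1.2, on a grid emitting 400 gCO2e per kWh.
print(round(training_emissions_tco2e(10_000, 0.7, 1_440, 1.2, 400)))  # ~4,838 tCO2e
```

The same arithmetic shows why the parameters above matter: a shorter training run, a more efficient facility, or a lower-carbon grid each reduces the estimate directly. Without disclosure of these inputs, outsiders cannot reproduce or compare such figures across models.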


There is a strong case to be made that AI, if guided carefully, could become a powerful tool in the fight against climate change. AI is already being used to:

  • Design new solar cell materials and battery chemistries (e.g., by DeepMind and related research labs).

  • Optimise grid load distribution and energy demand forecasting.

  • Reduce the carbon intensity of industrial processes such as cement and steel production.

  • Monitor deforestation, methane leaks, and illegal mining through satellite imagery.


These applications are not theoretical. They are being piloted in the real world. Their broader success depends on scaling these innovations without simultaneously increasing emissions elsewhere. The net climate impact of AI must therefore be evaluated holistically.


Given the energy demands of AI infrastructure, tech companies are exploring cleaner sources of electricity. In the United States, Microsoft has signed a deal to source power from the reopening of the Three Mile Island nuclear facility in Pennsylvania. Alphabet (Google’s parent company) is investing in experimental small modular nuclear reactors (SMRs). These moves align with a broader trend. In 2024, tech firms were responsible for 92% of all new clean energy purchases in the U.S.


Nuclear power, once marginalised due to historic accidents and public opposition, is gaining renewed attention. Its ability to provide stable, carbon-free energy makes it attractive in a world increasingly shaped by the electricity demands of AI, electric vehicles, and digital infrastructure.


Projections suggest that global data center energy consumption is set to more than double by 2030, to around 945 terawatt-hours (TWh), slightly more than Japan’s entire electricity consumption today. Without radical improvements in energy efficiency or a full transition to low-carbon power, this will pose a severe risk to climate goals.


The future of AI and the future of climate policy are now tightly intertwined. Policymakers, developers, and investors must confront a central question. Can AI become part of the solution without worsening the problem?


AI can help design better energy systems, reduce waste, and accelerate research into climate-resilient technologies. But these benefits will only be realised if we acknowledge and control AI’s own environmental costs.


What is needed now is a coordinated strategy:

  • Develop industry-wide standards to measure AI energy use and emissions.

  • Create incentives for low-emission AI model design.

  • Mandate transparency on energy consumption for large-scale models.

  • Encourage research into frugal AI—models that achieve high performance with low energy inputs.

  • Support the integration of AI systems into clean energy transitions, particularly in grid management and infrastructure planning.


AI represents one of the most powerful technological shifts of our era. With that power comes responsibility. If left unchecked, AI will accelerate the climate crisis. If directed wisely, it could help avert it.


Transparency, regulation, and innovation will determine whether AI becomes a burden on the planet or a force for planetary repair.


We should also keep in mind that AI gives back what we put into it. The outcome lies in humanity’s hands: it all depends on how we choose to use this new technological tool. Human behaviour and intention remain crucial.

© 2025 by Arda Tunca
