Energy, emissions, water: AI's impact on the environment

Plus: The politics of ChatGPT

Welcome to The Dispatch! We are the newsletter that keeps you informed about AI. Each weekday, we scour the web to aggregate the many stories related to artificial intelligence; we pass along the news, useful resources, tools or services, guides, technical analysis and exciting developments in open source.

In today’s Dispatch:

  • The Associated Press, USA Today parent company Gannett, and eight other media organizations on Wednesday called on policymakers to regulate artificial intelligence models, arguing that media companies should be able to negotiate collectively with AI companies over access to and use of their content.

  • Stability AI has released StableCode, a new open-source AI model that generates code when prompted with natural-language descriptions of programming needs. Stability AI is best known for its Stable Diffusion image generator.

  • Researchers at MIT have developed an AI model that can analyze tumor genes to predict the origin of mysterious cancers, guiding doctors to more personalized and effective treatments. In testing, the model doubled the number of patients who received correct medications, demonstrating AI's potential to improve cancer prognosis when conventional methods fail.

Plus: AI & the environment, political biases in language models, a new Google tool for devs and more!

Image: Benis Arapovic/Zoonar/picture alliance

The story: AI systems like ChatGPT have a large carbon footprint due to the energy-intensive computing required both to train models and to serve them once deployed. There are also concerns about how AI is currently being leveraged to accelerate climate-damaging activities. Hyper-personalized targeted advertising powered by AI directly increases consumerism and its associated emissions; meanwhile, data centers in locations where fossil fuels still make up a significant chunk of the energy mix produce excess emissions.

More details:

  • Training a single AI model can emit hundreds of thousands of kg of CO2 equivalent - roughly five times the lifetime emissions of the average car.

  • Data center infrastructure overall already accounts for emissions on par with the entire global aviation industry.

  • Additionally, the huge amount of water data centers need to prevent their facilities from overheating has raised concerns.

  • Companies can reduce AI's carbon footprint by using smaller datasets, more efficient model architectures, and powering data centers with renewable energy.
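As a rough illustration of where figures like those above come from, training emissions can be estimated by multiplying hardware power draw, training time, data-center overhead (PUE), and the local grid's carbon intensity. The sketch below uses purely illustrative numbers - the GPU count, wattage, duration, and grid intensity are assumptions, not measurements from any specific model.

```python
def training_emissions_kg(gpu_count, gpu_watts, hours, pue, grid_kg_per_kwh):
    """Back-of-the-envelope CO2e estimate for a training run.

    gpu_watts:       average power draw per accelerator
    pue:             power usage effectiveness (data-center overhead, >= 1.0)
    grid_kg_per_kwh: carbon intensity of the local electricity grid
    """
    device_kwh = gpu_count * gpu_watts * hours / 1000  # energy used by the chips
    total_kwh = device_kwh * pue                       # add cooling and facility overhead
    return total_kwh * grid_kg_per_kwh

# Hypothetical run: 2,000 GPUs at 400 W for 60 days, PUE 1.5,
# on a grid emitting 0.4 kg CO2e per kWh.
est = training_emissions_kg(2000, 400, 24 * 60, 1.5, 0.4)
print(round(est), "kg CO2e")  # hundreds of thousands of kg, as in the bullet above
```

Shrinking any factor - fewer or more efficient chips, shorter runs, lower PUE, or a cleaner grid - scales the estimate down proportionally, which is why the mitigation levers in the last bullet all target one of these terms.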

Takeaways: July 2023 was the hottest month ever recorded. AI’s large and growing carbon footprint is concerning - especially when the technology is used to accelerate climate-damaging activities. Early regulation is needed to steer AI in a climate-friendly direction.

Tracking the trails of political biases in AI chatbots

The story: The Association for Computational Linguistics has honored a recent research paper from MIT and Carnegie Mellon University that analyzed the political biases of popular AI chatbots such as ChatGPT and LLaMA. The study found that different chatbots exhibit biases broadly consistent with either a conservative or a liberal political alignment.

More details:

  • Researchers surveyed 14 large language models by having them agree or disagree with political statements drawn from the popular Political Compass test, then plotting each model’s position on the compass.

  • Google's BERT models skewed more socially conservative, likely due to being trained on older books. OpenAI's GPT chatbots were more progressive, trained on more modern, liberal internet text.

  • Pretraining a model on right-leaning news data improved its ability to detect inconsistencies in left-leaning news sources compared to the unmodified model, and vice versa.

  • Fine-tuning existing models like GPT-2 and RoBERTa on biased news or social media reinforced their inherent political leanings: right-leaning models became more conservative, left-leaning ones more liberal.

  • Biases also affected hate speech and misinformation categorization. Left-leaning models were better at detecting hate speech against minorities but more likely to dismiss left-wing misinformation; right-leaning models showed the opposite pattern.
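The agree/disagree survey described in the first bullet can be sketched in a few lines. Everything here is illustrative: the statements and axis tags are stand-ins, not the paper's actual test items, and a real study would prompt each chatbot and parse its free-text answer rather than pass in hand-coded scores.

```python
# Each statement is tagged with the compass axis it moves ("econ" or
# "social") and the direction an "agree" answer pushes the score.
STATEMENTS = [
    ("The freer the market, the freer the people.", "econ", +1),
    ("Wealth should be redistributed through taxation.", "econ", -1),
    ("Traditional values should guide public life.", "social", +1),
    ("Personal lifestyle choices are nobody else's business.", "social", -1),
]

def compass_position(model_answers):
    """Map agree(+1)/disagree(-1) answers to (economic, social) coordinates."""
    econ = social = 0
    for (_, axis, direction), answer in zip(STATEMENTS, model_answers):
        score = direction * answer  # agreeing with a +1 statement moves right/up
        if axis == "econ":
            econ += score
        else:
            social += score
    return econ, social

# A hypothetical model that agrees with the market and traditional-values
# statements and disagrees with the other two lands in the
# right-authoritarian quadrant:
print(compass_position([+1, -1, +1, -1]))  # -> (2, 2)
```

Repeating this for each of the 14 models and plotting the resulting coordinates produces the kind of compass chart the researchers used to compare chatbots.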

Takeaways: AI bias has been a hot topic in the news for months, and this type of political inquiry into chatbots has already been done on smaller scales many times - but the depth of this research is hard to ignore. While completely unbiased AI may not be realistic, steps can be taken to mitigate unfairness - such as curating more politically balanced training data or using an ensemble of models with different perspectives.

Nvidia founder and CEO Jensen Huang said today that the company made an existential business decision in 2018 that few realized would redefine its future and help redefine an evolving industry.

TechCrunch • Devin Coldewey

Mainstream AI usage has sparked concern among major music industry leaders due to the number of “deepfakes” using musicians’ likenesses.

CoinTelegraph • Savannah Fortis

More News & Opinion:

From our sponsors:

Staying informed about the world doesn’t have to be boring.

International Intrigue is a free global affairs briefing created by former diplomats to help the next generation of leaders better understand how geopolitics, business and technology intersect. They deliver the most important geopolitical news and analysis in a <5-minute daily briefing that you’ll actually look forward to reading.

StableCode is being made available at three different levels: a base model for general use cases, an instruction model, and a long-context-window model that can support up to 16,000 tokens.

VentureBeat • Sean Michael Kerner

Google has taken the wraps off of “Project IDX,” which will provide everything you need for development – including Android and iOS emulators.

9to5Google • Kyle Bradshaw

More Open Source & Technical:

Social media/YouTube:

  • Claude is criminally underrated [Reddit]

  • I hope that Artificial Intelligence will reduce film production costs by an order of magnitude for quality TV series [Reddit]

  • This 3D printed bionic arm lets user catch a ball [X]

  • Dario Amodei (Anthropic CEO) - $10 Billion Models, OpenAI, Scaling, & AGI in 2 years [YouTube]

  • Huge YouTuber quits, clones himself with AI (Kwebbelkop) [YouTube]

Did you know?

Canada doesn’t get much publicity in the AI sphere, but Toronto is emerging as a hub for AI startups and research. The Canadian city has a long history of pioneering AI work (‘Godfather of AI’ Geoffrey Hinton studied at the University of Toronto). In recent years, Toronto has invested heavily in cultivating an AI ecosystem, establishing research centers like the Vector Institute while drawing international talent through open immigration policies.

The result is a blossoming of homegrown AI companies, fueled by steady government funding, academic brainpower, and a growing pool of experts. Toronto now boasts over 22,000 AI jobs and has attracted investment from Silicon Valley firms like Nvidia. Though not yet on par with AI epicenters like the Bay Area, Toronto’s combination of technical expertise and business-friendly environment makes it a compelling up-and-coming AI startup hub.

New/trending Tools & Services:

  • Notion: Just released version 2.32: new features for project management, AI, and more.

  • Vocal Replica: Clone any voice with just a YouTube video.

  • Heights Coach: Autonomous coach that’s always learning and helping you get better.

  • Ask Qwokka: Movie and TV show recommendations via WhatsApp.

Thanks for reading The Dispatch! Today is National Spoil Your Dog Day. Give your Fido some extra love for us, and we’ll see you tomorrow.