TIME Magazine's 100 most influential in AI
Plus: PCMag's Top 10 ChatGPT Plugins

Welcome to The Dispatch! We are the newsletter that keeps you informed about AI. Each weekday, we scour the web to aggregate the many stories related to artificial intelligence; we pass along the news, useful resources, tools or services, technical analysis and exciting developments in open source. Even if you aren’t an engineer, we’ll keep you in touch with what’s going on under the hood in AI.
Good morning. As usual, Monday is a busy news day for AI. Here’s just some of what’s going on since the end of last week:
TIME Magazine’s ‘100 in AI’ & how they were chosen
Google/YouTube’s new policy regarding the use of AI in political campaigns
Two stories on the escalating AI rivalry between the US and China - including serious allegations coming from Microsoft
European journalists and media makers are banding together en masse to draft a charter aimed at AI regulations
The Amazon rainforest has lost almost 20% of its cover since 1970 - could AI help preserve it?
The *other* Amazon now requires authors to disclose AI-generated content
PCMag’s helpful list of top ChatGPT plugins, our own curated list of open source coding LLMs, and many more resources & trending tools
The story: Microsoft’s AI For Good Lab is partnering with several organizations in Colombia to help monitor and protect the Amazon rainforest in the country. Colombia contains about 13% of the Amazon rainforest within its borders. The new project will use satellite imagery, hidden cameras, and audio recordings to track deforestation patterns and biodiversity in the region.
More Details:
Deforestation driven by mining and agriculture remains an urgent threat to the Amazon. In 2022 alone, Colombia lost nearly 1 million hectares of forest, primarily in the northwest region. Rampant deforestation destroys biodiversity and carbon-absorbing trees, dangerously throwing off ecological balance. If left unchecked, experts warn it could permanently damage the planet's climate patterns and ecosystem.
The project (Guacamaya) takes a three-pronged approach, using satellite images to spot illegal deforestation from above, camera traps to monitor biodiversity on the ground, and bioacoustics to identify animal sounds.
AI helps process the massive amounts of data from these sources much faster than manual analysis would allow. This helps all parties get a better picture of the ecosystem and deliver critical reports to governmental bodies and conservation organizations.
The open source models could eventually be replicated across the Amazon region as a collaborative effort between countries that share the rainforest.
Takeaways: The goal is to equip policymakers and conservationists with better data to combat the urgent threat of deforestation. Project Guacamaya’s innovative use of multimodal data and machine learning models provides a scalable data-driven angle to help tackle the complex challenges of preserving the Amazon rainforest. Hopefully this project can evoke the spirit of past international collaborations on environmental issues like the Montreal Protocol to protect the ozone layer.
Google has announced that starting in November, it will require political advertisements using synthetic content generated by AI to clearly disclose that fact to viewers. With the 2024 US presidential election approaching, digital experts warn such AI-generated content could lead to a wave of misinformation that platforms are unprepared to handle.
The new policy applies across Google’s entire platform, but is particularly pertinent to YouTube. Some political ads have already been using the technology - in April, the Republican National Committee released an AI-generated ad meant to show the future of the United States if President Joe Biden is re-elected.
Currently, the Federal Election Commission is in the midst of a 60-day public commentary window on the issue at the behest of advocacy group Public Citizen, which petitioned the agency to amend its regulations to explicitly prohibit the use of deliberately deceptive artificial intelligence in political campaign advertisements.
If you would like the FEC to hear your opinion on this matter, please comment here.
Nvidia has announced it will partner with major Indian tech companies including Tata Group for advancing AI in India. Tata is India's largest IT business by revenue and market capitalization. CEO Jensen Huang expressed optimism about the country becoming a global AI powerhouse, highlighting India's top-tier strengths in overall IT talent and engineering capabilities.
The collaboration will focus on building AI computing infrastructure and platforms using Nvidia technology like the Grace Hopper Superchip and DGX Cloud. This will help provide resources for developing AI solutions to address India's major challenges in areas like healthcare, agriculture, and weather prediction. Huang emphasized the potential for India to not just use AI domestically but become a leader in exporting AI technology globally.
Nvidia started the year with a $350B market capitalization; its total market value is now well over $1T, making it the 6th largest company in the world behind Amazon, Google, Saudi Aramco, Microsoft, and Apple.
TIME Magazine’s 100 most influential people in AI
How they were chosen
China suspected of using AI on social media to sway US voters, Microsoft says
China and the US are in a battle over AI - experts say this is just the start
European Federation of Journalists joins 17 media sector organizations to develop a charter aimed at regulating the use of AI in European media
Artists sign open letter saying generative AI is good, actually
A surprising explanation for the global decline of religion
eBay's new 'magical' AI tool writes product descriptions for you from a single photo
Why Meta’s Yann LeCun isn’t buying the AI doomer narrative
From our sponsors:
Small word. Huge impact.
$340 billion is spent annually on care. 10,000 people turn 65 every day. 80% of brain development occurs from the ages of zero-four. It’s when quality childcare is absolutely critical. 50% of families in the USA live in a childcare desert. There’s the reality we all live with… And then there’s Care.com. Our purpose is to help every family at each stage of care and today, we’re helping millions of families at home and at work across 17+ countries and growing.

Special feature: a list of coding LLMs
Falcon 180B: currently tops the leaderboard for pre-trained open LLMs - also the largest openly available model, with 180 billion parameters
Meta’s Code Llama: a commercially permissible SOTA model built on top of Llama 2, fine-tuned for generating and discussing code
WizardCoder 34B: a fine-tuned language model developed by Microsoft team members, good at JavaScript, decent at Python
Stability AI’s StableCode: its first coding LLM supporting multiple programming languages and a 16k context window
Refact Code LLM: a 1.6B coding LLM outperforming similar-sized coding models across 20 programming languages
DeciCoder 1B: a code completion model trained on Python, JS, and Java
Hugging Face’s SafeCoder: an enterprise coding assistant with a fully compliant and self-hosted pair programmer
Defog’s SQLCoder: a state-of-the-art LLM for SQL generation outperforming GPT-3.5
A Kubeshop blog post on The New Stack argues that the rise of proprietary AI-driven DevOps platforms threatens to cut off the public's access to valuable collective troubleshooting wisdom for Kubernetes. As platforms scrape sites like StackOverflow to train proprietary AI models, they could create a future where operators have less understanding of root causes within complex Kubernetes infrastructure.
The author contrasts startups taking two paths: 1) his own startup, Botkube, which uses public data and AI to augment human troubleshooting, versus 2) AI startup Causely, which is specifically training on private data with a mission to replace operators entirely. The post advocates for engineers to keep contributing public Kubernetes knowledge: while AI assistance has benefits, losing collective troubleshooting techniques would make operators dependent on black box systems.
(Listing again for anyone who missed it Friday) Open Interpreter - an open-source Code Interpreter that runs locally
Move over AI, quantum computing will be the most powerful and worrying technology
What are Google Cloud TPUs? Google aiming for the cloud-based AI creation sector
Casually running a 180B parameter LLM on a single Apple M2 Ultra

Trending AI Tools & Services:
G-Prompter: a free Midjourney prompt generator
USearch Images: semantic image search server in 200 lines of Python
Trickle: turn your screenshot chaos into gold
Replit Ghostwriter: updated, now defaults to the most advanced model on the market.
M1-Project: let AI transform your product knowledge into a detailed Ideal Customer Profile
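Several of the tools above - USearch Images in particular - come down to the same core idea: embed each item as a vector and rank items by cosine similarity to a query vector. As an illustration only (this is not USearch's actual API, and the filenames and toy 3-dimensional vectors are invented for the example), here is a minimal pure-Python sketch of that nearest-neighbor lookup:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(index, query, k=2):
    """Return the k keys whose vectors point most nearly the same way as the query."""
    ranked = sorted(index, key=lambda key: cosine_similarity(index[key], query),
                    reverse=True)
    return ranked[:k]

# Toy "embeddings"; real systems use image/text encoders producing hundreds of dims.
index = {
    "sunset.jpg": [0.9, 0.1, 0.0],
    "beach.jpg":  [0.8, 0.2, 0.1],
    "cat.jpg":    [0.0, 0.1, 0.9],
}

print(search(index, [0.85, 0.15, 0.05]))  # the two beach/sunset-like images rank first
```

Production systems replace the `sorted` scan with an approximate nearest-neighbor index so queries stay fast across millions of vectors; the ranking idea is the same.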
Guides/useful/lists/fun:
Do more with AI: the 10 best ChatGPT plugins and how to install them
How to make YouTube videos with AI from a single prompt in minutes
How to perform data analysis in Python using the OpenAI API
Can I use AI to interior design my home? The rise of artificial intelligence design tools, explained
Adobe Photoshop 25 review: AI is here to stay
(YouTube) Learn how to code a Large Language Model (LLM) from scratch with Python
Social media/video/podcast:
Did you know?
A blog post from a popular newsletter about semiconductors has gained attention for arguing that Google is poised to take the lead in large language models and AI more broadly. After missing the boat early, Google has now woken up. They have been developing a powerful new language model called Gemini that is substantially stronger than current systems (including GPT-4; DeepMind’s CEO has also touted this level of capability).
At the same time, Google has an enormous advantage in computing infrastructure for training AI models and integrating them more broadly. Its new TPUv5 chips are far more advanced than GPUs used by other firms and researchers. This will allow Google to scale up Gemini and future models at a staggering pace.
No other organization can compete with the raw scale of Google's AI infrastructure - but their vision to fully utilize it and just how good Gemini might be (we expect it will become the most powerful and broadly leveraged AI system in the world upon release) remains to be seen. One potential hitch? Google is embroiled in a landmark antitrust case with the US Justice Department - they’re set to face off in court tomorrow.
Gemini is expected to be released this year.
What we need is a collective global commitment, anchored in sound principles, to uphold the ethics of journalism and harness AI for preserving the right to information. We believe that the charter, to be drafted under the leadership of Maria Ressa and with the contribution of this committee composed of prominent figures, will become a strong international reference.