The week in AI: President Biden's executive order on AI, Beatles swan song
Plus: UK global AI summit commences

Welcome to The Dispatch! We are the newsletter that keeps you informed about AI. Each Thursday, we aggregate the major developments in artificial intelligence; we pass along the news, useful resources, tools or services, and exciting projects in open source. Even if you aren’t an engineer, we’ll keep you in touch with what’s going on in AI.

President Biden has signed into effect a wide-ranging executive order to establish new safeguards, oversight, and ethical guidelines around AI technology and applications. White House deputy chief of staff Bruce Reed claimed that the order amounts to "the strongest set of actions any government in the world has ever taken on AI safety, security, and trust."
The order itself sets a very wide range of mandates and aspirational goals, with guidance and recommendations to federal agencies on establishing AI policies and regulation. Here are some highlights:
From the signing of the order on October 30th, developers of cutting-edge AI systems have 90 days to provide the federal government with the following: A) breakdowns of any ongoing or planned activities related to training, developing, or producing these models; B) ongoing information about model ownership, access control and cybersecurity measures; C) red team testing results and safety mitigations - to be based on guidance from the National Institute of Standards and Technology.
The order calls heavily upon existing agencies to explore and promulgate AI regulations. For example, it tasks the Department of Labor with drilling down on the potential for AI to cause rampant job loss, and the Department of Housing and Urban Development with addressing AI-driven discrimination in the housing sector.
It takes a (frankly, massive) number of measures to promote innovation and enhance America’s global competitiveness in AI, with an eye on the growing high-tech rivalry with China. It dedicates new funding to provide researchers access to cloud resources, data, and models. It also mandates that at least four new National AI Research Institutes be established within a year and a half.
The order touches on all kinds of areas in AI - from exploring watermarking as a method for labeling AI-generated media to establishing a research coordination network specifically focused on advancing privacy, which many feel is a critical and primary issue at stake. Given the short timelines provisioned for compliance throughout the order, we should find out in the coming months how effectively it addresses heightening regulatory concerns. Expert opinions on the order are unsurprisingly varied.
The Beatles have just released their swan song, Now and Then, with the help of AI - the culmination of a 29-year effort to restore a vocal track of John Lennon provided by Yoko Ono. There is a mini-documentary of the project directed by Peter Jackson - it’s a remarkable story of musical archaeology made possible through the power of algorithms.
The first-ever International AI Safety Summit kicked off yesterday at Bletchley Park in the UK, bringing together government representatives from 28 nations including the US, China, UK, Canada, and India - although US President Biden and PRC President Xi Jinping are conspicuously absent. The summit, which runs through today, aims to address the risks posed by rapid advances in AI and forge an international consensus on ‘containment strategies’. US Vice President Kamala Harris and UK Prime Minister Rishi Sunak are among the keynote speakers, and word so far is that the summit has created room for informal networking sessions and healthy debate. All 28 countries have also signed off on the Bletchley Declaration, which aims to identify the ‘AI safety risks of shared concern’ and build ‘respective risk-based policies across countries’.
We will have a more in-depth breakdown of developments from the summit, including the Bletchley Declaration, in next week's newsletter.
A ChatGPT update is being rolled out that allows users to chat with their PDFs and other documents as part of the default chat mode. This relatively small update poses an existential threat to AI "wrapper" startups - companies that leverage the power of a language model like GPT to perform a specific service (dozens of AI startups have cropped up in recent months that let you ‘chat with your PDFs’). By continuing to enable more functionality directly within ChatGPT’s interface, OpenAI is making many of the third-party AI tools and services that have emerged of late redundant. Founders and investors will continue to face tough questions about the viability of AI startups that don’t have a long-term competitive moat versus major platforms like ChatGPT.
A federal judge has dismissed most of the claims from a group of artists alleging that AI art generators like Midjourney and Stable Diffusion infringe on their copyrights by training their systems on billions of images downloaded from the internet without permission. The judge broadly ruled that the artists’ case needs clearer argumentation and proof of infringement.
That will be a tall task, given the opacity around training these models. The artists were given a chance to re-file an amended lawsuit addressing the deficiencies cited by the judge - but the ruling shows the difficulties of pursuing copyright claims against AI systems. Most of the existing lawsuits conflate different parts of the technology, and the lack of clarity in the claims makes it harder for the judiciary to support them.
The judge did allow one important claim of direct infringement to move forward - based on allegations that Stability AI used copyrighted images without permission to create Stable Diffusion. Stability’s defense is essentially that they don’t train their model by wholesale copying, but by teaching it parameters around lines, color, shades, and other attributes of the artwork. That might be a ‘legally’ sound defense, but it’s pretty shady. This single issue could still be enough to decide the case in the artists’ favor.
Google has invested $2b in AI startup Anthropic, the creators of popular chatbot Claude. This comes on the heels of Amazon investing up to $4b in Anthropic. Internal documents from Anthropic have suggested that they would need to spend at least $1b by the end of 2024 just to train their next-generation LLM, “Claude-Next,” so perhaps these investments are well-timed.
It’s a bit surprising to see Google’s investment here, though - the company’s upcoming multi-modal LLM, Gemini, will be a direct competitor to Anthropic and Claude. Anthropic’s leaked pitch deck from earlier this year claimed that the companies with the best AI models by 2025 will be too far ahead in the game for others to catch up, as those systems begin to automate large parts of the economy. That’s what they want Claude-Next to do; it seems Google thinks Anthropic is a good bet.
Anthropic was also recently the target of a lawsuit from Universal Music alleging ‘theft of lyrics’; we can confirm that Claude’s outputs have been altered/’neutered’ based on copyright concerns.
More in AI this week:
New AWS service lets customers rent Nvidia GPUs for quick AI projects
OpenAI forms team to study ‘catastrophic’ AI risks, including nuclear threats
Google Brain cofounder says Big Tech companies are lying about the risks of AI wiping out humanity because they want to dominate the market
University of California San Diego research: does GPT-4 pass the Turing Test? (spoiler: not quite, but closer than ever)
With its new M3 chips, Apple joins the AI party
Ilya Sutskever, OpenAI’s chief scientist, on his hopes and fears for the future of AI
Boston Dynamics robots can now chat with you using ChatGPT
ChatGPT app revenue shows no signs of slowing, but some other AI apps top it
MetNet-3: a state-of-the-art 24-hr neural weather model available in Google products

Trending AI Tools & Services:
Perplexity: popular search AI just added their own fine-tuned LLMs to their labs page
gov2me: receive an e-mail when the agenda for a city hall meeting is posted
AI Town v2: exactly what you think it is - interact with AI citizens
Dot by New Computer: (waitlist) an intelligent AI guide designed to help you remember, organize, and navigate your life
flowRL: AI-powered realtime UI personalization
Olle: ChatGPT as a toolbar Mac app
LlamaIndex Chat: create chat bots that know your data
Guides/useful/lists:
Google Pixel 8 Pro review: scary good AI
Working with AI: Two paths to prompting
This wild AI tool can turn any website into a better version of itself
Microsoft Copilot launched worldwide this week - but what the hell is it?
29 AI statistics and trends from 2023
This ChatGPT iPhone app lets you use GPT-4 for way less than ChatGPT Plus
(YouTube) AI swiss army knife: free tool that does everything
13 of the best ChatGPT courses you can take online for free
LinkedIn’s new AI chatbot wants to help you get a job
Social media/videos/podcasts:
Build and ride a roller coaster in your house with VR [X]
Nvidia’s new AI: gaming supercharged [YouTube]
Behind the scenes: scaling ChatGPT with Evan Morikawa from OpenAI [YouTube]
OpenAI is adding PDF chat for some users - you can also chat with data files and other document types [X]
Meta Chief AI Scientist Yann LeCun takes a shot at the corporate lobbying of OpenAI, Google DeepMind and Anthropic [X]
(Discussion) Google DeepMind boss hits back at Meta AI chief over ‘fearmongering’ claim [Reddit]
Solving software developer challenges with AI - with Tsavo Knott of Pieces [Podcast]
Open source & technical:
Accelerating AI tasks while preserving data security
How AI can supercharge observability
RedPajama-Data-v2: an open dataset with 30 trillion tokens for training large language models
Phind Model beats GPT-4 at coding, with GPT-3.5-like speed and 16k context
(Open source) DreamCraft3D: an impressive 2D to 3D converter
(Open source) MimicGen: data generation system for scalable robot learning using human demonstrations
(Open source) Ludwig AI: low-code framework for building custom LLMs, neural networks, and other AI models
It was a busy news week! We hope you enjoyed - see you next Thursday.