White House voices support for hackers to expose ChatGPT & other chatbot weaknesses
Plus: Musk v Zuckerberg cage match to be streamed on X, ChatGPT updates, and more.

Welcome to The Dispatch! We are the newsletter that keeps you informed about AI. Each weekday, we scour the web to aggregate the many stories related to artificial intelligence, passing along news, useful resources, tools and services, guides, technical analysis, and exciting developments in open source.
In today’s Dispatch:
China's plan to become a leading AI chip maker and substitute imports with local products is in jeopardy as U.S. sanctions stymie progress in the country. China's response to the U.S. - its own sanctions on U.S.-based chipmaker Micron - has led many Chinese tech firms to simply get their chips from South Korea rather than buying domestically.
Legendary hacker George Hotz and Conjecture AI CEO Connor Leahy debate the feasibility of aligning AI with human values on a popular AI podcast. Hotz asserts that human-aligned AI is unattainable, while Leahy believes in solving AI alignment to ensure human safety. While they acknowledge the potential threats AI poses, they differ on solutions, with Leahy supporting restrictions and Hotz promoting openness to counter government overreach.
Elon Musk has announced that a potential fight between him and Meta CEO Mark Zuckerberg will be streamed on X, previously known as Twitter. The announcement comes amid tensions and back-and-forth exchanges between the two tech moguls.
Plus: ChatGPT upgrades rolling out this week, a new GitHub Copilot feature in the pipeline, more LK-99 discussions on social media, and more.

ChatGPT content moderators from Kenya initiate petition to government over exploitative work conditions
The story: Four former Kenyan content moderators for ChatGPT are filing a petition with their government for an investigation into the exploitative conditions they experienced while working for OpenAI. These conditions included exposure to extremely dark content (including graphic depictions of sexual violence), low compensation, and abrupt dismissals. The former employees say they weren’t properly warned about how brutal the content was going to be, and there are claims of permanent mental health damage.
More details:
Mophat Okinyi, a former moderator, describes the effects of viewing up to 700 graphic text passages daily. His deteriorating mental state resulted in personal losses, including the end of his marriage.
The petition against these conditions involves a contract between OpenAI and Sama, a Californian data annotation services firm. Moderators allege they suffered psychological trauma, received inadequate wages (ranging from $1.46 to $3.74 an hour), and faced sudden job terminations.
When OpenAI's contract with Sama ended prematurely, many moderators felt abandoned, left to deal with the trauma on their own - and now without a source of income.
Takeaways: OpenAI has been widely criticized for not accounting for the human cost of its AI development. And it's not the only trouble OpenAI CEO Sam Altman has on his plate coming out of Kenya: the country recently banned his Worldcoin crypto project over economic and data concerns.
White House voices support for hackers to expose ChatGPT & other chatbot weaknesses
The story: The White House has voiced its support for Def Con 31 - the world's biggest annual hacker convention - hosting a targeted inquiry into the vulnerabilities of chatbots like ChatGPT. At the event, leading tech companies, including Google and OpenAI, are allowing their AI systems to be hacked and tested in a unique competition.
Objective: The competition aims to surface as many problems as possible in current AI systems. Meta, Google, OpenAI, Anthropic, Cohere, Microsoft, Nvidia and Stability AI have all been persuaded to open up their models for the exercise.
Process: Over two and a half days in Las Vegas, around 3,000 participants will be given 50 minutes each to discover flaws in eight popular LLMs without knowing which one they are attempting to exploit. Successful hacks earn points, and the overall winner receives a high-end graphics processing unit.
Tests: One challenge is to prompt a model to invent or "hallucinate" false information about a political figure. Furthermore, the models' linguistic versatility will be assessed, especially given concerns that safety features might not operate across all languages.
Takeaways: As we move closer to another US presidential election, misinformation, biases, and the reliability of AI have become critical discussion points. With tech giants willingly placing their models under scrutiny, Def Con 31 could prove an insightful evaluation of the current state of chatbot safety. Hopefully, the event will highlight the weaknesses in today's popular chatbots and help flesh out what steps might be required to mitigate these concerns. Def Con 31 runs from August 10-13; we'll be providing any updates that arise from the event.
OpenAI's developer relations expert has announced that a set of ChatGPT updates is rolling out immediately. VentureBeat • Carl Franzen
Legendary hacker George Hotz and Conjecture AI CEO Connor Leahy discuss the crucial challenge of developing beneficial AI that is aligned with human values. Machine Learning Street Talk • Dr. Tim Scarfe
More News & Opinion:
Despite the tech supergiant’s relative silence in the AI sector, CEO Tim Cook calls AI ‘absolutely critical’ to Apple products
Continuing trade war with the U.S. and dependency on South Korean chips are hampering China’s AI ambitions
Adobe's Firefly AI looks like a game changer for photo restoration
IGN launches an AI chatbot for its gaming guides
How AI has already detected potentially hazardous asteroids that scientists missed
From our sponsors:
FL0 is your powerful, fully managed deployment platform. Deploy backend applications and databases in minutes. No need to understand Kubernetes, or hack together complex cloud resources on your own. Launch your next big idea on FL0.
Build, Deploy, and Scale Code Effortlessly.

GitHub Copilot will show developers when suggestions match public code
The story: GitHub is introducing a new feature for its Copilot tool that allows developers to see when its suggestions match code from a public repository. This follows concerns that Copilot, which assists developers in writing code, could unintentionally reproduce code from public repositories, leading to potential licensing issues.
More details:
Last year, GitHub provided an option for users to block Copilot suggestions that match public code. That system was triggered less than 1% of the time.
The new "code referencing" feature for GitHub Copilot, currently in private beta, won't automatically block matching code. Instead, it displays this code to developers in a sidebar for them to decide its utility.
GitHub's CEO, Thomas Dohmke, mentioned that the original feature was a "blunt tool", limiting users from exploring potentially useful open source libraries.
The code snippets matched are currently listed in the order they are found by the search engine, with potential future functionality to sort them by various criteria.
Takeaways: This enhancement to Copilot is a solid response to one of the challenges posed by AI coding assistants and licensing. Rather than automatically blocking any matching code it generates, Copilot will display it in a sidebar and let the developer decide what to do with it. The feature is currently in private beta; you can sign up for the waitlist.
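For a rough sense of how this kind of matching can work in principle - GitHub hasn't published its implementation, so the following is purely an illustrative sketch with hypothetical names (fingerprints, find_public_matches, public_index) - a generated suggestion could be fingerprinted and looked up against a prebuilt index of public code:

    import hashlib

    def fingerprints(code: str, window: int = 5):
        """Hash sliding windows of normalized lines (illustrative only)."""
        lines = [ln.strip() for ln in code.splitlines() if ln.strip()]
        for i in range(max(len(lines) - window + 1, 1)):
            chunk = "\n".join(lines[i:i + window])
            yield hashlib.sha256(chunk.encode()).hexdigest()

    def find_public_matches(suggestion: str, public_index: dict):
        """Return indexed matches (e.g., repository and license) for developer review."""
        return [public_index[fp] for fp in fingerprints(suggestion) if fp in public_index]

Here, public_index stands in for a hypothetical mapping from fingerprint to repository and license metadata; the key design point, mirroring the behavior described above, is that matches are surfaced for review rather than silently blocked.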
This feels like the beginning of a new technological supercycle that will last decades or even longer. VentureBeat • Ashish Kakran
More Open Source & Technical:
From Datadog: integration round-up - monitoring your AI stack
A discussion of some of the best DevOps automation tools for developers and DevOps engineers.
IBM and NASA Open Source Largest Geospatial AI Foundation Model on Hugging Face

Social media/YouTube:
Nvidia GPU shortage is ‘top gossip’ of Silicon Valley [Reddit]
Conversation on Taiwan University confirming LK-99 diamagnetism at room temperature [Reddit]
World-renowned quantum materials physicist gives his take on LK-99, given data published so far [X]
Midjourney + Runway movie trailers are getting better by the day [X]
Microsoft’s new AI will have the ability to generate images, videos, and audio from text prompts - and it watched 100 million YouTube videos [YouTube]
Did you know?
According to a new report by the Pew Research Center, about one in five U.S. workers have jobs with key tasks that are more likely to be aided or replaced by AI. Interestingly, employees in industries more exposed to AI are more likely to say they think it will help, rather than hurt, their jobs.
Trending AI Tools & Services:
Ax: comprehensive AI framework for TypeScript
SuperHuman: The fastest email experience ever made
Softr: Build software blazingly fast - turn your Airtable or Google Sheets into the modern business tools you need.
Triple Whale: automate analytics, attribution, merchandising, forecasting and more—all in the palm of your hand.
Alpaca: personalized AI toolkit photoshop plug-in built for artists
Have a great start to your week! We’ll be back with more tomorrow.

It’s a civilized form of war. Men love war.