Inside the messy world of making war with AI
Plus: Fine-tuning for GPT-3.5 Turbo is here, with GPT-4 fine-tuning coming this fall


Welcome to The Dispatch! We are the newsletter that keeps you informed about AI. Each weekday, we scour the web to aggregate the many stories related to artificial intelligence; we pass along the news, useful resources, tools or services, guides, technical analysis and exciting developments in open source.
In today’s Dispatch:
(Long read) The Verge takes a critical look at Google's willingness to protect its business interests in AI through legal maneuvering and deal-striking, all while insisting that its use of news and books to train AI models is fair use. That strategy could be challenged by impending copyright lawsuits over AI training, which may reshape the dynamics between platforms and copyright holders.
OpenAI has released an update enabling fine-tuning for GPT-3.5 Turbo, giving developers the ability to customize the large language model to improve performance on business tasks. Early testing indicates fine-tuned GPT-3.5 Turbo can match base GPT-4 capabilities for narrow use cases. The update provides increased control over model steering, more consistent output formatting, and tone adaptation. Fine-tuning for GPT-4 is slated for release this fall. (A short sketch of the new fine-tuning workflow follows this list.)
Meta is set to turn off its personalized feed algorithms for Facebook and Instagram in Europe. The Digital Services Act, the EU's revamped digital rulebook, requires major platforms to offer users an option to switch off AI-driven personalization, which curates content based on individual tracking and profiling. The goal is to give users more choice, combat the rise of filter bubbles, and mitigate the risks of addiction and automated manipulation. Users can expect feeds based on chronological order or local popularity rather than on tracking data. While the specific release date for this AI off-switch is unknown, compliance with the DSA is expected shortly.
Plus: An AI mind-reading experiment, light-powered language models, trending tools and more!
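For readers who want to try the GPT-3.5 Turbo fine-tuning update above, here is a minimal sketch of the workflow using the OpenAI Python SDK as it stood at launch (newer SDK versions use a different client interface). The file name, example messages, and model suffix below are hypothetical placeholders; treat this as a rough outline rather than a definitive recipe.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Training data is a JSONL file of chat-formatted examples; each line looks like:
# {"messages": [{"role": "system", "content": "You are AcmeCo's support bot."},
#               {"role": "user", "content": "How do I reset my password?"},
#               {"role": "assistant", "content": "Go to Settings > Security > Reset password."}]}

# 1. Upload the training file.
training_file = openai.File.create(
    file=open("acme_support_examples.jsonl", "rb"),  # hypothetical file
    purpose="fine-tune",
)

# 2. Start a fine-tuning job on GPT-3.5 Turbo.
job = openai.FineTuningJob.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# 3. Re-fetch the job; fine_tuned_model stays empty until training completes,
#    so in practice you would poll this call or wait for the completion email.
job = openai.FineTuningJob.retrieve(job.id)

# 4. Call the fine-tuned model exactly like the base chat model.
response = openai.ChatCompletion.create(
    model=job.fine_tuned_model,  # e.g. "ft:gpt-3.5-turbo-0613:acmeco::abc123"
    messages=[{"role": "user", "content": "How do I reset my password?"}],
)
print(response.choices[0].message.content)
```

One practical upside OpenAI highlighted is that behavior baked in through fine-tuning can shorten the prompts you send at inference time, which reduces per-call token costs.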

From MIT Technology Review: An ethical gray zone is emerging as AI creeps into military targeting decisions. When a soldier looks through a high-tech gunsight that uses algorithms to detect and highlight potential enemies, or a commander approves an AI-recommended artillery strike, who bears responsibility if things go tragically wrong? The author argues that human-machine decision-making blurs the moral clarity of war as weapons edge toward full autonomy.
More details:
Machine learning is enabling new AI-powered targeting systems like computer vision-enabled gunsights that can automatically detect and identify targets. Some worry this removes key human judgment from the decision to take a life.
Militaries are interested in AI that can accelerate and potentially automate parts of the kill chain, like identifying targets and pairing them with optimal weapons systems to strike. This raises legal and ethical issues around meaningful human control over lethal force.
There are no easy choices as the technology evolves rapidly, and analogous dilemmas in the past, such as the advent of precision-guided munitions, did not slow defense innovation.
Takeaways: Even identifying responsible humans could be difficult in war scenarios, as decision chains spread across distributed networks, and the software developers and defense contractors who design these AI systems seem likely to escape any backlash. Eliminating difficult choices through AI efficiency may be tactically superior, but it compromises human ethics: while AI can aid judgment, humans must remain in control for now to maintain legitimacy. As systems become more autonomous, society will need to decide acceptable limits on AI's role in lethal force.
From UCSF Research: Researchers from UC San Francisco and UC Berkeley have leveraged AI brain-computer technology to allow a paralyzed woman, Ann, to communicate using a digital avatar that emulates human speech and facial expressions. Ann suffered a brainstem stroke 18 years ago; before this technology, she communicated through a text display controlled by small head movements, a method more than five times slower.
More details:
The brain-computer interface aims to synthesize speech or facial expressions from brain signals. For Ann, this system decodes brain signals into text at a speed of 80 words per minute, far surpassing her earlier device which only achieved 14 words per minute.
The core principle behind the innovation involves a thin rectangle of 253 electrodes placed on Ann's brain. These electrodes detect the brain signals meant for speech muscles, and a connected cable transfers this data to computers for interpretation.
The AI system was trained not on whole words but on smaller speech components, phonemes. By learning only 39 phonemes, the system can decipher any word in English (a toy sketch of this idea follows the takeaways below).
Takeaways: The team was also able to recreate Ann's own voice from earlier recordings. Ann worked with the researchers for weeks to train the system's AI on her unique brain signals, repeating a 1,024-word conversational vocabulary over and over until the computer recognized the patterns associated with each phoneme. The researchers say that, in time, Ann will be able to communicate at nearly normal speed.
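To make the phoneme idea above a bit more concrete, below is a purely illustrative Python sketch, not the UCSF/Berkeley pipeline. It stands in a random linear decoder for the team's trained neural network and simply collapses repeated phoneme predictions from a stream of electrode-feature windows; every name and number other than the 253 electrodes, the 39 phonemes, and the 1,024-word vocabulary is made up.

```python
import numpy as np

N_CHANNELS = 253   # electrodes in the implanted array
N_PHONEMES = 39    # English phoneme classes the decoder learns

rng = np.random.default_rng(0)

# Stand-in for the trained decoder: a random linear map from one window of
# neural features to phoneme scores. The real system uses a neural network
# trained on weeks of Ann repeating a 1,024-word conversational vocabulary.
W = rng.normal(size=(N_PHONEMES, N_CHANNELS))

def decode_window(features: np.ndarray) -> int:
    """Return the index of the highest-scoring phoneme for one feature window."""
    return int(np.argmax(W @ features))

def collapse_repeats(indices: list[int]) -> list[int]:
    """Merge consecutive duplicate predictions (a crude CTC-style collapse)."""
    out: list[int] = []
    for idx in indices:
        if not out or out[-1] != idx:
            out.append(idx)
    return out

# Simulate a short stream of feature windows and decode it.
stream = [rng.normal(size=N_CHANNELS) for _ in range(20)]
phoneme_sequence = collapse_repeats([decode_window(f) for f in stream])
print("decoded phoneme indices:", phoneme_sequence)

# A pronunciation lexicon (and, in practice, a language model) would then map
# phoneme sequences onto words from the 1,024-word vocabulary.
```

The reason phonemes matter is the same reason they are used in conventional speech recognition: 39 sub-word units cover every English word, so the decoder does not need a separate output class for each of the thousands of words Ann might want to say.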
Works by thousands of authors, including Margaret Atwood, Haruki Murakami and Jonathan Franzen, were fed into models run by firms including Meta and Bloomberg. The Guardian • Ella Creamer |
Will this improve patient health and reduce obstacles to accessing care, or will it create discomfort and dissatisfaction in the healthcare setting? TIME • Various |
More News & Opinion:
Google and YouTube are trying to have it both ways with AI and copyright
(OpenAI blog) GPT-3.5 Turbo fine-tuning and API updates
Meta confirms AI ‘off-switch’ incoming to Facebook, Instagram in Europe
More teachers plan to use artificial intelligence in classrooms, report says
You.com introduces AI-powered search on WhatsApp
ElevenLabs’ voice-generating tools launch out of beta
How ChatGPT turned generative AI into an “anything tool”
From our sponsors, Incogni:
Leave behind the overwhelming task of removing your personal information from the vast expanse of the internet & reverse lookup sites. By embracing Incogni, you gain the power to combat ID theft, spam, robocalls, and mitigate a range of other risks that pose a threat to your privacy.
Data Brokers' Money Game: Buying Your Personal Info, Selling Your Privacy to the Highest Bidder! Incogni - The best solution to remove yourself from the internet in 2023.

It isn't data that will unlock AI, it is human expertise. The previous generations of AI, prior to Large Language Models and ChatGPT, rewarded whoever had the best hoards of good data. One Useful Thing • Ethan Mollick |
ChatGPT is limited in its size by the power of today’s supercomputers. It’s just not economically viable to train models that are much bigger. Our new technology could make it possible to leapfrog to machine-learning models that otherwise would not be reachable in the near future. In other words, cellphones and other small devices could become capable of running programs that can currently only be computed at large data centers. MIT News • Elizabeth A. Thomson |
More Open Source & Technical:
Google adds new AI-powered security controls to its Workspace
Introducing Pieces Copilot - Pieces for Developers
Tutorial: How to build an AI agent (Part 1)

Social media/news/video/podcast:
New 'BeFake' social media app encourages users to transform their photos with AI [ZDNet]
An AI mind reading experiment - Two Minute Papers [YouTube]
Midjourney's Inpainting is SUPER Impressive! [YouTube]
Medical research made understandable with AI - The Stack Overflow [Podcast]
(Discussion) I think many people don’t realize the power of ChatGPT [Reddit]
Xi Jinping announces BRICS (Brazil, Russia, India, China, South Africa) has agreed to establish an AI study group for information exchange & cooperation [X]
Did you know?
Just days after Russia's first lunar mission in decades crashed, India's Chandrayaan-3 landed safely on the moon, and AI paired with key sensors played a big role in how it flew and landed. India becomes the first country to land near the moon's south pole, a region of great interest because of potential sources of ice there.
If ice exists in sufficient quantities, it could be a source of drinking water for moon exploration and could help cool equipment. It could also be broken down to produce hydrogen for fuel and oxygen to breathe, supporting missions to Mars or lunar mining. The Chandrayaan program has a storied legacy; it was a NASA instrument on board Chandrayaan-1 that provided the first definitive detection of water on the moon.
Trending AI Tools & Services:
Uptrends AI: the ultimate AI-powered stock tracker
AISuitUP: an AI-powered tool that generates professional headshots
Kaleido: AI product manager for dev teams.
Bloc: Born from the vision of simplifying and revolutionizing how you interact with your documents
Deepfakes.lol: (for entertainment only) Choose a video of someone talking. Enter what you want them to say. Get a lip-synced, deepfake video.