
Tech giants were cautious with advanced facial recognition - AI startups are not

Plus: AI and the end of programming?

Welcome to The Dispatch! We are the newsletter that keeps you informed about AI. Each weekday, we scour the web to aggregate the many stories related to artificial intelligence; we pass along the news, useful resources, tools or services, technical analysis and exciting developments in open source. Even if you aren’t an engineer, we’ll keep you in touch with what’s going on under the hood in AI.

Good morning. Today in AI:

  • Google, Meta and Apple chose not to pursue advanced facial recognition development and applications over ethical concerns - now, startups (and open source) are doing it anyway

  • Former Scientific American editor, researcher, and author Brian Hayes investigates whether AI really spells the end of programming

  • AI is coming to the National Football League

  • (Social media section) The backstory of Palantir, a controversial company currently mining the AI opportunity with government customers for intelligence gathering, counterterrorism, and military purposes [video - quick link]

  • A new service that lets you speak with ‘AI clones’ of the 2024 presidential candidates; Matthew Berman puts the largest open source language model (Falcon180B) to the test & more

Matt Welsh has strong opinions about where AI is taking computer science; let’s take a closer look

The Story: Matt Welsh, a former professor of computer science at Harvard who later led teams at Google and Apple, has stated unequivocally that the end of classical computer science is coming and that programming will be obsolete. He projects a window of 10 to 30 years, after which there will be little need for human involvement. The headline opinion piece above analyzes the current landscape through the lens of ChatGPT’s present capabilities (GPT-3.5 and GPT-4).

More Details:

  • Hayes asked ChatGPT to solve word ladder puzzles, where you transform one word into another by changing one letter at a time. ChatGPT made errors such as repeating words and allowing invalid words, and it fell back on brute-force search even when a more elegant solution was readily available.

  • When Hayes prompted ChatGPT to write code to solve problems like calculating Fibonacci numbers, it produced wrong solutions or code that simply failed to run.

  • Studies by Hayes and others found AI-generated code failed 30-50% of the time on basic programming tasks. Performance was worse on more recent problems missing from the AI's training data. The code lacked reliability and robustness.

  • Hayes argues the statistical, language-modeling nature of systems like ChatGPT lacks true reasoning ability. Until an AI system has that, humans will be needed as much as ever.
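For reference, both of the tasks Hayes tested have short, classical solutions. Below is a minimal Python sketch (not Hayes's own code): a breadth-first search for word ladders, which guarantees a shortest transformation, plus an iterative Fibonacci. The tiny dictionary in the usage note is purely illustrative.

```python
from collections import deque

def word_ladder(start, goal, words):
    """Shortest transformation from start to goal, changing one letter
    at a time; every intermediate word must appear in `words`.
    Breadth-first search, so the first path found is a shortest one."""
    valid = set(words) | {goal}
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        word = path[-1]
        if word == goal:
            return path
        # Try every one-letter substitution of the current word.
        for i in range(len(word)):
            for c in alphabet:
                cand = word[:i] + c + word[i + 1:]
                if cand in valid and cand not in seen:
                    seen.add(cand)
                    queue.append(path + [cand])
    return None  # no ladder exists

def fib(n):
    """Iterative Fibonacci: fib(0) = 0, fib(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

With the illustrative dictionary `["cord", "card", "ward", "word"]`, `word_ladder("cold", "warm", ...)` returns a five-word ladder from "cold" to "warm", and `fib(10)` returns 55.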

Takeaways: Current AI systems may appear impressive (and they are!) but simply cannot yet match human coding skills in creative problem solving and debugging. Rote code generation divorced from meaning has very clear limitations in the real world.

But that’s only the present moment. Looking ahead, strong AI or artificial general intelligence (AGI) is the point at which an AI could learn to effectively accomplish any intellectual task that human beings can perform. We are not yet at AGI; no one knows when we will get there, but most experts are certain that it’s coming.

Superintelligence implies a still more extreme form of intelligence, beyond AGI, that we could not fully comprehend or control. AGI would not need to reach that level to be capable of replacing humans. Yet OpenAI believes even superintelligence could arrive this decade. Paying attention to the trends, and planning around them, will be critical for anyone who doesn’t want to be left behind - and that goes for more than just programming.

(From The New York Times; article may be paywalled) Major tech companies like Meta, Google, and Apple developed advanced facial recognition capabilities years ago but chose not to make them widely available (or released them only in relatively benign forms) due to privacy concerns. Smaller startups, however, have now released facial recognition services - some to the general public. With these tools - widely used by police in the case of Clearview AI, and by the public at large in the case of PimEyes - a snapshot of someone can be used to find other online photos where that face appears, or personal information the person might not want linked to them.

In the case of PimEyes, the only way to get yourself out of their system is to know the service exists at all, then go through a user-submission process explaining why you don’t want to be in their database. Here’s the real kicker, direct from the website:

“Note: Review of the request and complete removal of the search result takes about 48 hours. So if your request doesn't get approved, the search image will reappear in search results.”

Why can they deny your request to remove your data?

  • PimEyes operates as a Polish company outside US jurisdiction; even if it violated American privacy law, enforcement would be very difficult.

  • Even if they were US-based, the US still lacks federal laws specifically regulating facial recognition technology and its uses, although some states have legislation pending. Without clear restrictions, companies can gather and monetize biometric data. That might not matter as much when it’s a Fitbit, but the stakes are increasing.

  • Datasets and algorithms shared freely for ‘academic purposes’ - and open data scraped from the web - can currently be repurposed for commercial applications like PimEyes without consent.

The University of Texas at San Antonio (UTSA) has launched a new dual-degree program spanning AI and medicine - the first known program in the US to combine a medical degree (M.D.) with a master's in artificial intelligence (M.S.A.I.). The goal is to train future physicians to harness AI to improve health care. The five-year program was developed through a collaboration between UT Health San Antonio and UTSA.

Students will take a one-year leave from medical school to complete AI coursework at UTSA. The degree combines the medical curriculum with courses in data analytics, computer science, and intelligent systems, and aims to prepare graduates to lead advances in research, education, industry, and health care administration.

While it’s not a combined degree, Harvard also recently announced a PhD track for Artificial Intelligence in Medicine (AIM).

From our sponsors:

Help your neighbor store things while you earn money with your unused space.

Neighbor is a peer-to-peer storage marketplace that connects people with unused space to people in need of storage.

-

SpectroCloud discusses how edge AI is enabling immersive, personalized experiences in retail and other industries. Technologies like speech recognition, computer vision, and natural language processing are what’s behind new innovations like retail assistants that greet customers by name and recommend products tailored to them.

However, while AI model training typically occurs in the cloud, most production AI workloads belong at the edge for reasons of latency, connectivity, cost, and privacy. Managing AI at the edge introduces challenges around deploying hardware, securing devices, and keeping models and engines updated. SpectroCloud introduces a new product, Palette EdgeAI, that helps with these challenges by enabling organizations to easily deploy and manage full AI stacks on edge Kubernetes infrastructure. In short: while edge AI promises exciting innovations, it also poses complex management hurdles that require new platforms to address.

A VentureBeat article discusses a study by Bain & Company surveying gaming executives on their views of generative AI's impact. It notes executives believe AI usage will grow from under 5% now to over 50% of game development in the next decade.

They see AI improving quality and speeding up production but not significantly reducing costs or alleviating talent shortages. Executives highlighted the continued importance of human creativity, viewing AI as an enhancing tool requiring oversight.

They expect a larger impact on pre-production and production vs. other stages. Challenges include system integration, data training, capabilities, legal issues, and costs. Bain recommends a disciplined AI approach focused on benefiting players, strategically building capabilities, and adapting processes.

Trending AI Tools & Services:

  • Slite: your company knowledge base on auto-pilot

  • Chat2024: meet the 2024 Presidential candidate clones

  • GPTConsole: intelligent CLI and autonomous AI agents

  • EY.ai: new platform by Ernst & Young to evaluate a business’s current level of AI adoption and uncover gaps, identify opportunities for value creation, and equip teams with AI tools for enhanced productivity

  • (App) Learn.xyz: Make learning a fun habit

  • Epsilon: research assistant that answers academic research questions and generates summaries with citations

Social media/video/podcast:

  • Lex Fridman interviews Walter Isaacson, writer of Elon Musk’s biography [Podcast]

  • (Discussion) AI in healthcare - what do you think? [Reddit]

  • In Langley, the CIA began hunting Bin Laden using every tool they had. But in Silicon Valley, engineers were about to start building entirely new tools to help get the job done. This is the story of Palantir [X]

  • Testing Falcon180B - the largest open source LLM [YouTube]

Did you know? 

Last week, Reuters released a special report on the race between the US and China to deploy ‘killer robots’ and other AI-powered autonomous warfare systems. The in-depth report details how both countries are rushing to develop drones, ships, and aircraft that can operate independently using artificial intelligence.

The report highlights projects like Ghost Shark, a fleet of AI-powered submarines being built for the Australian navy that will have no human crew. It also examines China's unveiling of the FH-97A, an autonomous jet fighter-like drone. Swarms of small drones and unmanned stealth fighters are also under development.

There are concerns about arms races and the loss of human control, but most military leaders see autonomous systems as a simple necessity for countering rivals. The pace of change is rapid, with commercial tech firms challenging traditional defense contractors. In a wide-ranging conversation, NPR’s Terry Gross recently interviewed a New York Times correspondent covering the US Navy’s struggle to modernize:

The Navy has all of the results of all these war games that are quite clear. If they were to send these destroyers or either aircraft carriers or aircraft carrier groups anywhere near China, if there was a war that occurred with China, then those things would be sunk - it means that the Navy, in its traditional platforms, can't get very close to China in a Taiwan Strait scenario.

Eric Lipton, New York Times correspondent, September 2023