• WeeklyDispatch.AI
  • Posts
  • The week in AI: Elon Musk sues OpenAI, which responds by leaking contradictory Musk e-mails

The week in AI: Elon Musk sues OpenAI, which responds by leaking contradictory Musk e-mails

Plus: Anthropic releases Claude 3 - a new state of the art in LLMs

Sponsored by

Welcome to The Dispatch! We are the newsletter that keeps you informed about AI. Each Thursday, we aggregate the major developments in artificial intelligence; we pass along the news, useful resources, tools and services, and highlight the top research in the field as well as exciting developments in open source. Even if you aren’t an engineer, we’ll keep you in touch with what’s going on in AI.

NEWS & OPINION

-------------------------

Elon Musk is suing OpenAI and CEO Sam Altman. The Tesla founder has accused OpenAI of violating the company's original mission statement by putting profits over benefiting humanity. In the lawsuit, Musk alleges breach of contract and fiduciary duty (among other claims) against OpenAI, Altman and OpenAI President Greg Brockman.

Fundamentally, Musk accuses the group of pretending to run a nonprofit designed to benefit humanity while actually running a traditional for profit tech company. It’s not an unfair criticism of OpenAI’s situation, and the company’s success is putting a big target on its back of late.

However, even aside from major loopholes in the lawsuit (the contract or agreement in question is not attached to the lawsuit; rather, Musk’s lawyers claim it exists in memoriam through e-mail exchanges), internal e-mail documents posted by OpenAI between Musk and other founding members show Musk not only agreeing with the company’s for profit shift, but attempting to take control of the company and merge OpenAI with Tesla.

The lawsuit will bring more attention to the extent of OpenAI’s abandonment of their original mission, but mostly reads like Musk’s personal sour grapes. Elon Musk left OpenAI in February of 2018, citing a 0% success probability.

-------------------------

Anthropic’s ChatGPT competitor Claude has been upgraded with the Claude 3 model family, a trio of models designed to cater to a wide spectrum of cognitive tasks. The version 3 upgrade includes: the Haiku model for speed and cost efficiency, Sonnet for a balanced approach, and Opus for maximum intelligence and reasoning power. Opus and Sonnet are accessible now in 159 countries through both the Claude platform and API, with Haiku set to join them shortly.

Here’s the breakdown:

  • The models are multimodal, handling a wide range of visual formats, including photos, charts, graphs, and technical diagrams. All three models perform very competitively if not better than GPT-4V and Gemini vision models.

  • Haiku is almost exclusively aimed as businesses as opposed to everyday users, and will only be available through API at $0.25 per million input tokens and $1.25 per million output tokens. The model breaks new ground in processing speed, capable of analyzing dense research papers with charts and graphs up to 10,000 tokens long in under three seconds.

  • Sonnet is available free to use with the default version of Claude, and we consider it to be the best and most capable free LLM/chatbot on the market - much better than the free version of ChatGPT (version 3.5). The API pricing is $3 per million input tokens and $15 per million output tokens.

  • Opus, the most intelligent model, outperforms its peers including GPT-4 and Gemini Ultra on most of the common evaluation benchmarks. Opus has doubled accuracy relative to Claude 2 (which was quite good already) in answering complex, open-ended questions. This model is state of the art, and we encourage you to view this YouTube video for a balanced analysis of its capabilities. It’s available through a $20/month Claude Pro subscription and the API costs $15 per million input tokens and $75 per million output tokens.

  • Significant improvements have been made in reducing refusals to comply with user prompts, with the new models showing a better understanding of their own guardrail limitations.

  • All three initially offer a 200K token context window, with expansion to 1 million tokens on the horizon.

-------------------------

Google has announced major changes to combat AI-generated spam and low-quality content flooding its search results. The search company is revamping its spam policies to reduce scaled content abuse where bad actors publish massive amounts of AI-generated articles and content specifically designed to game search rankings. Google will crack down on ‘domain squatting’ - where previously reputable websites are purchased to host AI-generated spam content, as well as ‘reputation abuse’ - where trustworthy sites allow low-quality sponsored or third-party content (such as “a third-party publishing payday loan reviews on a trusted education website,” per Google’s blog post).

According to Google, the updates could reduce unoriginal & low-quality content in search results by 40 percent. While most aspects will be enforced immediately, Google is giving websites 60 days to remove any reputation abuse content before that aspect is enforced. Some SEO experts are optimistic the changes could help restore Google's search quality, which has suffered the effects of AI-generated spam. Google states it has been working on these updates since late 2022 in response to growing issues around AI misuse degrading search results.

MORE IN AI THIS WEEK

Artificial Intelligence online short course from MIT

Study artificial intelligence and gain the knowledge to support its integration into your organization. If you're looking to gain a competitive edge in today's business world, then this artificial intelligence online course may be the perfect option for you.

  • Key AI management and leadership insights to support informed, strategic decision making.

  • A practical grounding in AI and its business applications, helping you to transform your organization into a future-forward business.

  • A road map for the strategic implementation of AI technologies in a business context.

TRENDING AI TOOLS & SERVICES

  • Zapier Central: an experimental AI workspace where you can teach bots to work across 6,000+ apps

  • Qualia: generate a series of AI images, as easy as using Google image search

  • Dora AI: Generating powerful websites, one prompt at a time - text-to-website beta waitlist now live

  • Or if you can’t wait for Dora AI, Wix AI website builder: create a unique, business-ready website in seconds

  • Microsoft Copilot for Finance: (preview) accelerate time to business impact for finance professionals. Copilot surfaces insights that reduce the time spent on manual, repetitive work.

  • Circleback: unbelievably good meeting notes, actions, and automations. Automatically updates HubSpot, Notion, and more

  • Vercel: AI SDK 3.0 with Generative UI support update - move beyond plaintext and markdown chatbots to give LLMs rich, component-based interfaces

  • Neosync: Synthetic test data for devs

GUIDES, LISTS, UPDATES, INFO

VIDEOS, SOCIAL MEDIA & PODCASTS

  • The new, smartest AI? Claude 3 – tested vs Gemini 1.5 + GPT-4 [YouTube]

  • Google DeepMind CEO Demis Hassabis: scaling, superhuman AIs, AlphaZero atop LLMs, rogue nations threat [YouTube]

  • ChatGPT can now read responses to you [X]

  • Claude 3's retrieval ability over long content is so good that if you provide structured data, it essentially acts like a fine-tune [X]

  • (Discussion) Interesting details about Elon Musk's lawsuit against 8 OpenAI companies [Reddit]

  • (Discussion) Ever feel "Why am I doing this, when this'll be obsolete when AGI hits?" [Reddit]

  • Open source AI is AI we can trust - with Soumith Chintala of Meta AI [Podcast]

TECHNICAL, RESEARCH & OPEN SOURCE

-------------------------

Cloudflare is rolling out its latest security offering to safeguard LLMs from potential abuses like prompt injection, disclosing sensitive information, and data poisoning. As LLMs increasingly begin to power internet-connected applications, a need for specialized protection against both traditional cyber threats and new vulnerabilities unique to LLMs has arisen.

Firewall for AI’s features like Rate Limiting and Sensitive Data Detection, plus a newly developed validation layer for prompt analysis, Cloudflare's approach is more comprehensive than traditional safety guardrails and tailored to the specific challenges of LLM applications. The Firewall is engineered with a deep understanding of the unique LLM vulnerabilities including non-deterministic operation modes and the integration of training data within the models (which traditional security mechanisms may not effectively address).

Cloudflare's solution is not just limited to a single type of LLM deployment. It covers internal, public, and product LLMs, ensuring a wide range of applications -from corporate assets to customer-facing products - are shielded from cyber threats.

-------------------------

The rise of AI is shifting the balance of labor in software development from writing code to testing and quality assurance. Traditionally, the "Inner Loop" of designing, coding, and debugging was the primary focus, but now the "Outer Loop" of testing, security, reliability, and quality assurance is growing in importance. While AI assistants like GitHub's Copilot can significantly boost coding productivity, they also frequently introduce more bugs and design flaws, increasing the workload on the Outer Loop tasks of testing and quality control.

To adapt, software teams are needing to prioritize automating the tedious, repetitive tasks ("toil") in the Outer Loop. This includes adopting tools like continuous integration/continuous delivery (CI/CD) for automated testing and deployment, cloud cost management, and internal developer platforms. Additionally, companies will have to start building/expanding talent for roles like QA engineers and reliability engineers, which are likely to be in high demand as the Outer Loop expands with more automated code generation.

MORE IN T/R/OS

That’s it for this week! We’ll see you next Thursday.