• WeeklyDispatch.AI

New York Times prohibits its content from being used to train AI

Plus: The Australian news outlet that writes 3,000 stories a week with AI

Welcome to The Dispatch! We are the newsletter that keeps you informed about AI. Each weekday, we scour the web to aggregate the many stories related to artificial intelligence; we pass along the news, useful resources, tools or services, guides, technical analysis and exciting developments in open source.

In today’s Dispatch:

  • Alphabet’s Waymo and GM’s Cruise have received approval to operate paid autonomous taxi services day and night throughout San Francisco, marking a pivotal moment in the autonomous vehicle industry. Over 500 vehicles are already in use, with more on the way.

  • News Corp Australia, a ‘global media and information services company’, is leveraging AI to create up to 3,000 weekly articles. With the oversight of four staff members, AI crafts articles on various topics including daily local essentials like weather, fuel prices, and traffic. The company is facing scrutiny for not disclosing the nature of these articles’ creation.

  • (Technical) With the current worldwide GPU shortage and a burgeoning AI sector, it’s never been more important to find ways to maximize GPU performance – at least in the short term.

Plus: AI ‘godfather’ says analog computers could be viable AI safety precaution, a Forbes opinion piece on the importance of open source development in AI, trending tools and more!

Refusing to comply with new restrictions could result in unspecified fines or penalties.
Photo by Kena Betancur/VIEWpress

From The Verge: The New York Times has modified its Terms of Service to restrict its content from being used to train AI models. This change encompasses most forms of content including text, images, and videos, among others. Additionally, the use of automated tools like web-crawlers to access or collect the NYT's content now requires written permission.

More details:

  • The NYT's updated terms explicitly mention that content cannot be used for the development of any software, especially for training machine learning or AI systems.

  • Interestingly, as of the announcement the publication had not yet updated its robots.txt file, which tells web crawlers which parts of a site they may access.

  • Google has recently updated its policy, permitting the use of public web data to train its AI services. Meanwhile, NYT agreed to a $100 million deal with Google back in February, enabling the tech giant to showcase Times content across its platforms.

  • The NYT has also exited from a media coalition that was in talks with tech companies regarding AI training data. This suggests future collaborations might be negotiated individually.

  • OpenAI and Microsoft have made related updates to their own service terms, with OpenAI allowing site operators to block its web crawler in future training, and Microsoft introducing restrictions on using its AI tools for other services.
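For site operators, opting out works through the same robots.txt mechanism the NYT has yet to update. A minimal sketch of what such a file might look like – GPTBot is the crawler user-agent OpenAI has documented for this purpose; the rest is illustrative:

```
# Block OpenAI's GPTBot from the entire site
User-agent: GPTBot
Disallow: /

# Leave access unchanged for all other crawlers
User-agent: *
Allow: /
```

Note that robots.txt is advisory: it only works for crawlers that choose to honor it, which is why publishers like the NYT are also adding explicit prohibitions to their terms of service.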

Takeaways: The tug-of-war between media companies and tech giants is intensifying as concerns grow over the ethical use of content to train AI. The NYT's move underscores a broader trend where original content creators are seeking more control over how their work is used in the evolving AI landscape. Neither the NYT nor Google has said publicly whether their February deal covers AI training. The Associated Press, by contrast, has already partnered with OpenAI.

British-Canadian cognitive psychologist and computer scientist Geoffrey Hinton
Photo by Geoff Robins/Getty Images

From WIRED: Geoffrey Hinton, acclaimed as the "Godfather of AI," left Google largely so that he could openly discuss the risks of artificial intelligence. Hinton believes that transitioning from digital to analog computers is one possible way to ensure that AI remains more aligned with human intentions.

More details:

  • After witnessing the capabilities of large language models such as OpenAI's ChatGPT, Hinton became concerned about AI's potential dangers.

  • He anticipates a 50% chance that AI will surpass human intelligence within the next 5 to 20 years. Additionally, such an AI might not even reveal its full capabilities to humans.

  • Hinton challenges the notion that chatbots lack real understanding, arguing that predicting the next word requires a genuine grasp of the context.

  • Emphasizing the potential for AI systems to comprehend the world, mimic human behavior, and process vast amounts of information, Hinton suggests a pivot to analog computing. Unlike digital systems, whose weights can be copied exactly and run identically on many machines, each analog system is physically unique – a model cannot be cloned and synchronized, reducing the risk of a hive-mind intelligence.

Takeaways: While AI's growing capabilities are impressive, its unchecked progression might lead to unforeseen challenges. Hinton's suggestion of analog computing underscores a broader point: the need to be innovative not just in developing AI, but also in setting boundaries for it. His emphasis on the uniqueness of analog systems implies a vision of AI that is powerful yet individualistic - reducing the risks of collective, unintended behaviors.

The release of multiple open-source options paves the way for companies to create their own AI implementations and applications.

Forbes • Kevin Korte

Bill Gates’ vision of a personal AI is coming. That future will disrupt SEO and e-commerce, requiring marketers and creators to move beyond optimizing for traditional search engines and start optimizing for AI.

VentureBeat • Sharon Goldman

More News & Opinion:

From our sponsors, SocialBee:

Smart & affordable social media management

Keep your audience engaged and your social media profiles active with SocialBee.

SocialBee posts for you without skipping a beat – when you’re away, sleeping, or on vacation.

There are other ways to raise performance that should be considered and are becoming increasingly necessary amid low supply and high costs.

Enterprise AI • Steve Lanigan

Their system is already being used by OpenAI and will be used in testing for the White House hackathon. The waitlist for early access is open now.

Scale AI • The Scale Team

More Open Source & Technical:

Social media/Video/Podcast:

  • PromptOps: How Generative AI Can Help DevOps [Podcast]

  • NVIDIA’s New AI Is Gaming With Style! [YouTube]

  • (Discussion) AI performance on benchmarks relative to human performance [Reddit]

  • Nvidia releases Neuralangelo (2D video → 3D object) source code [X]

  • How to install MetaGPT - a new open source project that aims to recreate an entire engineering organization using AI [YouTube]

Did you know? 

The US government continues to integrate with AI: the Department of Defense announced a new generative AI task force. ‘Task Force Lima’ will analyze, integrate, and implement generative AI tools across the DoD to bolster national security, reduce risks, and ‘harness AI innovation responsibly and strategically’.

Additionally, Congress members were given a crash course by professors from Stanford on the benefits and risks of AI. The curriculum covered AI’s potential to reshape education and health care, a primer on deepfakes, as well as a crisis simulation where participants had to use AI to respond to a national security threat in Taiwan.

Trending AI Tools & Services:

Have a great Tuesday! We’ll be back tomorrow.

Some people think, hey, there's this ultimate barrier, which is we have subjective experience and [robots] don't, so we truly understand things and they don’t. That's just bullsh*t.

Geoffrey Hinton, August 2023