Intel unveils inaugural Neural Processing Unit (NPU) for AI

Plus: AI Search Engine plugin brings browsing back to ChatGPT

Welcome to The Dispatch! We are the newsletter that keeps you informed about AI. Each weekday, we scour the web to aggregate the many stories related to artificial intelligence; we pass along the news, useful resources, tools or services, technical analysis and exciting developments in open source. Even if you aren’t an engineer, we’ll keep you in touch with what’s going on under the hood in AI.

Good morning. Today in AI:

  • (Technical section) Intel unveils highly anticipated Intel Core Ultra NPU

  • Google Bard gets updates ahead of Gemini release

  • Microsoft AI research team accidentally leaks 38TB of sensitive internal data

  • After hemming and hawing, Britain officially invites China to global AI summit

  • (Social Media section) Two-minute papers covers Nvidia DLSS 3.5 & an AI roundtable with Israeli Prime Minister Benjamin Netanyahu, Tesla CEO Elon Musk, OpenAI president Greg Brockman, and renowned AI researcher Max Tegmark

  • A new plugin for ChatGPT called AI Search Engine brings browsing capabilities back (ChatGPT Plus subscribers only), optical illusions with Stable Diffusion XL, & more

Google continues to push Bard into its suite of tools and services

The story: Google has announced several new capabilities for its AI chatbot Bard, including integration with Google apps and services like Gmail, Docs, and Maps. Bard can now pull relevant information from any of these tools to provide more customized responses. Google has also improved Bard's ability to fact-check its own responses using Google Search.

More Details:

  • Bard Extensions allow Bard to show information from Google apps directly in the chat. This allows Bard to pull together relevant information from across Google tools. For example, Bard can now grab flight details from Google Flights when helping plan a trip.

  • Bard can now double-check its own responses using Google Search. Users can click on highlighted phrases to see information from the web that supports or contradicts Bard's statements.

  • The latest updates also aim to make Bard better at creative collaboration, multilingual conversation, and providing in-depth coding help. Users can now continue conversations started by others who have shared a public Bard chat link.

  • Google says this is Bard's "most capable model yet." Existing features like image uploads, search image results, and response modifications are also now available in 40+ languages, expanding access.

Takeaways: Bard has been and remains a bit of a curiosity. Launched as a rush project to compete with ChatGPT, its performance was widely panned at launch. Since then, it has been updated repeatedly to offer image recognition, improved coding capability, and continued integration into the Google ecosystem. Far from abandoning the project, Google continues to pump resources into Bard even as the company hypes the release of Gemini. It’s widely expected that Bard itself will be powered by Gemini upon the latter’s release.

Semafor interviewed Sissie Hsiao, Bard’s general manager, about the new features and her thoughts on AI tools. The release of Gemini is expected very soon.

Microsoft's AI research team accidentally exposed 38 terabytes of sensitive internal data due to a misconfigured Azure SAS token. The data, accessed through a public GitHub repository, included backups of employee workstations containing passwords, secret keys, and over 30,000 internal Microsoft Teams messages. The overly permissive SAS token granted full control of the storage account instead of read-only access to the intended AI models.

Using SAS tokens for sharing data is a potential security risk, since they lack monitoring and are difficult to revoke. Companies adopting AI more broadly will have to ensure proper security controls govern researchers' access to sensitive data sets. With massive data now flowing through developers' hands, new risks emerge that require updated policies and safeguards. As this event shows, even basic misconfigurations around data access can lead to major leaks at scale.
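The remedy for this class of leak is to scope SAS tokens narrowly. As a minimal sketch (the account and container names below are placeholders, not details from the incident), the Azure CLI can mint a read-only, short-lived service SAS for a single container rather than a token granting full control of the storage account:

```shell
# Generate a short-lived, read-only SAS for one container.
# Account/container names are placeholders; set --expiry per your policy.
az storage container generate-sas \
  --account-name researchstorage \
  --name model-artifacts \
  --permissions r \
  --expiry 2023-10-01T00:00Z \
  --https-only \
  --auth-mode key
```

Restricting the permission to `r` (read) and setting a near-term `--expiry` limits the blast radius if the token leaks; a full-control account SAS, like the one exposed here, has neither safeguard.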

A Belgian startup has enlisted bees to gather environmental data. From the pollen the bees collect, BeeOdiversity can identify over 500 pesticides and heavy metals, as well as catalog local plant life. With Microsoft’s help, BeeOdiversity developed BeeOimpact, which uses AI to extrapolate the bees' findings to much larger areas. This data helps to assess biodiversity and pollution impacts.

Water utility companies, farms, and even the world’s largest brewer Anheuser-Busch are using the data to monitor water sources, restore native plants, and reduce pesticide use. For example, an Oregon farm implemented wetlands filtration and organic growing practices based on the bees' data revelations. The interplay of biology, technology, and ecology may provide a model for restoring a bit of balance between human activity and the natural world.

From our sponsors:

Marketing tactics based on science

3-min marketing recommendations based on the latest scientific research from top business schools.

Intel Core Ultra processor, code-named ‘Meteor Lake’. Image: Intel

The story: At Intel Innovation 2023, the company confirmed its five-nodes-in-four-years plan remains on track, and unveiled its new Core Ultra processors featuring an integrated neural processing unit (NPU) designed to bring more AI capabilities to PCs. The chips will enable new experiences like enhanced voice commands and real-time video editing that rely on efficient on-device AI processing. Intel confirmed the processors will launch on December 14th.

More details:

  • The NPU is tailored for workloads like voice recognition and background noise cancellation that were previously dependent on the CPU or cloud computing. This allows for lower latency and enhanced privacy by keeping data processing on-device.

  • The NPU inclusion represents a major evolution in client processors. Combined with advancements like the Intel 4 process technology and Arc discrete-level graphics, the Core Ultra promises substantial improvements in AI, power efficiency, and graphics performance.

  • The specialized NPU design is optimized for neural network computations like matrix multiplications. This makes it more efficient than general CPUs/GPUs for AI workloads in terms of speed, power consumption, and real-time responsiveness.

  • The launch positions Intel as a leader in bringing more advanced AI capabilities natively to PCs and laptops, without reliance on connectivity or the cloud. This has major implications for democratizing on-device AI (the Qualcomm blog has more on why that matters).
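To make the matrix-multiplication point concrete: the core arithmetic an NPU accelerates is the multiply-accumulate loop of a dense neural-network layer. A toy pure-Python sketch (illustrative only; a real NPU runs these loops as massively parallel, low-precision hardware operations):

```python
# A dense (fully connected) neural-network layer reduced to its core
# operation: a matrix-vector product plus a bias. NPUs accelerate
# exactly this multiply-accumulate pattern, repeated millions of
# times per inference.

def dense_layer(weights, bias, x):
    """Compute y = W @ x + b using plain Python loops."""
    return [
        sum(w * xi for w, xi in zip(row, x)) + b
        for row, b in zip(weights, bias)
    ]

# Toy layer: 2 outputs from 3 inputs (made-up numbers).
W = [[1.0, 0.0, 2.0],
     [0.5, 1.0, 0.0]]
b = [0.1, -0.1]
x = [1.0, 2.0, 3.0]

print(dense_layer(W, b, x))  # → approximately [7.1, 2.4]
```

A CPU executes those multiply-accumulates a handful at a time; an NPU dedicates silicon to doing thousands of them per cycle, which is where the speed and power-efficiency gains come from.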

Takeaways: Believe it or not, Intel is one of the few companies left that both designs and produces advanced AI chips at high volumes domestically. Most tech giants, along with AMD, Qualcomm, and Nvidia, rely heavily on outsourced manufacturing (primarily in Taiwan through TSMC; Amazon’s Inferentia chip is manufactured in Israel). Given the important geopolitical factors around the semiconductor industry, Intel is expected to be the largest recipient of the roughly $52B in semiconductor subsidies from the 2022 CHIPS Act and figures to play heavily into how the ‘AI chip war’ plays out globally.

If Meteor Lake is successful, it will be a huge boost for the US effort to maintain AI sovereignty going forward. The US is, perhaps dangerously, relying heavily on TSMC to manufacture its advanced chips.

Nvidia and Anyscale have announced a collaboration to integrate Nvidia's AI software into the Anyscale Ray unified computing framework. This will enable developers using Ray and Anyscale to more easily build, tune, train, and scale large language models (LLMs) for generative AI applications.

Key integrations include Nvidia TensorRT-LLM for optimized LLM inference, Triton Inference Server for model deployment, and NeMo for customizing and fine-tuning LLMs. Developers can use these tools with the open source Ray framework or the fully managed Anyscale Platform. The platform runs on accelerated computing from leading clouds, with the option to run on hybrid or multi-cloud computing. Nvidia AI integrations with Anyscale are in development and expected to be available by the end of the year. Nvidia is also offering a 90-day free evaluation of their AI Enterprise solution.

Trending AI Tools & Services:

  • (ChatGPT Plugin) AI Search Engine: search and browse the web in ChatGPT

  • DialMe: unlock the dialogue you've been missing - DialMe's AI interviewers get users talking and you learning.

  • Pentest Copilot: the ultimate ethical hacking assistant - the copilot uses context to give directed results; from analyzing web apps to root shells, it's got you covered

  • Dualite: a Figma plugin to convert dynamic designs into responsive code

  • CodeWiz: solve your coding challenges - no more wasting time searching for answers on Stack Overflow or other forums

  • Klu: seamlessly searches, engages with and understands data across apps like Gmail, Notion, Drive, Trello, Slack, and more

Guides/useful/lists:

Social media/video/podcast:

  • Doing it the hard way: making the AI engine and language of the future with Chris Lattner of Modular [Podcast]

  • (Discussion) Why aren’t more people using Bing AI? [Reddit]

  • AI Roundtable: Israeli PM Netanyahu, Elon Musk, Greg Brockman, Max Tegmark [X]

  • Unveiling the secret: how Gamma gained 3 million users in just 3 months [YouTube]

  • NVIDIA’s DLSS 3.5: This should be impossible! [YouTube]

Did you know? 

At Innovation 2023, Intel also unveiled glass substrates for advanced chip packaging to keep Moore’s Law advancing. For decades, Moore's Law has predicted that the number of transistors packed onto a chip will double every couple of years. This has been enabled by shrinking the size of transistors - but we are reaching atomic-scale limits on further transistor shrinkage. Intel's solution is to use larger glass substrates inside chip packages to fit more chips and interconnects.

Compared to conventional organic substrates (usually epoxy-based resins), glass offers advantages like greater thermal stability, higher interconnect density, and the ability to withstand higher temperatures during manufacturing. Intel sees glass substrates enabling a 10x increase in interconnect density versus organic substrates; they’re on track to introduce them by the late 2020s, ensuring Moore's Law advances for at least another decade. The initial applications will focus on data centers, AI, and graphics workloads requiring high-density, high-performance packages.

Corning, SK Group, and a few other companies are also developing glass substrates for semiconductors. There is currently a chip packaging shortage for advanced chips.

Well, AI is scarily powerful! And India, as the world’s fastest-growing economy, acknowledges its pivotal role in this revolution. With the world’s largest pool of skilled AI professionals, we are poised to lead the way. By 2035, AI has the potential to contribute an astounding one trillion dollars to India’s economy.

India’s UN ambassador Ruchira Kamboj, September 2023