Meta’s Llama 3.1, xAI’s AI training cluster, Introducing: Visual HayStacks

The future of AI is open.

Hello, Starters!

AI keeps growing, and so does its influence. Many tech leaders agree on the one thing that will keep the innovation coming... and that is open source.

Here’s what you’ll find today:

  • Meta unveils Llama 3.1

  • Elon Musk announces xAI’s Supercluster

  • Visual HayStacks: A new benchmark

  • Adobe’s latest AI tools

  • Mark Zuckerberg’s thoughts on open source

  • And more.

🦙 Meta unveils Llama 3.1 (15 min)

Meta is turning the world of open source upside down with the release of Llama 3.1. The release not only upgrades its 8B and 70B models but also introduces a 405B version, which Meta describes as the first frontier-level open-source AI model, with capabilities almost on par with GPT-4o and Claude 3.5 Sonnet.

Open source is a key part of AI progress. By granting users access to powerful systems like Llama 3.1 405B, Meta is betting on a steady stream of innovation, since decentralising AI technology opens up new development possibilities.

🚀 Elon Musk announces xAI’s Supercluster

Memphis, Tennessee, is the site of what could be the next big AI breakthrough. xAI CEO Elon Musk announced on X that the company has started training on the Memphis Supercluster, which he calls the "most powerful AI training cluster" in the world. Musk's main goal for the cluster is to train "the world's most powerful AI" by December of this year.

Nvidia is on board as well: the cluster began training with 100,000 liquid-cooled H100 GPUs on a single RDMA (Remote Direct Memory Access) fabric. Only time will tell whether xAI can pull off this ambitious feat.

🔍 Visual HayStacks: A new benchmark

A study from Berkeley Artificial Intelligence Research (BAIR) at UC Berkeley has found that most models fall short when it comes to filtering relevant information out of large image sets. To address this weakness, the researchers developed "Visual Haystacks," a benchmark for evaluating how well an AI model performs when processing extensive collections of images.

The benchmark draws inspiration from the "needle in a haystack" idiom and is built around two tasks: a "single-needle" challenge and a "multi-needle" challenge. The "needle" is the relevant image or images the model should focus on.
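If you're curious what that looks like in practice, here's a minimal, hypothetical sketch of an evaluation loop in that single-/multi-needle style. The HaystackExample fields, the evaluate() helper, the file names, and the dummy model below are all made up for illustration; the real benchmark ships its own data and tooling.

```python
# Sketch of a "needle in a haystack" image-QA evaluation loop (illustrative only).
# model_answer() stands in for whatever multimodal model you want to test.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class HaystackExample:
    images: List[str]          # paths to the full image set (the "haystack")
    needle_indices: List[int]  # which images actually matter (one for single-needle)
    question: str              # yes/no question about the needle image(s)
    answer: str                # ground-truth "yes" or "no"

def evaluate(model_answer: Callable[[List[str], str], str],
             examples: List[HaystackExample]) -> float:
    """Return the model's accuracy over single- or multi-needle examples."""
    correct = 0
    for ex in examples:
        prediction = model_answer(ex.images, ex.question)
        correct += int(prediction.strip().lower() == ex.answer.lower())
    return correct / len(examples)

if __name__ == "__main__":
    # Toy run with a dummy model that always answers "yes".
    toy = [HaystackExample(
        images=["img_0.jpg", "img_1.jpg", "img_2.jpg"],
        needle_indices=[1],
        question="For the image with a dog, is there a frisbee?",
        answer="yes")]
    print(evaluate(lambda imgs, q: "yes", toy))
```

The point of the setup is that accuracy tends to drop as the haystack grows, which is exactly the weakness the benchmark is designed to expose.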

FREE AI & ChatGPT Masterclass to automate 50% of your workflow

More than 300 million people use AI across the globe, but only the top 1% know which tools to use for which use cases.

Join this free masterclass and learn the 25 most useful AI tools on the internet, all for $0 (there are only 100 free seats!).

This masterclass will teach you how to:

  • Build business strategies & solve problems like a pro

  • Write content for emails, socials & more in minutes

  • Build AI assistants & custom bots in minutes

  • Research 10x faster, do more in less time & make your life easier

You’ll wish you knew about this FREE AI masterclass sooner 😉

🎨 Continuing its push to infuse AI into most of its applications, Adobe has announced new Firefly tools for Photoshop and Illustrator. The features focus on image generation, letting users create textures or images from prompts alone. Adobe claims to have taken the necessary steps to maintain a mindful approach to AI.

🌐 Following the launch of Meta's Llama 3.1, Mark Zuckerberg shared his thoughts on open source. He believes it will become the industry standard and lead to major developments, describing it as a successful approach that Meta plans to stick with in the long run and as a positive platform for developers and the AI community.

What did you think of today's newsletter?

Thank you for reading!