LLama gets smaller, Microsoft’s quantum breakthrough, Apple’s latest deal

Compact models are trending.

Hello, Starters!

Model updates land every day: some models get bigger, others get smaller; some are more finely tuned, others simply faster. Yes, it can be confusing, but don't worry; we'll guide you through it.

Here’s what you’ll find today:

  • Meta is releasing smaller versions of Llama 3

  • Microsoft has cracked a quantum problem

  • Apple seals AI deal with Shutterstock

  • Karpathy’s latest repository

  • Stability unveils Stable LM 2 12B

  • And more.

Following the trend towards smaller models, and offering a glimpse of the upcoming Llama 3, sources close to Meta claim that the company will release two compact versions of the model next week.

According to reports, these models will be aimed at text generation. While there aren't many details about the full Llama 3, it is expected to be unveiled in the summer with multimodal capabilities and hopes of surpassing rivals such as OpenAI's GPT-4.

Quantum computing promises to bring many technological advancements to life, which is why the major players keep pouring effort into making it practical. Recently, Microsoft and Quantinuum made strides in reducing error rates and enhancing the performance of qubits, the basic units of quantum computing.

This is a significant step: individual qubits are fragile and error-prone, so combining several physical qubits into a single, more stable logical qubit makes computations far more reliable, ultimately bringing practical quantum computing closer. Such progress could have a significant impact on the development of AI. For a feel of why redundancy helps, see the toy sketch below.
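As a purely classical analogy (our illustration, not the error-correction scheme Microsoft and Quantinuum actually used), here is a 3-bit repetition code with majority-vote decoding. It shows the core idea: several unreliable physical units can act as one more reliable logical unit.

```python
import random

def logical_error_rate(p, trials=100_000):
    """Estimate how often a 3-bit repetition code fails when each
    physical bit flips independently with probability p."""
    failures = 0
    for _ in range(trials):
        # Count how many of the three redundant copies got corrupted.
        flips = sum(random.random() < p for _ in range(3))
        # Majority vote decodes wrongly only if 2 or 3 copies flipped.
        if flips >= 2:
            failures += 1
    return failures / trials

for p in (0.1, 0.01):
    print(f"physical error {p:>5} -> logical error ~{logical_error_rate(p):.4f}")
```

At a 1% physical error rate, the logical error rate drops to roughly 0.03%. Quantum error correction chases the same trade-off in a far harder setting, since qubits can't simply be copied.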

As we move closer to June, details of Apple's approach to AI are beginning to emerge, the latest being a deal worth between $25 million and $50 million with the stock photography platform Shutterstock.

Apple is cautious about its moves, so very little is known about its progress. It's unclear how many models it will launch or how it plans to integrate them into its products. However, it's evident that the company is seeking valuable data it can use to train them ethically.

💻Andrej Karpathy recently published a GitHub repository, llm.c, for training LLMs in pure C, without the heavyweight PyTorch and CPython dependencies. Using GPT-2 as a starting point, Karpathy shows how roughly 1,000 lines of code can produce similar results and run about as fast as the PyTorch version. The sketch below gives a flavour of what "no framework" means.
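llm.c spells every layer out by hand in C; as a taste of that from-scratch style, here is a layernorm forward pass with no framework at all. We've sketched it in Python for readability, so treat it as our illustration of the idea rather than Karpathy's actual code.

```python
import math

def layernorm(x, gamma, beta, eps=1e-5):
    # Normalise one activation vector: subtract its mean, divide by its
    # standard deviation, then apply the learned scale (gamma) and shift (beta).
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [gamma[i] * (x[i] - mean) / math.sqrt(var + eps) + beta[i]
            for i in range(len(x))]

# Tiny demo on a 4-dimensional activation vector.
print(layernorm([1.0, 2.0, 3.0, 4.0], gamma=[1.0] * 4, beta=[0.0] * 4))
```

Frameworks hide exactly this kind of arithmetic (plus the matching backward pass); the repository's point is that writing it out is smaller and simpler than people tend to assume.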

🤖Against all odds, Stability AI continues to release enhanced versions of its language models. The latest addition, Stable LM 2 12B, weighs in at 12 billion parameters and reportedly beats far larger models, such as Llama 2 70B, on certain benchmarks.
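If you want to try it yourself, a minimal sketch with Hugging Face transformers might look like the following. We're assuming the checkpoint is published under an id like stabilityai/stablelm-2-12b; check Stability's model card for the exact name, licence terms, and whether your transformers version needs trust_remote_code.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model id; confirm it on Stability AI's Hugging Face page.
MODEL_ID = "stabilityai/stablelm-2-12b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# device_map="auto" needs the `accelerate` package; it spreads the
# 12B weights across whatever GPU/CPU memory you have.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)

inputs = tokenizer("Smaller language models are useful because",
                   return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```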

What did you think of today's newsletter?

Thank you for reading!