
Nvidia’s GTC 2024, Understanding Neural Networks, Google introduces VLOGGER

San Jose is the place to be!


Hello, Starters!

The spotlight is on Nvidia today, following the start of their GTC conference yesterday. We're witnessing the unveiling of hardware that will power AI in the future. Exciting times!

Here’s what you’ll find today:

  • GTC 2024: The conference for the Era of AI

  • How do neural networks learn?

  • Google announces VLOGGER

  • YouTube sets AI rules

  • Sam Altman hints at a new AI model

  • And more.

Nvidia proved yesterday why it is one of the leading companies in the current AI landscape, kicking off its GTC conference with a keynote from CEO Jensen Huang that gave attendees a glimpse of what the company has in store for the future, including chips, software, and even robots.

Among the main announcements, Blackwell stood out: a new generation of GPUs that includes a chip named GB200, set to ship later this year. In Huang's words, "we need bigger GPUs." The company also introduced NIM, a software platform for easily deploying pre-trained AI models, and joined the robotics trend with the GR00T project, a general-purpose foundation model for humanoid robots.

Neural networks are a key part of AI, and as the push for AI democratisation grows, understanding how these systems actually work gives us a better sense of how to build machine learning models that adapt to our needs and are easier to interpret.

A group of scientists from the University of California San Diego has found a mathematical formula that explains how these networks learn.

The formula, drawn from statistical analysis, shows how these systems learn from examples and then make predictions, letting us pinpoint which patterns in the training data a network relies on when making decisions.
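The newsletter doesn't name the formula itself, so here is only a generic sketch of this kind of analysis: a tiny network is trained on synthetic data where just the first two input features matter, and then the prediction gradients are averaged into an outer-product matrix whose large diagonal entries point to the features the trained model actually relies on. The network, data, and statistic below are illustrative assumptions, not the researchers' published method.

```python
# Illustrative sketch only (not the UCSD researchers' formula): train a tiny network,
# then use an averaged gradient statistic to see which input features it relies on.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic task: 10 input features, but the label depends only on features 0 and 1.
X = rng.normal(size=(2000, 10))
y = (X[:, 0] - X[:, 1] > 0).astype(float).reshape(-1, 1)

# One-hidden-layer network: tanh hidden units, sigmoid output.
W1 = rng.normal(scale=0.5, size=(10, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 1));  b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return h, p

# Full-batch gradient descent on the binary cross-entropy loss.
lr = 1.0
for step in range(3000):
    h, p = forward(X)
    d_logit = (p - y) / len(X)             # d(loss)/d(output logit)
    dW2 = h.T @ d_logit;  db2 = d_logit.sum(axis=0)
    d_pre = d_logit @ W2.T * (1 - h**2)    # back-propagate through tanh
    dW1 = X.T @ d_pre;    db1 = d_pre.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Average the outer product of the prediction's input-gradient over all examples.
# Large diagonal entries mark the input features the learned function is most sensitive to.
h, p = forward(X)
dp_dlogit = p * (1 - p)                                  # sigmoid derivative
grads = dp_dlogit * (((1 - h**2) * W2.T) @ W1.T)         # dp/dx for each example
avg_outer = grads.T @ grads / len(X)
print("feature sensitivity:", np.round(np.diag(avg_outer), 4))
# Expect features 0 and 1 to dominate, matching how the labels were constructed.
```

Running the sketch should show the first two diagonal entries dominating, which is the sense in which a statistic like this "pinpoints" the patterns a trained network has picked up.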

Turning still images into videos is the idea behind VLOGGER, a new AI model from Google researchers. Given a single photo of a person and an audio clip, it crafts a lifelike video of them speaking, gesturing, and moving.

By harnessing diffusion models, the researchers trained the system on a dataset of 800,000 diverse identities and 2,200 hours of video, teaching it to generate realistic footage from a simple picture. Although it raises concerns about potential deepfakes, it's also intriguing to imagine its future applications.
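For readers curious what "harnessing diffusion models" means in practice, here is a minimal, generic sketch of the diffusion training objective. Everything in it is an illustrative assumption rather than VLOGGER's actual code, architecture, or data: clean samples are mixed with Gaussian noise at a randomly chosen step, and a model learns to predict the added noise, which is what later allows generation by reversing the corruption.

```python
# Toy sketch of the diffusion training objective (illustrative only, not Google's system).
import numpy as np

rng = np.random.default_rng(0)

T = 1000                                   # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)         # noise schedule
alpha_bar = np.cumprod(1.0 - betas)        # cumulative signal-retention factor

def noisy_sample(x0, t):
    """Forward process: produce the noised version x_t of clean data x0 at step t."""
    eps = rng.normal(size=x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return x_t, eps

def training_loss(model, x0_batch):
    """One training step's objective: predict the added noise from (x_t, t)."""
    t = rng.integers(0, T)
    x_t, eps = noisy_sample(x0_batch, t)
    eps_pred = model(x_t, t)               # in a real system this is a large neural net
    return np.mean((eps_pred - eps) ** 2)  # simple noise-prediction (denoising) loss

# Example with a placeholder "model" that just guesses zero noise:
x0_batch = rng.normal(size=(8, 16))        # stand-in for image or video-frame data
print(training_loss(lambda x_t, t: np.zeros_like(x_t), x0_batch))
```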

Artificial Intelligence online short course from MIT

Study artificial intelligence and gain the knowledge to support its integration into your organization. If you're looking to gain a competitive edge in today's business world, then this artificial intelligence online course may be the perfect option for you.

  • Key AI management and leadership insights to support informed, strategic decision making.

  • A practical grounding in AI and its business applications, helping you to transform your organization into a future-forward business.

  • A road map for the strategic implementation of AI technologies in a business context.

🎥 After the surge of AI-generated content, YouTube has announced a new set of rules for creators who use this technology in their videos. Any piece of media created with AI that could mislead viewers because of its realistic appearance must be labelled as "altered or synthetic content."

🤖 Recently, on a podcast, Sam Altman talked about OpenAI's upcoming plans, hinting at an "amazing model" set to come out later this year. Although no more details were shared, many believe it could be related to the GPT-4.5 landing page that "accidentally" leaked a few days ago.

What did you think of today's newsletter?


Thank you for reading!