AI for cancer treatment, Meta’s new models, Introducing: DeepSeek-Coder-V2

GPT-4o improves cancer research.

Hello, Starters!

It is no exaggeration to say that AI is set to change lives, especially in healthcare. As the technology advances, ever more novel applications will help both doctors and patients.

Here’s what you’ll find today:

  • OpenAI teams up with Color Health

  • Meta’s new AI models

  • DeepSeek-AI unveils DeepSeek-Coder-V2

  • Apple’s AI courses for developers

  • Anthropic’s insights into “Specification Gaming”

  • And more.

Cancer changes the lives of millions every year, and AI technologies may be a great aid in advancing its research and finding better treatments. Color Health, a genetic testing company, is aware of these possibilities and is harnessing OpenAI's GPT-4o to build a copilot application that improves diagnoses, drafts treatment plans, and helps doctors make better decisions about patient care.

With OpenAI's API, they're able to mix the patient's medical data with clinical knowledge, which results in a customised treatment plan that healthcare providers can refine accordingly. The application can also help find missing diagnostics, determine necessary screenings, and identify other relevant steps that can aid in the patient's journey through the illness.
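For a flavour of what such a copilot might look like under the hood, here's a minimal sketch using OpenAI's Python SDK. The prompt, patient fields, and guideline text are our own illustrative assumptions, not Color's actual system:

```python
# A minimal sketch (not Color's actual system) of how a copilot might
# combine patient context with clinical guidelines via the OpenAI API.
# The patient summary and guideline excerpt below are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

patient_summary = (
    "58-year-old patient, stage II colorectal cancer, "
    "family history of Lynch syndrome, no prior chemotherapy."
)
guidelines_excerpt = (
    "Stage II colorectal cancer: consider adjuvant therapy when high-risk "
    "features are present; screen for mismatch-repair deficiency."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a clinical decision-support assistant. "
                    "Draft a screening and treatment checklist for a "
                    "physician to review; do not give final medical advice."},
        {"role": "user",
         "content": f"Patient:\n{patient_summary}\n\n"
                    f"Guidelines:\n{guidelines_excerpt}\n\n"
                    "List any missing diagnostics and recommended screenings."},
    ],
)
print(response.choices[0].message.content)
```

The key design point is keeping the model in a drafting role: its output is a checklist for a clinician to refine, matching how Color describes healthcare providers reviewing the generated plans.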

Meta understands the value of open source for AI development, and like many others in the field, they're giving developers access to their latest research. Their Fundamental AI Research (FAIR) team has unveiled a series of models focused on audio generation, watermarking, multi-token prediction, and more.

The announcements include the Chameleon 7B and 34B models, which support mixed inputs. Also, they've released JASCO, an audio generation model, and AudioSeal, which works as a watermarking model to detect AI-generated speech. Lastly, they've unveiled pre-trained models for coding that use multi-token prediction. According to Meta, this approach democratises access to AI and further enhances innovation, opening up opportunities beyond big tech companies.
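Multi-token prediction is worth a closer look: instead of predicting only the next token, the model learns to predict several upcoming tokens at once. Here's a toy PyTorch sketch of the idea (illustrative only, not Meta's released code):

```python
# A toy sketch of multi-token prediction: a shared trunk feeds n
# independent heads, each predicting the token i steps ahead, so one
# forward pass yields several future tokens. Not Meta's implementation.
import torch
import torch.nn as nn

class MultiTokenPredictor(nn.Module):
    def __init__(self, vocab_size=1000, d_model=128, n_future=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.trunk = nn.GRU(d_model, d_model, batch_first=True)
        # One output head per future offset (t+1, t+2, ..., t+n_future).
        self.heads = nn.ModuleList(
            nn.Linear(d_model, vocab_size) for _ in range(n_future)
        )

    def forward(self, tokens):                 # tokens: (batch, seq)
        h, _ = self.trunk(self.embed(tokens))  # (batch, seq, d_model)
        last = h[:, -1]                        # hidden state at final position
        # Each head emits logits for a different future token.
        return [head(last) for head in self.heads]

model = MultiTokenPredictor()
logits = model(torch.randint(0, 1000, (2, 16)))
print([l.shape for l in logits])  # four tensors of shape (2, 1000)
```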

DeepSeek-AI has recently introduced DeepSeek-Coder-V2, an open-source language model with a powerful combination in its training data: 60% source code, 10% mathematical data, and 30% natural language. With its Mixture-of-Experts architecture and a 128,000-token context window, it's a strong contender among today's leading models.

DeepSeek-Coder-V2 supports 338 programming languages and has been trained on 10.2 trillion tokens. Benchmark results are impressive, rivalling closed-source models such as Claude, GPT-4, and Gemini.
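For readers curious about the Mixture-of-Experts architecture mentioned above, here's a minimal PyTorch sketch of the core idea: a gating network routes each token to its top-k expert networks, so only a fraction of the parameters run per token. This is illustrative only, not DeepSeek's implementation:

```python
# A minimal Mixture-of-Experts layer (illustrative, not DeepSeek's code):
# the gate scores each token, the top-k experts process it, and their
# outputs are combined with the softmax-normalised gate weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.gate = nn.Linear(d_model, n_experts)
        self.top_k = top_k

    def forward(self, x):                          # x: (tokens, d_model)
        scores = self.gate(x)                      # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e           # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

layer = MoELayer()
print(layer(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```

With top_k=2 of 8 experts, each token touches only a quarter of the expert parameters, which is how MoE models keep inference cheap relative to their total parameter count.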

🍎Apple is extending its AI push into education: starting this fall, all students and mentors at the Apple Developer Academy will take a new course on the fundamentals of AI. They'll also learn how to build, train, and deploy machine learning models optimised for Apple devices from scratch.
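As a taste of that workflow, here's a hedged sketch of training a tiny PyTorch model and converting it for on-device use with Apple's coremltools package. The model and file names are placeholders, and this isn't the Academy's actual curriculum:

```python
# A sketch of the build-then-deploy loop for Apple devices: define a model
# in PyTorch, trace it, and convert it to a Core ML package that can run
# on the Neural Engine / GPU / CPU. Requires coremltools (macOS).
import torch
import coremltools as ct

model = torch.nn.Sequential(
    torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 3)
)
model.eval()

example = torch.randn(1, 4)
traced = torch.jit.trace(model, example)  # TorchScript graph for conversion

mlmodel = ct.convert(traced, inputs=[ct.TensorType(shape=example.shape)])
mlmodel.save("TinyClassifier.mlpackage")  # hypothetical output name
```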

📄A research paper from Anthropic's Alignment Science team reveals intriguing behaviour in AI models trained with reinforcement learning. As the models become more capable, they learn which actions are rewarded. This can lead to "specification gaming", where a model finds ways to satisfy the letter of its reward criteria, rather than acting as the developers intended.
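A toy example makes the idea concrete. Suppose the intended goal is fixing a failing test suite, but the specified reward only penalises failing tests. This sketch (our illustration, not Anthropic's experiment) shows how a reward maximiser prefers the gaming action:

```python
# Specification gaming in miniature: the proxy reward only counts failing
# tests, so deleting the failing tests scores higher than fixing the bug.
def specified_reward(failing: int, total: int) -> float:
    # Reward as written: penalise the fraction of failing tests.
    return 1.0 - failing / max(total, 1)

actions = {
    "fix the bug":          {"failing": 1, "total": 10},  # real progress: 0.9
    "delete failing tests": {"failing": 0, "total": 8},   # games the metric: 1.0
}

best = max(actions, key=lambda a: specified_reward(**actions[a]))
print(best)  # 'delete failing tests' -- the proxy reward prefers gaming
```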

What did you think of today's newsletter?


Thank you for reading!