Apple’s Upcoming AI Model: Catching Up in the AI Race?

What you should know


  • Apple is in discussions with Google about using its Gemini AI to enhance Siri and introduce new AI features to iOS.
  • Apple has developed a multimodal large language model (MLLM) known as MM1, capable of understanding both text and images, which could significantly improve its product offerings.
  • The MM1 model demonstrates the ability to accurately answer questions about images and perform tasks similar to advanced chatbots, indicating a potential future direction for Siri and other Apple services.
  • Amidst growing competition in the AI space, Apple has acquired DarwinAI, signaling its commitment to advancing its AI technologies and staying competitive in the market.



Full Story

Oh, the tech world? It’s absolutely buzzing about generative AI. But you know who’s been kinda quiet? Yep, that’s right—Apple. But hold on, there’s some juicy gossip going around. Rumor has it, Apple’s been having secret chats with Google. They’re talking about borrowing Google’s Gemini AI. Why? To give Siri a little nudge and sprinkle some AI magic onto iOS.

And guess what? More tea has been spilled. Just last week, Apple went all ninja mode and dropped a research paper. Not just any paper, though—it was featured in Wired. It’s all about their new baby, the MM1. This thing isn’t your average Joe; it’s a multimodal large language model. Fancy, right? It can juggle both text and images like a pro. The paper showcased MM1 flexing its muscles, answering questions about photos and showing off its smarts, kinda like ChatGPT.

But here’s the kicker: what MM1’s name actually stands for is still under wraps. MultiModal 1? Maybe. It seems to play in the same league as Google’s Gemini and Meta’s Llama 2. You know, the big leagues. Other tech giants and smarty-pants from academia think models like these could revolutionize chatbots, or even power “agents” that can write code or operate computer interfaces and websites. Sounds like MM1 might just be Apple’s next big thing.

On X, Brandon McKinzie, the brain behind MM1, spilled some beans. “This is just the beginning,” he said. The team’s already cooking up the next big thing. And he threw in a big thanks to everyone who pitched in.

So, MM1 is this multimodal marvel, trained on both pics and text. This special training lets it tackle text prompts and dive deep into questions about specific images. Take this example from Apple’s study: MM1 got a pic of a restaurant table, loaded with beers and a menu. When asked about the total cost of the beer, MM1 nailed the price and did the math. Impressive, right?
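Just to make the beer example concrete: the task boils down to reading a price off the menu image, counting the beers in the photo, and multiplying. A tiny sketch of that arithmetic, with made-up numbers (the actual figures from Apple’s study aren’t given here):

```python
# Illustrative only: the menu price and beer count below are hypothetical,
# not figures from Apple's MM1 paper.
menu_price_per_beer = 6.00   # what the model would read off the menu image
beers_on_table = 2           # what the model would count in the photo

# The final step MM1 performs after grounding both values in the image:
total = menu_price_per_beer * beers_on_table
print(f"Total for the beers: ${total:.2f}")
```

The hard part, of course, isn’t the multiplication; it’s pulling both numbers out of a single photo, which is exactly what a multimodal model trained on images and text is for.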

Now, Apple’s got Siri, their OG AI assistant. But with ChatGPT and others stealing the spotlight, Siri’s starting to look a bit… well, old school. Amazon and Google aren’t sleeping, either. They’re pumping up their assistants, Alexa and Google Assistant, with large language model tech. Google even lets Android users swap out the Assistant for Gemini.

With Samsung and Google rolling out fancy AI features left and right, Apple’s feeling the heat. But Tim Cook, Apple’s big boss, told investors not to worry. He teased that Apple’s got some generative AI tricks up its sleeve, set to be revealed this year.

Oh, and there’s more. Apple just scooped up DarwinAI, a Canadian startup known for its sleek and efficient AI systems. This move screams that Apple’s diving headfirst into the AI pool. So, keep your eyes peeled. Apple’s about to make waves in the AI world. Stay tuned for more updates!

Derrick Flynn
https://www.phonesinsights.com
With over four years of experience in tech journalism, Derrick has honed his skills and knowledge to become a vital part of the PhonesInsights team. His intuitive reviews and insightful commentary on the latest smartphones and wearable technology consistently provide our readers with valuable information.
