At Google I/O today, Google announced the launch of PaLM 2, its latest large language model (LLM). PaLM 2 powers Google’s updated Bard chat tool, the company’s competitor to OpenAI’s ChatGPT, and will serve as the base model for most of the new AI features the company is announcing today. PaLM 2 is now available to developers through Google’s PaLM API, Firebase, and Colab.
For Google, it’s not all about the size of the language model
Like OpenAI, Google hasn’t provided many technical details about how it trained this next-generation model, including parameter counts (the original PaLM was a 540-billion-parameter model, for what it’s worth). The only technical detail Google offered is that PaLM 2 is built on the company’s latest JAX and TPU v4 infrastructure.
In a press briefing ahead of today’s announcement, Google DeepMind Vice President Zoubin Ghahramani said:
“What we’ve found in our work is that it really isn’t the size of the model—that bigger isn’t always better—which is why we’ve come up with a family of models in different sizes. We think that counting parameters is not really a useful way to think about the capabilities of models; capabilities should really be judged by the people using the models, based on whether they are useful for the tasks they are trying to accomplish with them.”
Instead, the company is focusing on the model’s capabilities. Google says the new model performs better at common-sense reasoning, mathematics, and logic. As Ghahramani noted, the company trained the model on large volumes of math and science texts, as well as mathematical expressions. It’s no secret that large language models, given their focus on language, tend to struggle with math questions without resorting to third-party plugins. Google, however, argues that PaLM 2 can easily solve math puzzles, reason through problems, and even generate charts.
Google talks about PaLM 2 as a family of models, which includes models like Codey as well as Med-PaLM 2, the company’s model focused on medical knowledge. There’s also Sec-PaLM, a version focused on security use cases, and a smaller PaLM 2 model that can run on smartphones, which could potentially open up PaLM 2 to privacy-focused use cases, though Google hasn’t yet committed to a timeline for shipping it. Google says this on-device model can process 20 tokens per second, which may not be very fast but could be acceptable for some use cases.
It’s no secret that Google is being deliberate about rolling out these AI features, something the company itself has acknowledged. The standard line from Google representatives is that the company wants to build these tools responsibly and with safety in mind, and that’s what it says about PaLM 2 as well. Of course, without hands-on testing, it’s impossible to know how well the model performs and how it handles edge cases.