Google’s AI Boss Says Scale Only Gets You So Far


Won’t this also make AI models more problematic or potentially dangerous?

I’ve always said in safety forums and conferences that it is a big step change. Once we get agent-like systems working, AI will feel very different from current systems, which are basically passive Q&A systems, because they’ll suddenly become active learners. Of course, they’ll be more useful as well, because they’ll be able to do tasks for you, actually accomplish them. But we will have to be a lot more careful.

I’ve always advocated for hardened simulation sandboxes to test agents in before we put them out on the web. There are many other proposals, but I think the industry should start really thinking about the advent of those systems. Maybe it’s going to be a couple of years, maybe sooner. But it’s a different class of systems.

You previously said that it took longer to test your most powerful model, Gemini Ultra. Is that just because of the speed of development, or was it because the model was actually more problematic?

It was both, actually. First of all, the bigger the model, the more complicated some things are to do when you fine-tune it, so it takes longer. Bigger models also have more capabilities you need to test.

Hopefully what you are noticing, as Google DeepMind settles down as a single org, is that we release things early and ship things experimentally to a small number of people, see what our trusted early testers are going to tell us, and then we can modify things before general release.

Speaking of safety, how are discussions with government organizations like the UK AI Safety Institute progressing?

It’s going well. I’m not sure what I’m allowed to say, as it’s all kind of confidential, but of course they have access to our frontier models, and they were testing Ultra, and we continue to work closely with them. I think the US equivalent is being set up now. Those are good outcomes from the Bletchley Park AI Safety Summit. They can check things that we don’t have security clearance to check—CBRN [chemical, biological, radiological, and nuclear weapons] things.

These current systems, I don’t think they are really powerful enough yet to do anything materially worrying. But it’s good to build that muscle up now on all sides—the government side, the industry side, and academia. And I think probably that agent systems will be the next big step change. We’ll see incremental improvements along the way, and there may be some cool, big improvements, but that will feel different.