Difficult To Verify
The missing information makes it difficult to evaluate the model’s safety. The paper also makes no mention of Google’s own Frontier Safety Framework, which is designed to flag AI capabilities that could cause severe harm.
Unlike some companies, Google releases safety reports only after a model has left its “experimental” phase, and it often reserves findings on dangerous capabilities for internal audits that are not shared publicly.
“This report is sparse and late. It’s impossible to verify Google’s claims,” said Peter Wildeford, co-founder of the Institute for AI Policy and Strategy.
Transparency Is Slipping
Thomas Woodside of the Secure AI Project welcomed the release but questioned Google’s commitment to consistent updates, noting the last such report came out in June 2024. A paper for the newly announced Gemini 2.5 Flash has yet to appear, though Google says it’s “coming soon.”
Industry-wide, transparency is slipping. Meta’s Llama 4 safety note is only a few pages long, and OpenAI has yet to publish any safety report for GPT-4.1. Google had previously promised governments it would provide full safety papers for all major AI releases.
“This looks like a race to the bottom,” said Kevin Bankston of the Center for Democracy and Technology. “Companies are rushing models out with minimal safety transparency.”