For a long time the AI conversation revolved around models: who has the smartest chatbot, the best image generator, the most impressive demo. Lately, that framing feels incomplete.
When you look more closely, Google - and by extension Alphabet - appears unusually strong not just in AI models but across the entire stack: it builds competitive frontier models (Gemini), controls massive distribution channels (Search, Android, Chrome, Workspace), and, crucially, designs its own AI hardware in the form of TPUs, reducing its reliance on any single external compute provider. That combination of models, infrastructure, and distribution is structural power.
That raises a question that feels bigger than benchmarks: if someone is going to "win" the AI race, who should be allowed to?
Early in Google's history, its corporate code of conduct famously began with the phrase "Don't be evil." The motto expressed an aspiration to ethical restraint during rapid growth. After Google reorganized under Alphabet in 2015, the phrase was demoted from the preface of the code of conduct to a single closing line and gradually de-emphasized, reflecting the difficulty of sustaining simple moral slogans at massive scale [1, 2].
One contemporary example of ethical framing is Google's work on watermarking AI-generated media. Systems such as SynthID embed an imperceptible digital watermark directly into the pixels of generated images so they can be identified later, even after common edits such as filtering or compression [3, 4]. While not foolproof [5], this approach establishes a baseline of responsibility that many competing systems do not adopt.
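To make the mechanism concrete, here is a deliberately simplified Python sketch of the general idea: embed an invisible signal into an image, then test for it later. The WATERMARK payload and the least-significant-bit trick are illustrative assumptions only; SynthID's actual algorithm is not fully public, is embedded during the generation process itself, and is far more robust than this.

```python
# Toy sketch of the *idea* behind invisible image watermarking.
# This is NOT SynthID's method; it merely hides a fixed bit pattern
# in the least significant bits (LSBs) of the first few pixels.
import numpy as np

# Hypothetical 8-bit tag standing in for a real watermark payload.
WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed(image: np.ndarray) -> np.ndarray:
    """Return a copy of `image` with the watermark written into the LSBs of its first pixels."""
    out = image.copy().ravel()
    out[: WATERMARK.size] = (out[: WATERMARK.size] & 0xFE) | WATERMARK
    return out.reshape(image.shape)

def detect(image: np.ndarray) -> bool:
    """Check whether the first pixels carry the watermark bits."""
    bits = image.ravel()[: WATERMARK.size] & 1
    return bool(np.array_equal(bits, WATERMARK))

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)  # stand-in "generated" image
    print(detect(embed(img)))  # True: watermark present
    print(detect(img))         # almost certainly False (1-in-256 chance of a spurious match)
```

Unlike this toy scheme, which a single re-encode would destroy, production watermarks are designed to survive ordinary transformations. Yet even those can be weakened or forged, as the research cited in [5] shows, which is why watermarking should be read as a baseline of responsibility rather than a guarantee.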
In earlier technology races - search, social media, smartphones - the winners primarily shaped markets. In AI, the winners may shape how people think, decide, work, and trust information. At this level, dominance begins to resemble governance rather than ordinary market success.
Rather than naming a single company, it is more useful to define conditions under which dominance should be tolerated: enforceable accountability, internal and external checks and balances, limits on lock-in through interoperability, demonstrated safety performance at scale, and respect for fundamental rights such as privacy and non-discrimination.
The healthiest future is likely not a single winner, but a plural ecosystem of strong providers, open standards, regulated frontier systems, and a mix of open and closed models with clear boundaries.
The core challenge is not whether any company can be trusted forever. No institution deserves permanent trust. The challenge is whether society is willing to tie power to responsibility and success to enforceable limits.
If AI is becoming part of society's nervous system, then winning should never mean ruling unchecked. It should mean operating under constraints strong enough that we do not have to rely on goodwill alone. Not "don't be evil" - but "don't be unaccountable."
Sources
[1] Wikipedia - "Don't be evil" (Google corporate motto history).
[2] Silicon Republic - Coverage of Google's evolving corporate ethics and motto.
[3] Google DeepMind - "Identifying AI-generated images with SynthID."
[4] Google Blog - Gemini AI image verification and watermarking features.
[5] Ruhr University Bochum News - Research on limitations and manipulation of AI watermarking.