Dear Fidji,
Congratulations on your new role.
Your recent letter struck a chord. You described AI as “the greatest engine of empowerment in history.” That idea resonates - deeply. It reflects a vision of technology that prioritizes people's needs over mere performance metrics. It reflects a belief that systems can be reimagined to include more voices, more agency, and more dignity.
And unlike so much tech rhetoric, your message didn't feel like corporate gloss. It felt intentional. Human. Genuinely hopeful.
But belief, as you surely know, is only the beginning.
The hard part is building systems that match the vision. And that means grappling with where power flows, how value is distributed, and what tradeoffs we’re willing to make - not just for market share, but for public trust.
If we want AI to empower the many - not just the privileged few - we must be willing to re-architect how it’s built, how it’s accessed, and who it rewards.
At Bria, we've chosen a different path. We train models only on licensed content, build attribution into the foundation, and work in partnership with global media providers, platforms, and creators. It's not the fastest path. It doesn't chase headlines. But it's durable, built for trust at scale, and ultimately more defensible.
Bria is not alone. Across the ecosystem, a movement is taking shape - startups, researchers, policymakers, and creators pushing for AI that is transparent, accountable, and pro-human by design. From provenance standards and licensing coalitions to the GPAI Code of Practice and the CLEAR Act, there's growing momentum behind a more responsible model of innovation.
You have the opportunity - and now the mandate - to shape this next phase. You don’t have to choose between scale and integrity. But you do have to design for both.
So here's an open invitation:
Let’s talk - not because we hold the answers, but because realizing your vision will take more than scale. It will take shared responsibility - and a willingness to rethink how value is created, credited, and shared. At Bria, we’ve built attribution-powered systems from first principles, proving that it’s possible to align innovation with humanity. I’d welcome the chance to compare approaches, challenge assumptions, and explore where responsible AI can go next.
Because the real opportunity isn't just to distribute tools. It's to redistribute power.
And that begins with how we train, what we disclose, and who we include.
Warmly,
Vered Horesh
Chief AI Strategy Officer, Bria