While much of the global AI debate is still driven by speed, scale, and competitive advantage, one recent case has made the stakes unmistakably clear. Anthropic CEO Dario Amodei has publicly defended ethical limits on AI use, refusing to remove safeguards against mass surveillance and fully autonomous weapons operating without human oversight. He maintained this position even as the dispute with the U.S. Department of Defense escalated into a high-profile political and commercial confrontation.
The case reveals a deeper truth: the future of AI will ultimately depend less on code and more on the mindset of those who lead it. This is the starting point of Martha Giannakoudi’s keynote.
Her keynote examines what responsible AI adoption actually requires from leaders and organisations once the buzzwords fade and real decisions begin. Her approach is practical, human-centered, and urgently relevant for organisations that want to adopt AI with clarity, accountability, and trust. As AI systems increasingly shape decisions, workflows, and human judgement, leadership responsibility is entering a new phase. Questions of responsibility, judgement, and ethical limits — central to philosophical traditions since Aristotle — are becoming practical challenges for leaders of organisations working with AI today.
This leads to three central questions:
How do we take real responsibility when working with AI?
What kind of mindset is required to design and govern AI in a human-centered way?
And how can organisations balance opportunity and risk without losing their ethical compass?