A secret sauce to innovation

Katalin Feher
3 min read · Apr 30, 2024

At the prominent World AI Summit Americas conference, a machine learning expert approached me in the Speakers’ Room with a deep, searching gaze after the Responsible AI and Governance session. He confessed that despite spending half a day in sessions, he was puzzled about the real significance of the topic. In the end, he asked:

“Honestly, could it be that AI ethics and responsible AI are just nice words?”

From there, our conversation ranged from geopolitics to individual moral beliefs. Thirty minutes later, his skepticism had turned into a smile of realization: the personalized conversation transformed abstract ideas into meaningful insights. He concluded,

“Responsible AI is the secret sauce to innovation, right!”

The secret sauce is part of the innovation design, yet the true flavor reveals itself only once you’ve taken a bite. Illustration. Photo credit: Katalin Feher

Lessons to be learned

Discussing responsible AI (RAI) is crucial for compliance and for navigating the complex mosaic of policy-making, human rights, and tech hype. It’s about recognizing and respecting the diverse socio-cultural values and moral frameworks that differ dramatically across world regions. By actively engaging in discussions, we challenge and refine our perspectives, ensuring AI development honors these differences rather than imposing a one-size-fits-all solution. This dialogue must spark provocative questions about who benefits from AI and who might be harmed, ensuring technology advances with ethical integrity and cultural sensitivity.

Moreover, the debate on responsible AI extends beyond cultural nuances and ethical regulations — it exposes the professional-personal conflicts inherent in the technology sector. AI professionals often face dilemmas where business objectives conflict with personal ethics, underscoring the need for more candid conversations facilitated by academia. These discussions should critique and guide robust, rigorous research that bridges business practices and public policy. By fostering an environment where these critical issues are openly debated, we can ensure that AI development is technologically advanced, ethically sound, and socially responsible. More than ever, there is a pressing need for an ethical framework that aligns with our diverse, global society.

Recommendations for business and public policy makers

Innovative discussion methods in AI ethics deepen understanding and drive responsible implementation. Here are four recommendations for implementing AI ethics by design in a responsible way, which every organization can apply from the very first moments:

  • RAI Lab Mode: Workshops, role-playing, and foresight events immerse participants in AI scenarios to tackle ethical dilemmas. This practical approach simulates key decision-making, highlighting ethical trade-offs.
  • Ethical Hackathons: Focused events that tackle specific AI ethical issues, promoting rapid idea prototyping to solve ethical challenges in AI applications.
  • Dynamic Deliberation of AI Governance: AI-driven platforms enhance public discourse by integrating diverse community inputs to inform policies and practices in real-time.
  • Strategic Foresight in AI Planning: Modeling tools simulate AI trends, helping stakeholders anticipate changes and adapt policies for future resilience.

Our conversations about AI ethics and responsibility must go beyond mere compliance. Provocative and honest face-to-face dialogues are needed to focus sharply on the core issues while appreciating the broader consequences. Such dialogue has consistently spurred radical rethinking in science and technology policy: the intense debates between Albert Einstein and Niels Bohr transformed physics, while discussions between Steve Jobs and Bill Gates catalyzed innovations shaping today’s tech landscape. Seen this way, responsible AI is not just a conference session topic or simple fine-tuning but a necessity for producing the secret sauce to innovation.

This article is part of a project supported by the Horizon Europe NGI grant and hosted by the Concordia University Applied AI Institute.

Written by Katalin Feher

Expert in generative AI, AI in media, socio-technical systems, and responsible-explainable AI
