Note: The cover image was generated with Midjourney using prompts free from any confidential or sensitive data. No generative AI tools were involved in any other element of this article.
Over the past months, the management consultants at OXYGY and the lawyers at Bird & Bird have been monitoring the unfolding of an AI-powered revolution. Low-code/no-code AI applications, previously reserved for large corporations or the technology sector, have become "democratised". These range from large language models (LLMs) like ChatGPT to data-integration tools that make it easier for corporations to make the most of their sprawling data lakes and databases.
In our own sectors (consulting and legal), we see generative AI frequently used to help consultants and lawyers with a variety of deliverables, including email writing and document drafting. The use of AI generates obvious time and resource efficiencies, but it also raises questions about the quality of the work generated and about the confidentiality of the data processed. We assume that a large majority of firms now use these types of AI tools in some capacity.
OXYGY and Bird & Bird have already taken steps to ensure our consultants and lawyers use AI tools responsibly, and much of the inspiration for our involvement in Responsible AI comes from that lived experience over the past twelve months.
Why have we created a joint offering for our clients around Responsible AI?
We notice the same types of discussions circulating amongst our clients and contacts. They focus on a dilemma. On the one hand, our clients want to successfully capitalise on the opportunity of AI. It's like 1849, and they don't want to miss out on the AI "California Gold Rush". In a difficult macroeconomic environment, the potential for AI to improve productivity and reduce costs seems like a silver bullet. On the other hand, many CTOs and senior technology leaders raise important concerns about the risks that mismanaged AI can pose to their company's reputation and to society at large.
The implementation of AI tools, from third-party applications to those developed by an in-house team, can be a high-risk and high-reward strategy. OXYGY and Bird & Bird see an opportunity here to respond to the demand and to provide clarity and direction to clients. This is an opportunity for a joint consulting and legal offering to shine through an ESG-inspired Responsible AI governance framework that addresses both the risks and the opportunities.
Subject-matter experts at Bird & Bird and OXYGY are closely tracking and engaging with AI regulatory processes around the world. It is becoming clear that an AI governance and risk framework will be a crucial part of the EU AI rules. However, organisations cannot wait until 2025-2026, when the regulation is expected to enter into force. Many are looking for a way to responsibly implement and manage AI today, including in line with regulations and industry best practices/standards as they emerge.
What are the principal hallmarks of our Responsible AI offering?
"Responsible AI" can mean many things. In the USA, it tends to have a strong focus on data ethics. In Europe, it depends on the sector. It can be used as a buzzword referring to tangential regulatory areas as applied to AI (e.g., data protection, cybersecurity, consumer protection). In some cases, it can refer to data collection and processing quality in the context of AI.
Our definition of Responsible AI is different. OXYGY and Bird & Bird’s Responsible AI offering is based around an ESG-inspired governance framework to address the risks and opportunities associated with organisational AI implementation (be it third-party applications or in-house AI tools).
Through our joint legal and consulting proposition we can address the whole spectrum of associated risks comprehensively and effectively (legal, regulatory, technological, change management, process/operational, financial, reputational, etc.).
Our focus is on supporting the practical implementation of AI initiatives/tools in line with an organisation’s business strategy and within its ESG framework. To achieve this, we aim to address different aspects of the sustainable AI implementation “Operating Model”:
PURPOSE: Every engagement must start from the client’s fundamental purpose. From here, Responsible AI projects will be modelled around the client’s purpose, mission, vision and values. Q: Do we have an explicit link of our AI initiatives with core company values and mission?
STRUCTURE: AI Governance structure for every step of the AI value-chain (e.g., AI vendor mgt. environment), Ethics Advisory Board, embedding within existing organisational structure. Q: How to align AI ownership with the current organisational structure and roles?
PROCESS: Embedding Ethics-by-design, continuous improvement, and fast and safe new product acquisition and vendor/supply management. Q: How to embed Ethics-by-design principles in the AI value chain (Develop, Use, Buy, Sell)?
PEOPLE: Invest in value-creating capabilities around AI ethics training, AI technical & management talent. Q: How to develop/update knowledge, skills and attitudes for responsible AI adoption?
TECHNOLOGY AND DATA: Robust information systems and knowledge processes to improve quality of AI stack (data, models, apps, and overall processes/use-cases). Q: How to create an agile and coordinated approach to managing risks (PESTLE) associated with adoption and deployment of breakthrough technologies?
PERFORMANCE MANAGEMENT: Simple, transparent and actionable performance metrics linked to core purpose (e.g., OKRs). Q: How to include Responsible AI KPIs in the current performance management framework?
Many organisations find themselves along a “Responsible AI” journey:
| | AI Explorer | Compliance Conscious | Ethics Integrator | Paradigm-setter |
|---|---|---|---|---|
| Summary | Interested in AI adoption, or AI integration into the business, with little understanding of the broader impacts. "Without the dedicated resources but willing to begin the journey" | Actively using or developing AI systems and facing associated regulatory obstacles and stakeholder pressure. However, Responsible AI considerations are not embedded into the value chain. | An internal operating model (governance, processes, policies) is integrated within the company's business model, enabling it to cope with stakeholder pressures and to anticipate regulation. | Responsible AI principles are core to the way the business handles AI, and the business contributes to shaping and setting standards around Responsible AI. |
| Key questions to answer | What governance and operational model is needed to embark on an AI journey? | How to embed Responsible AI considerations into my value chain? | How to improve the way we manage and govern our growing AI system, while keeping up with regulation? | How can we ensure the way we manage and govern AI sets the standard for ourselves and for the entire ecosystem? |
Each stage brings a different set of needs and practical imperatives. Responsible AI is a journey: a developing process of managing the impacts of AI implementation over time and over the course of operational maturity. Organisations should build processes and governance today that will remain consistent with what they will need in the future.
About the Authors
Edoardo Monopoli – CEO, OXYGY
Since 1995, I have partnered with senior executives on their personal, leadership team and business strategies for sustainable success, combining performance improvement with real people engagement.
Roger Bickerstaff – Partner, Bird & Bird
With over 25 years’ experience as a leading technology lawyer and now based in both our London and San Francisco offices, I have extensive experience advising on tech infrastructure and outsourcing projects.
Yuji Develle – Senior Consultant, OXYGY
Proficient in both start-up and “big corporate” environments, particularly where success depends on the tender interaction between technology and business models, I am responsible for developing Responsible AI and Blockchain offerings across all six OXYGY practices.