The rise of industry’s AI self-regulation
The regulation of Artificial Intelligence (AI) continues to be a difficult but popular topic, especially with the formal adoption of the European Union (EU) AI Act and new guidance in the United States (U.S.) following the White House’s Executive Order last October. As governments around the world try to balance AI innovation and oversight, we are also starting to see industry-led consortiums and the self-regulation of AI take shape. It’s a growing trend that business and technology leaders should be watching closely.
We expect this movement toward industry self-regulation around AI to pick up. Two key forces are at work here:
- AI use cases and priorities vary substantially by industry. Business and technology leaders are typically far better positioned to understand the current and future impact of AI within their respective industries, and that understanding is needed to create realistic guidelines for good governance and responsible AI. Industry-specific priorities and use cases call for policies, controls, and oversight with a degree of nuance that top-down, broad-based government regulation simply can’t incorporate. A recent Avanade research study supports this: analyzing 3,000 responses from business and IT professionals across industries including banking, energy, government, health, life sciences, manufacturing, nonprofit, retail and utilities, it found that respondents from energy organizations were the most confident in their leaders’ AI fluency with regard to governance, while government professionals were the least confident of all industries surveyed.
- Government oversight can’t keep up with innovation. While some level of government regulation is necessary, we’ve seen time and again that agencies lack the expertise and resources to keep pace with technical innovation. AI capabilities in particular are advancing quickly right now, and the guidelines needed to deploy AI safely and responsibly will (and should) evolve just as quickly as the technology does. It’s also worth noting that if industry self-regulation is shown to be effective, government officials will feel less inclined to pass heavy-handed regulations that could stifle future innovation.
So how is industry self-regulation helping to advance AI safety and innovation? Two key areas where industry-led consortiums and partnerships are making progress today:
- Sharing AI best practices by publishing guidelines and frameworks for using AI responsibly, including guidance on the security, dependability and oversight of AI algorithms. These consortiums also connect people with the expertise and skills needed to handle AI responsibly.
- Co-developing AI capabilities by facilitating collaboration among consortium members, backed by robust evaluation standards and a deeper understanding of how humans interact with AI. These collaborations also level the playing field: regardless of their individual resources, all participating organizations have access to the same benefits.
Let’s take a look at some examples:
In February 2024, the U.S. National Institute of Standards and Technology (NIST) announced the creation of the U.S. Artificial Intelligence Safety Institute Consortium (AISIC), a collaboration between more than 200 U.S. organizations across various industries and the U.S. government to promote and support the safe use and deployment of AI. Consortium members benefit from knowledge and data sharing, access to testing environments and red-teaming for secure development practices, and science-backed research on how humans engage with AI.
In March 2024, 16 U.S. healthcare leaders, Microsoft and other healthcare technology organizations announced the creation of the Trustworthy & Responsible AI Network (TRAIN), a consortium aiming to improve the quality, safety and trustworthiness of AI in healthcare settings. TRAIN will also build on best practices set forth by the Coalition for Health AI (CHAI) and OCHIN, whose mission is to advance health equity. Like other industry-led consortiums, TRAIN gives every participating organization access to the consortium’s benefits.
In April 2024, Cisco, Accenture, Eightfold, Google, IBM, Indeed, Intel, Microsoft and SAP announced the launch of the AI-Enabled Information and Communication Technology (ICT) Workforce Consortium, which will focus on upskilling and reskilling workers in roles likely to be affected by AI.
In the same Avanade research study mentioned earlier, we found that fewer than half of employees say they completely trust the results produced by AI, and only 36% of CEOs say they are very confident in their leadership’s understanding of generative AI and its governance needs today. So although these efforts look promising, it is far too early to tell whether industry self-regulation can effectively balance AI innovation with the safety measures and guardrails needed to ensure AI doesn’t do more harm than good. It is also worth noting that few industry-specific consortiums have been formed or announced yet, so it remains unclear whether the existing cross-industry consortiums will sufficiently address industry-specific priorities and use cases. Regardless, the trend is important enough that technology and business leaders should be gearing up now, so they’re not left behind. Here’s how:
- Evaluate how well your strategy, processes, and policies align with the standards developing in your industry. If you can participate in their development, even better.
- Focus on the basics of good AI governance and responsible AI – like registration, documentation, risk management, testing, and monitoring – which will likely be part of any industry standards or government regulations in this space (see the sketch after this list for what those basics can look like in practice).
- Maintain a culture of innovation and employee development. Sponsor experimentation and skills development, expand participation in innovation to a wider set of people and roles, and focus more on employee and candidate skills and training than on degrees and experience.
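To make the governance basics above more concrete, here is a minimal sketch in Python of what an internal model registry could look like. Everything in it is an assumption for illustration – the ModelRecord fields, the claims-triage-v2 entry and the dashboard URL are all hypothetical, and real registries are typically backed by a governance platform rather than a script – but even something this simple forces the registration, documentation and monitoring conversations that standards in this space are likely to require.

```python
# Minimal, illustrative model registry: one record per AI system in production.
# All names and fields here are hypothetical, not taken from any standard.
from dataclasses import dataclass
from datetime import date


@dataclass
class ModelRecord:
    """One registry entry: who owns the model, what it does, how it's governed."""
    name: str
    owner: str                 # accountable team or person
    purpose: str               # documented, intended use
    risk_level: str            # e.g. "low" / "medium" / "high"
    last_tested: date          # date of the most recent evaluation run
    monitoring_dashboard: str  # where live behavior is tracked


# The registry itself can start as something this simple: a reviewed,
# versioned list of every AI system the organization runs.
registry: list[ModelRecord] = [
    ModelRecord(
        name="claims-triage-v2",
        owner="claims-platform-team",
        purpose="Prioritize incoming insurance claims for human review",
        risk_level="high",
        last_tested=date(2024, 4, 1),
        monitoring_dashboard="https://example.internal/dashboards/claims-triage",
    ),
]


def stale_high_risk(records: list[ModelRecord], max_age_days: int = 90) -> list[str]:
    """Basic oversight check: flag high-risk models whose testing has gone stale."""
    today = date.today()
    return [
        r.name for r in records
        if r.risk_level == "high" and (today - r.last_tested).days > max_age_days
    ]


if __name__ == "__main__":
    print("Models due for re-testing:", stale_high_risk(registry))
```

A spreadsheet or a governance tool can serve the same purpose; the point is that every AI system is registered, documented, risk-rated and checked on a schedule, so there is something concrete to align with industry standards as they emerge.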
Let us know what you think. Have you started any efforts to self-regulate around AI? Would you like to talk about how we’re seeing organizations in your industry rise to the challenge?
If you’re interested in delivering AI solutions with confidence, learn more about Avanade’s responsible AI capabilities.
Sources:
- Could Industry Self-Regulation Help Govern Artificial Intelligence? (Forbes)
- Embrace Self-Regulation to Harness the Full Potential of AI (Forbes)
- Why self-regulation is best for artificial intelligence (The Hill)
- Top AI Companies Join Government Effort to Set Safety Standards (Bloomberg)
- HIMSS24: Microsoft, 16 health systems form health AI network (Fierce Healthcare)
- AI Regulation is Coming – What is the Likely Outcome? (CSIS)
- Regulate AI? How US, EU and China Are Going About It (Bloomberg)
- Trustworthy AI: String of AI Fails Show Self-Regulation Doesn’t Work (Forbes)
- New Consortium Aims to Ensure Responsible Use of AI in Healthcare (HIT Consultant)