Matteo Tonello is the Head of Benchmarking and Analytics at The Conference Board, Inc. This post is based on a Conference Board/ESGAUGE report by Andrew Jones, Principal Researcher, Governance and Sustainability Center, The Conference Board.
This report analyzes how the largest US public companies disclose artificial intelligence (AI) risks in their 2023–2025 annual filings, providing insight into the issues shaping board agendas, investor expectations, and regulatory oversight in the years ahead.
Trusted Insights for What’s Ahead®
- AI has rapidly become a mainstream enterprise risk, with 72% of S&P 500 companies disclosing at least one material AI risk in 2025, up from just 12% in 2023.
- AI risk disclosure has surged in financials, health care, industrials, IT, and consumer discretionary. These frontline adopters face regulatory scrutiny over data and fairness, operational risks from automation, and reputational exposure in consumer markets.
- Reputational risk is the top AI concern in the S&P 500: companies warn that bias, misinformation, privacy lapses, or failed implementations can quickly erode trust and investor confidence, making strong governance and proactive oversight essential.
- Cybersecurity is a central concern as AI expands attack surfaces and enables more sophisticated threats, influencing boards to expect AI-specific controls, testing, and vendor oversight.
- Legal and regulatory risk is a growing theme in disclosures as firms face fragmented global AI rules, rising compliance demands, and evolving litigation exposure. To manage these pressures, directors must anticipate regulatory divergence and integrate legal, operational, and reputational oversight into AI governance.
