Timothy Kang

AI Governance & Decision Architecture | G7 Hiroshima AI Process Contributor | Commissioner, Middle States Association | OECD AI Governance Framework Contributor

Seoul Incheon Metropolitan Area

About

Artificial intelligence is increasingly embedded in consequential institutional decisions. Yet most AI governance frameworks focus on evaluating systems rather than governing the moment when organizations decide to rely on those systems. Institutional risk often emerges not from the absence of technical controls, but from the point at which decision-makers act on AI-generated outputs.

My work focuses on the governance of this decision moment: the conditions under which organizations authorize reliance on AI-assisted analysis in consequential decisions. I develop governance architecture that defines when AI outputs may inform institutional action, when independent verification is required, when reliance must be withheld, and how accountability is assigned when AI influences decision processes. This governance layer, often described as decision governance or reliance authorization, serves as the operational bridge between AI system oversight and institutional accountability, and contributes to an emerging field examining how institutions authorize, verify, and account for reliance on automated outputs in real-world decision environments.

As a Commissioner with the Middle States Association, I participate in binding accreditation determinations affecting institutions across multiple jurisdictions. These proceedings frequently involve evaluating evidentiary sufficiency, procedural integrity, and the responsible use of analytical or automated inputs in institutional decision processes.

Internationally, I contribute implementation-grounded perspectives to the G7 Hiroshima AI Process and OECD AI governance initiatives, helping translate cross-jurisdictional governance principles into operational institutional controls. I also engage with the OECD.AI ecosystem, supporting discussion of how international AI governance frameworks can be implemented within real organizational environments.
My work emphasizes interoperability across major governance regimes—including the EU AI Act, NIST AI Risk Management Framework, and ISO/IEC 42001—so that institutional accountability for AI-assisted decision-making can be evaluated consistently across sectors and jurisdictions. As AI systems evolve toward increasingly autonomous and agentic capabilities, governance of the decision moment becomes more critical. The central question is no longer only how AI systems are evaluated, but when institutions are authorized to rely on their outputs and act upon them.

Experience