Seoul Incheon Metropolitan Area
Artificial intelligence is increasingly embedded in consequential institutional decisions, yet most AI governance frameworks focus on evaluating systems rather than governing the moment when organizations decide to rely on them. Institutional risk often emerges not from the absence of technical controls, but from the point at which decision-makers act on AI-generated outputs.

My work focuses on the governance of this decision moment: the conditions under which organizations authorize reliance on AI-assisted analysis in consequential decisions. I develop governance architecture that defines when AI outputs may inform institutional action, when independent verification is required, when reliance must be withheld, and how accountability is assigned when AI influences decision processes. This governance layer, often described as decision governance or reliance authorization, serves as the operational bridge between AI system oversight and institutional accountability. It contributes to an emerging field of AI decision governance, which examines how institutions authorize, verify, and account for reliance on automated outputs in real-world decision environments.

As a Commissioner with the Middle States Association, I participate in binding accreditation determinations affecting institutions across multiple jurisdictions. These proceedings frequently involve evaluating evidentiary sufficiency, procedural integrity, and the responsible use of analytical or automated inputs in institutional decision processes.

Internationally, I contribute implementation-grounded perspectives to the G7 Hiroshima AI Process and OECD AI governance initiatives, helping translate cross-jurisdictional governance principles into operational institutional controls. I also engage with the OECD.AI ecosystem, supporting discussions on how international AI governance frameworks can be implemented within real organizational environments.
My work emphasizes interoperability across major governance regimes, including the EU AI Act, the NIST AI Risk Management Framework, and ISO/IEC 42001, so that institutional accountability for AI-assisted decision-making can be evaluated consistently across sectors and jurisdictions. As AI systems evolve toward increasingly autonomous and agentic capabilities, governance of the decision moment becomes more critical. The central question is no longer only how AI systems are evaluated, but when institutions are authorized to rely on their outputs and act upon them.
Serve as a delegation lead contributing institutional implementation perspectives to the G7 Hiroshima AI Process (HAIP) policy dialogue, an international governance initiative developed in coordination with the OECD AI policy ecosystem to support responsible deployment of advanced AI systems.
• Participate in moderated policy roundtable discussions convened by the Japanese Ministry of Internal Affairs and Communications, contributing practitioner insights that inform development of the HAIP Friends Group Action Plan
• Delivered an intervention during the Second In-Person HAIP Friends Group Meeting (Tokyo, 2026) addressing governance controls for institutional reliance on AI-assisted analysis in consequential decision processes
• Presented an institutional interoperability mapping framework examining alignment between operational governance controls and HAIP reporting categories to support cross-jurisdictional comparability
• Contribute deployer-level insights on cross-framework compatibility across major international AI governance regimes, including the OECD AI Principles, EU AI Act, NIST AI Risk Management Framework, and ISO/IEC 42001
• Showcased institutional implementation models demonstrating how international AI governance principles can be operationalized within decision-accountable environments
Member of the governing commission of a U.S. Department of Education–recognized accrediting organization overseeing 2,700+ institutions across 112 countries.

Accreditation Determinations & Adjudication
• Participate in binding accreditation determinations, including probation, sanction, reaffirmation, and institutional authorization decisions
• Review evidentiary records and adjudicate contested cases involving procedural integrity, safeguarding standards, and governance compliance
• Document decisions for formal appeal and external review

Governance Risk & Decision Oversight
• Contribute governance perspectives on how institutional decision authorities evaluate and verify AI-assisted analyses within accreditation and oversight processes
• Examine verification standards, evidentiary sufficiency, and accountability safeguards when automated analysis informs consequential institutional determinations

Regulatory Interface
• Participate in determinations relied upon by ministries of education, governing boards, and institutional leadership across jurisdictions
Advise on cross-border accreditation comparability, peer-review integrity, and governance policy alignment in international school evaluation.
• Represented the Commission in international accreditation evaluations of institutional governance, safeguarding, and academic quality
• Chaired peer-review teams assessing governance effectiveness and institutional risk, advising ministries of education, governing bodies, and accrediting partners
• Produced formal evaluation reports informing accreditation determinations
Contribute implementation-grounded perspectives to the OECD/G7 Hiroshima AI Process Reporting Framework, an international transparency mechanism enabling organizations to report governance practices aligned with the Hiroshima AI Process Code of Conduct.
• Provide practitioner perspectives on how high-level AI governance principles translate into operational reporting categories, including risk classification, verification requirements, responsibility allocation, auditability, and reliance conditions
• Contribute insights from institutional environments operating under regulatory and accreditation accountability to help ensure reporting expectations remain enforceable and comparable across organizations
• Support interoperability across major governance frameworks (e.g., EU AI Act, NIST AI Risk Management Framework, ISO/IEC 42001) so that evidence of governance practices can be evaluated consistently across jurisdictions
Developed and shared an institutional AI governance implementation framework on the OECD.AI platform, demonstrating how organizations can operationalize G7 Hiroshima AI Process reporting expectations within institutional environments.
• Designed lifecycle governance controls defining when AI outputs may inform institutional decisions, when independent verification is required, and when reliance must be withheld
• Established documentation, review, and escalation procedures supporting accountable AI-assisted decision processes
• Contributed implementation-grounded perspectives to OECD.AI discussions on proportional reporting and institutional adoption across sectors
Participate in the OECD.AI ONE AI expert community under the Directorate for Science, Technology and Innovation (STI), contributing practitioner perspectives on institutional AI governance and operational implementation.
• Share implementation insights on translating international AI governance standards, including the OECD AI Principles, G7 Hiroshima AI Process Code of Conduct, NIST AI Risk Management Framework, and ISO/IEC 42001, into institutional governance practices
• Contribute practitioner perspectives to discussions on transparency, reporting mechanisms, and responsible AI deployment across sectors
Institutional Decision Governance
• Serve as final institutional decision authority for disciplinary status, academic progression, graduation eligibility, and student services under accreditation standards
• Require documented evidentiary justification and independent verification for consequential institutional decisions when analytical or automated tools inform decision processes

AI-Assisted Decision Governance
• Developed internal governance procedures defining when analytical or AI-assisted outputs may inform institutional decisions, when independent verification is required, and when reliance must be withheld
• Implement documented challenge procedures and evidentiary review prior to institutional action

Incident Review & Remediation
• Review contested decisions and reverse outcomes when procedural integrity or evidentiary sufficiency is not met
• Lead incident investigations involving alleged harm and mandate corrective governance controls

Operational Accountability
• Accountable for staffing, program viability, and budget sustainability under accreditation enforcement requirements
• Provide written justification for institutional decisions against enforceable standards, recognizing that these decisions carry governance liability