What Is AI Governance?

In October 2023, New York City released MyCity, an AI chatbot meant to guide small businesses on local regulations. Within months, investigative journalists discovered it was giving dangerously incorrect advice: telling landlords they could reject Section 8 tenants, advising businesses to go cashless despite a legal ban on doing so, and even suggesting that restaurants could serve food a rat had bitten. Despite experts calling the rollout “reckless and irresponsible,” officials kept the tool live under the guise of a beta test.

This incident shows how quickly AI can turn from a helpful tool into a harmful liability when it is deployed without proper governance.

AI Governance - What It Means and Why It Is Important

AI governance is the framework of policies, roles, and safeguards that ensures AI is built responsibly, complies with laws, stays transparent, and remains subject to human oversight. In a case like MyCity, robust governance would have flagged the high-risk content areas, mandated human review of answers before release, required explanations for the system’s outputs, and provided mechanisms to correct misinformation before harm occurred.
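To make that concrete, here is a minimal sketch in Python of what one such safeguard could look like: a pre-publication gate that holds answers on legally sensitive topics for human review instead of releasing them automatically. Everything here (the topic list, the Draft record, the review queue) is hypothetical and illustrative, not an implementation used by MyCity or any real governance product.

```python
from dataclasses import dataclass

# Topics where a wrong answer carries legal or safety consequences
# (hypothetical list, inspired by the MyCity failures above).
HIGH_RISK_TOPICS = {"housing", "labor", "food_safety", "payments"}


@dataclass
class Draft:
    question: str
    answer: str
    topic: str


def publish(draft: Draft, review_queue: list[Draft]) -> str | None:
    """Release low-risk answers; hold high-risk ones for human review."""
    if draft.topic in HIGH_RISK_TOPICS:
        review_queue.append(draft)  # mandated human review before release
        return None                 # nothing goes out automatically
    return draft.answer


if __name__ == "__main__":
    queue: list[Draft] = []
    d = Draft("Can I refuse Section 8 tenants?", "(model output)", "housing")
    assert publish(d, queue) is None and len(queue) == 1
```

The design choice matters more than the code: the gate sits between the model and the user, so no answer in a regulated domain reaches the public without a person signing off.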

Core AI Governance Frameworks

GDPR

Europe’s General Data Protection Regulation isn’t AI-specific, but it sets strict standards for any system that processes personal data. It enforces data minimization, accuracy, and privacy by design, and it bars decisions based solely on automated processing that carry legal or similarly significant effects, unless a human is involved. When AI poses high risks to individuals, companies must conduct Data Protection Impact Assessments and allow people to challenge outcomes through rights of access, correction, and deletion.
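As a rough illustration (not legal advice), two of those obligations can be expressed directly in code: data minimization, which drops any field not needed for a declared purpose, and an Article 22-style guard that refuses a solely automated decision with legal effect. The purpose-to-field mapping and the approval rule below are invented for the example.

```python
# Hypothetical purpose-to-field mapping; in practice this comes from a
# documented legal basis, not a hard-coded dict.
ALLOWED_FIELDS = {
    "credit_decision": {"income", "existing_debt"},
    "newsletter": {"email"},
}


def minimize(record: dict, purpose: str) -> dict:
    """Data minimization: keep only the fields the declared purpose needs."""
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}


def decide(record: dict, has_legal_effect: bool, human_reviewed: bool) -> bool:
    """Article 22-style guard: no solely automated decision with legal effect."""
    if has_legal_effect and not human_reviewed:
        raise PermissionError("human review required for this decision")
    return record["income"] > 2 * record["existing_debt"]  # toy approval rule


applicant = {"income": 50_000, "existing_debt": 10_000, "religion": "n/a"}
data = minimize(applicant, "credit_decision")  # 'religion' is dropped
approved = decide(data, has_legal_effect=True, human_reviewed=True)
```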

OECD AI Principles

First adopted in 2019 and now endorsed by nearly 50 countries, these principles guide AI toward inclusive growth, human rights protection, transparency, safety, and accountability. The 2024 update extends them to generative AI, addressing environmental sustainability, responsible use of intellectual property, and information integrity, and it continues to shape ethical norms for governments and organizations worldwide.

NIST AI RMF

Released in January 2023 by the U.S. National Institute of Standards and Technology, this voluntary framework moves from theory to practice. It defines four core functions (Govern, Map, Measure, Manage) and seven trustworthiness characteristics, including validity and reliability, explainability, and fairness with harmful bias managed. The result is a repeatable, auditable process for embedding governance into AI systems.
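One way a team might operationalize the RMF is to encode the four functions as an auditable checklist and track which items remain open. The sketch below does exactly that; the specific activities listed are illustrative placeholders, not the RMF’s actual categories and subcategories.

```python
# Illustrative only: the activities below are placeholders, not the RMF's
# official categories and subcategories.
RMF_CHECKLIST = {
    "Govern":  ["risk policy approved", "roles and escalation paths assigned"],
    "Map":     ["intended use and users documented", "failure modes listed"],
    "Measure": ["bias metrics tracked", "explainability tests run"],
    "Manage":  ["incident response plan tested", "re-assessment scheduled"],
}


def audit(completed: set[str]) -> dict[str, list[str]]:
    """Return the outstanding checklist items for each RMF function."""
    return {
        fn: [item for item in items if item not in completed]
        for fn, items in RMF_CHECKLIST.items()
    }


gaps = audit({"risk policy approved", "bias metrics tracked"})
for fn, missing in gaps.items():
    print(f"{fn}: {len(missing)} open item(s)")
```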

These three frameworks complement each other: the GDPR offers legal protection, the OECD principles set ethical standards, and the NIST framework provides the operational playbook, together forming a holistic strategy for building trustworthy AI.

How PALIF Makes AI Governance Real

PALIF does not just advocate for better AI. It strives to make AI accountable. In May 2025, PALIF launched CASE‑AI (Center for Accessible, Safe, and Ethical AI)—a platform dedicated to promoting responsible AI through policy advocacy, education, industry collaboration, and research focused on bias, privacy, and inclusion.

At the 24th Infopoverty World Conference, Chairman Basudeb Pal emphasized that governance requires action, not just policy. He called for a “profit-to-purpose” model prioritizing equitable AI that empowers communities rather than exploiting them.

Through CASE‑AI’s tooling and PALIF’s global leadership, organizations now have a real, operational path to build AI that’s safe, legal, ethical, and trustworthy.

