As part of our dedication to building safe and beneficial AGI (artificial general intelligence), we’re refining governance practices specifically tailored to highly capable foundation models like the ones we produce. We’re also investing in research areas that can inform regulation, such as assessing potentially dangerous capabilities in AI models.
Anna Makanju, our VP of Global Affairs, emphasizes the importance of these commitments in shaping ongoing discussions on AI regulations. She highlights the collaborative efforts with governments, civil society organizations, and others to advance AI governance.
Outlined below are the voluntary commitments made by companies to promote the safe, secure, and transparent development and use of AI technology:
Safety
1) Commitment to internal and external red-teaming of models or systems to assess risks such as misuse, societal harms, and national security concerns.
2) Work toward information sharing among companies and governments regarding trust and safety risks, dangerous capabilities, and attempts to circumvent safeguards.
Security
3) Investment in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.
4) Incentivization of third-party discovery and reporting of issues and vulnerabilities.
Trust
5) Development and deployment of mechanisms enabling users to discern AI-generated audio or visual content.
6) Public reporting of model or system capabilities, limitations, and domains of appropriate and inappropriate use, including discussion of societal risks.
7) Prioritization of research on societal risks posed by AI systems, including avoiding harmful bias and discrimination, and protecting privacy.
8) Development and deployment of frontier AI systems to help address society’s greatest challenges, while supporting education and training initiatives.
These commitments, which align with existing laws and regulations, are intended to help shape an emerging legal and policy framework for generative AI. Companies intend to uphold them until regulations covering substantially the same issues come into effect. Individual companies may also make additional commitments beyond those listed here.
In summary, this collaborative effort underscores our commitment to responsible AI development and usage, ensuring AI technologies benefit humanity while mitigating potential risks.