1% for the Planet Artificial Intelligence Policy

Last updated: April 2026

Purpose

At 1% for the Planet, we recognize the benefits of artificial intelligence (AI) for supporting our stakeholders and staff and for advancing our mission. We are committed to using AI in a responsible, ethical, and effective manner, and to fostering a culture of transparency and accountability in our use of AI tools.

To achieve these goals, we have developed this AI policy to guide our use of AI and ensure that it aligns with our core values and mission. We will regularly review and update this policy to reflect changes in technology, industry standards, and legal and regulatory requirements. We subscribe to the “first draft not final draft” view of content or analyses generated by AI, and will always center our staff and stakeholders when considering the use of AI systems.

We are committed to being open to feedback and to making changes as necessary to ensure that our use of AI is responsible, ethical, and mission-aligned.

Ethics & transparency

  • We will ensure that any AI systems or tools we deploy are guided by ethical principles, including an unwavering commitment to transparency in how we use AI. We will regularly review and update this publicly available AI policy to reflect the latest ethical standards and best practices.
  • We will be transparent with our stakeholders about our use of AI. This includes regularly communicating with our stakeholders about the development and use of AI systems, as well as how those systems or tools make decisions and which data they use.
  • Human-in-the-Loop: any content or images generated by AI will always be reviewed and edited by human experts to ensure they are accurate, meet our ethical standards, and do not contain any objectionable or harmful content. 
  • Attribution: images generated by AI will be accurately credited. For written content, we use our standard attribution processes; however, we will over-communicate when AI has assisted.

Data privacy & security

We will use AI in ways that protect the privacy of individuals and businesses, and the security of their data. 

  • Handling Sensitive Data: any data we collect or use is handled in accordance with our existing privacy policy, best practices, and applicable regulations. Sensitive or personal data will always be anonymized, encrypted, or otherwise protected if used with AI tools.
  • Ecosystem Security: As we operate within the Google Workspace ecosystem, our primary and preferred AI tool is Gemini. This allows us to maintain better control over data security compared to disparate third-party tools. Staff must be logged in to their Google account before using Gemini for any work-related purpose.

Preventing discrimination & bias

  • Inclusivity: we will use AI to promote diversity, equity, and inclusion in our global network. We will work to ensure that our AI tools and services are accessible and inclusive for all.
  • Mitigation: we will prevent and address bias and discrimination in our AI systems by using diverse datasets to build any AI systems, by leaning into diverse input and expert review of AI systems, and by regularly auditing and testing for bias and discrimination.
  • Fact-Checking: we recognize that AI tools, and in particular large language models (LLMs), are only as good as the content they are trained on, which means they are subject to the inaccuracies and biases of that training content. LLMs can also “hallucinate” (i.e., generate factually inaccurate content that may sound correct). Staff are required to verify all AI-generated facts and statistics against primary sources.

Human oversight & risk mitigation

  • We will consult with external experts as we develop and deploy any AI systems.
  • Before deploying any AI system or tool, we will conduct a risk assessment and develop a plan for mitigating and managing those risks. 
  • We will ensure human oversight of any AI systems by training staff on AI risk management, teaching them to monitor and regularly review those systems, and empowering them to intervene when necessary.
  • As described above, we are committed to expert human review of any content generated with the help of AI systems, prior to sharing or publishing outside of our organization.

Regulatory compliance

  • We will consult with legal experts to ensure that any AI systems we develop or deploy are in compliance with applicable laws and regulations.

Practical Staff Guidance

To ensure safe and effective use, all staff must adhere to the following operational guidelines:

  • Approved Tool Repository: Staff should only use AI tools listed in our Approved AI Tool Repository. These tools have been vetted for security and ethical compliance.
  • New Tool Review: Any AI tool not currently in the repository must undergo a formal review and approval process by the Business Operations team before it is used for organizational work.
  • Human Accountability: While AI can assist in analysis or drafting, the individual staff member remains fully accountable for the final output.

Sustainability

  • We recognize the environmental impact of AI compute power. We prioritize the use of Google Gemini, in part because it is integrated into a data infrastructure aiming for 24/7 carbon-free energy by 2030.

Review & updates

This policy will be reviewed annually, or as needed based on changes in technology, regulation, or organizational needs. Updates will be communicated to all users and appropriate training will be provided.

More information can be found in Google’s 2025 Environmental Report and an August 2025 technical paper on the environmental impact of Google’s AI.