Harnessing AI Responsibly for Non-Profit Marketing Success
Across the nonprofit sector, Artificial Intelligence (AI) has become a growing point of interest and, for many, a growing source of questions. AI usage and technical capabilities have been rapidly expanding, fueled by record-breaking investment and accelerating adoption across industries. In 2024, 78% of organizations reported using AI, a substantial increase from 55% the year before. Research continues to confirm that AI boosts productivity across the workforce (Stanford University, 2025).
Looking forward, the growth trajectory shows no signs of slowing. As AI education expands and the technology becomes more accessible, it is set to become even more deeply integrated into our lives, reshaping everything from the nonprofit sector to science and medicine to the global economy. As new tools emerge that promise to save time, expand outreach, and enhance impact, nonprofit professionals are asking: Where does AI fit into our work? How can it be used responsibly in ways that align with our missions, values, and communities?
Within spaces like the Public Awareness Committee (PAC) and nonprofit marketing teams, this conversation is especially relevant. AI has the potential to transform how organizations connect with their audiences, helping to craft messaging, analyze data, and reach supporters more effectively. Yet alongside these opportunities come important cautions. Used without care, AI can amplify misinformation, introduce bias, and even harm emotional well-being.
It is important to note that AI should never be used in place of mental health counseling, therapy, or crisis support. While AI-based tools may offer general information or wellness suggestions, they are not substitutes for trained professionals. When used incorrectly, these tools can have a negative impact on mental health by spreading inaccurate information or giving users a false sense of emotional safety.
This paper explores what nonprofit professionals, especially those engaged in public awareness and marketing, should be mindful of as AI continues to evolve. It aims to highlight both the promise and the limitations of AI, encouraging organizations to adopt it thoughtfully and ethically in accordance with guidelines set by the user’s respective organization. The following sections will provide guidance on getting started, understanding ethical and environmental concerns, and applying AI responsibly to advance your mission while maintaining community trust.
How to get started
Research also suggests AI helps narrow skill gaps across the workforce, enabling employees to perform specialized tasks without specialized expertise. This boosts efficiency and reduces financial barriers for often budget-conscious non-profits.
Amidst this ever-expanding virtual landscape, how does one begin to navigate it all and integrate AI into their workflows and lives?
Given the rapidly changing nature of this field, staying up to date on relevant news, new tools, and ethical and legislative concerns is key.
Dharmesh Shah, co-founder and CTO of HubSpot, offers a framework for understanding and adopting AI tools:
• The "Just Try It with AI" Rule: Make it a habit to try using AI first for any task you're about to do on a computer. Don't overthink it; you will be surprised by what it can help with.
• Follow the 60-30-10 Rule: To avoid getting stuck in a rut, balance your AI usage:
◦ 60% Repetition: Spend the majority of your time on the proven prompts and use cases that already work for you.
◦ 30% Iteration: Dedicate time to improving the prompts and resources you already use to get better results.
◦ 10% Experimentation: Carve out time to try AI for new things you’ve never tried before.
• Revisit Failed Attempts: If you try something with AI and it doesn't work, don't assume it's impossible. Think, "this doesn't work yet". Set a calendar reminder to try the same task again in three to six months. Because AI capability is on an exponential curve, it may be able to handle the task by then (Shah, 2025).
• Subscribe to Simplified Resources: To make sense of the complex world of AI, find resources that simplify it. Newsletters are a great resource. Examples include:
And many more! Subscribe to those that feel most accessible to you.
Understanding the Art of AI Prompting
As many AI platforms utilize a chat-based model, the art of prompting is becoming increasingly relevant to shape outcomes. Here are a few resources to help you experiment with and refine your prompt-writing skills:
As you adopt AI, use it to augment your own intelligence, not replace it. As Dharmesh Shah says,
"Use AI to test your thinking, use it to clarify your thinking and elevate your thinking, but don't use it to replace your thinking" (Shah, 2025).
Humans bring emotional intelligence and lived experience to the table: by combining your unique human experience with AI as a collaborative partner, you can become a more effective version of yourself.
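To make the value of specific prompting concrete, here is a minimal Python sketch. The template wording, campaign details, and function name are invented for illustration; the point is simply that spelling out role, audience, constraints, and format tends to produce more useful, on-mission output than a vague one-liner.

```python
# A vague prompt leaves the model guessing about audience, tone, and length.
vague_prompt = "Write a fundraising email."

def build_prompt(cause, audience, word_limit, call_to_action):
    """Assemble a structured prompt that states role, audience, tone,
    length, and guardrails explicitly (all details are illustrative)."""
    return (
        f"You are a copywriter for a nonprofit focused on {cause}. "
        f"Write an email appeal aimed at {audience}, in a warm, hopeful tone, "
        f"no more than {word_limit} words, ending with this call to action: "
        f"'{call_to_action}'. Avoid jargon and do not invent statistics."
    )

prompt = build_prompt(
    cause="community mental health",
    audience="lapsed donors who last gave over a year ago",
    word_limit=150,
    call_to_action="Renew your support today",
)
print(prompt)
```

Either prompt can be pasted into any chat-based AI tool; the structured version is also easy to reuse and iterate on, which fits the 30% "iteration" portion of the 60-30-10 rule.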
Though AI can speed up research and complex tasks, human fact-checking is necessary to ensure that the information is accurate and trustworthy.
AI Hallucinations, Explained
AI hallucinations occur when language models confidently generate information that looks and sounds right but is actually false. This can happen because current training methods reward confident answers, even when the system doesn’t know the truth. OpenAI’s new research suggests the fix is to redesign training so that models get credit for honesty (“I don’t know”) (OpenAI, 2025). Until then, it’s important to know the types of hallucinations and how to guard against them.
Common Types of AI Hallucinations
Fake citations: AI generates realistic-looking academic references or author names that don’t exist.
How to correct: Always look up citations directly in databases like Google Scholar, PubMed, or legal libraries.
Invented quotes: AI attributes made-up statements to real people.
How to correct: Verify quotes with reputable news outlets, transcripts, or primary sources.
Incorrect statistics or dates: AI often fills gaps with numbers that “sound right.”
How to correct: Verify figures through trusted data sources (e.g., government sites, academic research, or official reports) (Kalai et al., 2025).
Continue to explore new tools to expand your capabilities, but understand the benefits and limitations of each AI tool you utilize before adopting it.
Some tools to try:
Consensus or Elicit: connect to real sources like PubMed and Semantic Scholar
NotebookLM: allows you to upload your own sources and ask questions based on the contents
AI Ethical Concerns and Environmental Impact
AI offers exciting possibilities, but it also brings risks. Around the world, AI-related incidents are rising, showing just how important it is to use these tools thoughtfully. One big challenge is that many organizations know about the risks but haven’t taken enough concrete steps to address them. For example, there are still very few standardized ways to check whether an AI system is safe, accurate, or fair before it’s put to use.
For non-profit marketers, this can create real problems. If an AI-powered outreach tool unintentionally reinforces social biases, it could damage community trust. If an AI system provides unreliable information in areas like fundraising or impact reporting, it could mislead donors. And if AI continues to be adopted unevenly, underserved communities could end up even further behind. Prioritizing transparency, trustworthiness, and safety is essential to using AI responsibly and maintaining credibility and trust with the community.
One of the most pressing concerns associated with AI usage is the environmental impact it creates. The primary environmental concerns of artificial intelligence usage stem from its immense consumption of natural resources, particularly electricity and freshwater. Training and running large AI models requires massive data centers that consume enormous amounts of energy, which is often sourced from fossil fuels. This high energy demand contributes significantly to greenhouse gas emissions and a growing carbon footprint; for example, training a single large AI model can produce carbon dioxide emissions equivalent to nearly five times the lifetime emissions of an average car. Beyond electricity, these data centers require substantial amounts of water for cooling hardware, potentially straining local water supplies.
The environmental toll is further compounded by the production and disposal of specialized hardware like GPUs, which creates e-waste and involves environmentally harmful mining and manufacturing processes. This problem is exacerbated by a lack of transparency from some AI companies regarding their models' energy consumption and environmental impact, making it difficult to accurately assess the full scope of the issue (Zewe, 2025).
Here is a list of ways you can use AI responsibly to help the environment:
Be Mindful of Necessity: Before using an AI tool, consider if a simple web search or your own critical thinking could achieve the same result with less energy. A single generative AI query can use five to ten times more electricity than a standard Google search.
Write Specific Prompts: Vague prompts can cause the AI to use more computational power and generation time. Using clear and concise prompts can lead to more efficient and direct responses.
Choose AI Tools from Sustainable Companies: Support AI providers that are transparent about their environmental impact and publicly commit to sustainable practices, such as powering their data centers with renewable energy.
Educate Others: Share information with friends and family about the environmental effects of AI. Creating collective awareness can encourage more responsible usage.
Use AI to Solve Environmental Problems: While AI presents environmental challenges, it can also be a powerful tool for positive change. You can use AI to develop or support projects related to climate modeling, renewable energy optimization, conservation efforts, disaster prediction, and tracking energy misuse (Nolasco, 2025).
AI Legislation
At the same time, governments are stepping in to create rules and guardrails for AI. In the U.S., federal AI regulations more than doubled between 2023 and 2024, and mentions of AI in legislation have skyrocketed since 2016. Globally, organizations like the EU and U.N. are developing frameworks focused on transparency and safety. These rules are designed to make sure AI is used responsibly while building the public trust needed for wider adoption.
Governments globally are also making record-high investments in AI that aim to boost innovation, making better AI tools more widely available. These investments reflect a global competition to lead in the AI era (Stanford University, 2025). Along with these advancements, we must advocate for ethical adoption and legislative measures.
For non-profits, this means two things: first, staying compliant with new rules is a must when using AI in marketing or donor outreach. Second, these investments could open doors to new tools and partnerships that help you work more effectively.
Ways to use AI in non-profit marketing
Non-profits can leverage AI in marketing and fundraising to operate more efficiently, especially with limited staff and tight budgets. AI enables data-driven decisions, helping organizations reach the right donors at the right time with the right message. By automating repetitive tasks like content creation, data entry, and performance analysis, AI frees staff to focus on higher-value work such as building donor relationships. The core benefit is its ability to personalize donor communication at scale, increasing engagement, conversion rates, and strategic fundraising (Barenblat & Gosselink, 2024).
Specific ways non-profits can use AI in marketing:
Set Clear, Data-Backed Goals: Analyze past campaign data, donor behavior, and seasonal trends to set realistic, strategic fundraising goals.
Identify and Segment Donors: Categorize supporters based on donation history, engagement, and demographics to tailor messaging for first-time donors, recurring givers, or lapsed supporters.
Automate and Enhance Content Creation: Generate donor appeals, emails, social media posts, and ad copy quickly. AI can adjust tone, incorporate storytelling, and fit word counts based on past responses.
Optimize Campaign Timing and Outreach: Analyze email open rates, donation patterns, and social engagement to send messages when donors are most likely to respond.
Measure and Improve Campaign Performance: Track metrics like donor retention, open rates, and conversion to refine future campaigns.
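The segmentation step above can be sketched in a few lines of Python. This is an illustrative example only: the donor records, field names, and the one-year "lapsed" cutoff are assumptions, not a prescribed methodology, but they show how simple rules over donation history can route supporters into first-time, recurring, and lapsed buckets for tailored messaging.

```python
from datetime import date, timedelta

def segment(donor, today=date(2025, 9, 1)):
    """Classify a supporter by donation history (illustrative rules):
    no gift in the past year -> lapsed; one gift ever -> first-time;
    otherwise -> recurring."""
    gifts = sorted(donor["gifts"])
    if today - gifts[-1] > timedelta(days=365):
        return "lapsed"
    if len(gifts) == 1:
        return "first-time"
    return "recurring"

# Hypothetical donor records; names and dates are made up.
donors = [
    {"name": "A. Rivera", "gifts": [date(2025, 1, 10), date(2025, 6, 2)]},
    {"name": "B. Chen",   "gifts": [date(2023, 3, 5)]},
    {"name": "C. Okafor", "gifts": [date(2025, 8, 20)]},
]

for d in donors:
    print(d["name"], "->", segment(d))
```

In practice, an AI-assisted CRM would layer engagement and demographic signals on top of rules like these, but the underlying idea of bucketing donors before tailoring outreach is the same.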
Further Education Resources
Free AI Workplace Proficiency Course by Superhuman
If you’re interested in connecting with others who are passionate about raising awareness in the behavioral health field, and in collaborating to share ideas and best practices, we invite you to join the Public Awareness Committee within the Community Mental Health and Wellness Coalition.
References
Barenblat, K., & Gosselink, B. H. (2024, July 15). Mapping the landscape of AI-powered nonprofits. Stanford Social Innovation Review. https://doi.org/10.48558/a85x-de19
Brannan, A. (2025, February 21). AI for marketing fundraising campaigns: How nonprofits can use AI to raise more with less. Association of Fundraising Professionals New York City Chapter.
Flagler College Proctor Library. (2025, August 4). The environmental impact of AI: Towards sustainable use. In Generative AI ethics and ethical use in academic contexts. https://flagler.libguides.com/EthicalAI
Girls Who Code. (n.d.). Eco-conscious ways to use AI. https://girlswhocode.com
Kalai, A. T., Nachum, O., Vempala, S. S., & Zhang, E. (2025, September 4). Why language models hallucinate. arXiv. https://doi.org/10.48550/arXiv.2509.04664
Nolasco, D. (2025, January 24). How do I use AI and protect the environment? Earth Day. https://www.earthday.org/how-do-i-use-ai-and-protect-the-environment
Shah, D. (2025). You to the power of AI with Dharmesh Shah | INBOUND 2025 [Video]. YouTube. https://www.youtube.com/watch?v=pPQngmSEIe0
Stanford University, Human-Centered Artificial Intelligence. (2025). The 2025 AI Index Report. https://hai.stanford.edu/ai-index/2025-ai-index-report
Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. arXiv. https://doi.org/10.48550/arXiv.1906.02243
Zewe, A. (2025, January 17). Explained: Generative AI’s environmental impact. MIT News. https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117