
Sponsored Content

AI Is Already in Your Nonprofit. You Need a Plan

Artificial intelligence (AI), now embedded in many of the tools nonprofit staff use every day, is already present inside most organizations—whether leadership has formally approved it or not.

Staff are drafting communications, writing and summarizing reports, researching donors, organizing ideas, and increasingly supporting planning and decision-making work. In some cases, they are also entering financial, human resources, or member data, including personally identifiable information (PII). (They may also be using an organization-paid enterprise account for their own queries, conversations, and projects.) Even when staff use these tools with good intentions, in the interest of getting work done faster and better, that use introduces risk.

Most leaders have now seen these tools generate a passable donor email, create or summarize a board packet in seconds, or, with well-trained software platforms like PlanPerfect, be a tremendously successful partner in producing, tracking, and implementing strategic and risk-mitigation frameworks. The question is no longer whether to use these sophisticated technologies. It is whether your organization understands how they are already being used—and has put in place a plan and everyday practices to support them.

What having a plan actually means

For most small- to mid-size nonprofits, having an AI plan does not require a new or complicated framework. It begins by extending existing human resources, strategic, governance, and risk management practices to a new set of tools—whether those practices are formally documented or not.

There are three things every organization needs in place for responsible use:

Clear guidance for staff use
A policy that defines how these tools should and should not be used, including expectations for judgment, appropriate use cases, and handling of confidential or sensitive information. This should live within existing HR policies, employee handbooks, and codes of conduct.

A technology environment that supports success
An organization-wide approach to hardware and software tools and training so staff can realistically follow the policy. This includes approved platforms, practical guidance, and defaults that make responsible use straightforward and easy to understand. This should include not only an employee’s on-site work computer, but also their personal computer(s), phone, and tablet.

Ownership, oversight, and alignment with strategy and risk
Accountability for how these tools are used across the organization, connected to strategic priorities, governance practices, and Enterprise Risk Management (ERM). When that connection is visible, people can see how following technology policies contributes to greater mission impact and to their own personal and professional success.

A staff policy should make clear where use is appropriate, where additional care is needed, and what is off-limits. It does not need to be long. It also needs to be supported by the organization’s technology environment and reinforced through training. People will follow the path of least resistance; if expectations are unclear or systems are cumbersome, risk will follow.

These tools sit within your broader approach to governance and risk. They touch communications, operations, compliance, data security, and increasingly, planning itself. If your organization already has an ERM process, this work should be incorporated into it. If it does not, this is a practical place to begin. At its core, ERM helps answer a few basic questions: What are the most important risks facing the organization? Who is responsible for each one? How are those risks being monitored and addressed? These tools can stand as their own risk category or be integrated into existing areas such as data security or communications. Either approach works, as long as there is visibility, ownership, and a regular cadence of review.

Ownership matters because AI tools can support thinking, drafting, and analysis—but responsibility remains with people. Someone should be accountable for every final output, whether that is a donor communication, a financial summary, a grant narrative, or a strategic recommendation.

Boundaries around sensitive information matter for equally practical reasons. Staff need guidance about what can and cannot be entered. When expectations are explicit and risks are understood, organizations are in a much stronger position to protect donor data, personnel information, financial records, and other sensitive material.

It’s also important to understand that AI can make mistakes. AI-generated outputs can be useful, but they are most useful when treated as a draft or a copyedit. Human review protects accuracy, tone, judgment, organizational identity, and credibility—all of which remain central to nonprofit leadership.

Where AI tools help, and where they require discipline

Used thoughtfully, these tools can be genuinely helpful in nonprofit settings. They can accelerate early drafts, support brainstorming, synthesize large amounts of information, and offer an alternate perspective. They can also support more structured work—helping organizations build, track, and execute elements of a strategic plan, organize ERM inputs, and bring consistency to planning, governance, and reporting processes.

At the same time, they change the shape of the work. Drafting may happen more quickly, but the need for review, editing, validation, and judgment becomes more important. Outputs can appear polished while still missing context, introducing errors, or flattening an organization’s voice.

Common assumptions worth examining

Several assumptions tend to shape adoption before they are fully discussed.

One is the idea that these tools are either inherently beneficial or inherently risky. In practice, outcomes depend on how they are used, what information is shared, and what standards are applied.

Another is that they are primarily a time-saver. They can improve speed in some areas, but they also shift effort into review, editing, validation, and decision-making.

There is also a tendency to assume that governance of technology is only necessary for large organizations. Smaller organizations benefit just as much from having structure. Clear expectations help teams move faster, make better decisions, and work more consistently without adding unnecessary complexity.

Broader questions about environmental and labor impact are also part of the conversation. These systems require significant computing resources, and questions about how data is sourced and used continue to evolve.

Where to start

Most organizations do not need a comprehensive framework to begin. A few steps can create a strong foundation:

  • Identify how these tools are currently being used across the organization, including in communications, planning, and decision-making
  • Draft a one-page guideline for staff that covers acceptable use, data boundaries, and review expectations
  • Ensure your technology environment supports the guidance you are putting in place
  • Incorporate this work into existing governance, strategic, and ERM plans—or use this moment as an opportunity to establish a structure for all of these
  • Assign oversight to a specific person, even if the role is informal
  • Bring the topic into leadership and board discussions so there is shared understanding about how tools are being used and how decisions will be made

These steps also create a foundation for more consistent execution—whether that is tracking strategic priorities, monitoring risk, or aligning work across teams.

Leading with a plan

Technology tools will continue to evolve, and nonprofits will continue to find ways to use them. Organizations that do not risk reinforcing the technology lag that has affected the sector for decades.

Like a strategic plan or an Enterprise Risk Management approach, this plan should evolve over time. It does not require certainty about every tool or every future development. It requires answering a few practical questions:

How are these tools being used in the organization?

How can we use them to our benefit without introducing unnecessary risk—to those we serve, our staff, or our stakeholders?

Who makes decisions about what is appropriate?

How are those decisions reviewed, and what happens when something goes wrong?

That level of clarity allows organizations to make good use of emerging tools while strengthening the trust that sits at the center of nonprofit work.

Related blog posts:

https://www.planperfect.co/blog-and-insights/leveraging-ai-technology-at-your-nonprofit-a-starter-guide

https://www.planperfect.co/blog-and-insights/balancing-ai-s-environmental-impact-and-its-benefits-for-nonprofit-leaders

https://www.planperfect.co/blog-and-insights/the-power-of-simple-why-small-nonprofits-don-t-need-big-tech-to-succeed

