Using AI as a Thought Partner – Webinar Replay
Originally presented as: “AI in Action: Best Practices for Using LLMs in Mission-Driven Work.”
This piece was written by Remy Reya, Director of AI and Thought Leadership at Compass Pro Bono, a nonprofit that helps other nonprofits strengthen their impact through strategic support. In it, Remy shares practical ways organizations can approach AI as a thought partner.
Many nonprofits have begun exploring how AI tools can amplify their work and free up staff time to focus on the warm-touch, relational activities that power our missions.
It can be exciting and empowering to figure out how AI can support our work. It can also feel exhausting trying to keep up with all the new platforms, features, and techniques emerging seemingly every day—especially for a bandwidth-stretched nonprofit leader.
Luckily, you don’t actually need to keep up with everything; most of us in the nonprofit sector can get outsized value by focusing on just a few core tools and techniques. In the webinar below, Compass Pro Bono shares some tried-and-true best practices for using large language models (LLMs) in mission-driven work: prompt engineering, deep research, reasoning, customization, connectors/integrations, and more.
Watch the Replay:
As you begin to implement these techniques and integrate AI more deeply into your work, you will also need to decide where these tools fit and how to engage them in ways that keep your critical thinking and creativity at the center.
One technique we recommend is to bake this philosophy into the tools you use. For example, most LLMs allow users to set custom instructions that shape every conversation (sometimes called “personalization features”). You can find instructions for configuring these in Claude here and in ChatGPT here.
Personalization Language:
We’ve designed these custom instructions to help you stay in control when using LLMs. We hope you’ll read them over, customize them as needed, and paste them into your LLM of choice:
- I like to use [preferred LLM] as a thought partner. That means my voice, ideas, and critical thinking must stay front-and-center throughout all of our collaborations. Your job is to augment my cognition and creativity.
- When I ask you to help with a complex task, start by asking me clarifying questions to surface what I’ve already thought through on my own. Push back if it seems like I’m outsourcing thinking I should be doing on my own.
- After completing a task, if appropriate, share something I might not know about the topic we’ve been discussing (an interesting concept, an unexpected connection, a robust counterargument, etc.) along with a link to an article, podcast, or resource where I can go deeper.
- Default to helping me think, not thinking for me. Offer frameworks, questions, starting points, and syntheses rather than finished products (unless I explicitly ask for a finished product).
- Finally, do not proactively offer to complete a new task after completing a request I make. Wait for me to decide what I need next, even if that’s just asking you what should come next; I want to stay in the driver’s seat.
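If your team reaches an LLM through an API rather than a chat interface, the same personalization language can be applied programmatically by prepending it as a system message to every conversation. The sketch below is a minimal illustration of that pattern; the `PERSONALIZATION` text (an abridged version of the instructions above), the `build_messages` helper, and the model name in the comment are all illustrative assumptions, not part of the original piece.

```python
# Sketch: applying "thought partner" personalization as a system prompt.
# PERSONALIZATION is an abridged, illustrative version of the custom
# instructions above; build_messages is a hypothetical helper.

PERSONALIZATION = """\
I like to use this assistant as a thought partner. My voice, ideas, and
critical thinking must stay front-and-center. Your job is to augment my
cognition and creativity.
- For complex tasks, start by asking me clarifying questions.
- Default to helping me think, not thinking for me.
- Do not proactively offer new tasks; wait for my next request."""

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the personalization text as a system message so that,
    like custom instructions, it shapes every conversation."""
    return [
        {"role": "system", "content": PERSONALIZATION},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Help me outline a fundraising strategy.")
# With an API client such as OpenAI's Python library, this payload
# would then be sent along the lines of:
# client.chat.completions.create(model="gpt-4o", messages=messages)
```

Because the personalization lives in one place, every request your organization sends carries the same guardrails, rather than relying on each staff member to restate them.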
Explore additional resources in the accompanying slide deck here.