Podcast: Rise of the AI Agents with Rubin Singh and Ryan Ozimek


AI agents are making waves in the tech world, with companies like Microsoft and Salesforce leading the charge. But what exactly are AI agents, and how can your organization leverage them effectively?

Join Ryan Ozimek and Rubin Singh in this insightful episode as they explain the fundamentals of AI agents and their potential applications in nonprofit, foundation, and association spaces.

An AI agent is an automation that takes an input from the user, which can be in natural language, and performs some action you want it to perform. It can leverage large language models, enhancing its ability to understand the request and carry out the action.

From enhancing customer service experiences to streamlining operations, you’ll learn how these tools are poised to transform the way organizations function.

You’ll also gain practical advice on what to consider when exploring AI agents for your organization and how to approach their implementation with confidence. Is the rise of the AI agents something your organization can use to your advantage?

Whether you’re a seasoned Chief Information Officer or exploring tech solutions for the first time, this episode provides a clear, jargon-free overview to help you stay ahead of the curve.

Our podcasts are designed for audiences with varied experiences with technology. In this podcast with Ryan Ozimek and Rubin Singh on the rise of the AI agents, learn more about how to lead nonprofits by understanding new technologies as they emerge, and how those new tools can fit your use case.

Like podcasts? Find our full archive here or anywhere you listen to podcasts. Or ask your smart speaker.

Transcription

Kyle Haines: Welcome back to Transforming Nonprofits. I’m so excited for this episode, because we’re going to be talking about AI again. And for those of you who are regular listeners, you know this is a topic that we’re always trying to learn more about and figure out how to integrate AI into the way that we work.

This topic is about AI agents, and I brought together two people whose opinions I respect immensely, Rubin Singh and Ryan Ozimek. I’m hoping to learn more about what they’ve learned, and share a little bit about what I’ve learned, from two vendors that are heavily promoting the use of their AI agents, Microsoft and Salesforce.

With that, let’s get into the conversation so we can all learn more about AI agents. This is Transforming Nonprofits.

Rubin and Ryan, thank you so much for joining me. Ryan, you’re a veteran. I’m going to have to start compensating you for paid time on Transforming Nonprofits. I really appreciate you making time again. And Rubin, I’m really glad to welcome you in, and I always value hearing your perspective on these topics. I’m super excited for today’s conversation.

Rubin Singh: Thanks for having me, Kyle. Good to see you, Ryan.

Ryan Ozimek: Good to see you both. And I guess, Kyle, five times I get a jacket. Is that what’s going to happen the fifth time I show up?

Kyle Haines: Yeah. It’s something like that. A jacket or a mug? We haven’t figured out which one we’re going to do yet.

I told you all this before, when you agreed to join. What prompted me to reach out to both of you was… I was recently at the Microsoft Conference, and they were talking about AI agents. And this was after I had heard Salesforce talking about AI agents.

And I’m really curious about all things AI right now, and I would imagine both of you are as well.

I was hoping I could learn more from both of you about what AI agents are, how people might use them, and get your candid opinion about how far along are these things? Are people going to actually use them? How do people use them? All those things.

What is an AI Agent?

Can somebody give me a definition of what an AI agent is for people listening who might not have encountered one?

Rubin Singh: I can surely start off, but I would love to hear Ryan’s thoughts as well, because I’ve seen in this space, even a simple question like that, depending on who you ask, you might get a different answer. But that’s kind of where we are with AI and agents at the moment.

When I explain it to folks, especially our clients, the simplest way I can put it is it is an automation.

It is an automation that can take an input from the user, that can be in natural language, to perform some sort of action that you want it to perform. It has the ability to leverage large language models, like a ChatGPT or other models out there. In the simplest form, that’s how I put it.

And I know we’ll talk a little bit more, as you mentioned, about the different scenarios it could be used in. But that’s kind of the way I explain it to folks. It’s an automation that can take natural language inputs and be able to leverage all that AI has to offer through their large language models.
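To make that definition concrete, here is a minimal sketch of the pattern Rubin describes: a natural-language request goes to a large language model, which picks one of a small set of predefined actions, and the automation then runs it. This is purely illustrative, not anything the guests built; it assumes the openai Python package, an OPENAI_API_KEY in the environment, and an available chat model, and the action names are made up.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The only "actions" this toy agent is allowed to perform (both are stubs).
def summarize_cases(participant: str) -> str:
    return f"(stub) summary of the most recent cases for {participant}"

def log_workshop_attendance(participant: str) -> str:
    return f"(stub) recorded workshop attendance for {participant}"

ACTIONS = {
    "summarize_cases": summarize_cases,
    "log_workshop_attendance": log_workshop_attendance,
}

def run_agent(user_request: str) -> str:
    """Ask the model to map a natural-language request onto one allowed action."""
    prompt = (
        "Pick one action for this request and reply with JSON only, e.g. "
        '{"action": "summarize_cases", "participant": "Rubin Singh"}.\n'
        f"Allowed actions: {list(ACTIONS)}\n"
        f"Request: {user_request}"
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do
        messages=[{"role": "user", "content": prompt}],
    )
    choice = json.loads(reply.choices[0].message.content)
    handler = ACTIONS[choice["action"]]            # look up the chosen action
    return handler(choice.get("participant", ""))  # perform it

print(run_agent("For Rubin Singh, enter that he attended this workshop."))
```

In practice, the vendor platforms discussed in this episode wrap this same request-to-action loop with connectors, permissions, and guardrails, which is most of what separates a production agent from a sketch like this.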

Kyle Haines: Ryan, how much did he get wrong?

Ryan Ozimek: Zero percent wrong. A hundred percent right. I’m going to add another five percent to be a hundred and five percent because it’s math and it’s Friday. So, let’s go.

One of the things that I learned is that large companies like Microsoft are trying to distinguish between copilots and agents. And we could talk about that a bit during our discussions today. But it is kind of interesting, because what I think is becoming more nuanced, the way I’ve been thinking about it, is that agents may be like sub-processors, like microprocessors handling specific tasks and types of things, that may talk to other agents or potentially to what Microsoft calls their Copilot. The Copilot is more like your sidekick working with you, which is then working with these agents to do things in an automated fashion for specific tasks.

I think what Rubin laid out is absolutely correct. That’s, I think, the broad approach we’ve been seeing for what AI agents are. And I’m wondering, as we’re heading into the middle of 2025 now, if we’re starting to see them built for very specific tasks, to then work with other agents or to work with one party, one solution, one Copilot that is your sidekick doing a lot of different things.

It’ll be interesting to talk about that today, but I’m noticing that to be a new nuance that wasn’t there when we talked last year.

Kyle Haines: As an example of that, if you want to develop an agent to create a peanut butter and jelly sandwich, you develop an agent that toasts the bread, and then you develop an agent that spreads the peanut butter, then you develop an agent that spreads the jelly. Rather than thinking about it as, I’m going to write something that creates a peanut butter and jelly sandwich, agents are much more about component parts, individual parts?

Ryan Ozimek: At least from what I’ve been seeing.

It is really interesting because I feel like we’re just hitting the replay button on 40 or 50 years of compute technologies. Like, hey, instead of using mainframes to try to do everything for everyone, we’re going to start building microprocessors that do small bits of things that then work with other microprocessors to do other bits of things. And then there’ll be a core processor that will do everything and put it all together.

I feel like what Rubin said is that layering large language models and further, more intelligent automation on top of all of that is the big shift that we’re seeing.

And when you’re building the peanut butter and jelly sandwich making factory, it does feel like we’re replaying what we first started working on 40 plus years ago now. But maybe in a way where we just think that we want a peanut butter and jelly sandwich, and then the Copilot then tells the agents to then go to work to make that specifically for you as fast and efficiently as possible. Maybe. I don’t know.

Rubin Singh: Yeah. And to add on to that, the idea of chat bots and having automated task performance is not something that’s new, but now that we have the ability to engage with these large language models, as we know with AI, it learns our style, it learns our behaviors.

So not only is it performing tasks, but over time, it learns how we perform them and what we like, what we don’t like so we can perform them in more personal ways. And that’s, to me, where the power really lies.

Is a Chatbot an AI Agent?

Kyle Haines: I think it’s important for people listening who might not… Well, first of all, it does not seem like the peanut butter and jelly metaphor landed. I have to move on to a new metaphor.

It seems like an important distinction that when we’re talking about AI agents, that we’re not talking about what might come to mind for many people. And that’s a chatbot that you see on a website that’s interacting with you, and you’re asking it questions, and you’re getting frustrated, and you’re figuring out how to circumvent it to get to a live person. This is something much different.

This is something that if I understand what you just said, Rubin, this is about something that is working alongside you. Is that… am I getting this right?

Rubin Singh: Yeah, I mean, that’s the way I see it. You know, in my working with it, it’s almost like not so much a chatbot as it is an assistant. You know, it is that sort of first line of communication that isn’t just looking for keywords to match against, but it is, again, as it’s learning more over time, it is really trying to behave in the same way that you would.

I would say that the assistant is the example I often share for how it’s different from a chatbot.

Ryan Ozimek: And then I also think what Rubin had shared about the automation part of this is really important because the experience that many of us have had with ChatGPT is, as Rubin was saying, almost like a chatbot-like experience. Somebody is typing in what we call prompts or questions, or it’s feeling more like a natural conversation now that you can just hit talk and have a verbal and audio-based conversation with the chatbot, the agent, et cetera.

Part of what I’m seeing happening on the AI agent side of things are automations that are happening with intelligence based on data being updated in a CRM, based on registering for a particular event within my community.

Those types of automations that are happening are not completely wired end to end to solve every potential scenario, but instead are left to the intelligence of a large language model or others to be able to do that, and to infer based on past experiences. That’s where I’m seeing more of this acceleration happening on the AI agent side.

AI Agent Use Case Scenarios 

Kyle Haines: Have either of you developed an agent? Have you done any playing around with it? Have you fully replaced yourself with a digital replica of yourself? Are you actually here today? Is this your AI agent on today’s call or is this really Rubin and Ryan?

Ryan Ozimek: There’s zero chance that we’d be just like humans on this podcast with you. Of course, it’s our AI agents, but it’s so difficult to understand the difference because it’s so good. And I feel like one of the things that we’ve learned internally at Soapbox Engage is that when you’ve had a chance to work with, let’s just take an example.

You’ve had a chance to work with a lot of organizations in a lot of different ways. And you’re noticing, as a human, similarities across different issues that organizations may be having over a long period of time. It would be great to be able to intelligently ingest the information that somebody is providing, or the pain points that they’re having, and be able to look at what are some of the best practices you’ve seen be successful. And have that baked into a way to understand that knowledge, which for us is maybe stored in a database, maybe in something like Zendesk for a support ticketing system.

We found that really helpful to be able to say, as folks are asking us questions, can we build agents specifically within things like OpenAI’s platform to be able to automate that process a bit, so that we can have it be very specialized.

And one of the things we found beneficial on the agentic side, and I think agentic is the word we like to use in the community, is that the agentic model we’ve got is built around customized data. It’s helpful to be able to have a large language model that can just say, of all the content we’ve got in the world, how do we make good sense of things?

But it’s really helpful when you’re trying to solve specific problems for particular cohorts of organizations or people, to have a dataset that might be either proprietary or just unique to that community.

And so we found for us, not just having our support articles, which are kind of public on the web, but being able to have all the content of that private support ticketing system surfaced to us internally, lets us solve the problems of other similar organizations across so many data points, and we would never be able to script this out. To be able to have an agent that has been built specifically to do that, on the OpenAI platform for instance in our use case, has been really helpful.

And it’s helping our team start to realize there’s more opportunity to solve problems faster, in cases where we wouldn’t have made the semantic connection between what somebody’s asking and a similar question that popped up 10 years ago. So stuff like that’s been really interesting to see internally for us.
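For readers who want a feel for what Ryan is describing, here is a rough, hypothetical sketch of the idea: embed past support tickets so that phrases like “logging in,” “signing in,” and “authenticating” land near each other, retrieve the closest match, and let a model suggest an answer in that context. This is not Soapbox Engage’s implementation; the ticket text, model names, and prompt are illustrative, and it assumes the openai package and an API key.

```python
from math import sqrt
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PAST_TICKETS = [  # illustrative stand-ins for a private ticket archive
    "Donor cannot log in after a password reset.",
    "Event registration form times out on submit.",
    "Payment page rejects valid credit cards.",
]

def embed(text: str) -> list[float]:
    """Turn text into a vector so 'login', 'sign in', 'authenticate' land nearby."""
    return client.embeddings.create(
        model="text-embedding-3-small", input=text
    ).data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def answer(question: str) -> str:
    q_vec = embed(question)
    # Pick the most semantically similar past ticket as context for the model.
    best = max(PAST_TICKETS, key=lambda t: cosine(q_vec, embed(t)))
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do
        messages=[{
            "role": "user",
            "content": f"Similar past ticket: {best}\nNew question: {question}\n"
                       "Suggest next troubleshooting steps.",
        }],
    )
    return reply.choices[0].message.content

print(answer("I'm having problems authenticating on the donation page."))
```

The retrieval step is the part that makes the private knowledge useful: the agent answers from the organization’s own history rather than from the open web alone.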

Rubin Singh: Yeah, I like that use case. Thanks, Ryan.

I think for us, we’ve been a little bit more reluctant.

At OneTenth Consulting, especially with us focusing on the social justice subsector, I would say a lot of our clients are very nervous. And I don’t really blame them because of privacy concerns, bias and a host of other reasons.

But also, I think the biggest issue was a lot of the examples that we’re seeing that were supposed to be relevant for nonprofits really weren’t resonating.

A lot of the organizations I work with don’t want their donors or their program participants chatting through an automated tool. We’re very intentional about the way we communicate with folks. That doesn’t fit for us.

Or oftentimes, we hear about how you can automatically generate an email and send it out based on certain conditions. Well, you talk to a comms team at a small or medium nonprofit, and they’re offended by the notion. I actually had a situation where I made a quick comment about, “with this automation, you could just have ChatGPT generate the email and send it out…”

And I had to pause for a second. Here this individual has spent their career building relationships through their various comms channels, being so careful in the wording that they choose and so thoughtful in the way that they interact with each and every individual. The idea of just offloading that and completely allowing an automation to take care of it, as smart as it might be, is really jarring.

I think the organizations that we work with have been a little bit slower. But I’d say just now, only in the past few months, we’ve been having different conversations. Maybe it’s not the client communication or the donor communication that we want to automate, but there are absolutely ways that we can create efficiencies. So that’s really where we have found agents to be very helpful for our clients.

If I’m a case manager, what can we do to summarize the most recent 10 cases so I can quickly see how I want to frame the next conversation with my program participant for a counseling session. If I’m a gift officer, and I’m taking over a new prospect that I’m working with, what can I do to summarize that data quickly?

A lot of summaries, a lot of efficiencies. Those seem to be the models that are really starting to resonate with our clients. Not so much the external-facing stuff, but what efficiencies can agents and AI create internally?

What Does AI Mean for Nonprofit Jobs?

Kyle Haines: I was wondering from an adoption perspective and a testing perspective whether the safest route would be to test on things that lead to operational efficiencies. And just thinking from a change perspective, we’re thinking about this at one of our clients. And I think that a reasonable question people are asking is, what does this mean for my job?

And we very much think it just means you get to work on better, different stuff, or you’re more effective. It doesn’t mean this is replacing you, but as both of you know, that doesn’t always resonate. So we’re just proceeding very cautiously as we think about what that looks like.

Rubin Singh: Right. And in my opinion, I think that value proposition has been a little easier for us to communicate in the nonprofit sector because it’s like, hey, if we can save you that time of having to manually put together that donor portfolio, well, that gives you a chance to meet two donors instead of one. It’s freeing up time so you can do more impactful work, which is I think what you’re getting at, Kyle.

In my experience, that has really resonated with folks and helped balance out that fear folks might have that their job is going away.

Kyle Haines: Yeah.

Can AI Tools Create More Equity and Accessibility?

One of the things, Rubin, I’ve always appreciated about how I’ve seen you participate in various conferences is that you’re always asking questions about bias and equity built into the tools that we use. And with AI, obviously, there are a lot of concerns, and I think they’re justifiable, around how these things are trained.

One of the things that one of my clients is evaluating is they have developed a lot of content that is medically reviewed, high quality content, but it’s very inaccessible. I’ve wondered is there an inverse opportunity to make that content more accessible, even just from an educational attainment perspective? How do we make this more consumable by people?

I’m wondering whether you’ve seen any instances of that, or what your thoughts are on the opportunities for AI to remove some of the barriers for people around information. For this organization, it’s in service of understanding more about their diagnosis. This is a long question, isn’t it, Rubin. And you might be like, I think that’s six questions, Kyle.

Can you use AI and an LLM to really home in on, we are going to train this and create processes for it, and are we doing a good job of democratizing this information?

And I’m going to stop talking and let you actually answer one of my 12 questions.

Rubin Singh: I’d love to hear Ryan’s opinion on this too. I’ll be completely honest, Kyle, I’ve been so focused with the organizations that we work with, especially in the climate that we’re in right now, on how do we protect the data? How do we limit what personally identifiable information (PII) feeds into the large language models? How do we be more mindful about which large language models we’re using?

It’s really been the opposite. It’s really been, how do we protect? How do we avoid bias? How do we avoid the toxicity and such?

My mind hasn’t really gone toward the inverse of that, like you’re saying: what can we do to democratize data?

It makes perfect sense, especially in the scenario that you’re describing with diagnosis and medical data. I think creating customized models that really are very careful about what data that is, but then creating an opportunity to share makes perfect sense. I mean, that sounds like an amazing idea and model that I’d love to learn more about.

Kyle Haines: Let’s just pause there, Ryan, on “that was an amazing idea, Kyle,” from Rubin, and I don’t think I’ve ever actually heard you say that on a podcast, so I’m curious, Ryan, was that an amazing idea, Kyle?

Ryan Ozimek: Rubin, that was an amazing idea.

Rubin Singh: I’m glad I thought of it, yeah.

Vibe Coding with AI Agents and Other Opportunities

Ryan Ozimek: I think that makes a lot of sense, Kyle. One of the things that I see in my seat, as somebody who’s really trying to drive technology to increase productivity and efficiency and growth within organizations, is that I come in with a heavy technologist’s hat on. I think one of the things that we’re seeing from the engineering world is things like vibe coding.

If folks haven’t heard about that, like this idea that you can start having a conversation with AI to start building software without really knowing anything about software engineering. It is a powerful democratizing opportunity, which is why I got involved with technology so long ago. The purpose of our company was to help democratize technology so that as many organizations as possible could be using it to better their missions and to improve their community’s efforts.

I think the challenge is, and I’m glad we’ve got folks like Rubin and others looking at it: well, what does that mean? Because what is the intelligence trained on to make that happen?

And at the same time, I see a new generation of folks saying, “I wish I had the designer skills. I wish I had the language skills of somebody who’s been writing content for our organization for four years. I wish I had the software skills to be able to write a small piece of software that our organization needs to process this particular type of grant application faster…”

Those seem like massive, huge opportunities of growth for folks that otherwise probably wouldn’t be able to have access to doing that. That’s been amazing to see all those benefits.

I think we can all agree there’s some pretty outstanding outcomes that you can see from that. But it’s really important for us to be mindful as to what is the intelligence, which is not really intelligence, but what is the process by which these things are deriving those conclusions. So I see both sides of this as well too.

How Easy Is It to Set Up an AI Agent? What Skills Are Needed? And How Secure Is It?

Kyle Haines: What prompted wanting to talk to both of you, for me, was attending a Microsoft conference that Ryan, you were there, and it was heavily focused on AI agents. What I heard from Microsoft left me feeling like I could walk away from this and probably on the plane ride home develop an agent for myself.

My experience was much different.

It does seem at this point that it requires a skill set that not all nonprofits or foundations or associations have. Do you think it’s a fair approximation or do you think I needed different training? I’m just not that good at this stuff? It’s a Kyle problem?

Ryan Ozimek: Wait. Rubin, can I answer first? Kyle, you’ve been at this too long. Have you seen what the kids are doing these days? Oh my goodness.

I would say I walked away from that event, after talking to folks, especially in a younger generation than maybe what the demographic of the three of us represents, with the understanding that they have the ability to pick this up really quickly. And these are the folks that are on the front lines. These are the early-twenty, mid-twenty somethings at nonprofit organizations that are really the ones doing a lot of the work from an operations perspective. I actually feel like they’re moving really fast. I think they’re picking it up really, really quickly.

That’s not to slight our intelligence of the three folks on this call, of course.

What I think is the challenge is, with great power comes great responsibility, and that responsibility is not fully understood by most people, especially folks who are earlier in their career and haven’t made as many mistakes as Kyle Haines has made in his career.

And so I think with that lack of experience, and with powerful automation and the ability to move so quickly, my concern (and it’s the same concern we’ve had for decades now in the technology space) is that folks who don’t fully understand what they’re doing, or the impacts of what they’re doing, as they’re vibe coding and trying to make software or technology solutions for their organizations, might be exposing themselves to security hacks and other issues that are out there.

I’m feeling like folks in the front lines at many of the small and mid-sized organizations that we serve are the folks that are actually picking the stuff up pretty quickly, and it feels more natural to them. But it’s being done in a way that I don’t feel terribly comfortable about the security and the privacy of the data that the organizations are typically using.

Rubin Singh: Yeah. I just want to echo everything that Ryan said here. In terms of skill set with the students that I work with, this is really a common topic that often comes up.

To me, the skill that I encourage folks to have, especially younger folks who are entering the space right now is curiosity. It’s not so much the technical elements of it. To Ryan’s point, they’re picking that up very quickly.

It’s that curiosity of saying, wait a second, what exactly is the basis of this model that I’m using? How open is it? How closed is it? What data sources are feeding this AI tool that I’m using? What are the risks of having personal data fed back into the LLM? What are the implications of that? To me, those are less about technical skills and more about curiosity, and the agency to raise your hand if something doesn’t feel right.

Those, to me, are the skills that I think are needed in addition to the technical aspects. I would also say this is where strong leadership comes into play, ensuring that whatever agents are used are aligned with the mission.

So, if you’re an organization that is very high touch in the way you cultivate relationships, but you want agents to go ahead and automatically send out emails, well, that actually could do a disservice to your nonprofit in this case.

So, I think it’s curiosity, strong leadership, having a clear understanding of your mission and your goals and making sure they’re aligned. There are definitely skill sets that are needed, but not necessarily the obvious, more technical ones.

Kyle Haines: Yeah.

What Does Day One Post AI Agent Launch Look Like?

What comes to mind is the example I used earlier, and I’ll use the word democratizing again, because I think it’s a good word: democratizing access to patient information.

Immediately, what comes to mind is post-launch, what does day one look like?

And preliminarily, my hypothesis is somebody’s reviewing all of the incoming prompts and all of the responses and immediately flagging anything that’s of concern. And I don’t know the answer to this question yet, but that probably goes on for a period of time. And then it can probably decrease to once a week, with fewer people involved. And then at some point, it can decrease to once a month. I don’t yet know what that cadence is, but I very much take… That’s a big part of my thinking as well: I think these agents have enormous power.

And Ryan, I think you said with them comes enormous responsibility. You can’t just turn everything over to them at this point.

Are Larger and Smaller Nonprofits Using AI Agents Differently?

Ryan Ozimek: And Kyle, I’m curious, you tend to work with mid-sized and larger organizations from a CIO perspective. One of the things that I’m not taking into account is that I’m often working on the front lines of organizations that tend to be small to mid-sized groups. They’ve got a small but mighty team, where especially folks younger in their career or earlier in their career are doing a lot of different things.

But with larger organizations, there’s a lot more infrastructure and there’s a lot more enterprise-grade systems happening. The equation is probably significantly different when it comes to adopting, using and managing AI. Would that, does that sound right to you?

Kyle Haines: I’m not sure. I think that different organizations are at different places along the spectrum of their comfort with using AI. And I think that my experience, which is not the experience of our whole firm, but usually the impediment, I shouldn’t frame it as an impediment…

How far organizations are willing to go with respect to AI really heads in the direction of what Rubin’s talked about a couple of times: what are the risks? What is the potential for bias? What’s the potential for erroneous information?

And what comes to mind – I forget the name of the organization, but as a cautionary tale, there was an AI agent that was deployed at an organization that was focused on eating disorders. And the AI agent was giving horrific advice back. And it severely degraded the organization’s ability to fundraise. I think there’s questions about the sustainability of the organization.

There are profound risks associated with this. And I wasn’t involved with that organization.

But I’m curious what got missed and whether, even at a large organization, it was just “I spent a couple of hours. I think this is good. I trained it. It’s ready to go. And away we go.” And there weren’t a lot of the things we’ve talked about to this point.

Rubin Singh: Yeah.

I mean, what you just described, Kyle, earlier about what happens on day one and how you can test against the results, that’s no different than what we might do for an implementation where we’re testing data or where we’re testing functionality. We’re very methodical about how we test and retest, especially shortly after implementation.

Why would we be so methodical in our testing of data and functionality, but when it comes to AI, say it’s perfect and we don’t have to worry about it?

I think it’s really, to your point, just applying those same checks and balances to AI as we do with anything else system related. We can’t just assume it is perfect. We can’t just assume it’s going to give the correct results or the preferred results. We have to test and augment and tailor, just like we would any other element of a system that’s going live.

Even months later, years later, what might be considered bias or toxicity, that changes over time, too. There’s different prejudices or stereotypes. Things change over time. I think that this feedback loop and iterative testing of AI needs to just be recalibrated every so often to make sure it is aligned with values as well. I know that’s a little trickier, but I guess the overall point is it’s not a one-time thing. It’s definitely an iterative process to make sure it’s tested and tailored and augmented accordingly.

Bias Testing in AI – and in Humans

Kyle Haines: Back to your question, Ryan, and I’m wondering if this lands for both of you. I think that irrespective of how large an organization is in 2025, whether you’re a $5 million a year organization or a $500 million a year organization… To build on your example, Ryan, testing looks different for AI than it does for a CRM project. With a CRM project, you’re testing: did revenue flow through the system in the way that we expected? The stakes are not the same as, is this AI model injecting bias? Is it injecting incorrect information? Is it providing incorrect information?

My thought is, it doesn’t matter how large of an organization you are, if you’re thinking about using AI agents and thinking about how they impact your organization, you need executive leadership to be involved in what does testing look like across the board. There’s not an organization that can hand that off to an individual team or an IT department.

Ryan Ozimek: I think that makes sense. And one of the things that just came to mind is that a lot of folks talk about their AI agents or their copilots as their intern or their assistant. I’m just curious, Rubin, if the frame of mind of somebody who’s hiring AI should be the same as when they’re hiring a person to work at the organization. Because people come with bias, and they may not tell you that in the interview. They may not even tell you that after four months, four years. You may never know. But their outcomes could have bias in them, and their work product could have bias in them. And we test that with annual reviews and check-in processes and oversight, etc.

It just makes me mindful of how different will this be than the things we should be doing as business leaders, as organizational leaders with the humans that we work with.

And the one huge difference that I see is that as we trust the technology more, it will always move faster than us. And we can’t keep up with it unless we guardrail it in some way, and it might start making leaps of assumption that quickly head down the wrong path, away from the direction we want to take it. But at least with a human, there’s relatively more time to take that into account.

But I’m curious, Rubin, if you’ve been hearing organizations talk about that as well too. Is it making them rethink, well, gosh, what are we doing with our humans when it comes to bias training?

Rubin Singh: That’s great. I love that. Yeah, we talk about them in two different tracks: we want the AI to be as human as possible, and at the same time, from an HR perspective, we’re always mindful of bias and discrimination. But I like the way you put that together.

If we truly want AI to be as human as possible and equivalent to an assistant, then we have to pause, check, reflect, review, adjust accordingly.

I love that. I think that’s a great way to frame it.

How Do Organizations Best Get Started Engaging with AI Agents?

Kyle Haines: Reflecting on how an organization might begin to explore the use of agents, something about what you said, Ryan, made me wonder if perhaps a way to explore it is individually.

How do I engage with an AI agent?

Before I start thinking about how does a team or how does a department or how does an entire organization (engage with an AI agent.)

Is that where either of you started? Have you seen organizations start with that approach?

Ryan Ozimek: At least from my perspective, my view is that in very short time, we are all going to probably have our own named copilot or agent and it’ll be either Siri on your phone, if you’re an Apple user, or it’ll be Gemini on your Google device. It’ll be named something, whatever you want it to be.

And it’ll be so personal and so connected that we are going to start blurring the lines between who is Rubin and who is Rubin’s AI.

And we might be changing the way we communicate based on what we see as feedback from these agents, and that line is going to get blurred even more because we’re going to see different feedback coming the other way. That’s going to encourage us to communicate more clearly or differently. I can’t imagine that the consumer side of this is not going to be the main driver of helping average individuals understand how they’re going to work with AI in the future.

I think from a team perspective, all the guardrails, all the structure, all the processes we possibly can do to protect the organization’s data, integrity, privacy, et cetera, is going to be really important.

But I think what is just getting steamrolled over all of us is that AI is going to be showing up everywhere, and that is going to be the number one place where I think individuals are going to start understanding the value it brings to them.

The more that happens on an individual basis, the more AI is going to change us as humans, and it’s going to change the way we’re working together. And that is something organizational leaders, I think, need to keep in mind. I actually think about that myself as I want to send a message out, and I just start rage typing sometimes. But then I say, AI, just make this softer, make this better. And then I think, wow, that’s actually really good. I want that just to be the way I communicate with people moving forward.

What’s that mean for individuals and organizations that are communicating internally, externally? And then what’s that going to mean for the way in which we expect other people and agents to work on our behalf?

Rubin Singh: I think that personal usage of agents and AI is definitely a way to go.

I’d say for us, it’s just varied so much from one organization to another. And this is where I think where you start with AI and agents has to be aligned with your organization’s values.

I have some organizations I work with; we hop on a Zoom and the moment they see an AI note-taker, they say, shut them out. Don’t let them in.

And then there’s others that say, oh yes, awesome, can you send me those notes that come afterwards?

Everyone views it a different way, so the recommendation we give to folks is start slow, start simple, with very low-risk scenarios.

Efficiency creation tends to be a model that really resonates with folks. Something that just summarizes notes that would normally take you a lot of time.

Another thing, and this is a little bit more of a Salesforce example, but it’s not something that jumped out at me when I saw demos and all the hype about agents. One of the developers on our team showed me how you can use agents to simplify tasks within the system itself.

For example, in program management in Salesforce, there are lots of complaints from folks that it’s too many clicks, too many objects. To indicate one benefit distribution that somebody gets, you’re clicking six or seven times.

We created an agent where you can just say in natural language, for Rubin Singh, enter that he participated in this workshop, go. And it has now created the four different records behind the scenes. It’s a huge time saver and super low risk. In that scenario, it’s not even feeding anything to a large language model.
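As a rough illustration of what Rubin describes (and emphatically not OneTenth’s actual agent), the sketch below maps one natural-language command onto the several linked Salesforce records a user would otherwise create click by click. The object and field names are hypothetical placeholders, the credentials are dummies, and the parsing is a simple pattern match, consistent with Rubin’s point that no large language model is needed for this step; it assumes the simple_salesforce Python library.

```python
import re
from simple_salesforce import Salesforce  # assumes this library and real credentials

sf = Salesforce(username="user@example.org",  # dummy credentials,
                password="***",               # for illustration only
                security_token="***")

def log_workshop_participation(command: str) -> None:
    """Handle commands like: 'For Rubin Singh, enter that he participated in this workshop, go.'"""
    match = re.match(r"For (?P<name>.+?), enter that .*workshop", command)
    if not match:
        raise ValueError("Command not understood")
    first, last = match.group("name").split(" ", 1)

    # The four related records that would otherwise take six or seven clicks.
    # A real agent would look up the existing Contact rather than create one.
    contact = sf.Contact.create({"FirstName": first, "LastName": last})
    engagement = sf.Program_Engagement__c.create(   # hypothetical custom object
        {"Contact__c": contact["id"]})
    sf.Service_Delivery__c.create(                  # hypothetical custom object
        {"Program_Engagement__c": engagement["id"]})
    sf.Task.create({"WhoId": contact["id"],
                    "Subject": "Workshop participation logged",
                    "Status": "Completed"})

log_workshop_participation("For Rubin Singh, enter that he participated in this workshop, go.")
```

The point of the sketch is the fan-out: one plain-language instruction replaces a handful of repetitive data-entry steps, which is the kind of low-risk efficiency Rubin says resonates with clients.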

This is where the leadership can really help define what is appropriate for us as an organization. What is the least risk and decent reward that we can implement here, just to let people get a sense of what’s possible, and then you iterate from there.

Are We Practicing What We Recommend?

Kyle Haines: Yeah. Ryan, you shared an example of how you’re using AI agents internally. At least this is what I heard, to help build knowledge internally as a team.

And I’m wondering, Rubin, if you’ve explored that at all at OneTenth. And I’ve wondered about it at Build; how can AI help me be more efficient in my work and surface the work of others that’s helpful in my daily work?

And an example of that is, have we ever worked with a client who’s encountered this particular challenge, and could we use agents in showing that and making our work more effective and efficient and ultimately lower cost to our clients?

Have you looked at anything like that as an interim step around AI agents, just internally focused at OneTenth?

Rubin Singh: Internally, not so much. I’m just scratching the surface here with AI, so even this conversation has given me some ideas. But we have found ways to pass on some of those things, some of those efficiencies, to the client.

An example would be when folks used to give us a data set, and we didn’t know how clean it was. We would usually have a developer that might review it, analyze it. They might have several tools to say, okay, here’s where your problems are. Now, we’re able to do that a lot quicker. We’re able to ask an AI agent, what are the patterns? What are the errors? What are the inconsistencies? What are the potential data integrity issues? I don’t know if I’d consider that internal, but it is something that would normally take us a huge amount of time that the client is paying for, that we can now do in a fraction of the time. So that’s something. But how we leverage AI for our own content strategies and such, I’m excited to explore that further.

Kyle Haines: Yeah.

The Promise of AI Agents to Reduce Risk and Increase Accessibility

Ryan Ozimek: I wanted to add one other thing on this whole topic around AI agents versus just the general AI itself.

When folks use tools like ChatGPT, it’s a big language model; it can do anything for everyone and be mediocre or really good, depending on what you’re trying to do.

I think the promise of AI agents is that for somebody looking at the world of agents through Rubin’s lens of, how do we make sure we are taking risk into account appropriately? From a technologist’s perspective, a lot of that is scoping down. How do we trim up the number of things that it’s touching?

I feel like the agentic world allows us to say, big picture AI stuff, let’s break that down into the smaller pieces of the peanut butter and jelly factory. And all we really care about is the peanut butter side, because I’m a peanut butter guy, not a jelly guy, and I only want to focus on automating the peanut butter process.

Well, that’s really cool because we’ve significantly reduced the scope of what it’s touching and what it’s doing. And that reduces hopefully some of that risk.

And I think that a lot of what we could be seeing in the future are things related to that. Help me know from a technology perspective, when somebody is asking a question and they say, I’m having problems logging in, I’m having problems signing in, I’m having problems authenticating. For systems to know that that’s the same thing.

But if you had it in a help desk that only said sign in, it would miss authenticate and login, because it doesn’t see words like that. I think that AI and large language models and things like that, can help semantically connect people that speak with very different diverse voices in a way that can help resonate and make sense as you’re trying to help an assistant serve them.

If you just scope it down to just something like that, I think it’s a great use case. We all speak in different voices and use different language, but now we can solve problems faster because we’ve really trimmed it up to be just interpreting the language, hopefully getting the semantically right answer for them.

Training the New AI Like You Would Train a New Employee

Kyle Haines: I thought of something as you were talking, Rubin, and some of the things that you said as well, Ryan. I thought about an example, flipping the inverse on the risks associated with AI.

I’m thinking of a time a few years ago where someone’s spidey sense went off. We got contacted by an organization and we just got “lucky,” they were an identified hate group.

And that’s like training an AI agent for good. For example, we wouldn’t work with an identified hate group. And is it 100 percent perfect? Could I train an agent to be 100 percent perfect? No, but in a capacity-constrained small organization, it’s a good first step, it’s better than nothing, and it could do scanning against that.

And what I wish I had had, Rubin, was the time to think about the response back, so that there was no further engagement, without inciting anger. So how do I gracefully exit this conversation? Because I’m not prepared to debate whether they’re an identified hate organization according to the Southern Poverty Law Center. I’m going to trust them above that organization’s own definition of itself. I wish I had had AI, which wasn’t an agent at the time, to help me write that response. Clearly, it worked, because we never heard from them again, and no anger. But it took me a long time to do it.

I think I shared that example because I think that’s what you’re talking about, Ryan. You’re really training an agent to understand this is what we trust. We trust the Southern Poverty Law Center. That is an important input into this agent in this context. Did I get that right?

Ryan Ozimek: Yeah, it seems like it. It seems like just blurring the lines (between helpfully automated and too automated.) The more it knows about your intents and your needs and the way in which you operate, and hopefully it can start to infer some of that just because of what you’ve done in the past, that’s really helpful and insightful. It’s what a manager would do, when trying to train somebody else to go through the process of working at the organization.

I think it’s just, when you just put it on fully automate, which is not what you’re describing, Kyle, I think that’s where folks are getting in trouble. And I think that’s what Rubin is really responding to as well too.

You can’t just put this on fully automate and go sip a cup of coffee.

And that’s where I think a lot of organizations could get tripped up. And I think if that was a lesson learned, if we could as a community share points where this was a problem, at another organization, they saw this problem, it would maybe enlighten folks as well too.

Is AI Like Electricity? Can We Stop Electrification?

Kyle Haines: One of the things, Ryan, and you heard this: two years in a row Microsoft has likened and compared AI to electricity, from a fairly analytical perspective, not just a high-level perspective. And in some ways, I feel like the electrification is coming.

And we as leaders have to figure out how do we harness that electricity for good. People are going to clearly harness it for bad, but we can’t stop the electrification at this point.

And I’m wondering, Rubin, whether you think that’s the most hype-filled statement you’ve ever heard, not being at the Microsoft summit. And Ryan, I’m interested, you were there, are you thinking man, you really are drinking the Kool-Aid? I’m curious whether that’s a closing thought for organizations is that people need to start thinking about how this impacts their organization because those impacts will come at some point.

Rubin Singh: Yeah. I mean, despite the gray beard, I wasn’t around when electricity was created. Obviously, it changes the way the world works, but it probably came in phases, or it probably evolved quite a bit and iterated over time is my guess.

I think the general statement of “it’s like electricity” is probably true. It will be as transformative. But I also don’t believe in this idea of, adopt now or you’ll spontaneously combust.

As I tell our clients, where AI and agents fit in, it has to fit into YOUR road map. How and when is still a little bit up to you. And how, when, and to what extent, and what problems it solves, there’s no rush on that quite yet. It’s about being, again, more thoughtful, more methodical, more strategic.

I think if we do those things and we’re more strategic and more methodical about it, it can do amazing things and make medical diagnoses more available. It can make nonprofits more efficient and able to spend more time on impact.

I think a slight fear I have is that if we’re rushed to adopt technology that we’re not ready for or not curious about, that’s where it could have some negative effects. But I’m optimistic. I’m optimistic that through conversations like this, and, as you brought up, Kyle, by being more vocal and transparent about where things have gone wrong, we can help gently press the brakes.

The practitioners can put a little bit more reality in and just not let the marketing of the tech companies drive all this.

Kyle Haines: Ryan, I’m curious, your thoughts?

Ryan Ozimek: Yeah, I drank the Kool-Aid and then along with the Kool-Aid, I had coffee on top of that, so I was really energized and believed everything they told me at Microsoft.

But part of that’s because I’ve just watched over time the Internet bringing the world more globally connected together, social media making it so that everybody feels like they can share more about themselves and be interconnected to that global network, and then mobile devices meaning that you can do it on the subway, on a plane, from anywhere.

Now, let’s layer on electricity. I feel the speed at which this is coming and the breadth from all parts of the world, like the idea of electrifying the world 100 years ago, that was going to take a long time to do. I just feel like this is happening almost instantaneously compared to that.

I think that was the big takeaway I took from that Microsoft conference. Imagine the whole world getting access to electricity within two years.

We’re not going to have the whole world instantly connected to AI, there are significant divides to be able to cover, but it’s just going to happen really, really fast.

I think being thoughtful and mindful, but also recognizing that everybody in your team, all your potential donors, all the people that you’re serving, this stuff is coming at them as individuals incredibly quickly, and we need to start making rapid decisions as to what we’re going to be doing as a community, but also do that thoughtfully and correctly. It is a big challenge for us.

Kyle Haines: Yeah. I think it’s a great closing note. I think the idea that, if I can try to bring together what you both said, I think you can’t ignore what’s coming.

The most important thing is to have conversations about what it means for your organization. And right now, it may mean nothing. Right now, you might build a homestead and not hook it up to the electric grid, or you might not plan on having electricity in your community for four or five years but be mindful that four or five years from now, there’s going to be an electrical pole that’s running down your street. And you can make some choices about how you want to connect into that, or if you want to at all.

I really appreciate both of you making time. I know how busy you are. As always, these conversations leave me with more questions and more excitement than I came into it with. I just really appreciate both of you.

Ryan Ozimek and Rubin Singh: Thank you.

Kyle Haines: Great to see both of you.

I knew that conversation would be super interesting. It’s always great to be able to connect with both Rubin and Ryan, as those two are always thinking about both the pros and cons of how technology is used.

I hope you came away with some excitement or at least some interest in AI agents, and perhaps how to explore how they can improve the way your organization works internally, and perhaps even engages externally with key constituents and other folks.

Thanks so much for joining Transforming Nonprofits.