Fireside Chat: Johan Hammerstrom on AI
Like podcasts? Find our full archive here or anywhere you listen to podcasts. Or ask your smart speaker.
Some of the best conversations are inspired by sitting around the fire and thinking of possibilities and visions – large and small.
This episode is all about AI and how it is impacting our working and personal lives now, particularly if you work at a nonprofit and you are trying to figure out where to invest in AI tools, and where to wait for the technology to develop further. Kyle Haines and guest Johan Hammerstrom discuss fear of missing out, where to be cautious, and where to test-drive AI before taking the plunge. They discuss Microsoft’s Copilot at length, as an AI application that everyone using Microsoft products will eventually see integrated across the Microsoft Office suite.
Johan is CEO of Community IT Innovators, an MSP providing outsourced IT support exclusively to nonprofits. Community IT and Build Consulting have a long history of partnership and have worked with many of the same clients over the years. Kyle and Johan also go way back. Sometimes those conversations between longtime colleagues and friends are the best kind to listen in on, and this discussion explores questions and cautions that these experts have about this “wild west” phase of AI we are living through now, as the technology sorts itself into products and services. They also touch on the ways different generations interact with knowledge, and the way our algorithms can make echo chambers out of the media we consume.
Our Fireside Chats are designed for audiences with varied experiences with technology. In this Fireside Chat with Johan Hammerstrom on AI, learn more about leading nonprofits by understanding new technologies as they emerge, and how those new tools can fit your use case.
Transcription
Kyle Haines: Welcome back to Transforming Nonprofits, where we explore topics that relate to nonprofits and to technology.
Sometimes we call these podcasts Fireside Chats because my hope is that they feel more like an informal conversation rather than what you might hear from me or one of my colleagues in a webinar or another presentation.
This episode features one of my friends, Johan Hammerstrom of Community IT Innovators. Since about 2015, he’s been one of my favorite people to have these types of conversations with. I won’t spoil all of the story, but Johan and I used to be able to connect much more frequently. But like everything, life has changed since we first met in 2015. So this was a chance to connect with Johan about an exciting, and sometimes controversial, and at times nebulous topic: artificial intelligence. This conversation was so fun for me because I know Johan well enough to get him to laugh at my sense of humor, and I know how to ask him questions that he has already considered deeply. With that, let’s get into the conversation so you can listen for yourself. This is Transforming Nonprofits.
Kyle Haines: Johan, thank you so much for being on our podcast today.
Johan Hammerstrom: Thank you for having me.
Kyle Haines: Yeah, I’m really excited. I’m excited for our conversation because so often we get asked to speak at events, and you and I do webinars and other podcasts. And so you know this, but I tried something different for today. And you don’t have any idea what we’re talking about today.
Johan Hammerstrom: None at all. It’s like a magic trick, coming in completely blind.
Kyle Haines: Well, I promise it’s going to be something really easy. The reason I didn’t want to give you the topic for today: I actually really enjoy all of the conversations that you and I have had over the years. And we used to grab coffee at this place. It was the White House Cafe. Is that what it was called?
Johan Hammerstrom: The West Wing Cafe.
Kyle Haines: Sorry. West Wing Cafe. I knew it had some reference. And those conversations were always really interesting because we talked about music, we talked about television; we did do some work. We talked about client work, and we obviously talked about technology and leadership as well. You and I both lived in Portland, Oregon. I just always really loved those conversations. And so I didn’t want to give you a heads up because I wanted to just have this organic conversation today.
Johan Hammerstrom: Great. Yeah. No, I love those conversations, too. And it’s sad that the West Wing Cafe is no longer there, although it got replaced by a Wawa, so.
Kyle Haines: People love them some Wawa.
Johan Hammerstrom: They do. We finally got Wawa in DC.
Kyle Haines: Yeah. So today’s topic, you ready?
Johan Hammerstrom: Mm-hmm.
Kyle Haines: It’s something I thought would be easy and approachable, like pretty discrete: Artificial intelligence.
Johan Hammerstrom: [Laughs].
Kyle Haines: I wanted to start with an easy question about AI. Johan, what’s AI?
Johan Hammerstrom: [Laughs].
Kyle Haines: Okay. No, that’s not really the question. I wanted to say something and just get your reaction to it.
I’ve heard people talk about AI. And they compare it to computers or mobile devices. And I don’t personally think that captures it. I think it’s closer to electricity.
And what I like about thinking about AI as electricity is that it can both be incredibly powerful, but it can also be incredibly dangerous. And when electricity first came out, I imagine there were a lot of accidents and it was incredibly dangerous. And I’m wondering, do you think I’m over-blowing this?
Do you think of AI as transformative, as something like electricity, or do you think it’s something closer to what other people have said, computers or mobile devices?
Johan Hammerstrom: You know, the other thing that I’ve heard AI get compared to is the atomic bomb. Wired magazine did a profile of Christopher Nolan when Oppenheimer came out earlier this year, and he was drawing a lot of analogies between the discovery of nuclear power and what was happening with AI.
I think a lot of people have been making that analogy between discovering something new about the natural world that gives us much greater power than humanity previously had, and that AI sort of falls into that category.
But I don’t know if I quite see it that way. I think there’s a lot to be concerned about, but for me, the concerns come from the complexity of what’s being constructed. And I think we’re constructing something that we don’t fully understand.
We now have the computing power to build something that no one really fully understands. But in some ways, that’s sort of an extension of what computer systems already are. You know, our phones have chips in them that have billions of transistors, all connected in incredibly complex ways. And I think we’ve created these systems that are beyond the ability of the human mind to fully comprehend.
And so it’s not that we’re creating something that’s going to have sentience, like the AI from 80s science fiction movies that somehow takes over the world or develops a will that is opposed to the existence of humanity. I don’t think that’s where we’re headed.
But I do think we’ve created something that is more complicated than the ability of any one person to really understand. And so our creation has the potential to get away from us a little bit. That, to me, is the bigger concern.
That and the fact that this is all profit-driven. The atomic bomb AI analogy is interesting, especially if you look at the differences. Atomic energy was developed by governments for a particular military purpose and AI has been developed by these corporations to create profit. And so that motivation, I think, historically has been dangerous.
Kyle Haines: Yeah. I’m going to hold onto this electricity metaphor because I think it’s so powerful. Well, no, I don’t know that it’s that powerful.
But as you were talking about AI and how it is a little bit of a black box, that’s how I’ve heard it described, it struck me that electricity is a little bit of a black box for a lot of people, too. In some ways we don’t really think about it in our daily life. People fear it. The number of people who are willing to do any wiring in their house is reasonably limited.
If you ask most people how electricity works in a toaster, why it makes a toaster hot enough to toast bread but my light bulb doesn’t get as hot, or my iPad doesn’t get as hot, people don’t really get into the nuance of that.
I’m going to quote an article since you quoted Wired. I’m thinking about an article that I saw in Harvard Business Review about AI-enabled decision-making. People were more trusting of it when they didn’t understand it than when they did understand the algorithm. When they didn’t understand the algorithm, they were less likely to question it. I think they were using the example of routing packages or routing trucks.
They just accepted it when they didn’t know more about it. I’m wondering what thoughts that provokes for you.
Johan Hammerstrom: I think electricity is the underappreciated discovery that’s led to all of the other things that we have in our world right now. And since that time, I think the pattern that we’ve seen over and over again is that we invent something, or we create something, and we dive headfirst into using it, and only later discover what the dangers are.
The automobile was invented in the 19 – let’s say 20s, commercially available automobiles. The seat belt wasn’t required until the 1960s, and then airbags and other safety features weren’t really brought in until the 90s and the 2000s. So there’s like an 80-year period where we invent something, and then lots of people die in horrible automobile crashes before we build all the safety precautions in.
And you see the same thing with chemicals in the environment. You see the same thing with different kinds of technologies. So I imagine that AI is dangerous to us in ways that we can’t anticipate.
For better or for worse, we’re going to have to learn what those dangers are through experiencing them. I suspect that the dangers of AI are probably not going to be so physical. They’re going to be more mental, psychological. They’re impacting our psyches in ways that we don’t fully appreciate.
And you kind of see that already. People don’t think of Facebook or Instagram as an AI, but the feed of content that we’re given by those systems is algorithmically generated. So in some ways, it’s not that much different from the generative AI systems that have captured everyone’s imagination over the last few years.
No one really knows how the Facebook algorithm works. It’s a vast, complicated system that’s optimized for engaging our attention and generating profit through advertising. What is that doing to us? I think it’s going to take some time to figure out what the effects are. I don’t think I answered your question, though.
Kyle Haines: No. As you were answering the question, I thought back to what you shared earlier about Christopher Nolan and Oppenheimer, and I thought about non-military use of nuclear energy in the form of power plants, and things like Three Mile Island and Fukushima. You know, the learning came much later, as the result of misuse or misunderstanding or not being careful enough with it. But that’s sort of the human condition: those are the things that we learn from.
Johan Hammerstrom: It’s the whole Pandora’s Box. You just can’t really draw an arbitrary line and say, we’re going to stop using technology now. That’s not a realistic goal. So trying to be thoughtful, trying to be ethical in how we use technology is so crucial.
Kyle Haines: Yeah, yeah. I love that you talked about Pandora’s box. I was at a convention and there was a lot of fear around AI. And I was thinking about Pandora’s box. So many people in that room wanted AI put back in Pandora’s box. And I thought we’re so far past that point. It’s already out. And now we have to figure out how to use it and use it for good as well as think about how to prevent its use for not good.
Johan Hammerstrom: Yeah. One of the big challenges we face right now is that there’s not really appropriate regulation around the tech sector in general. And it’s very similar to the late 1800s, turn of the century, when there was all this new technology. Railroads, electrification, back to your point. And there was very little regulation around those industries. And there was a lot of abuse, and a lot of harm was done. And it took several decades for everyone to figure out what an appropriate level of regulation was for those industries.
I feel like we’re in the same place right now where you want to regulate it, but we don’t really know how. You also don’t want to mis-regulate industries. And so it’s just a challenging time where we’re all learning, what does this all mean and what is the proper way to control it? I don’t know how much it can be controlled, but –
Kyle Haines: Yeah. Integrate it. Maybe that’s closer to it than controlling it.
Johan Hammerstrom: Yeah, yeah.
Kyle Haines: So a question I have is, do you use AI at all in your daily life, personal, professional, beyond the things like Facebook and Instagram? I know you’re on Facebook all day long. So you’ve been using that form of it for a while.
Johan Hammerstrom: [Laughs].
Kyle Haines: Maybe my question really is about work. Do you use AI at all? And how are you using it?
Johan Hammerstrom: I have not started to use AI yet. And I feel a little bit embarrassed to say that. You know, as a CEO of a tech company. But having done this for a long time, I’ve seen the hype cycle over and over and over again. I think people are always really afraid that they’re going to miss the boat with technology.
And I feel like far too often, you end up wasting more time getting in too early or chasing these dead ends with new technologies than you do getting left behind.
There are a lot of incentives to get us all to use technology all the time. The thrust of society right now is around getting everybody to use as much technology as possible. So the idea that you’re going to be left behind, I don’t worry about that so much. I have been reluctant. I haven’t felt the incentive or the impetus to use AI because I’m not that impressed with what generative AI creates.
I find that the writing that comes out of generative AI systems is not very good. The images, I’m sure you’ve seen all the, “What if Wes Anderson made Star Wars? What if Wes Anderson made Lord of the Rings?” I don’t find that interesting, you know? The stuff that actual humans are creating is more interesting to me. Somebody did a series of AI videos of a sequel to 2001: A Space Odyssey, and it was just terrible. It was kind of unwatchable.
Kyle Haines: [Laughs].
Johan Hammerstrom: I’m just not impressed with the stuff I’ve seen. On the one hand, it’s like, wow, it’s amazing that computers can do that. And on the other hand, it’s like, yeah, and it’s not that great. If a human did this, no one would tolerate it.
In the work that I do, in how I spend my days at work, I haven’t really had the –
We’ve been playing with the automated note taker in Microsoft Teams. We’re a Teams-driven organization. All of our meetings are in Teams. And so Microsoft now has these AI tools, these Copilots integrated with Teams. It’ll take a transcript of the meeting. It’ll also write a recap.
So we’ve been kind of testing this and we do a weekly Ask Me Anything meeting where different people in the company present on things that they’re doing. And I’ve been looking at the recaps that the AI has been writing and the recaps are terrible. They don’t do a good job of summarizing what the meeting was about.
They’re basically just articulating, here’s the concept from minutes one through five. Here’s the concept for minutes five through 10, because generative AI is not processing the big picture. It’s just the statistics, like what’s the most likely next word to follow in the sequence. And so I just haven’t found it that useful the few times that I’ve looked at it.
I think it’s very experimental, which is great. I’m all for that and I’m curious to see where it all goes. But I think the experiment isn’t over yet.
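To make the “most likely next word” idea Johan describes a little more concrete, here is a toy Python sketch built on a hand-counted bigram table. It is vastly simpler than any real generative model, and the sample corpus and names in it are invented purely for illustration, but it shows why purely local next-word prediction can string together fluent-sounding text without any grasp of the big picture.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" (purely illustrative).
corpus = ("the meeting covered the budget and then "
          "the meeting covered the staffing plan").split()

# Count which word tends to follow each word (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    """Return the statistically most common next word, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate text by always choosing the likeliest next word.
word, output = "the", ["the"]
for _ in range(6):
    word = most_likely_next(word)
    if word is None:
        break
    output.append(word)

# Locally plausible, globally repetitive: no sense of the big picture.
print(" ".join(output))  # "the meeting covered the meeting covered the"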
Kyle Haines: Yeah. Not to be argumentative, but just a counterpoint. And I said this at the intro, you and I have talked a lot in our meetings about music and television. I think you think a lot of human-generated content is terrible. So it’s not just ChatGPT.
Johan Hammerstrom: [Laughs].
Kyle Haines: I mean, the number of bands that you think should never be played again. In fairness, you know, you have a high threshold.
Johan Hammerstrom: [Laughs]. No, you’re right. You’re right. I guess you could say I have high standards. That’s a nice way to put it. I’m a snob, maybe. AI definitely does not meet my standards. Yeah.
Kyle Haines: Yeah.
Johan Hammerstrom: It’s funny, my younger son, Ian, he likes to insult me, which is part of what being a teen is about. This kind of fell out of favor, but for a couple of years, the biggest insult that he could tell me or that his friends could tell each other is “you’re a bot.” That was the biggest insult. Like, oh, you’re such a bot. Stop being such a bot.
And what it means is like, you’re not a real person. You’re just a computer generated, non-player character in a video game, an NPC. You know, you’re just generic. And that’s how I feel about the stuff I see from AI: this is clearly like generated by a bot. It’s not a human.
Now, there are other potential use cases that could be very interesting, could be very dangerous, like voice simulation. They’re getting to a point where they can train a generative AI system so that you can get it to say whatever you want, and it sounds like a person. Your generative AI Kyle could call me up. A bad actor could call and pretend to be you and get information from me that I would never tell them. Stuff like that’s coming. I think that’s what scares people when they think about the potential for what AI could do.
But in terms of AI coming up with stuff on its own, I don’t know. But if they did like an AI Maroon 5, you know, like we’re going to have AI do Maroon 5 versions of Pixies songs. Would I hate that more than if Maroon 5 actually made that album? [Laughs].
Kyle Haines: I should have bet something. I knew Maroon 5 would somehow come up in today’s –
Johan Hammerstrom: Couldn’t resist.
Kyle Haines: Yeah. And I kind of wonder, is Johan worried about alienating the Maroon 5 segment of our audience? I guess your feelings about Maroon 5 are so intense that you’re willing to take that risk.
Johan Hammerstrom: I’m assuming you’ll edit all of this out, like this all gets cut in post –
Kyle Haines: Yeah, yeah, we’ll anonymize the band. We’ll just, you know, bleep it out.
Johan Hammerstrom: Well, what you can do is take the whole recording, give it to the AI and say, take out the parts that are not very good and just leave me what’s valuable. And they’ll splice you talking and you can just cut me out of the podcast altogether. [Laughs].
Kyle Haines: [Laughs]. Or just use all of the good points you’ve made and have them made in my voice, so it just sounds like a larger perspective piece from me.
Johan Hammerstrom: Yeah, that’s true. We could – Yeah.
Kyle Haines: Yeah. It’s interesting.
You talked about Microsoft Copilot, and I think that calling AI a copilot is a misnomer. And I think this builds on something that you said, because when I think of a copilot, that’s someone that I can hand something over to while I go take a nap on a transatlantic flight, like for the next four hours, you’ve got this, right? I think of it more as a first officer.
I agree with you, it’s terrible at generating things on its own. But, to answer the question I put to you, I’ve been using it a lot to take the things that I write and ask, are there things that I could make clearer? Are there things I could make better? Can I condense this down? Because I know that it’s too long.
So in my own work life, I’ve found it to be a useful first officer. I think the question for me is, especially when working with clients, what’s that distinction? I don’t want people using AI to answer complex questions and just relying on it solely. And so for me, it’s like, what’s the right way to give prompts to AI to make things better or shorter?
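For readers who want to try the “first officer” pattern Kyle describes, handing a draft to a model for tightening rather than asking it to create, here is a minimal sketch. It assumes the OpenAI Python SDK with an API key in the environment, and the model name is a hypothetical choice; Kyle is describing Microsoft Copilot, which works inside Office apps rather than through code, so treat this as an analogous illustration rather than his actual setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

draft = """Paste the text you want tightened here."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any chat model works
    messages=[
        {"role": "system",
         "content": "You are an editor. Suggest ways to make the text "
                    "clearer and shorter. Do not invent new content, and "
                    "say so if the text is already as tight as it can be."},
        {"role": "user", "content": draft},
    ],
)
print(response.choices[0].message.content)
```

The system prompt deliberately gives the model permission to say that no changes are needed, which anticipates the “it always wants to provide an answer” worry that comes up later in the conversation.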
Johan Hammerstrom: Yeah, the whole Copilot, it’s good to really dig into that term because it’s great branding by Microsoft.
Microsoft’s original use case for OpenAI was in GitHub. Microsoft owns GitHub, which is sort of the world’s leading code repository that many developers use to code in. And they store versions of their code in GitHub.
The original Copilot was an AI assistant that helped you while you were writing code. And they found that developers who used Copilot were something like 40% more effective. They could code 40% faster and more effectively than developers who didn’t use Copilot.
Now, that particular use case makes sense because when you’re developing, you’re writing out a linear sequence of text. That’s basically the algorithm, the program that you’re writing. And so if that’s the activity that you’re engaged in, then a tool that is highly optimized to determine the next best character to use in a sequence of text is going to be extremely powerful and effective in that activity.
I think in typical Microsoft fashion, they take this tool and now they’re applying it to every problem. And does that approach actually make sense for every problem that you might have in, let’s say, a business setting?
A tool that predicts the next term in the sequence is great to have if you’re writing code. But what if you’re looking at a broad set of data? What if you’re looking at lots of different graphs of different data from your organization and you want to understand what’s the business intelligence here? What is this telling me about my organization? I have some doubts that a generative AI-based Copilot is actually going to give you insights in that context because what you’re doing is synthesizing. It’s pattern recognition.
It’s synthesizing lots of different concepts in novel and creative ways. I’m not sure that’s how Copilot actually works. I could be wrong. This is an area I would be interested in exploring a lot more in my daily work life: using AI for data analysis, and particularly for business intelligence and reporting. In business, you’re dealing with huge data sets. Could AI be helpful in gleaning insights from those huge data sets without having to do all the data manipulation and data management that currently needs to be done, which can be very difficult and time-consuming?
Kyle Haines: You hit on something that I think is the limitation of AI. As I understand it and from what I’ve seen, it can’t come back with “I don’t know.” It has to come up with what that next character is.
And so using that example, what I wonder about AI is if I feed a bunch of data into it, is it bound and determined to find some corollary or some insight as opposed to saying, “I don’t see anything here. To be honest, I think that it would be difficult to draw much from this based on either the quality of the data, or the quantity of the data, or the lack of correlation between certain data.” That’s what I wonder about. It always wants to provide an answer. And I don’t think it’s capable of saying, I don’t have a great answer for that, or I don’t think you can draw that inference.
Unless you’re specifically asking for that inference: can I even draw an inference here? And I think that’s the limitation. The fear that I have is that a lot of people will not make that prompt, will not ask the question, “is this relevant?”, and will not frame the question in a way that allows the model to say, there’s not much here.
Johan Hammerstrom: Yeah. And I’d be curious to know what kind of results you get from nonlinear information.
A business or an organization is a highly complex nonlinear system, and it takes a lot of skill to analyze it down to a couple of key types of data to understand how the system is operating. I think that’s part of what we all do when we engage in business intelligence activities: we say we’re going to rely on this information and that information, with the understanding that it’s imperfect and we can’t rely on it too much. But does an AI have the ability to do that? Or does it always want to develop conclusive answers to the questions it’s being posed? And then if you provide that tool to someone who doesn’t have a lot of experience in business analysis or business intelligence, are they just running with what they’re being told instead of really questioning it?
Kyle Haines: Yeah. Yeah. So I have a confession to make.
Johan Hammerstrom: Yeah.
Kyle Haines: People will not be able to see your facial reaction and it’s probably going to be hard for you to hear this, but I tried out AI on Spotify.
Johan Hammerstrom: [Laughs].
Kyle Haines: Yeah. Yeah. I knew it was going to provoke a reaction.
Johan Hammerstrom: How was it? How was it?
Kyle Haines: It was interesting and you hit on some of these points. It was a personalized DJ for me. It referred to me by name. It was a DJ that you would expect to hear on the types of stations that I listen to, like Maroon 5 Radio and things like that.
Johan Hammerstrom: Ed Sheeran.
Kyle Haines: Yeah. Maroon 5, a lot of Third Eye Blind, that sort of thing. All of the bands that you gravitate towards.
What I thought was interesting is that it nailed a playlist. It was all the music that I would like. There were three songs that I fast-forwarded because I was tired of them, and it was smart enough to say, “Kyle, it doesn’t sound like you’re into this. Let me try something new.” But I didn’t hear a single new song. It was an echo chamber. And that’s one fear that I have, this idea of an echo chamber.
That AI DJ served up to me exactly, based on what it saw in Spotify, what it thought I would like, but nothing new. And it actually, truthfully, didn’t even bring back something from the past that I thought, man, I have not heard this Milli Vanilli song in years. [Laughs]. Like I’m so appreciative that it resurfaced it.
Johan Hammerstrom: You know, it’s interesting. This brings up an important point, which is I do think that this all belongs to the next generation.
My older son, Paul, who’s now at Temple, he loves music. He grew up in a musical world that would have astounded us. He had access to every song.
When we were kids, you’d listen on the radio. And if you had a nice boombox, when a song that you like started playing, you could record it to tape. You’d have to save up your money, go to the record store and buy a cassette, later buy a CD. We lived in a world of scarcity and limits.
And this Gen Z generation has grown up in a world of zero limits. They’ve had the opposite problem. They’ve had to filter. They’ve had to say, I’m going to listen to this and not that, and the range of things that these kids have listened to is just remarkable.
The latest trend among people in their late teens, early 20s is going to thrift shops and buying CDs. They just want to have physical CDs. And then the big thing for him is going to these house shows. That’s what kids at Temple are doing. They go to off-campus housing, go into the basement, and there’s a band there playing all different kinds of music, but live music.
And that’s become the priority for them – experiencing that music live. So I’ll be curious to see where this next generation goes with AI. I feel like they’re the ones who are going to figure out how to use it. I feel like in some ways, I’m kind of almost too old to get on the AI train at this point. [Laughs].
Kyle Haines: I mean, if nothing else, I use it for spell checking. I’m on the AI train just for spell checking and sometimes condensing my longer emails. It even cleans up how I speak. If I say “longy” rather than “longer,” I’ll get AI to clean that up at the end.
Johan Hammerstrom: [Laughs]. Yeah, but I’m excited by the next generation. I mean, we’re always inventing new technology that frightens us. And then we’re always raising the next generation of people who figure out how to exist, how to express their humanity in the context of this new technology and technological systems.
HBO was showing this documentary for a while, Woodstock. And they had the director’s cut, so it was like six hours long; it was just unbelievable how long this was. But it’s a fascinating record of a generation of people who are struggling to be humans in the context of the brand new technology of that era and how they reacted against it and how they responded to it. And, that’s what all of us are given in the moment that we’re in. And I don’t know, it’s fascinating.
Kyle Haines: It is. And you did a masterful job of tracing the entire arc of AI. You started with some of the things that concern you about AI, moved through setting realistic expectations around it, and went all the way to a beautiful future where AI is more integrated into the human experience. But you have no interest in being part of that because you’re too old.
Johan Hammerstrom: [Laughs].
Kyle Haines: Is that a fair encapsulation of the entire podcast?
Johan Hammerstrom: I’m going to be reading my paper newspaper. But you know, my grandfather, who was Greatest Generation, fought in World War II, had encyclopaedias, loved information and learning about things, had all these maps and stuff. And in the late 90s, when the internet came out, my mom tried to get him on the internet. And she’s like, you’ve got to try this. Get on the internet. You’re going to learn so many different things. And he was like, you know what, that’s for a new era. That’s not the era that I was born in. And that’s not the era I was meant to live in.
Kyle Haines: Johan, from our breakfast chats to this podcast, these are always some of my favorite conversations. And I really appreciate you being cool about my surprising you with today’s topic. Clearly, you didn’t need to do any prep, because you’ve given it a lot of thought. And even though you’re a mixture of hesitant and sanguine about it, I learned a lot today, and I really appreciate you making time.
Johan Hammerstrom: Oh, it’s a pleasure. Love talking with you. And hopefully, the AI will find some useful pieces in our conversation today.