AI at Work: What Zapier’s Chief People Officer Thinks Comes Next – Brandon Sammut
“Whether it’s kids building games or workers building apps, the moment someone realizes they can create with AI—not just consume—it flips their relationship to learning and work.”
What if AI’s greatest gift to workers isn’t speed or scale—but meaning?
As automation expands, a surprising question is emerging: What does work look like when the boring parts disappear?
Brandon Sammut is helping to answer that. With a background in K–12 education and now serving as Chief People Officer at Zapier, he’s leading efforts to rethink how we hire, coach, and learn in an AI-native workplace.
This isn’t AI as a shiny new tool. It’s AI as infrastructure for learning, for growth, and for reshaping how people experience their jobs.
In this conversation, we explore:
• Why problem-finding—not just problem-solving—is the underrated superpower of the AI era
• How AI coaching removes friction and expands access to support at work
• What Zapier’s AI fluency framework can teach employers, educators, and future workers
On How Zapier Builds AI Coaches That Deliver Impact
A: What is something you’ve changed your mind about in the last year or so?
B: A year ago, I wasn’t optimistic about AI as a coaching modality or a tool for personalizing learning. I understood it conceptually, but I wasn’t seeing good, practical examples. That’s changed.
Now, I’m quite bullish on AI’s ability to deeply personalize education—both for young people and adult workers. What shifted my thinking was seeing real, practical use cases where AI is genuinely useful as a teaching and coaching tool.
When I asked folks at Zapier how they were using AI for coaching and learning, I expected them to weigh the pros and cons of human versus AI-driven coaching. But I hadn’t considered a third, very common alternative: doing nothing.
That hit me over the head. Inaction is the original fallback. We’re busy, fear judgment, or just don’t want to deal with scheduling. Good AI coaching can remove those barriers.
A: That’s a big deal, especially considering how powerful coaching can be in helping people grow, learn new skills, and find direction. What’s surprised you about how people have used these coaches?
B: Zapier’s AI coaches are all built in-house, which is part of what makes them so effective. They understand organizational context—who you are, what your role is, how your team works. That knowledge layer is crucial. You can’t deliver meaningful impact in coaching or learning without context. That’s where we’re starting to see the magic happen.
On the New (Old) Hard Skills That Matter in the Age of Automation
A: This next section is about the future of skills. What’s critically undervalued right now—something people will regret overlooking?
B: My answer doesn’t use the term “artificial intelligence,” but it’s foundational to using AI well. It’s two interrelated skills: problem identification and root cause analysis.
First, problem space identification. AI is a medium, not an outcome. Like any tool—the abacus, the internet, email—it doesn’t tell you what problems are worth solving. We’re going to increasingly rely on humans to bring judgment and subject matter expertise to that task.
Second, root cause analysis. Even when you’ve identified a meaningful problem, you still need to understand its shape. Why does it exist? What’s really driving it? Maybe there are ten reasons, but two account for 90% of the impact. That clarity informs what you build.
I’d add a third skill: understanding what good data looks like. Thoughtful AI requires quality data. We need to teach data hygiene early—bake it into education systems at all levels.
A: I don’t know about you, but the minute I tell people what I do, parents ask me what their kids should be learning. How do you answer that?
B: The durable skills we just discussed come to mind first. Then there’s second- and third-order thinking: what happens if I do this? How might this decision influence people or systems down the line?
This is the scientific method—and it’s timeless. You can teach it to kids through hands-on engagement with contemporary tools.
My kids are six and eight. They love games—card games, board games, video games. I wanted to show them they could build things, not just consume them. That could be Legos or papier-mâché, but also low- or no-code tools.
We use Lovable, a no-code app builder. I told them, “You can make a Pac-Man game, but with dragons. Just start typing.” My eight-year-old looked at me like I was joking. But he tried it—and got started. In the process, he learned not just technical skills, but how to articulate what he wanted. That’s the entry point into the scientific method. And he thought he was just playing.
A: You can just build things.
B: You can just build things.
On Why Entry-Level Jobs Are About to Get Way More Interesting
A: The big clickbait question of the day: how is AI going to change early career pathways, especially those entry-level roles that traditionally give people their first professional experience?
B: I know there’s concern that there won’t be enough jobs for young people. I don’t believe that’s true. If anything, the scope of what a 22-year-old can do is expanding.
People are already using AI in fun, everyday ways. When they enter the workforce, they’ll bring a meaningful degree of fluency. If I’m an employer designing jobs, I want to hire those people. I want to give them as much scope and responsibility as they can handle. And I think we’re going to be surprised—pleasantly—by what young people can do with the tools they now have access to.
A: That’s so aligned with what we know about meaningful work: large scope, high accountability, high agency. You once told me that AI will make work suck less by removing the tedium. Can you expand on that?
B: You don’t have to wait for the next generation to see it—we’re seeing it now. People want to feel like they’re making a difference, not just pushing cells around in a spreadsheet. AI lets us abstract away that kind of tedious work and focus on what’s more uniquely human.
On Zapier’s AI Fluency Framework
A: How are you thinking about AI within your own work at Zapier? What new solutions are you most excited about?
B: A few categories come to mind.
First, information access. Customer support is a classic example, but internal support—like HR or IT—is equally powerful. Making it easier for employees to find what they need is huge.
Second, coaching. Coaching has long been a powerful unlock for human potential, but it’s traditionally been too costly and complex to scale. AI changes that. We’re already seeing promising results.
Third, AI as a co-pilot for software development. I won’t go deep here, but the per-person output in software is rising quickly.
Fourth, making sense of data. Data holds incredibly nuanced answers, but cleaning, organizing, and querying it is still highly technical. AI will help flatten that learning curve and make those skills more accessible.
A: Zapier recently released an AI fluency framework—the first I’ve seen from an employer. Tell us about it.
B: It came out of our internal conversations on upskilling. The framework defines what fluency means at Zapier—what we expect from every employee, with role-specific adjustments. We’ve been using it internally for about 100 days, and it’s now a rubric for 100% of new hires.
What’s notable is that half to two-thirds of the framework isn’t about specific tools or prompting—it’s about durable skills: critical thinking, confidence, experimentation, growth mindset, and the ability to produce measurable impact.
At the highest tier—what we call “transformational”—you’re not just using AI competently; you’re applying it in ways that improve productivity, performance, or customer outcomes.
A: How might a school leader or higher ed leader use this framework?
B: It’s role-agnostic. You could apply it to a university model with very little adaptation. There’s nothing in the framework that’s inherently software- or company-specific.
In fact, you could create a version for learners, to inspire them along their own AI fluency journey—from understanding the fundamentals, to improving traditional processes, to reimagining what work looks like altogether.
One Small Signal
A: What’s a small signal in the world that you’ve noticed—something others might be overlooking?
B: The cost of computing. If you’re not working inside an AI-heavy org, it’s easy to miss how fast it’s dropping.
That shift matters. It opens up access. Right now, many of the major models are losing money on free users to gain market share. That’s smart—it builds trust and adoption. Think about rideshare companies: same strategy. But unlike rideshare, AI’s cost structure is dropping, not rising.
That’s a good thing. If AI is going to benefit everyone, it has to stay affordable.
A: Thank you so much for speaking with me today, Brandon—it’s been such a pleasure.
B: Fun to jam with you, Allison. Thanks for having me.