George Siemens on Why Universities Must Stop Outsourcing AI—and Start Building It.
“Open educational resources scaled content. MOOCs scaled instruction. AI is our chance to scale meaningful engagement and assessment.”
If AI is rewriting the structure of knowledge and work, what exactly are we preparing students for?
As George Siemens puts it:
Almost no one is asking, in a sustained and serious way, what it means to design learning for a future where we don’t know what knowledge will remain stable, what skills will retain value, or how humans will continue to matter cognitively. And the fact that this isn’t front and center in higher education conversations baffles me. And honestly, it worries me.
George argues that higher education has been asking the wrong questions and, in the process, surrendering its role in shaping the answers.
This week’s Q&A is with one of the most influential thinkers in learning science and digital education. He developed the theory of connectivism, helped launch the first MOOCs, and now leads Matter and Space at SNHU, an initiative focused on the future of human development in an AI-saturated world.
For the last fifteen years, George has shaped my thinking on learning technology more than anyone else. He’s a wildly effective futurist, though he’d be quick to dodge the compliment with a pointed question and a wry smile.
Here are three provocative ideas from our conversation:
Universities aren’t just slow; they’re outsourcing their future. George argues that institutions must stop renting tools that don’t reflect their values and start building AI infrastructure as a public good.
The most important learning outcomes aren’t being measured. Generative AI opens the door to performance-based assessment at scale, capturing not just what students know, but how they think, create, and adapt in real time.
Well-being isn’t extracurricular. As AI accelerates everything else, George believes the most stable and essential domain of learning will be human wellness. It’s not a support system. It’s the core.
Read on for a wide-ranging conversation about student agency, institutional power, and what higher ed needs to do to stay relevant in the age of intelligent machines.
On Why Universities Should Be Building Ed-Tech
ALLISON: You’ve consistently argued that universities shouldn’t just adopt whatever AI tools vendors and frontier model companies build. They should be creating their own. Why do you believe in that path, and what’s holding them back?
GEORGE: I love universities. I believe deeply in higher education as a public good, a space where human development can flourish, not just a pipeline for commercial outcomes. We need institutions that prioritize learning, meaning-making, and democratic engagement.
But here’s the problem: we have almost no influence over how AI is being built. What we do have is enormous collective spending power. With multi-billion-dollar annual budgets, there is hardly a door in this country that some of our institutions can’t open, especially if they act together. Hyperscalers will absolutely build for us, but only if we show up with vision and a clear voice. Otherwise, we are just renting systems that do not reflect our values and handing over control, especially of our data.
The bigger issue is that we lack a shared sector vision. You recently spoke with Mark Milliron. He is fantastic, but he is an outlier. Most of higher ed has not done the strategic work to define what we want AI to do for us. Without that, we default to writing checks for solutions that were not built with us in mind. That is why we need a “build” culture. Not just in IT departments, but across leadership and pedagogy. We need to treat AI infrastructure as core to our mission, not as something secondary.
ALLISON: Do you think all universities should be doing this “build” work, or just a particular subset?
GEORGE: I will make the provocation that all universities should be doing this work, especially those serving historically underrepresented students. These learners are often an afterthought in the AI tools being developed today. If we want to reduce bias and expand opportunity, we need public institutions at the table. Not just as adopters, but as shapers of the ecosystem.
Building AI-native infrastructure should be treated like building classrooms. It is critical public infrastructure. And students should have an active voice in these decisions. This cannot be a top-down process. If we care about equity and agency, then the people most affected by these tools need to help define them.
On Matter and Space, Incubated by SNHU
ALLISON: Tell us about Matter and Space, which you incubated outside of Southern New Hampshire University and recently merged back in. What strategic principles have guided your work over the last two years?
GEORGE: Absolutely. Matter and Space began as a collaboration between Paul LeBlanc, Tanya Gamby, and myself. Our original intent was to explore how AI is reshaping the human condition and the nature of knowledge itself. That question is still unfolding. From the beginning, we organized our work around three domains: knowledge and skills, human skills, and well-being.
The first domain, knowledge and skills, is what most universities focus on. It is the content and competencies students acquire to become employable. The second, human skills, is about learning how to work with others, communicate effectively, and pursue goals over time. These are often picked up informally or through osmosis, but rarely taught with intent. The third, well-being, is the most neglected. It includes physical, mental, and emotional wellness, and asks what it means to live a good life, particularly as the world accelerates.
We wanted to understand how AI would impact each of these domains differently, and how that might require new learning models. Knowledge and skill acquisition is changing rapidly. As AI reshapes job markets and content mastery, learners will need to adapt in real time. Human skills evolve more slowly, but AI is already influencing how we collaborate and build trust. Well-being may be the most stable of the three, but it is also the most essential. In an AI-saturated world, we return to foundational practices—mindfulness, community, spiritual life, relationships, sleep, and diet. These are not new ideas, but they become newly urgent.
Our core hypothesis is that AI will play a significant role in the first bucket by helping learners navigate shifting skills landscapes. In the second, AI can help reduce friction and surface new insights as students develop relational and interpersonal abilities. And in the third, technologies like wearables provide useful signals, but wellness will always require deeply human anchoring.
ALLISON: What are your plans for the future of Matter and Space?
GEORGE: From the beginning, SNHU was our sole investor. When Paul stepped down as president at SNHU and took on the chair role at Matter and Space, we doubled down on learning: about where AI fits and where it fails, about its limitations, about how people tolerate conversational agents in learning contexts, and about what it means to compute curriculum in ways that preserve learner agency and privacy.
We ran a series of pilots that were promising. As those efforts matured, SNHU decided to bring the platform in-house. That made sense, since they were both our incubator and our investor. Now the focus is on internal deployment: understanding how students come to trust the tool, and how to center wellness as part of the learning journey.
On Future Casting AI’s Affordances for Learning
ALLISON: What most excites you about AI’s affordances? Let’s start with assessment. It may not be flashy, but it is foundational. What we measure signals what we value. Traditional assessment tends to focus on what is easy to standardize and score. How might generative AI help us assess more of what actually matters in learning?
GEORGE: Assessment is a particularly strong use case for AI. We have already demonstrated that we can scale content delivery, instructional resources, and peer networks. AI is now extending that scalability to engagement and assessment.
The real shift is toward performance-based assessment that reflects how knowledge is applied in real contexts. With AI, we can create interactive environments where students analyze complex situations, construct arguments, build projects, and revise their work through dialogue. These forms of assessment allow us to capture the process of learning, not just the final product.
One promising direction is the use of AI agents to support Socratic-style learning. The student engages in conversation with an AI that prompts deeper thinking, challenges assumptions, or simulates a real-world scenario. The interaction itself becomes a kind of assessment. Humans still play a central role in evaluating quality, but AI supports the process by making it more dynamic, personalized, and scalable.
There’s an ethical question: should AI ever evaluate human knowledge, or should a human always make the final call? I believe AI can assess effectively when guided by well-defined rubrics, especially for formative feedback. But high-stakes decisions should remain in human hands. That line between assistance and authority is something we need to be intentional about.
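To make that division of labor concrete, here is a minimal sketch of rubric-guided formative feedback. Everything in it is hypothetical – the rubric, the prompt, and the model name are placeholders, not anything from Matter and Space – and it deliberately keeps the AI in the assistance role George describes, with grading authority left to a human.

```python
# Illustrative sketch only: rubric-guided formative feedback from an LLM.
# The rubric, prompt, and model name are hypothetical placeholders; a human
# instructor retains authority over grades and high-stakes decisions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = """\
1. Claim: states a clear, arguable thesis.
2. Evidence: supports the claim with relevant sources.
3. Reasoning: connects the evidence to the claim logically.
"""

def formative_feedback(student_response: str) -> str:
    """Return Socratic feedback against the rubric -- feedback only,
    never a grade, which stays in the instructor's hands."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a Socratic tutor. Evaluate the student's response "
                    "against this rubric. Reply with questions and suggestions "
                    "that prompt revision. Do NOT assign a grade or score.\n"
                    + RUBRIC
                ),
            },
            {"role": "user", "content": student_response},
        ],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    print(formative_feedback("The industrial revolution changed everything."))
```

The key design choice, per George's framing, is that the system prompt confines the model to formative dialogue; the summative judgment never leaves human hands.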
ALLISON: I appreciate this framing. As you’ve said: open educational resources showed we could scale content. MOOCs showed we could scale instruction. AI is now revealing how we might scale meaningful engagement and build assessments that reflect the complexity of real learning.
ALLISON: You said the future of instructional design is designing for AI. What does that mean? Why is that exciting?
GEORGE: Learning design is critically needed in universities, and great designers create outstanding programs and products. But their work is changing quickly: they must now consider AI as a critical interlocutor in learning, including understanding what it needs to engage best with students and instructors in particular contexts.
A quick example: we’re discovering learner behaviors – login frequency, assignment grades, connectedness in peer forums – that identify at-risk learners. Previously, organizations like Civitas handled this assessment: instructors logged into their platform to see warnings and suggested interventions to initiate. Tools like OnTask (by my colleague Abelardo Pardo) automated this somewhat: certain behavior sequences automatically triggered intervention flows that nudged students to seek help – and prompted instructors to reach out, which may be even more important.
But AI lets us avoid the hardcoding behind these systems, making them more dynamic, personalized, and scalable. The LMS itself can help decide which interventions or strategies are needed – helping us scale support. Learning designers no longer have to design or hard-code one-off relationships. That’s revolutionary. But it requires designing with the understanding that an AI will engage with students, content, and curriculum, and presenting the right information to that AI so it can personalize student experiences.
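Here is a minimal sketch of the contrast George draws. It is illustrative only: the signal names, thresholds, model name, and prompts are all invented, and it is not the actual code behind Civitas or OnTask.

```python
# Illustrative contrast between hardcoded intervention rules and an
# LLM-mediated alternative. Signal names, thresholds, and the model
# name are all invented; this is not Civitas's or OnTask's actual code.
from typing import Optional

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rule_based_intervention(signals: dict) -> Optional[str]:
    """The hardcoded approach: every rule must be authored in advance."""
    if signals["logins_last_week"] == 0 and signals["avg_grade"] < 60:
        return "email_instructor_alert"
    if signals["forum_posts_last_week"] == 0:
        return "nudge_join_discussion"
    return None  # no rule matched, so no support is offered

def llm_intervention(signals: dict, course_context: str) -> str:
    """The dynamic alternative: describe the signals to a model and let
    it propose a context-sensitive intervention instead of matching rules."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You support at-risk learners. Given behavioral signals "
                    "and course context, suggest one supportive, specific "
                    "next step for the student and a short outreach note "
                    "the instructor could send."
                ),
            },
            {
                "role": "user",
                "content": f"Signals: {signals}\nContext: {course_context}",
            },
        ],
    )
    return completion.choices[0].message.content
```

The rule-based function can only ever return interventions someone wrote down in advance; the LLM-mediated version composes a response from whatever signals and context the designer chooses to expose to it, which is the shift from hard-coding relationships to designing for an AI interlocutor.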
ALLISON: We’re at the very beginning of using AI to improve the learning process. What are some of the provocations that you would offer for the future (not just current) affordances of this technology?
GEORGE: I love that question. It’s hard to answer because, unlike MOOCs, where the affordances were easy to extrapolate, AI’s core components are still evolving. We don’t know how quickly it will improve. A year ago, LLMs couldn’t explain straightforward math or create personalized learning plans for complex topics. Today they can. It’s impossible to predict which affordances will matter because the technology’s limits haven’t yet settled.
That’s why we’re struggling to prepare students with career-ready skills – we don’t know what skills they’ll need. We heard OpenAI and other organizations were writing 50-80% of their code with AI tools, and a year ago the prediction was that software engineers would disappear. Now it’s clear entry-level positions have been hit hardest; meanwhile, senior software engineers are more critical than ever, structuring inputs and checking the outputs of AI. The problem: how do we produce graduates who already have those technical skills at hire?
Most higher education leaders couldn’t redraw the curriculum around this expectation with any certainty. But I believe we must anchor on liberal arts basics: analytical skills, historical appreciation, engaging diverse viewpoints. Technical skills will be more important than ever.
And finally, entrepreneurship. I want every four-year college graduate to have started and failed at a business before graduation, because there’s enormous value in that experience.
On What Higher Ed and Ed Tech Can Do To Start Re-Designing Education, Together
ALLISON: What should kids be learning today to thrive in 2040?
GEORGE: First, we need experiences where learner journeys are shaped by preferences and interests, aided by the exceptional quantity and quality of open learning material now available. Second, we should prioritize developing creative skill sets – learning to create art, music, literature, scholarship, innovations, products, and companies. Third, students should learn to leverage AI as part of critical thinking. From there, everything in the existing system needs auditing and evaluation against these objectives.
The idea is to develop graduates with much higher agency than today’s, who are more capable of actively participating in this emerging world.
But, one of the core challenges is that we don’t actually have a good line of sight into the future when it comes to skills. We simply don’t know how profound AI’s impact on work will be. At its heart, this is a story of cognitive escalation. As AI takes over more basic skills, humans abstract up a level. But my concern is that this doesn’t stop. AI keeps advancing toward each new value layer we create through abstraction.
That brings me to what feels like the real crux of the problem: what will we need to know in the future, and how will we come to know it?
Today, universities are fairly clear on what they want students to know within specific subject domains. And we have a reasonably stable answer to how students come to know it, largely through a faculty-centric, lecture-based model. That clarity is dissolving.
Some of the interviews you’ve done in the past have explored the skills side of this question, the “what do people need to know” problem. Carl’s interview, in particular, gets at this well. But for most of us, when we look out at this veiled future and ask, “What are we going to know and do?” the response is closer to existential freeze. We get a bit traumatized, and then we go on with our lives.
We may be only a few years away from much of what is taught in higher education becoming obsolete, and as a sector we mostly just . . . carry on. The uncertainty is enormous. The ambiguity is uncomfortable. And yet the scale of the risk and the scale of our response feel wildly misaligned.
There’s something almost Becker-esque about this. In The Denial of Death, Becker argues that some realities are so overwhelming that we develop elaborate ways of not looking directly at them. I worry the future of knowledge, skills, and learning institutions has become one of those topics. It’s too big, too ominous, too destabilizing to stare at head-on.
I tried to engage this in a talk last year on the art of being human in the age of AI. That conversation was intentionally broad and philosophical. We can talk about projections, about 60 percent of jobs changing, or read indices on skills automation and replacement. But what alarms me is how few people are actually grappling with the deeper question underneath all of that.
Almost no one is asking, in a sustained and serious way, what it means to design learning for a future where we don’t know what knowledge will remain stable, what skills will retain value, or how humans will continue to matter cognitively. And the fact that this isn’t front and center in higher education conversations baffles me. And honestly, it worries me.
ALLISON: If the future of AI capability and what it means for work and learning is so uncertain, how should our sector – serving learners, workers, citizens – make decisions about what is worth teaching?
GEORGE: The best way to navigate ambiguity is to listen to people and embrace a mindset of curiosity. Talk, connect, and feel out the trends people see in their part of the world. It’s difficult because the right people aren’t yet grappling with what they should be – particularly AI’s capabilities and the emergent activities they enable. If we sit in this intensely curious place of engagement with like-minded people, we have a better chance of meeting this moment.
You’ve done this well on your Substack: your guests discuss future skills and sector needs. It’s helpful because you focus on the next right thing. Like traveling somewhere new, you work through the next task: getting off the plane, collecting your luggage, finding your connecting flight. That’s today’s win. Long-term planning feels counterproductive – it encourages overlooking the interesting opportunities unfolding as technology advances.
We can’t make intelligent predictions behind this wall of uncertainty – anyone proved right in three years is simply lucky, not prescient. We’re not in a five-year visioning moment. We’re in an era where curious people, willing to have informed and open-hearted conversations, are challenging almost all core assumptions about what education means. Those people will lead and create the university sector going forward – not traditional leaders, who’ve been grotesquely uncurious about what these systems mean for learning, the economy, and society.

It’s akin to electrification: factories weren’t designed for it, and that design, not the technology, limited its application. We’re at a similar point – our existing structures prevent us from seeing a plausible future.
One Small Signal
ALLISON: What’s one small, meaningful signal to which we should be paying more attention?
GEORGE: The pace of change makes it hard to pick up small signals – so many big, important questions about our world remain unanswered, and technology opens new, complicated questions about work, education, democracy, and our collective future almost daily. Wrapping our heads around AI’s implications at this pace is incredibly demanding.
One emerging signal: AI is becoming a divining rod for determining good and evil among humanity. The anti-AI movement is on track to be one of the most powerful social movements we’ve seen in our lifetimes. AI is a brilliant scapegoat for all sorts of issues because it will shape so many parts of life: you lose your job, have a dehumanizing customer service experience, see expenses rise – you blame AI. It feels like it has agency, making it easy to hate. And early frontier companies have done themselves no favors by foregrounding the many ways AI will deepen negative consequences – like job loss – for people. AI brings together divergent voices around this linchpin, amplifying their voice and velocity. We ignore that coalition at our peril.
ALLISON: Right. The most successful social movements take otherwise disparate energy pockets and find a common platform to rally around. How would you characterize your response watching this movement grow?
GEORGE: I’ve argued for at least a year that AI can return us to our humanity, much as mechanization, centuries ago, allowed us to till soil more readily than with oxen; those tools liberated much – certainly not all – of humanity from physical toil. Many academics critical of this potential don’t have much mundanity in their lives, so they don’t see how liberating this could be for so many. Work and labor have grown more dehumanizing over the past 200 years; my hope is that AI will reverse that.
AI can restructure the human experience to allow us greater connection to ourselves, our values, causes we want to participate in, and nature. It could let us feel the joy of being human more often, rather than laboring as machine extensions. This vision might feel naive, but it’s not impossible. It’s very possible if we design for it.
ALLISON: In my view – and this has come up across recent interviews – we’ve done a poor job casting an optimistic vision for AI, both across the industry and within our sector. We have so few positive stories showing what could be possible, high-level and tactical, in the next three to five years. People have nothing to advocate for – only something to fear.
GEORGE: That’s a sharp observation, and perhaps an antidote to the anti-AI movement’s rise. Despite our natural distrust of change, we’ve only seen technology improve quality of life and extend lifespans in fantastic ways. In education, AI will give access and create influence where it hasn’t existed before, outside university structures. I often wonder how much AI criticism is motivated by preserving influence rather than investing in the collective good.
ALLISON: George, you’ve shaped my thinking on the future of learning for over fifteen years, and it’s been a real pleasure to hear where your mind is now. You continue to challenge the field to think more boldly and more ethically at the same time. Thanks for making time for this conversation.
Explore more on AI x The Future of Education:
Joy, Play, Love — Why the Future of AI-Enabled Education Must Be Uniquely Human — Isabelle Hau
Michael Horn on How AI Will Rewrite the Purpose of School


