AI for Learning: Apolitical
How Apolitical is using generative AI to make learning a more active experience.
👋 This week, another perspective on using AI for Learning. We have an online meetup on Tues 5 Nov, 4pm GMT. And Greenworkx have kindly offered to host us for our next in-person meetup at their London offices on Tues 26 November. If you’re in London, come along!
“It's called generative AI. But we don't really want it just to be generative. We also want it to be organisational,” observes Lowell Weisbord, Senior Product Manager at the world’s largest network of public servants, Apolitical.
Apolitical helps people in government discover the collective knowledge that public servants around the world already have, connecting the dots and directing public servants to the right articles, discussions and online learning, and putting them in context.
Lowell is taking the time to run me through how he and the Apolitical team are building on their freemium offer of online learning and community, and approaching the opportunities of generative AI.
“We're helping public servants build the 21st century skills that they need to succeed: skills for navigating big challenges like climate, digital and… AI.”
“The thing that's really interesting is that we have a unique global network,” he says, reflecting on their starting point for exploring the opportunity. “We have lots of policymakers from around the world speaking to each other. We have lots of exciting free events happening on life in government. We have lots of public servants creating insightful content. Over eight years of articles and discussions.”
Apolitical is also offering AI education for civil servants through the Government AI Campus, supported by Google.org and with courses powered by content from Stanford Online. The goal of the AI Campus is to train one million public servants in AI over the next two years. To build “AI-capable governments.”
“We're keen to stay on top of generative AI. And one of the main ways to learn how to use these newer generative tools is, quite literally, just using them, right?” he says. “Playing with them, learning how they work best, and understanding which problems they can help solve. We’re doing everything we can to play with the tools within the constraints of the sensitivities around government data.”
So what have they been playing with and what insights have they gained?
First, find the right problem
Initially, like many organisations, they involved the whole company in exploring generative AI’s potential, running engineering hack days and trying to understand the opportunities. There was an organisation-wide push to experiment with generative AI tools, even if it wasn’t known up front how they would add value to a particular team.
“How do we accelerate our engineering team? How can our course production team or operations use generative AI?” he says. “We really came at it from a company wide perspective. We had a Slack channel, so people could talk about it. We put all of our problems on the table. And we're now using Gen AI across the organisation.”
After this initial burst of R&D and open exploration, they realised there was a foundational problem they had been trying to solve for a long time that generative AI was a great fit for.
“The fundamental problem we’re tackling is that knowledge is siloed often between governments - and sometimes inside governments - and best practice isn’t always shared,” explains Lowell. “To tackle this challenge, five or six years ago, we received EU Horizon 2020 funding, some of which was to build a knowledge graph database of policy, people, concepts… a semantic graph for government. We used a lot of machine learning and a graph database to connect the dots across government. But technically, turning this into something useful for civil servants on the ground was very challenging.”
When they started playing with generative AI, they realised that this technology could transform their approach. “Scraping information, collating it and being able to put words together… these modern Large Language Models have really accelerated all that,” he says, reflecting that an LLM is not something they could have built just by themselves.
“When new technologies are introduced, often people don’t start with a user problem,” he reflects. “But we realised that the idea of finding and reorganising information was a problem that we've been working on the whole time, on some level,” he says.
This is the first key takeaway. After exploring, step back to reflect on your purpose and the problems that you originally set out to solve. How can this new technology help you solve them in a new way?
Next, understand the strategic implications
By brainstorming possible uses, the team also started to realise how many strategic issues generative AI started to throw up. “We were in the middle of initiatives around Search Engine Optimisation (SEO). And we started to question, does SEO not matter anymore? That’s quite a fundamental question,” he recalls.
Some of these were not questions they needed to act on immediately, but they should inform their long-term approach.
“I would spend less time thinking about SEO,” he says by way of illustration. “Not that SEO stops being important this second, but in the next five years there’s going to be pretty big changes to the landscape of how people research and find information on the internet. Brainstorming around AI gave us a landscape of both which of our users’ problems we might need to tackle and what our future technology stack should look like.”
Getting started quickly and pragmatically
To learn quickly, the Apolitical product team decided to pick something that was easy to do and would also progress their goals around knowledge discovery.
“We’re a relatively lean startup, we don’t have the capacity to do things that aren’t going to be useful to the customers we’re trying to serve — government,” says Lowell. “So it came down to quite a pragmatic decision to start with search. We have our whole corpus of existing article content. How can we make that massive back catalogue more useful, using Gen AI, while working within government data guidance and ensuring our members trust our use of the technology?”
“We built simple RAG (Retrieval-Augmented Generation) tools on top of our existing search infrastructure. This let users access immediate answers to their questions directly from what would have previously been a long list of search results.”
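The RAG pattern described above can be sketched in a few lines. This is a minimal illustration, not Apolitical’s implementation: the corpus, the naive word-overlap scoring (a stand-in for the embedding search a production system would use) and the prompt template are all assumptions.

```python
# Minimal sketch of retrieval-augmented generation (RAG) over an
# article back catalogue. Corpus, scoring and prompt template are
# illustrative only.

ARTICLES = {
    "climate-policy": "How cities are adapting climate policy to local needs.",
    "digital-id": "Lessons from national digital identity programmes.",
    "ai-procurement": "A guide to procuring AI systems in government.",
}

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Rank articles by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

def build_prompt(query: str, corpus: dict[str, str]) -> str:
    """Assemble the context-grounded prompt that would be sent to an LLM."""
    context = "\n".join(corpus[doc_id] for doc_id in retrieve(query, corpus))
    return (
        "Answer the question using only the sources below.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )
```

The key design point is that the model only sees retrieved excerpts, so answers stay grounded in the existing corpus rather than the model’s general knowledge.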
Existing user research informed this approach. “It's funny, so many of the features we work on, whether it is some sort of community discussion, a public webinar, or a generative AI tool, it often boils down to a very simple question: ‘How can we help policymakers find great information quickly?’” he says, again reflecting on the need to be anchored to your core ‘Job to be Done’.
Emerging insights on AI and learning
Once the Apolitical team got to grips with the easy wins, they began to consider the potentially more interesting and valuable space of their learning experience.
Apolitical offers flexible, online learning for public servants, co-developed with partners at leading universities and research institutions, including Oxford and the London School of Economics.
“There are a lot of quite pedagogically sound things you can do with generative AI, when you start thinking about it,” he says. “Particularly around the formative learning experience and providing helpful feedback.”
They began to think about how they could practically test ideas in the context of online courses. “One question was, if you give people an input box as they're going through a course, what are people going to type into it?” says Lowell. “I couldn’t answer that just by doing user interviews.”
They launched an AI course assistant but purposefully left the design very vanilla. “It was really ‘blank’. Because we didn’t want to guide people yet. We wanted to let them ask whatever kinds of questions they wanted to. For example, would they ask the course assistant to summarise the lesson? Do they ask for more information on the topic?”
He reflects that this approach was powerful as many of their assumptions about what people would do were wrong. “Themes emerged - but not necessarily the ones we expected,” he says.
The first but not particularly exciting finding was that people used it for customer support questions. “If you put a box that people can type into, they'll ask how to reset their password or something, no matter where it is or what it says,” he smiles, reflecting that customer service is actually a good use case for AI.
But other insights were much more interesting.
“People are using it to help them write something,” he says. “‘Hey, I'm working on this presentation for my department. Can you help me write this thing using what you’ve just taught me in this course?’ That's really interesting.”
“I'd still call that learning. But it's letting the course be used as a functional tool,” he says. “It’s not just putting the knowledge in the person's head, but helping them to use it in their context. ‘How can I apply this information to my problem?’ And that's where people really start to learn.”
They found that, typically, course learners didn’t ask for feedback on assignments or ask questions as the team had expected. Instead, learners tested their assumptions and then guided the output with feedback.
One example is around writing. “One of our most popular online courses in the UK is our ‘Advance Your Writing in Government’ course — people love it, but generally learners take it once and there isn’t yet a logical next step for them to continue improving their writing from there,” he says.
“A gen AI tool focused on helping you write, drawing on all of the tips and resources from that course, for example giving you feedback as you write an email, a memo or a document, might be an interesting add-on. A day-to-day tool, re-enforcing what you’ve learned, when you need it.”
He reckons that a tool that constantly coaches you to write better, alongside a course that gives you the basics, could be a much more rewarding learning experience.
Localising and contextualising
One of the other big areas of opportunity is to help localise and contextualise content for specific geographies or roles in government.
“We can make it relevant to Brazilian public servants by adding local case-studies and other relevant sources to our AI Assistant, giving members access to relevant information when they need it without having to rewrite the entire course,” he says.
They are also guiding people to ask questions about how content applies to their role or context, and making their user-generated content more useful.
This is where design comes back in. “We're making interface changes to guide the users to the kinds of use cases that we think are interesting, and away from things we can't answer with the AI course assistant,” Lowell says.
Five takeaways
We reflect on the key takeaways from the conversation.
“Don't be prescriptive - be curious,” he says. “To begin with, do less, don't guide people because that's the best way to gather information.”
He also believes that the unknowns around how people use it are balanced by the speed at which you can build on top of an out-of-the-box model.
“You can do so much of the testing without needing to build much infrastructure - by using the relevant APIs, sandboxes, and testing your prompting before you deploy any code,” he says. “And the models are gonna get better. If the thing you're building is kind of good, it will probably get quite a bit better pretty quickly.”
Here are the key things that might apply to others in exploring AI in EdTech.
1. Experiment but then focus on your key purpose. What problem are you trying to solve and how does this technology help? For Apolitical, it is about organising and connecting existing assets, more than generating new things.
2. Use the experiments to reflect on the long-term implications. What does it mean for your strategy and technology stack? For example, SEO for discovery?
3. Learn quickly by doing simple things. For example, improving search or providing a vanilla chatbot, using your knowledge base to understand what people ask.
4. Consider the opportunities to tailor and contextualise for specific learners. This could be where they are, what they do, their specific situation.
5. Think about a shift from passive to active learning. What opportunities exist to move from just delivering knowledge to coaching people to use it in context?
This last point could perhaps be the most fundamental one for learning and development, given how much knowledge is quickly forgotten when it’s not applied.
“What challenges are public servants currently trying to solve and how can Apolitical support them by giving them the knowledge, skills and networks to solve them faster?” reflects Lowell. “Can we provide them with more specific tools to solve those problems, rather than just relying on a knowledge transfer method?”
Apolitical are currently hiring for Learning Designers and Engineers. Get in touch if you have the skills to help them with their mission!
Lowell is an alumnus of my Finding Product-Market Fit in EdTech programme. Apply for the January cohort now to get an early bird discount.