In this episode, we dive into how artificial intelligence, or AI, is transforming entrepreneurship education. Our guests for this SDG Learncast are Dorina Dobre-Basca and Guillaume Lamothe from the International Trade Centre’s SME Trade Academy. They will take us inside their AI-powered pilot course, “Raising Funds for Your Business,” which engaged over 400 learners across 100 countries.
They were inspired by Benjamin Bloom’s “2-Sigma Problem,” which underscores the effectiveness of one-on-one tutoring in increasing learners’ performance. With this in mind, they pilot-tested three AI personas—AI Tutor, AI Moderator, and AI Grader—to provide personalized support, boost motivation, and deepen learning.
We would like to hear about their experience, and also learn from them: what does it take to make AI feel like a mentor, not a machine? How do we balance rigor with inclusivity, especially for learners in the Global South? And what’s next for AI in education, beyond the buzz?
If you're an educator, entrepreneur, or simply curious about the future of learning, stay with us for a timely look at how AI can drive inclusive and impactful education.
[What follows is a transcription of the podcast, modified for enhanced web readability.]
Paulyn Duman: Welcome to the SDG Learncast with me, Paulyn Duman. In every episode, I bring you insightful conversations around the subject of sustainable development and learning, helping us all to achieve a sustainable future.
In this episode, we dive into how artificial intelligence, or AI, is transforming entrepreneurship education.
Our guests for this SDG Learncast are Dorina Dobre-Basca and Guillaume Lamothe from the International Trade Centre’s SME Trade Academy. They will take us inside their AI-powered pilot course, Raising Funds for Your Business — which is very interesting for a lot of people listening who are into entrepreneurship.
They were inspired by Benjamin Bloom’s Two Sigma Problem, which underscores the effectiveness of one-on-one tutoring in increasing learners’ performance.
With this in mind, they have piloted three AI personas. We want to hear from their experience: what does it take to make AI feel like a mentor — and not a machine? And how do we balance rigor with inclusivity, especially for learners in the Global South?
So if you're an educator, entrepreneur, or simply curious about the future of learning, stay with us for a timely look at how AI can drive inclusive and impactful education.
So let’s start with you, Dorina and Guillaume. Can you tell us about the ITC SME Trade Academy and your work at the academy?
Dorina Dobre-Basca: My name is Dorina Dobre-Basca. I work with the SME Academy team at ITC. What we do is develop various learning solutions for our projects and partners.
And in my role, what I do is implement those—so I'm the implementer. We have a full-fledged team of more than 13 staff members and consultants throughout the world who are helping us devise these great solutions, and I'm just one of the wheels in the system.
Guillaume Lamothe: And hi, I am Guillaume. I'm the Associate Content Officer.
The SME Trade Academy started in 2014 as an e-learning project for ITC, and I joined shortly after that. I was actually the third person they hired. The original idea was that ITC should be doing some online learning.
Nobody really quite knew what shape that would take. Today, the SME Trade Academy still does mostly online learning, although we've branched out a little bit, and we now do digitally supported training and learning solutions—more than just online.
My role, as my title indicates, is to make sure that the content we publish meets our quality standards, makes sense, and is pedagogically sound.
Paulyn Duman: And I'm curious — what first got you excited about using AI to transform online learning for entrepreneurs? Was there a "wow" moment during the pilot that really stuck with you?
Guillaume Lamothe: So, what got me excited about using AI was that I was one of the early members of the public who got access to ChatGPT-3. I was on the waitlist for several months, and then I got the nod that I was able to use it. And like everybody, I think I had a lot of fun playing around with it—just coming up with ridiculous scenarios.
I was the only one in my office who had access to it for a while, so I got to share it with my colleagues and say, “Hey, doesn’t this look interesting?” And pretty quickly, we thought, wouldn’t it be a good idea to actually use this for learning—see if we can stop just goofing around and actually make some use of it?
We quickly got our programmer to develop an API integration. We work off the Moodle Learning Management System, and he basically embedded calls to the OpenAI API into Moodle, which didn’t take him that long. API access is something OpenAI offers for exactly this kind of use.
Usually, when you ask open questions in e-learning, you just have fixed feedback. So, as an instructional designer, you try to avoid these because they’re boring. You ask an open question, the learner can write whatever they want in the text box, and then you have fixed feedback—and you hope that the feedback you wrote is relevant to what the learner wrote (or didn’t write).
And immediately, when I tested the question, not only did the AI give me timely feedback, but it was also relevant to what I wrote. The AI was able to pick apart what I got right, what I got wrong—and just that made me go, wow.
It made me want to try to come up with a proper pilot at that point.
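To make the idea concrete for technically minded listeners: an open question with AI feedback, as Guillaume describes it, amounts to assembling a chat-style request that pins the model to a grading role. The sketch below is purely illustrative, not the SME Trade Academy's actual code; the function name, parameters, and prompt wording are all assumptions.

```python
# Illustrative sketch only -- not the SME Trade Academy's implementation.
# build_feedback_messages and its parameters are hypothetical names.

def build_feedback_messages(question, reference_points, learner_answer):
    """Assemble a chat-style payload asking a model to give feedback
    on a learner's free-text answer to an open question.

    The system message fixes the grader role; the user message carries
    the question, the points a good answer should cover, and the
    learner's own answer.
    """
    system = (
        "You are a grader for an online course. Give feedback on the "
        "learner's answer: say what is right, what is wrong, and what "
        "is missing. Do not engage in conversation beyond the feedback."
    )
    user = (
        f"Question: {question}\n"
        f"Points a good answer should cover: {', '.join(reference_points)}\n"
        f"Learner's answer: {learner_answer}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_feedback_messages(
    "Why might a small business prefer equity over debt?",
    ["no repayment obligation", "risk is shared with investors"],
    "Because you don't have to pay it back.",
)
```

In a Moodle integration like the one described, a payload of this shape would then be sent to a chat-completions endpoint and the model's reply shown to the learner as the question's feedback.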
Dorina Dobre-Basca: What was great to see and experiment with was the mastery learning approach we were able to adopt, thanks to these AI agents.
Also, this—coupled with the 24/7 support we were able to provide to our beneficiaries throughout the courses (which, of course, we weren't able to do before)—made me realize that this could level the playing field for everyone.
No matter where our beneficiaries were coming from, they could all get access to one-to-one tutoring—even if they were in the capital city or in a remote location. I'm sure that many of them wouldn’t have had access to one-to-one learning in the past, and we could make that possible thanks to the AI agents.
Then I guess for me, the wow moment—well, there were many—but one of them was the data. I'm the one who looked most closely at the data provided by this pilot, and seeing how the numbers spoke for our experiment made me really excited: we could see real progress and increased engagement from one module to the next.
Paulyn Duman: I’m really interested in how you piloted the AI tool, because you used three personas. You had the AI tutor, the AI moderator, and the AI grader.
And by the way — kudos on the paper! It was a short paper, but it was amazingly written and very insightful.
But I think for our listeners who haven’t read through the report yet — and I really invite you to read the short report — can you walk us through how these three AI personas work together, and maybe share a story of how one of them helped a learner in a meaningful way?
Guillaume Lamothe: So initially, our first attempts were unstructured and mostly driven by enthusiasm — “Hey, this looks cool, let’s try this!” — rather than by any sort of deep thought.
We had this API, we had the ability to run questions and have AI give us feedback. So, being naive, we had a first attempt where we basically took a bunch of closed questions — by closed, I mean multiple choice, true/false, mix and match — and just turned them into open questions, got some AI feedback, and ran a test course with that.
We thought, Okay, this is going to be great. The learners are going to get the benefit of AI-powered questions. Everybody’s going to think this is awesome.
And... it failed. It went very poorly.
Most learners skipped the questions because they didn’t feel like answering. Most of those who did answer only wrote a couple of words. There was no repeat interaction with the AI — basically, none of the behaviors we’d hoped to see materialized.
Another thing that happened: sometimes people had conversations with the AI, which was not the point. You were asked a question — you had to answer it. You shouldn’t be asking the AI, Hey, how do I log on to this other thing? That’s not what this is.
So, the whole thing was a huge mess, and it did not work.
For the real pilot, which is what we published about, we thought, Okay, let’s separate what we want the AI to do, and let’s clearly delineate how we want the learner to interact with the AI in different situations.
So, there’s one thing we wanted the AI to do: we wanted it to manage questions and to grade questions — and that became the AI Grader.
The AI Grader is designed not to answer you. It won’t talk or engage in a conversation with you. It will give you feedback based on your answer to the question, and if you don’t answer the question, it’ll tell you to please answer it. So, you can’t actually have a chat with it.
However, we have another AI function — which is like a chatbot — that’s very easy to understand and exists in various private-sector applications. That’s the AI Tutor, which will have a conversation with you (though not about every topic). If you stray off the topic of the course, it’ll tell you to please get back on topic.
That’s where you go if you really have general-purpose questions and want to go back and forth with your tutor.
And then third, we had the AI Moderator, which was basically running the forums for us. We usually have a human do that, but sometimes our courses are quite big. This pilot had over 400 students, and it can get overwhelming for a human to monitor such a forum.
So, we thought, Okay, let’s get an AI Moderator who will summarize learners’ posts, summarize the discussion so far, and try to bring some life into the forums and work that way.
Now — how did that work in practice?
I can give one example. We had a question dealing with the benefits of issuing equity versus taking on debt for a small business. And we could see through the logs that there was a learner who tried the question, didn’t get a very good result, and then went to the tutor and said:
“Hey AI Tutor, can you explain to me what are the benefits and disadvantages of debt versus equity?”
And they got a nice, detailed explanation from the AI Tutor. Then they tried again to answer the question — and got a much better result.
That’s exactly the kind of behavior we were hoping to see from our learners.
It’s also the kind of thing that happens in a face-to-face classroom with humans. If you try to answer a question and don’t get it right, you go ask somebody who knows better than you, “Hey, how does this work?”
And the fact that, this time, it happened to be an AI rather than a human — I think that raises interesting questions.
Dorina Dobre-Basca: One of the comments that I’ve seen quite often was that the AI Grader — although it felt like it was a harsh teacher — drove participants to finalize and move from one module to the next. They found the hints and the information provided to be really useful in their learning journey.
What was good to see was that it was also helping participants solve technical challenges — still related to the course, but the kind of technical issues for which we would otherwise have had to provide human support.
They were getting this help in the middle of the night, when of course none of us would be online for work — and they could continue with their course.
Given that our courses are asynchronous, participants can take them whenever they have the time, as long as they complete them within a specific timeframe.
Guillaume Lamothe: Participants would ask things like, “How can I change my password?” — which is normally something we have somebody to help with. But she’s obviously not working at 3:00 AM.
They were asking the AI Tutor questions we hadn’t programmed it to answer — but it judged that such questions were course-related.
And because it had access to our platform, it was actually able to answer them accurately, which we did not plan for or think would happen at all.
But this was an added benefit that we discovered.
Dorina Dobre-Basca: Maybe just another point that we didn’t anticipate was that the AI agents were responding in the language of the participants.
This was great to see because we had participants from over a hundred countries, and they were asking questions in their specific language.
It really helped them to have someone responding in their own language.
Guillaume Lamothe: That was also an added benefit we did not expect. For other languages — say, Spanish or French — a greater proportion of our learners have those as a first language. But for the vast majority of people who take our courses in English, English is a second or third language.
And so being able to communicate with the AI in the language of their choice was, like Dorina said, a massive added benefit that we also didn't plan for.
Paulyn Duman: Let me just go back to something that you said. Some learners found the AI-graded questions perhaps exciting, but they also found them tough, right?
And fewer people completed the course. Maybe you can give us a snapshot of how many learners actually completed it, and what their feedback was about the tough AI grading.
Dorina Dobre-Basca: Of the 400 participants, about 20% completed the course successfully.
Now, generally, for online e-learning — especially on many popular platforms — the average completion rate for free courses is about 13%. So even compared to that, the ratio was good. For our other SME Academy courses, the completion rate is about 30%.
We have to accept that not everyone is going to complete [the course].
What we did to support this specific group was to increase the time allocated to complete the course. This time, we allowed four weeks, because we realized it does take longer.
It required more work and more engagement with the modules, since we didn’t have multiple-choice questions — instead, we had more case studies to be completed by the participants.
Guillaume Lamothe: One thing about the AI: people thought it was harsh. This was a part I was personally responsible for, and I was very careful that the grading scheme was objective. We gave the AI a scheme by which to evaluate participants’ answers.
We would ask participants to come up with three valid reasons why a business might want to issue debt. In the course, participants would have been presented with six different reasons. So we wanted them to identify three of them.
The grading scheme would go something like this:
- If you get absolutely nothing, you get zero.
- If you get one valid reason, you get one point.
- If you get two of the three, you get two points.
- And if you get all three, you get three points.
And so, there was no real space for learners to say, “Oh, I was misunderstood by the AI.”
We did not receive a single request from learners who thought their grade was wrong. What we did get were learners who said, “Come on… couldn’t you have [just allowed that]… come on.” So that’s what we got.
And so, we did tweak the AI a little bit. We literally changed the prompt to say: “You are a kind and empathetic tutor. You want to motivate learners to continue.”
That changed the voice register the AI was using when talking to students — which seemed to help.
Another thing we did was ask that, when the AI gives feedback, it generally tries to conclude with encouragement or a call to try something else. That also seemed to help.
A lot of the feedback about the AI being harsh, I think, was a matter of tone — and not necessarily that learners were failing more.
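Putting the two pieces together, an objective point scheme plus the softer register Guillaume mentions, a grader prompt could be sketched roughly as follows. This is a hypothetical illustration under assumed names, not the prompt ITC actually used (only the "kind and empathetic tutor" wording comes from the interview itself).

```python
# Hypothetical sketch of combining an objective rubric with an
# empathetic tone in one grader prompt. build_grader_prompt is an
# illustrative name, not ITC's code.

def build_grader_prompt(num_reasons_asked, empathetic=True):
    """Build a system prompt: one point per valid reason identified,
    optionally wrapped in the encouraging register described above."""
    rubric = "\n".join(
        f"- {n} valid reason(s) identified: {n} point(s)"
        for n in range(num_reasons_asked + 1)
    )
    prompt = (
        f"Grade the learner's answer strictly by this scheme:\n{rubric}\n"
        "Base the grade only on how many valid reasons the answer contains."
    )
    if empathetic:
        prompt += (
            "\nYou are a kind and empathetic tutor. You want to motivate "
            "learners to continue. End your feedback with encouragement "
            "or a suggestion to try something else."
        )
    return prompt
```

The design point from the interview survives the sketch: the rubric keeps the grade objective, while the persona lines change only the voice register of the feedback, not the score.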
Paulyn Duman: But do you think motivation also played a role, since the topic is about fundraising for your business? They definitely want their businesses to succeed.
Guillaume Lamothe: It’s important here to keep in mind that what we did was a pilot. Since then, we did publish the course on our standard platform, and we've run one session of the course as a course.
So the pilot is perhaps not 100% representative because it was advertised as, “Hey, let's test this out. We've got a new technology that we want to run,” and we basically sent out mass emails to our entire database of learners: “Would you like to try this new thing we’re doing with AI? Come do it.”
So the people who took our pilot — probably a large proportion of them — were more interested in testing out the AI tutoring than they were in raising funds for their business.
That would confound the motivation a little bit.
But it's also important to keep in mind what we found: when a learning activity — and by a learning activity I mean a learning module, a video, a test, any one unit of something you're doing online — exceeds one hour, learner motivation starts to go down.
Paulyn Duman: But not all learners are motivated. Right. So what did you do to encourage those who are not necessarily motivated to complete the course?
Guillaume Lamothe: Humans like to complete stuff. It sounds silly, but we have a little completion bar at the top. And when you complete a thing — whether that's posting in the forum, whether that's doing a test — your little completion bar gradually fills up with green, right?
It has absolutely no pedagogical importance. It has no influence on whether you get your certificate or not.
So people like completing things.
In terms of motivation, it's important to keep in mind that AI — by virtue of all the AI questions being short essay, open questions — takes longer to complete than a true/false, closed question, multiple choice, etc.
And we have to keep that in mind in our designs.
We can't have these learning activities that take three hours to complete. Nobody wants to do that.
So that's one thing that we've certainly internalized in terms of our module design.
Dorina Dobre-Basca: One thing that we are now re-evaluating quite seriously is basically changing our module design, because AI questions will roughly add a third to the completion time of modules.
And so, we have to look at how we structure the information presented, so that we don’t end up with these two- to three-hour-long learning units — which are difficult for people from a motivation standpoint.
One thing that you asked us is also what we've done to encourage and support participants throughout the process.
One thing we’ve done was to develop videos with very detailed descriptions of the three AI personas. We also included videos on how to take advantage of the AI-powered questions and how best to respond to them. There are written instructions at the top of the course, but they are not necessarily visible...
Guillaume Lamothe: They're visible, people just don't read them.
Dorina Dobre-Basca: Yeah, exactly. They’re there. But once participants are within the module, they might forget — or, if they didn’t read the instructions at the beginning, they might have overlooked them entirely.
So we’ve seen that having those video instructions included within the modules themselves really assisted the participants.
Paulyn Duman: That is so interesting. One thing I picked up from what you said is how you made the AI more sympathetic and empathetic, especially in response to the feedback about it being too harsh.
It's also helpful to know that some learners tend not to read what's in front of them, so having videos that explain the AI personas and how to use them effectively really supported those who prefer visual guidance.
Now, I'd like to focus on one specific persona — the AI Moderator. Coming from a learning organization myself, I know how important this function is in both virtual and face-to-face settings. Moderating discussions can be a full-time task.
So my question is: how was the AI Moderator able to improve online discussions — not just by making them more engaging, but also more manageable for human tutors? And was it rewarding for learners too, especially those who are extremely busy?
Guillaume Lamothe: Just to make it clear for the listeners: the forums in our online courses serve two purposes.
On the one hand, it's to give a sense of community — that you are in a class with other people and not just by yourself doing this learning activity in a void.
On the other hand, it's also a form of homework that the human tutor — when it is a human — checks.
So we have this thing called a forum task, which both helps guide discussions and, as the name implies, is a task that you have to go and complete.
These are often tasks assigned to talk about your own situation or your own country. For example:
- What is the funding situation like for small businesses where you live?
- Do small and medium enterprises have access to debt financing?
- What are the major banks that might issue these?
- What alternative financing is available where you are?
Those are the kinds of questions we would ask.
So people are answering these, and there is interaction between the learners — but it's also about reflecting on their own situation.
The moderator’s job is then twofold:
- Promote discussion.
- Ensure people are staying on topic and properly answering the task.
The moderator, as we programmed it, did two things:
- It answered every single student post — which can be a huge job for a human when you have over 400 students.
- It did not engage in long, drawn-out discussions. We decided to limit it to a single reply — a response to the student's post in the forum.
Then, every so often, it would roll up the student posts and publish a summary:
“Here’s what happened over the past couple of days.”
We also asked it to pick out what it thought were particularly interesting posts and comment on them. It would call out students by saying things like:
“So-and-so posted about this,” or “This person posted about that.”
And then note, for example:
“These two answers highlight the difference between their two situations in interesting ways. Go read those posts.”
So, it gave a little bit of visibility to people who had put in a particular amount of effort into the forum task.
I think it was very useful. One student — I remember this because I looked at the forum — saw the AI moderator’s reply to their post, and they responded in all capital letters: “AWESOME.”
So I guess they saw value in what the AI moderator was doing.
Also, when you have a huge forum, it gets really overwhelming to just see what happened over the past week. So having these summary posts was really helpful.
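The roll-up behavior Guillaume describes (reply once per post, then periodically publish a digest that names noteworthy contributors) can be sketched as a single summarization request. Again, this is an assumed illustration; the function, field names, and prompt wording are mine, not ITC's.

```python
# Illustrative only: rolling forum posts up into one digest request
# for an AI moderator. build_digest_prompt is a hypothetical name.

def build_digest_prompt(posts, max_highlights=2):
    """posts: list of (author, text) tuples from the forum task.

    Returns a prompt asking the model to summarize the discussion and
    call out up to max_highlights particularly interesting posts by
    author name, as the pilot's moderator did.
    """
    listing = "\n".join(f"{author}: {text}" for author, text in posts)
    return (
        "You are a forum moderator for an online course. Summarize the "
        "discussion below in a short digest of what happened over the "
        f"past couple of days. Then pick out up to {max_highlights} "
        "particularly interesting posts, name their authors, and say "
        "why they are worth reading.\n\n"
        f"Posts:\n{listing}"
    )

prompt = build_digest_prompt([
    ("Amina", "Banks here rarely lend to businesses under two years old."),
    ("Jose", "Microfinance covers small loans, but rates are high."),
])
```

Keeping the moderator to one reply per post, as in the pilot, is then a policy enforced by the surrounding platform code rather than by the prompt itself.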
What we're thinking for version two of the moderator is to have it start referring students to each other’s posts — so that in its replies it could say:
“That’s very interesting. Such-and-such wrote something similar and seemed to face similar issues. Consider talking to them to get some insight into their experience.”
In the pilot, the AI moderator did not do that. But in order to better promote interaction within the forum, we thought it would be good to program it to start doing something like that.
Dorina Dobre-Basca: We do have this function called Daily Digest, which can be sent from the forum. So once we have this new capability that Guillaume is mentioning, we could have this digest sent to participants on a daily basis.
That way, they can easily see where they’ve been highlighted or tagged and go back to the answers they're most interested in — especially where there are participants with similar views.
What I’ve seen a lot in the comments is that participants really felt like they were in a classroom when they were in the forum, because they could exchange various ideas related to financing.
They appreciated the synthesizing of ideas, because it compared contributions from multiple learners, allowing them to see different financing schemes from all around the world.
Paulyn Duman: One of the things that got me interested is how you are planning to tailor it further. I hear there’s a version 2.0 for the moderator, but you’re also looking at learners’ profiles, right?
So there will be more advanced tools, for instance, for business groups or learners with a higher level of knowledge — and perhaps also simpler options and answers for entry-level learners.
How do you ensure that no one gets left behind in this two-speed model? Tell us about your plans.
Dorina Dobre-Basca: We are working on other tools as well. When it comes to the AI-powered online courses we are rolling out, what we do is implement the AI Tutor and the AI Moderator throughout our courses.
These are quite easy to implement — and easy to use as well — even for courses that were developed in the past.
For courses we’re developing now, we integrate AI from the design stage.
I would say we already have a few courses where we've integrated the AI Grader — maybe not at the same capacity as we did in the pilot. In the pilot, for example, the AI Grader gave grades and didn’t allow participants to proceed unless they had received a specific grade or responded in a certain way.
So we are integrating it to a certain level in the new courses. For more advanced courses, we’re putting in a full-fledged AI Grader.
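As a minimal illustration of the mastery-learning gate Dorina describes, where a participant cannot proceed without reaching a specific grade, the logic reduces to a threshold check. The names and the threshold value here are assumptions for illustration, not ITC's actual configuration.

```python
# Hypothetical sketch of the mastery-learning gate described above.
# PASS_THRESHOLD and can_proceed are illustrative names.

PASS_THRESHOLD = 2  # e.g. at least 2 of 3 rubric points

def can_proceed(score, threshold=PASS_THRESHOLD):
    """Return True when the learner's AI-graded score unlocks
    the next module; otherwise they retry the current one."""
    return score >= threshold
```

The pedagogical substance is in the retry loop this enables: a learner who falls short can consult the AI Tutor and attempt the question again, which is exactly the behavior the pilot's logs showed.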
Guillaume Lamothe: Yeah, I'm really torn about this issue because, okay — like Dorina said — on the one hand, there’s AI that’s just beneficial for everything. Having an AI tutor chatbot at the bottom right of your screen to answer your questions whenever you want — there's no downside to that. It's just better. Same thing with the AI Moderator — it’s just good to have. It can work in tandem with a human moderator in the forum, so that’s a no-brainer. We’re rolling that out for everything.
AI question grading, on the other hand, does make an online course more difficult. But that’s not an inherent feature of having an AI grader — it’s because you’re moving from answering multiple-choice closed questions to actually having to write short essays. That’s inherently more cognitively difficult — especially if English isn’t your first language, and especially if, as is the case for many of our learners, you’re not used to writing. It’s not something you do very often.
So, on the one hand, we recognize that difficulty, and we also recognize that we’re going to have to change the presentation and chunking of content to make it more accessible for people.
On the other hand, if we wanted everybody to pass our courses, we would just have a button you could click to get a certificate. Having a certain level of difficulty in exercises is an inherent part of the learning process. So, I would lean toward having AI as an option for advanced learners who want to push themselves more.
However, I want to emphasize that we’ve not made any decision on this — it’s still an active discussion about exactly how AI graders should be used.
We have rolled out a few courses with our partners, including the Swiss Import Promotion Organization (SIPPO), which uses AI graders for a select group of beneficiaries — not just anybody. SIPPO, for example, works with business support organizations. They felt confident that these groups could handle and respond well to AI grading. We designed a course from scratch with them in December 2024, and so far we've had very good results.
So, when you have a clear group of learners in mind who will respond well to it, it works. But making the AI grader the standard for all our learners is perhaps not the way to go — again, not because of AI itself, but simply because being asked to write something is more difficult than choosing from a list of options.
Is it theoretically possible to make AI questions easier? Yes, perhaps. But if we’re just using AI as a fun gimmick where we could have used multiple-choice questions, we might as well stick with multiple-choice, right? We're trying to find the added value of the AI grader here. AI is a tool to reach a goal — AI is not the goal.
Paulyn Duman: Let me just ask you about this: most of your learners at the SME Trade Academy are from the Global South.
Guillaume Lamothe: Yes, 80%.
Paulyn Duman: There are a lot of opportunities — and big opportunities — especially around language, but also in adjusting to cultural norms, such as not writing or not responding directly or as quickly as some people might expect.
I can imagine myself in a room of Indonesians and Filipinos, just in my experience also, and we will take a bit more time to respond to certain things. These are challenges we need to consider. So for me, my question is: with 80% of your learners coming from the Global South, what are your biggest opportunities and challenges? And how do you plan to use AI to create more inclusive and accessible entrepreneurship education — or more broadly, education aligned with the SDGs?
Guillaume Lamothe: First, I just want to raise one point. We may have made it seem, with our previous answers, that our completion rate fell off a cliff — but it really did not. Dorina gave us the numbers earlier: we are about 10 percentage points lower than we would usually be, which is significant, but it's not like nobody completed the course. We did see a drop in completion rate, but it's not as though the course became impossible to complete. So that's one thing.
It’s going to be very important moving forward that we do a few things:
One is to emphasize the multilingual aspect of the course. To a large extent — I don’t know if you talk to AI in different languages; I do — if you talk to it in a different language, it will also change its behavior. ChatGPT, which is the AI we work with for our courses, is sometimes criticized for having a Western mindset — it can sound a bit American, perhaps. But if you talk to it in Tagalog, for example, you’ll probably find it starts behaving more like a Filipino.
So there’s this element where we can use language to overcome some implicit cultural barriers.
There are other elements too — but one that I keep very much in mind is how we are respectful of people’s time. As I’ve said a couple of times, people are busy. We’re doing adult learning. Our learners are often entrepreneurs or they work in businesses — they have other things to do than sit in front of their computer all the time trying to complete learning courses.
So we really have to strike a balance: here’s something you can do relatively quickly in a single sitting, but we’re also not going to just spoon-feed you candy. It’s learning. By definition, it’s going to take effort, and it should feel like work.
We have to find that balance — understanding, as you said, that answering AI questions is more cognitively demanding, and therefore takes longer. So, when we create modules with AI as a central feature, can we shorten them?
Can we include perhaps just one AI question at the end instead of a series? Can we use the AI question as the summative exercise?
These are all design questions we’re asking when we create modules with AI, instead of simply shoehorning AI into existing modules to replace closed questions that were already there.
Dorina Dobre-Basca: One of our discussions was about using technologies like speech-to-text or text-to-speech, especially for participants with disabilities or low levels of schooling. This is something we're envisioning for the future, and we're currently testing different technologies in this area.
And when it comes to accessibility, maybe just to mention — for all our listeners — that all our courses on the Academy are free and can be accessed on any device, as long as you have internet connectivity. You can use a phone, tablet, laptop — it doesn't matter. You just need internet access, and you're good to go.
Guillaume Lamothe: We haven't gotten it to work quite yet, but if we can, speech-to-text will be a game changer—because writing is hard, especially in your second or third language. If we can enable our learners to simply speak, that’s going to be big.
The challenge, of course, is dealing with a wide range of accents from people across different parts of the world. So the AI’s speech-to-text recognition will need to handle that accurately. But if we can get it to work, it will be a major step forward for accessibility.
Paulyn Duman: And finally, I’m sure many of our listeners—whether educators, entrepreneurs, or young changemakers—are excited to hear what you’ve learned and shared. If you could leave us with one simple message, perhaps a skill, habit, or mindset that we should adopt to start using AI in building a more inclusive and sustainable future—or in the work that educators do—what would that be?
Dorina Dobre-Basca: I don't think we should use AI just for the sake of it. Like any other tool—say, a calculator used to solve a math problem—it’s important to first define the problem and the context in which you're using it. You also need to make sure that the answers the calculator gives are correct based on your specific problem.
The same applies to AI. We need to remember that it’s a tool—something that assists us, helps us get answers or information faster and more easily.
If you want to take courses on how to get the best information out of these AI tools, go ahead—but always remember: it’s just a tool.
Guillaume Lamothe: For the first time, it's possible for most people to have a personal tutor available to them for free, 24/7. And that’s entirely new.
For the longest time, learners were limited to the classroom—which, while the best we had, is inherently not ideal. A classroom environment tends to treat everyone as a single entity, moving at the same pace.
If we can enable individual progression for each learner, that would be a huge game changer.
Paulyn Duman: Thank you so much for your time. If you'd like to know more about their work, please visit the ITC SME Academy—it's available on their website. I’ll also share the report mentioned in this episode, so check the transcript. Guillaume, Dorina—thank you again for your time, and I hope to see you both again soon.
--
Paulyn Duman is the Knowledge Management, Communications, and Reporting Officer at the United Nations System Staff College (UNSSC) Knowledge Centre for Sustainable Development and is a coordinator for the Joint Secretariat of UN SDG:Learn, together with UNITAR.
The opinions expressed in the SDG Learncast podcasts are solely those of the authors. They do not reflect the opinions or views of UN SDG:Learn, its Joint Secretariat, and partners.