Sunday, December 17, 2017

10 uses for Chatbots in learning (with examples)

As chatbots become common in other contexts, such as retail, health and finance, so they will become common in learning. Education is always somewhat behind other sectors in considering and adopting technology, but adopt it will. There are several points across the learner journey where bots are already being used, and there is already a range of fascinating examples.
1.    Onboarding bot
Onboarding is notoriously fickle. New starters arrive at different times, have different needs, and the old model of a huge dump of knowledge, documents and compliance courses is still all too common. Bots are being used to introduce new students or staff to the people, environment and purpose of the organisation. New starters have predictable questions, so answers can be provided straight to mobile, directed to people, processes or procedures where necessary. It is not that the chatbot will provide the entire solution, but it will take the pressure off and respond to real queries as they arise. Available 24/7, it can give access to answers as well as people. What better way to present your organisation as innovative and responsive to the needs of students and staff?
2.    FAQ bot
In a sense Google is a chatbot. You type something in and up pops a set of ranked links. Increasingly you may even have a short list of more detailed questions you may want to ask. Straight up FAQ chatbots, with a well-defined set of answers to a predictable set of questions can take the load off customer queries, support desks or learner requests. A lot of teaching is admin and a chatbot can relieve that pressure at a very simple level within a definite domain – frequently asked questions.
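At its simplest, an FAQ bot is little more than keyword matching against a predictable question set. A minimal sketch (the questions, answers and matching threshold below are invented for illustration):

```python
# Minimal FAQ bot sketch: match a query against a fixed, well-defined
# set of questions by counting overlapping keywords. All questions and
# answers below are invented for illustration.
import string

FAQS = {
    "when is the assignment deadline": "Assignments are due Friday at 17:00.",
    "how do i reset my password": "Use the 'Forgot password' link on the sign-in page.",
    "where do i find the course timetable": "Timetables are under 'My courses' on the portal.",
}

def _words(text: str) -> set:
    """Lowercase, strip punctuation, split into a set of words."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def answer(query: str, threshold: int = 2) -> str:
    """Return the best-matching canned answer, or a human hand-off."""
    query_words = _words(query)
    best_question, best_score = None, 0
    for question in FAQS:
        score = len(query_words & _words(question))
        if score > best_score:
            best_question, best_score = question, score
    if best_question and best_score >= threshold:
        return FAQS[best_question]
    return "Sorry, I don't know that one - let me pass you to a person."

print(answer("How do I reset my password?"))
```

A real deployment would add proper NLP to handle synonyms and paraphrase, but even this crude overlap score covers a surprising share of routine queries, with a hand-off to a person when it fails.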
3. Invisible LMS bot
At another level, the invisible LMS, fronted by a chatbot, allows people to ask for help and shifts formal courses into performance support, within the workflow. LearningPool’s ‘Otto’ is a good example. It sits on top of content, accessible from Facebook, Slack and other commonly used social tools. You get help in various forms, such as simple text, chunks of learning, people to contact and links to external resources, as and when you need them. Content no longer sits in a dead repository, waiting for you to sign in or take courses, but is a dynamic resource, available when you ask it something.
4. Learner engagement bot
Learners are often lazy. Students leave essays and assignments to the last minute, learners fail to do pre-work or complete courses – it’s a human failing. They need prompting and cajoling. Learner engagement bots do this, with pushed prompts to students and responses to their queries. ‘Differ’ from Norway does precisely this: it recognises that learners need to be engaged, helped, even pushed through the learning journey.
5. Learner support bot
Campus support bots or course support bots go one stage further and provide teaching support in some detail. The idea is to take the administrative load off the shoulders of teachers and trainers. Response times to emails from faculty to students can be glacial. Learner support bots can, if trained well, respond with accurate and consistent answers quickly, 24/7.
The Georgia Tech bot Jill Watson, and its descendants, responds in seconds. Indeed they had to slow its response time down to mimic the typing speed of a human. The learners, 350 AI students, didn’t guess that it was a bot and even put it up for a teaching award.
6. Tutor bots
Tutor bots are different from chatbots in terms of their goals, which are explicitly ‘learning’ goals. They retain the qualities of a chatbot – flowing dialogue, tone of voice, human-like exchange – but focus on the teaching of knowledge and skills. Straight-up teaching is another approach, where the bot behaves like a Socratic teacher, asking sprints of questions and providing encouragement and feedback. This type of bot can be used as a supplement to existing courses to encourage engagement. Wildfire, the AI content generation service, uses bots of this type to deliver actual teaching on apprenticeship content, as a supplement to courses, also built using AI, in minutes not months. Once the basic knowledge has been acquired, the bot tests the student as well as getting them to apply their knowledge.
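The ‘sprint of questions’ pattern is easy to picture in code. A minimal sketch with hypothetical question content – not any vendor’s actual implementation:

```python
# Sketch of a tutor-bot question sprint (hypothetical content): the bot
# asks a short run of open-input questions, checks each answer against
# accepted responses, and gives encouragement or corrective feedback.

SPRINT = [
    {"ask": "What does LMS stand for?",
     "accept": {"learning management system"},
     "feedback": "LMS stands for Learning Management System."},
    {"ask": "Name the AI field that lets bots understand typed questions.",
     "accept": {"nlp", "natural language processing"},
     "feedback": "It is Natural Language Processing (NLP)."},
]

def check(item: dict, reply: str) -> str:
    """Return encouragement if the reply is accepted, else the feedback."""
    if reply.strip().lower() in item["accept"]:
        return "Correct - well done!"
    return "Not quite. " + item["feedback"]

# Simulated exchange (no console input, so the sprint is driven by a list):
replies = ["Learning Management System", "deep learning"]
for item, reply in zip(SPRINT, replies):
    print(item["ask"])
    print(check(item, reply))
```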
7. Mentor bot
The point of a bot may not be simply to answer questions but to mentor learners by providing advice on how to find the information on their own, to promote problem solving. AutoMentor, by Roger Schank, is one such system, where the bot knows the context and provides not just FAQ answers but advice. Providing answers is not always the best way to teach. At a higher level, chatbots could be used to encourage problem solving and critical skills, by being truly Socratic, acting as a midwife to the student’s own behaviours and thoughts. Roger Schank is using these in defence-funded projects on cyber security.
As the dialogue gets better – drawing not only on a solid knowledge base, good learner engagement through dialogue and focused, detailed feedback, but also on critical thought, opening up perspectives and encouraging the questioning of assumptions and the veracity of sources – so critical thinking could also be taught. Bots will be able to analyse text to expose factual, structural or logical weaknesses. The absence of critical thought will be identified, along with suggestions for improving this skill by prompting further research ideas, sound sources and other avenues of thought. This ‘bot as critical companion’ is an interesting line of development.
8. Scenario-based bots
Beyond knowledge, we have the teaching and learning of more sophisticated scenarios, where knowledge can be applied. This is often absent in education, where almost all the effort is put into knowledge acquisition. It is easy to see why – it’s hard and time consuming. Bots can set up problems, prompt through a process, provide feedback and assess effort. Scenarios often involve other people, and this is where surrogate bots can come in.
9. Practice bots
Practice bots literally take the role of a customer, patient, learner or any other person, and allow learners to practise their customer care, support, healthcare or other soft skills on a responding person (the bot). Bots that act as revision bots for exams are also possible.
A bot that mimics someone can be used for practice. For example, the boy with attitude, ‘Eli’, developed by Penn State, mimics an awkward child in the classroom. It is used by student teachers to practise their skills in dealing with such problems before they hit the classroom. Duolingo uses bots, after you have gathered an adequate vocabulary, knowledge of grammar and basic competence, to allow practice in a language. This surely makes sense.
10. Wellbeing bots
If a bot is being used in any therapeutic context, its anonymity can be an advantage. From Eliza in the 60s to contemporary therapeutic bots, this has been a rich vein of bot development. There is an example of the word ‘suicidal’ appearing in a student messenger dialogue, which led to a fast intervention, as the student was in real distress. Therapeutic bots are being used in controlled studies to see if they have a beneficial effect on outcomes. Anonymity, in itself, is an advantage in such bots, as the learner may not want to expose their failings.
Bots such as ‘Ellie’ and ‘Woebot’ are already being subjected to controlled trials to examine their impact on clinical outcomes.
Bot warning
The holy grail in AI is to find generic algorithms that can be used (especially in machine learning) to solve a range of different problems across a number of different domains. This is starting to happen with deep learning (machine learning). The idea is that the teacher bot will replace the skills of a teacher: not just tutor in one subject, but be a cross-curricular teacher, especially at the higher levels of learning. It could be cross-departmental, cross-subject and cross-cultural, to produce teaching and learning free from the tyranny of the institution, department, subject or culture in which it is bound. Let’s be clear, this will not happen any time soon. AI is nowhere near solving the complex problems that this entails. If someone is promising a bot will replace a teacher – show them the door. Bots will augment, not automate, teaching.
We have to be careful about overreach here. Effective bots are not easy to build: those that must learn without good training data (in AI-speak ‘unsupervised’) are genuinely hard. On the other hand, bots trained on good data sets (in AI-speak ‘supervised’), in specific domains, are eminently possible. Another warning is that they are on a collision course with traditional Learning Management Systems, as they usually need a dynamic server-side infrastructure. As for SCORM – the sooner it’s binned the better. Bots fit more naturally into the xAPI landscape.
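To make the xAPI point concrete: an xAPI statement is just a small actor–verb–object record, which suits a bot that emits an event every time a learner asks it something. A sketch (the learner, object URI and activity name are invented; the actor/verb/object shape is the standard xAPI one):

```python
import json

# A minimal xAPI statement of the kind a chatbot could emit each time a
# learner asks it a question. The actor and object URI are invented for
# illustration; the actor/verb/object structure is the xAPI one.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "A Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/asked",
        "display": {"en-US": "asked"},
    },
    "object": {
        "id": "http://example.com/bot/questions/password-reset",
        "definition": {"name": {"en-US": "Password reset question"}},
    },
}

# Statements are posted as JSON to a Learning Record Store (LRS):
print(json.dumps(statement, indent=2))
```

Because any verb and any activity can be recorded, a chatbot conversation maps onto xAPI far more naturally than onto SCORM’s course-completion model.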
Chatbots have real potential in a number of learning activities, all along the learning journey – not as a general ‘teacher’ but in specific applications within specific domains. They need to be trained, built, tested and improved, which is no easy task, but their efficacy in reducing the workload of teachers, trainers, lecturers and administrators is clear. The dramatic advances in Natural Language Processing have given us Siri, Amazon Echo and Google Home. It is a rapidly developing field of AI and promises to deliver chatbot technology that is better and cheaper by the month.
A bot does not have the limitations of a human – forgetting, poor recall, cognitive bias, cognitive overload, getting ill, sleeping eight hours a day, retiring and dying – so once on the way to acquiring even limited skills, it will only get better and better. The more students that use its service, the better it gets, not only at what it teaches but at how it teaches. Courses will be fine-tuned to eliminate weaknesses and finessed to produce better outcomes.
We have seen how online behaviour has moved from flat page-turning (websites) to posting (Facebook, Twitter) to messaging (texting, Messenger). We have seen how the web has become more natural and human, as interfaces (using AI) have become more frictionless and invisible, conforming to our natural form of communication – dialogue – through text or speech.
Learning takes effort. Personalised dialogue reframes learning as an exploratory, yet still structured, process where the teacher guides and the learner has to make the effort. Taking the friction and cognitive load of the interface out of the equation means the teacher and learner can focus on the task and effort needed to acquire knowledge and skills. This is the promise of bots. But the process of adoption will be gradual.

Finally, this is a form of technology that teachers can appreciate, as it truly tries to improve on what they already do. It takes good teaching as its standard and tries to support and streamline it, to produce faster and better outcomes at a lower cost. It takes the admin and pain out of teaching. Bots are here, and more are coming.


Thursday, December 14, 2017

7 solid reasons to suppose that chatbot interfaces will work in learning

In Raphael’s The School of Athens, various luminaries stand or sit in poses on the steps, but look to the left of Plato and Aristotle and you’ll see a poor-looking figure in a green robe talking to people – that’s Socrates. Most technology in teaching has run against the Socratic grain – the blackboard, for instance, turning teachers into preachers and lecturers. With chatbots we may be seeing the return of the Socratic method.
This return is being enabled by AI, in particular Natural Language Processing, but also other AI techniques such as adaptive learning, machine learning and reinforcement learning. AI is largely invisible, but it does have to reveal itself through its user interface. AI is the new UI, but because the AI is doing a lot of the smart, behind-the-scenes work, it is best fronted by a simple interface – the simpler the better. The messenger interface seems to have won the interface wars, transcending menus and even social media. Simple Socratic dialogue seems to have risen, through a process of natural selection, as THE interface of choice, especially on mobile.
So can this combination of AI and Socratic UI have an application in learning? There are several reasons for being positive about this type of interface in learning.
1. Messaging is the new interface
We know that messaging, the interface used by chatbots, has overtaken that of social media over the last few years, especially among the young. Look at the mobile home screen of any young person and you’ll see the dominance of chat apps. The Darwinian world of the internet is the perfect testing ground for user interfaces and messaging is what you are most likely to see when looking over the shoulder of a young person.
So one could argue that for younger audiences, chatbots are particularly appropriate, as they already use this as their main form of communication. They have certainly led the way in its use but one could also argue that there are plenty of reasons to suppose that most other people like this form of interface.
2. Frictionless
Easy to use, it allows you to focus on the message, not the medium. The world has drifted towards messaging for the simple reason that it is simple. By reducing the interface to its bare essentials, the learner can focus on the more important task of communication and learning. All interfaces aim to be as frictionless as possible and, apart from speculative mind-reading from the likes of Elon Musk with Neuralink, this is as bare bones as one can get.
3. Reduces cognitive load
Messaging is simple – a radically stripped-down interface that anyone can use. It requires almost no learning and mimics what we all do in real life: simply dialogue. Compared to any other interface it is low on cognitive load. There is little other than a single field into which you type, so it goes at your pace. What also matters is the degree to which it makes use of NLP (Natural Language Processing) to really understand what you type (or say).
4. Chunking
One of the joys of messaging, and one of the reasons for its success, is that it is succinct. It is by its very nature chunked. If it were not, it wouldn’t work. Imagine being on a flight with someone: you ask them a question and get a one-hour lecture in return. Chatbots chat, they don’t talk at you.
5. Media equation
In a most likely apocryphal story, when Steve Jobs presented the Apple Mac screen to Steve Wozniak, Jobs had programmed it to say ‘Hello…’. Wozniak thought it unnecessary – but who was right? We want our technology to be friendly, easy to use, almost our companion. This is as true in learning as it is in any other area of human endeavour.
Nass & Reeves, in The Media Equation, did 35 studies to show that we attribute agency to technology, especially computers. We anthropomorphise technology in such a way that we think the bot is human, or at least exhibits human attributes. Our faculty of imagination finds this easy, as witnessed by our ready ability to suspend disbelief in the movies or when watching TV. It takes seconds and works in our favour with chatbots, as dialogue is a natural form of human behaviour and communication.
6. Anonymity
If you have qualms about chat replacing human activity, remember also that many learners are reluctant to ask their tutor, lecturer, manager or boss questions, for fear of embarrassment, as it may reveal their lack of knowledge. Others are simply quiet, even introverts. Anonymous learning, through a chatbot, then becomes a virtue, not a vice. Wellbeing bots may also want to preserve anonymity. In this sense, chatbots may be superior to live, human teachers and bosses. Time and time again we see how technology is preferred to human contact – ATMs, online retail and so on. In learning, in some circumstances, we witness the same phenomenon.
7. Audio possible
The brain is a social organ: it likes to receive things in chunks and interact when learning. We are social apes, grammatical geniuses at age 3, and learn to listen and speak long before we learn to read and write (which take years). Chatbots such as Siri and Alexa already exist and, with the addition of text-to-speech and speech-to-text, turn chat into the exchange of speech. Reading and writing are replaced by listening and speaking.
Of course, one must be careful here, as chatbots have real limitations. They work best in narrow domains, with a clear purpose. Their ability to deliver full, sustained dialogue is limited. Nevertheless, they can deliver learning functions right across the learning journey: onboarding, learner engagement, learner support, mentoring, teaching, assessment, practice and wellbeing.

Chatbot interfaces can be fully scripted, using no natural language processing at all, or they can use varying levels of NLP to allow for variations in input. At the simplest level this means coping with synonyms and different word order. Larger services from the big players, such as IBM and Microsoft, offer much more naturalistic interfaces. Whatever your choice, regard the dialogue interface as something separate.
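A minimal sketch of those first two levels – a fully scripted lookup, plus a thin layer that tolerates synonyms and word order (all phrases and synonym mappings below are invented for illustration):

```python
# Sketch of the two simplest levels: a fully scripted lookup
# (keyword set -> reply) with a light layer on top that tolerates
# synonyms and different word order. All phrases are invented examples.

SYNONYMS = {"pwd": "password", "change": "reset", "login": "password"}
SCRIPT = {
    frozenset({"reset", "password"}): "Use the 'Forgot password' link.",
    frozenset({"book", "webinar"}): "Webinars can be booked on the events page.",
}

def normalise(text: str) -> frozenset:
    """Lowercase, map synonyms, and drop word order."""
    words = [SYNONYMS.get(w, w) for w in text.lower().split()]
    return frozenset(words)

def reply(text: str) -> str:
    key = normalise(text)
    for pattern, answer in SCRIPT.items():
        if pattern <= key:  # all scripted keywords present, in any order
            return answer
    return "I didn't catch that - could you rephrase?"

print(reply("password reset"))   # different word order still matches
print(reply("change my pwd"))    # synonyms map onto the scripted phrase
```

Beyond this you are into real NLP – intent classification and entity extraction – which is where the bigger players’ services come in.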


Fully Connected by Julia Hobsbawm – I wish I hadn’t

Having seen Julia get torn to pieces by an audience in Berlin, I decided to give the book a go. But first, Berlin. After an excruciating anecdote about being in the company of Royalty in St James’s Palace and meeting Zac Goldsmith (it made no sense, other than as name dropping), she laid out the ideas in her book, describing networks as including Facebook, Ebola and Zika – all basically the same thing, a ridiculous conflation of ideas. “All this social media is turning us into sheep,” she bleated. Then she asked, “How many of you feel unhappy in your jobs?” Zero hands went up. Oh dear, try again. “How many of you feel overloaded?” Three hands in a packed room. Oops, that punctured the proposition. She then made a quick retreat into some ridiculous generalisations about being the first to really look at networks, and that Trump should be thrown off Twitter (a strong anti-freedom-of-expression line here – a bit worrying). Basically playing the therapeutic contrarian. The audience were having none of it, many of them experts in this field.
Then came the blowback. Stephen Downes, who knows more than most on the subject of networks, was blunt: “Everything you’ve said is just wrong.” Wow. He then explained that there is a large literature on networks, that the subject has been studied in depth, and that she was low on knowledge and research. He was right. Andrew Keen, on Stephen Downes’ accusation that Hobsbawm was flaky on assumptions and research: “Good – glad to see someone with a hard-hitting point...” Claire Fox then joined the fray, pointing out that this contrarian stuff smacks of hysteria – it’s all a bit preachy and mumsy.
So, fast forward, I’m back from Berlin and bought the book – Fully Connected. To be fair I wanted to read the work for myself. Turns out the audience were right. 
Fully Connected
The Preface opens with a tale about Ebola, setting the whole ‘networks are diseased and I have the cure’ tone of the book. “Culture, diseases, ideas: they’re all about networks,” says Hobsbawm. Wow – she’s serious and really does want to conflate these things, just to set up the self-help advice. What follows is a run-through of well-worn material on Moore’s Law, Stanley Milgram, Six Degrees of Separation, Taleb’s Black Swan, Tom Peters, Peter Drucker… punctuated by anecdotes about her and her family. It’s a curious mixture of dull, middle-class anecdotes and old-school material, without any real analysis or insight.
Ah, but here comes her insight – her new term ‘social health’. All is revealed. Her vision is pathological, the usual deficit view of the modern world. All of you out there are wrapped up in evil spiders’ webs, diseased, and I have the cure. Her two big ideas are The Way to Wellbeing and The Blended Self. All of this is wrapped up in pseudo-medical nonsense: information obesity, time starvation, techno-spread, organisational bloat. It’s like a bad diet book where you’re fed a diet of bad metaphors. Her ‘Hexagon’ of social health is the diagnosis and cure, as she puts herself forward as the next Abraham Maslow – replacing the pyramid with a hexagon – we’re networked, geddit?
Part two is even worse. The usual bromides around Disconnecting, Techno-shabbats, Designing your honeycomb, The knowledge dashboard. Only then do you realise that this is a really bad self-help book, based on a few personal anecdotes and no research whatsoever.

The postscript says it all – a rambling piece about the Forth Road Bridge. I grew up in the town beneath that bridge and saw it built, but even I couldn’t see what she was on about. There are some serious writers in this area, like Andrew Keen and Nicholas Carr; Julia is not one of them.


Tuesday, December 12, 2017

Invisible LMS: the LMS is not dead, but it needs to be invisible – front it with a chatbot

Google is almost invisible. As the most powerful piece of back-end consumer software ever built, it hides behind a simple letterbox. Most successful interfaces follow this example of Occam’s Razor – the minimum number of entities to reach your goal.
Sadly, the LMS does the opposite. Often difficult to access and navigate, it looks like something from the 90s – that’s because it is something from the 90s. The clue is in the middle word: ‘management’. The LMS is largely about managing learners and learning, not engagement. But there’s a breakthrough. What we are seeing now are Learning ENGAGEMENT Systems. It is not that the functionality of an LMS is flawed, but its UI/UX most certainly is. Basically a repository, the LMS is insensitive to performance support and learning embedded in the workflow, and makes people do far too much work. It puts obstacles in the way of learning and fails the most basic demands for data, as it is trapped in the hideously inadequate SCORM standard.
First up - we must stop seeing employees as learners. No one calls anyone a learner in real life, no one sees themselves as learners in real life. People are people, doing a job. It’s why I’m allergic to the ‘lifelong learning’ evangelists who often see life as a lifelong course, or life coaches – get a life, not a coach.
So how could we make the LMS more invisible, while retaining and improving functionality?
First up, get rid of the multiple sign-ons (to be fair, most have), nested menus, lists of courses and general noise. Talk to people. When people want to know something they usually ask someone. So front your LMS/VLE with a chat function. Most young people have already switched to messaging, away from email and even traditional social media.
This is the real home screen of a real person; she’s 19. There isn’t even a browser or phone icon – it’s largely messaging. Dialogue is our most natural form of communication, so front learning with dialogue. A chat interface also dramatically reduces cognitive overload. This is why it is so popular – it is easy to use and seems natural.
Meet Otto
Otto, from Learning Pool, is the best example I’ve seen of this. Ask a question and either a human or the back-end LMS (now invisible) will respond and find the relevant answer, resource or learning experience. It can access simple text answers, pieces of e-learning and/or external resources. So, when someone comes across something they don’t understand or need to know, for whatever reason, they have the opportunity simply to ask, and the chatbot will respond, either with a quick answer or with a flow of questions that try to pinpoint what you really need. If the system can’t deliver, it knows someone who can.

It’s not just the LMS that can be made invisible, it’s the whole structure of ‘learning’ – the idea that learning is something separate, formal and done in courses. Training gets a bad rap for a reason – it’s all a bit, well, dull and inflexible. At one point in my life I point-blank refused to be in a room with round tables, a flipchart, coloured pens and a bowl of mints for inspiration. The sooner that becomes invisible the better. Book a webinar on chatbots in learning here.
