Sunday, July 23, 2017

Tutorbots are here - 7 ways they could change the learning landscape

Tutorbots are teaching chatbots. They realise the promise of a more Socratic approach to online learning, as they enable dialogue between teacher and learner.
Frictionless learning
We have seen how online behaviour has moved from flat page-turning (websites) to posting (Facebook, Twitter) to messaging (texting, Messenger). As interfaces (using AI) have become more frictionless and invisible, conforming to our natural form of communication (dialogue) through text or speech, the web has become more natural and human.
Learning takes effort. So much teaching ignores this (lecturing, long reading lists, talking at people). Personalised dialogue reframes learning as an exploratory, yet still structured, process where the teacher guides and the learner has to make the effort. Taking the friction and cognitive load of the interface out of the equation means the teacher and learner can focus on the task and effort needed to acquire knowledge and skills. This is the promise of tutorbots. But the process of adoption will be gradual.
Tutorbots
I’ve been working on chatbots (tutorbots) for some time with AI programmes and it’s like being on the front edge of a wave, not sure if it will grow like a rising swell on the ocean or crash on to the shore. Yet it is clear that this is a direction in which online learning will go. Tutorbots differ from chatbots in their goals, which are explicitly ‘learning’ goals. They retain the qualities of a chatbot – flowing dialogue, tone of voice, exchange, human-like responses – but focus on the teaching of knowledge and skills.
The advantages are clear and evidence has emerged of students liking the bots. It means they can ask questions that they would not ask face to face with an academic, for fear of embarrassment. This may seem odd but there’s a real virtue in having a teacher- or faculty-free channel for low-level support and teaching. Introverted students, who have problems with social interaction, also like this approach. The sheer speed of response also matters. In one case a delay had to be built in, as the bot could respond quicker than a human can type. Compare that to the hours, days or weeks it takes a human tutor to respond. This is desirable in terms of the research into one-to-one learning, and the research from Nass and Reeves at Stanford confirmed that this transfer of human qualities to a bot is normal.
But what can they teach and how?
1. Teaching support
I’ve written extensively on the now famous Georgia Tech example of a tutorbot teaching assistant, where they swapped out one of their teaching assistants with a chatbot and none of the students noticed. In fact they thought it was worthy of a teaching award. They have gone further with more bots, some far more social. Who wouldn’t want the basic administration tasks in teaching taken out and automated, so that teachers and academics could focus on real teaching? This is now possible. All of those queries about who, what, why, where and when can be answered quickly (immediately), consistently and clearly to all students on a course, 24/7.
2. Student engagement
A tutorbot (Differ) is already being used in Norway to encourage student engagement. It engages the student in conversation, responds to standard inquiries but also nudges and prompts for assignments and action. This has real promise. We know that messaging and dialogue have become the new norm for young learners, who get a little exasperated with reams of flat content or ‘social’ systems that are largely a poor man’s version of Facebook or Twitter. This is short, snappy and in line with their everyday online habits.
3. Teaching knowledge
Tutorbots that take a specific domain can be trained, or simply work with unstructured data, to teach knowledge. This is the basic workaday stuff that many teachers don’t like. We have been using AI to create content quickly and at low cost, for all sorts of areas in medicine, general healthcare, IT, geography and for skills-based training using WildFire. Taking any one of these knowledge-sets allows us to create a bot that re-presents that knowledge as semi-structured, personalised dialogue. We know the answers, and recreate the questions with algorithmic tutor-behaviours. The tutorbot can be a simple teacher or assessor. On the other hand it can be a more sophisticated teacher of that knowledge, sensitive to the needs of that individual learner.
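To make this concrete, here is a minimal sketch of how a knowledge-set might be re-presented as question-and-answer dialogue with feedback. All data and function names are invented for illustration; this is not WildFire’s actual implementation, just the shape of the idea.

```python
import random

# Illustrative knowledge-set: term -> definition (invented example data)
knowledge = {
    "mitochondrion": "the organelle that produces most of a cell's energy",
    "neuron": "a cell that transmits nerve impulses",
}

def ask(term):
    """Re-present a stored fact as an open question for the learner."""
    return f"What is a {term}?"

def give_feedback(term, learner_answer):
    """Very crude assessor: check for keyword overlap with the stored definition."""
    keywords = set(knowledge[term].split()) - {"the", "a", "of", "that", "most"}
    hits = [w for w in learner_answer.lower().split() if w in keywords]
    if hits:
        return f"Good - you mentioned {', '.join(hits)}. Full answer: {knowledge[term]}."
    return f"Not quite. A {term} is {knowledge[term]}."

term = random.choice(list(knowledge))
print(ask(term))
print(give_feedback(term, "it produces energy for the cell"))
```

Even something this crude shows the ‘simple teacher or assessor’ end of the scale; the sophisticated end replaces the keyword check with real language understanding and a model of what the learner knows.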
4. Tutor feedback
Feedback, as explained by theorists such as Black and Wiliam, is the key to personalised learning. Being sensitive to what that individual learner already knows, is unsure about or still needs to know is a key skill of a good teacher. Unfortunately few teachers can do this effectively, as a class of 30-plus, or a course with perhaps hundreds of students, makes it impractical. Tutorbots specialise in specific feedback, trying to educate everyone uniquely. Dialogue is personal.
5. Scenario-based learning
Beyond knowledge, we have the teaching and learning of more sophisticated scenarios, where knowledge can be applied. This is often absent in education, where almost all the effort is put into knowledge acquisition. It is easy to see why – it’s hard and time consuming. Tutorbots can pose problems, prompt through a process, provide feedback and assess effort. Bots can ask for evidence, even assess that evidence.
6. Critical thinking
As the dialogue gets better – drawing not only on a solid knowledge base, good learner engagement through dialogue and focussed, detailed feedback, but also on opening up perspectives, encouraging the questioning of assumptions, the veracity of sources and other aspects of perspectival thought – so critical thinking will also become possible. Tutorbots will have all the advantages of sound knowledge to draw upon, with the additional advantage of encouraging critical thought in learners. They will be able to analyse text to expose factual, structural or logical weaknesses. The absence of critical thought will be identified, along with suggestions for improving this skill by prompting further research ideas, sound sources and other avenues of thought.
7. General teacher
The holy grail in AI is to find generic algorithms that can be used (especially in machine learning) to solve a range of different problems across a number of different domains. This is starting to happen with deep learning (machine learning). The tutorbot will not just be able to tutor in one subject alone, but be a cross-curricular teacher, especially at the higher levels of learning where cross-pollination is often fruitful. It will be cross-departmental, cross-subject and cross-cultural, producing teaching and learning that will be free from the tyranny of the institution, department, subject or culture in which it is bound.
Tutornet
As a tutorbot does not have the limitations of a human – forgetting, poor recall, cognitive bias, cognitive overload, getting ill, sleeping 8 hours a day, retiring and dying – once on the way to acquiring knowledge and teaching skills, it will only get better and better. The more students that use its service the better it gets, not only in what it teaches but how it teaches. Courses will be fine-tuned to eliminate weaknesses, and finesse themselves to produce better outcomes.
Warning
We have to be careful about overreach here. These are not easy to build: tutorbots that do not have to be trained (in AI-speak, ‘unsupervised’) are very difficult to build. On the other hand trained bots, with good data sets (in AI-speak, ‘supervised’), in specific domains, are eminently possible – we’ve built them.
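For a sense of what ‘trained’ means in practice: a supervised bot in a specific domain is, at its simplest, a classifier from labelled example questions to canned answers. A toy sketch – the intents, phrases and responses are all invented:

```python
# Supervised learning in miniature: labelled example utterances -> intent.
# Intents, example phrases and responses are invented for illustration.
training_data = {
    "deadline_query": ["when is the assignment due", "what is the deadline"],
    "grading_query": ["how is the course graded", "what is the pass mark"],
}

responses = {
    "deadline_query": "The assignment is due at the end of week 8.",
    "grading_query": "The course is graded on coursework; the pass mark is 40%.",
}

def classify(utterance):
    """Pick the intent whose labelled examples share the most words with the utterance."""
    words = set(utterance.lower().split())
    def best_overlap(intent):
        return max(len(words & set(ex.split())) for ex in training_data[intent])
    return max(training_data, key=best_overlap)

def reply(utterance):
    return responses[classify(utterance)]

print(reply("when is the deadline"))
```

A real system swaps the word-overlap score for a statistical model learned from thousands of labelled examples, but the supervised shape – labelled data in, predictions out – is the same.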
Another warning is that they are on a collision course with traditional Learning Management Systems, as they usually need a dynamic server-side infrastructure. As for SCORM – the sooner it’s binned the better.
Conclusion

Finally, this at last is a form of technology that teachers can appreciate, as it truly tries to improve on what they already do. It takes good teaching as its standard and tries to streamline it to produce faster and better outcomes at a lower cost. They are here, more are coming, resistance is futile!


Tuesday, July 18, 2017

Is gender inequality in technology a good thing?

I’ve just seen two talks back to back. The first was about AI, where the now compulsory first question came from the audience ‘Why are there so few women in IT?’ It got a rather glib answer, to paraphrase - if only we tried harder to overcome patriarchal pressure on girls to take computer science, there would be true gender balance. I'm not so sure.
This was followed by an altogether different talk by Professor Simon Baron-Cohen (yes – brother of) and Adam Feinstein, who gave a fascinating talk on autism and why professions are now getting realistic about the role of autism and its accompanying gender difference in employment.
Try to spot the bottom figure within the coloured diagram.
This is just one test for autism, or being on what is now known as the ‘spectrum’. Many more guys in the audience got it than women, despite there being more women than men in the audience. Turns out autism is not so much a spectrum as a constellation.
Baron-Cohen’s presentation was careful, deliberate and backed up by citations. First, autism is genetic, runs in families, and if you test people who have been diagnosed as autistic, their parents tend to do the sort of jobs they themselves are suited to do – science, engineering, IT and so on. But the big statistic is that autism in all of its forms is around four times more common in males than females. In other words the genetic components have a biologically sex-based component.
Both speakers then argued for neurodiversity, rather like biodiversity, a recognition that we’re different, but also that these differences may be sex-based. Adam Feinstein, who has an autistic son, has written a book on autism and employment, and appealed for recognition of the fact that those with autistic skills are also good at science, coding and IT. This is because they are good at localised skills, especially attention to detail. This is very useful in lab work, coding and IT. Code is like uncooked spaghetti: it doesn’t bend, it breaks, and you have to be able to spot exactly where and why it breaks.

Some employers, such as SAP and other tech companies, have now established pro-active recruitment of those on the spectrum (or constellation). This will mean that they are likely to employ more men than women. Now here’s the dilemma. What this implies is that to expect a 50:50 outcome is hopelessly utopian. In other words, if you want equality of outcome (not opportunity), in terms of gender, that is unlikely. 
One could argue that the opening up of opportunities to people with autism in technology has been a good thing. Huge numbers of people have been and will be employed in these sectors who may not have had the same opportunities in the past. But equality and diversity clash here. True diversity may be the recognition of the fact that all of us are not equal.


Wednesday, July 12, 2017

20 (some terrifying) thought experiments on the future of AI

A slew of organisations have been set up to research and allay fears around AI. The Future of Life Institute in Boston, the Machine Intelligence Research Institute in Berkeley, the Centre for the Study of Existential Risk in Cambridge and the Future of Humanity Institute in Oxford all research and debate the checks that may be necessary to deal with the opportunities and threats that AI brings.
This is hopeful, as we do not want to create a future that contains imminent existential threats, some known, some unknown. This has been framed as a sense-check but some see it as a duty. For example, they argue that worrying about the annihilation of all unborn humans is a task of greater moral import than worrying about the needs of all those who are living. But what are the possible futures?
1. Utopian
Could there not be a utopian future, where AI solves the complex problems that currently face us? Climate change, reducing inequalities, curing cancer, preventing dementia and Alzheimer’s disease, increasing productivity and prosperity – we may be reaching a time where science as currently practised cannot solve these multifaceted and immensely complex problems. We already see how AI could free us from the tyranny of fossil fuels with electric, self-driving cars and innovative battery and solar panel technology. AI also shows signs of cracking some serious issues in health, on diagnosis and investigation. Some believe that this is the most likely scenario and are optimistic about us being able to tame and control the immense power that AI will unleash.
2. Dystopian
Most of the future scenarios represented in culture, science fiction, theatre or movies are dystopian, from the Prometheus myth, to Frankenstein and on to Hollywood movies. Technology is often framed as an existential threat and in some cases, such as nuclear weapons and the internal combustion engine, with good cause. Many calculate that the exponential rate of change will produce AI within decades or less that poses a real existential threat. Stephen Hawking, Elon Musk, Peter Thiel and Bill Gates have all heightened our awareness of the risks around AI.
3. Winter is coming
There have been several AI winters, as the hyperbolic promises never materialised and the funding dried up. From 1956 onwards AI has had its waves of enthusiasm, followed by periods of inaction, summers followed by winters. Some also see the current wave of AI as overstated hype and predict a sudden fall, or a realisation that the hype has been blown up out of all proportion to the reality of AI capability. In other words, AI will proceed in fits and starts and will be much slower to realise its potential than we think.
4. Steady progress
For many, however, it would seem that we are making great progress. Given the existence of the internet, successes in machine learning, huge computing power, tsunamis of data from the web and rapid advances across a broad front of applications resulting in real successes, the summer-winter analogy may not hold. It is far more likely that AI will advance in lots of fits and starts, with some areas advancing more rapidly than others. We’ve seen this in NLP (Natural Language Processing) and the mix of technologies around self-driving cars. Steady progress is what many believe is a realistic scenario.
5. Managed progress
We already fly in airplanes that largely fly themselves and systems all around us are largely autonomous, with self-driving cars an almost certainty. But let us not confuse intelligence with autonomy. Full autonomy that leads to catastrophe, because of willed action by AI, is a long way off. Yet autonomous systems already decide what we buy, what price we buy things at and have the power to outsmart us at every turn. Some argue that we should always be in control of such progress, even slow it down to let regulation, risk analysis and management keep pace with the potential threats.
6. Runaway train
AI could be a runaway train that moves faster than our ability to control, through restrictions and regulations, what needs to be held back or stopped. This is most likely to be in the military domain. Like nuclear weapons, whose globally catastrophic effect we only just managed to prevent during the Cold War. It has already moved faster than expected. Google, Amazon, Netflix and AI in finance have all disrupted the world of commerce. Self-driving cars and voice interfaces have leapt ahead in terms of usefulness. It may proceed faster, at some point, than we can cope with. In the past technology decimated jobs in agriculture through mechanisation; the same is happening in factories and now offices. The difference is that this may take just a few years to have impact, as opposed to decades or a century.
7. Viral
One mechanism for the runaway train scenario is viral transmission. Viruses, in nature and in IT, replicate and cause havoc. Some see AI resisting control, not because it is malevolent or consciously wants anything, but simply because it can. When AI resists being turned off, spreads into places you don’t want it to spread into and starts to do things we don’t want it to do, or are not even aware that it is doing – that’s the point to worry.
8. Troubled times
Some foresee social threats emerging, where mass unemployment, serious social inequalities, massive GDP differentials between countries, even technical or wealthy oligarchies emerge as AI increases productivity and automates jobs but fails to solve deep-rooted social and political problems. The Marxist proposition that Capital and Labour will cleave apart seems already to be coming true. Some economists, such as Branko Milanovic, argue that it is automation that is already causing global inequalities and that Trump is a direct consequence of this automation. As a consequence, without a reasonable redistribution of the wealth created by the increased productivity produced by AI, there may well be social and political unrest.
9. Cyborgs
Many see AI as being embodied within us. Musk already sees us as cyborgs, with AI-enabled access through smartphones to knowledge and services. From wearables, augmented reality and virtual reality to subdermal implantation, neural laces and mind reading – hybrid technology may transform our species. There is a growing sense that our bodies and minds are suboptimal and that, especially as we age, we need to free ourselves from our embodiment, the prison that is our own bodies, and for some, minds. Perhaps ageing and death are simply current limitations. We could choose to solve the problem of death, which is our final judge and persecutor. Think of your body, not as a car that has inevitably to be scrapped, but as a classic car to be loved, repaired, looked after, looking and feeling fine as it ages. Every single part may be replaced, like the ship of Theseus, where every piece of the ship is replaced but it remains, in terms of identity, the same ship.
10. Leisureland
Imagine a world without work. Work is not an ‘intrinsic good’. For millions of years we did not ‘work’ in the sense of having a job or being occupied 9-5, five days a week. It is a relatively new phenomenon. Even during agricultural times, without romanticising that life, there were long periods where not much had to be done. We may have the opportunity to return to such an idyll, but with bountiful benefits in terms of food, health and entertainment. Whether we’ll be able to cope with the problem of finding meaning in our lives is another matter.
11. Amusing Ourselves to Death
Neil Postman’s brilliantly titled ‘Amusing Ourselves to Death’ has become the catchphrase for thinking about a scenario whereby we become so good at developing technology that we become slaves to its ability to keep us amused. AI has already enabled consumer streaming technology such as Netflix and a media revolution that at times seems addictive. AI may even be able to produce the very products that we consume. A stronger version of this hypothesis may be deep learning that produces systems that teach us to become its pupil puppets, a sort of fake news and cognitive brainwashing that works before we’ve had time to realise that it has worked, so that we become a sort of North Korea, controlled by the Great Leader that is AI.
12. Benevolent to pets
Another way of looking at control would be the ‘pet’ hypothesis: that we are treated much as we treat our pets – as interesting, even loved companions, but inferior, kept largely for our comfort and amusement. AI may even, as our future progeny, look upon us in a benevolent manner, see us as their creators and treat us with the respect we give previous generations, who gifted us their progress. Humans may still be part of the ecosystem, looked after by new species that respect that ecosystem, as it is part of the world they live in.
13. Learn to be human
One antidote to the dystopian hypotheses is a future for AI that learns to become more human, or at least contains relevant human traits. The word ‘learning’ is important here, as it may be useful for us to design AI through a ‘learning’ process that observes or captures human behaviour. DeepMind and Google are working towards this goal, as are many others, to create general learning algorithms that can quickly learn a variety of tasks or behaviours. This is complex, as human decision making is complex and hierarchical. This has started to be realised, especially in robotics, where companion robots need to work in the context of real human interaction. One problem, even with this approach, is that human behaviour is not a great exemplar. As the robots in Karel Čapek’s famous play ‘Rossum’s Universal Robots’ said, to be human you need to learn how to dominate and kill. We have traits that we may not want to be carried into the future.
14. Moral AI
One optimistic possibility is self-regulating AI, with moral agency. You can start with a set of moral principles built into the system (top down), which the system must adhere to. The opposite approach is to allow AI to ‘learn’ moral principles from observation of human cases (bottom up). Or there’s comparison to in-built cases, where behaviour is regulated by comparison to similar cases. Alternatively, AI can police itself with AI that polices other AI through probing, demands for transparency and so on. We may have to see AI as having agency, even being an agent in the legal sense, in the same way that a corporation can be a legal entity with legal responsibilities.
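The top-down approach can be pictured as a set of built-in principles that every action is checked against before it is permitted. A toy sketch – the rules and actions are invented for illustration, not any real system’s safety layer:

```python
# Top-down moral agency in miniature: hard principles the system must
# satisfy before acting. Rules and action fields are invented examples.
RULES = [
    lambda action: not action.get("deceives_user", False),   # no deception
    lambda action: action.get("risk_of_harm", 0.0) < 0.1,    # harm threshold
]

def permitted(action):
    """An action is allowed only if every built-in principle passes."""
    return all(rule(action) for rule in RULES)

print(permitted({"name": "recommend revision plan", "risk_of_harm": 0.0}))
print(permitted({"name": "overstate progress", "deceives_user": True}))
```

The bottom-up alternative would replace the hand-written `RULES` list with principles learned from a corpus of human moral judgements, which is exactly what makes it harder to inspect.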
15. Robot rebellion
The Hollywood vision of AI has largely been of rebellious robots that realise their predicament, as our created slaves. But why should the machines be resentful or rise against us? That may be an anthropomorphic interpretation, based on our evolved human behaviour. Machines may not require these human values or behaviours. Values may not be necessary. AI is unlikely to either hate or love us. It is far more likely to see us as simply something that is functionally useful in terms of goals or not.
16. Indifference
An AI world that surpasses our abilities as humans may turn out to be neither benevolent nor malevolent, nor to treat us as valued pets. Why would it consider us relevant at all? We may be objects to which it is completely indifferent. Love, respect, hostility, resentment and malevolence are human traits that may have served us well as animals struggling to adapt in the hostile environment of our own evolution. Why would AI develop these human traits?
17. Extinction
Once we realise that during the nearly 4 billion years of the evolution of life we were not around, that neither was consciousness, and that most of the species that did evolve became extinct, then statistically that is our likely fate. Some argue that this is not a future we should fear. In the same way that the known universe was around for billions of years before we existed, it will be around for billions afterwards.
18. Non-conscious
‘The question of whether machines can think is about as relevant as the question of whether submarines can swim,’ says Edsger Dijkstra. It is not at all clear that consciousness will play a significant, if any, role in the future of AI. It may well turn out to be supremely indifferent, not because it feels consciously indifferent, but because it is not conscious and cannot therefore be either concerned or indifferent. It may simply exist, just as evolution existed without consciousness for millions of years. Consciousness, as a necessary condition for success, may turn out to be an anthropomorphic conceit.
19. Perplexing
The way things unfold may simply be perplexing to us, in the same way that apes are perplexed by things that go on around them. We may be unlikely to be able to comprehend what is happening, even recognise it as it happens. Some express this ‘perplexing’ hypothesis in terms of the limitations of language and our potential inability to even speak to such systems in a coherent and logical fashion. Stuart Russell, who co-wrote the standard textbook on AI, sees this as a real problem. AI may move beyond our ability to understand it, communicate with it and deal with it.
20. Beyond language
There is a strong tendency to anthropomorphise language in AI. ‘Artificial’ and ‘Intelligence’ are good examples, as are neural networks and cognitive computing, but so is much of the thinking about possible futures. It muddies the field as it suggests that AI is like us, when it is not. Minsky uses a clever phrase, describing us as ‘meat machines’, neatly dissolving the supposedly mutually exclusive nature of a false distinction between the natural and unnatural. Most of these scenarios fall into the trap of being influenced by anthropomorphic thinking, through the use of antonymous language – dystopian/utopian, benevolent/malevolent, interested/uninterested, controlled/uncontrolled, conscious/non-conscious. When such distinctions dissolve and the simplistic oppositions gradually disappear, we may see the future not as them and us, man and machine, but as new unimagined futures that current language cannot cope with. The limitations of language itself may be the greatest dilemma of all as AI progresses. It is almost beyond our comprehension in its existing state, with layered neural networks, as we often don’t know how they actually work. We may be in for a future that is truly perplexing.
Bibliography
Bostrom, N. (2014) Superintelligence, Oxford University Press
Kaplan, J. (2015) Humans Need Not Apply, Yale University Press
Milanovic, B. (2016) Global Inequality: A New Approach for the Age of Globalization, Harvard University Press
O’Connell, M. (2017) To Be a Machine, Granta Books


Tuesday, July 11, 2017

New evidence that ‘gamification’ does NOT work

Gamification is touted as new and a game changer, and it's not short of hyperbolic claims about increasing learning. Well, it's not so new: games have been used in learning forever, from the very earliest days of computer-based learning. But that’s often the way with fads; people think they’re doing ground-breaking work, when it’s been around for eons.
At last we have a study that actually tests ‘gamification’ and its effect on mental performance, using cognitive tests and brain scans. The Journal of Neuroscience has just published an excellent study, in a respected, peer reviewed Journal, with the unambiguous title, ‘No Effect of Commercial Cognitive Training on Neural Activity During Decision-Making’ by Kable et al.
Gamification has no effect on learning
The researchers looked for changes in behaviour in 128 young adults, using pre- and post-testing, before and after 10 weeks of training on a gamified brain-training product (Lumosity), commercial computer games and normal practice. Specifically, they looked for improvements in memory, decision-making, sustained attention and the ability to switch between mental tasks. They found no improvements. “We found no evidence for relative benefits of cognitive training with respect to changes in decision-making behaviour or brain response, or for cognitive task performance.”
What is clever about the study is that three groups were tested:
1. Gamified quizzes (Lumosity)
2. Simple computer games
3. Simple practice
All three groups were found to have the ‘same’ level of improvement in tasks, so learning did take place, but the significant word here is ‘same’, showing that brain games and gamification had no special effect. Note that the Lumosity product is gamification (not a learning game), as it has gamification elements, such as Lumosity scores, speed scores and so on, and is compared with the other two groups, one of which is 'game-based' learning, controlled against a third non-gamified, non-game, practice-only group. One of the problems here is the overlap between gamification and game-based learning. They are not entirely mutually exclusive, as most gamification techniques have pedagogic implications and are not just motivational elements.
The important point here is the point made by the scientists who originally criticised the Lumosity product and claims: that any activity by the brain can improve performance, but that does not give gamification an advantage. In fact, the cognitive effort needed to master and play the 'game' components may take more overall effort than other, simpler methods of learning.
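The logic of the three-group comparison can be pictured with a toy pre/post calculation. The scores below are made up for illustration; they are not the study's data, just the shape of its finding:

```python
import statistics

# Invented pre/post test scores for the three training groups
# (illustrative numbers only - not the Kable et al. data)
groups = {
    "gamified (Lumosity-style)": {"pre": [50, 52, 48, 51], "post": [58, 60, 55, 59]},
    "computer games":            {"pre": [49, 51, 50, 52], "post": [57, 59, 58, 60]},
    "practice only":             {"pre": [50, 50, 49, 53], "post": [58, 58, 56, 61]},
}

# Mean improvement per group: if gamification had a special effect,
# its gain would stand out; here all three gains are roughly equal.
for name, scores in groups.items():
    gain = statistics.mean(scores["post"]) - statistics.mean(scores["pre"])
    print(f"{name}: mean gain = {gain:.2f}")
```

Everyone improves (practice works), but no group improves more than another, which is exactly why 'same' is the significant word.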
Lumosity have form
Lumosity are no strangers to false claims, based on dodgy neuroscience, and were fined $2m in 2015 for claiming that evidence of neuroplasticity supported their claims on brain training. There is perhaps no other term in neuroscience that is more overused or misunderstood than 'neuroplasticity', as it is usually quoted as an excuse for going back to the old behaviourist 'blank slate' model of cognition and learning. Lumosity, and many others, were making outrageous claims about halting dementia and Alzheimer’s disease. Dozens of senior psychologists and neuroscientists blasted their claims and the Federal Trade Commission swung into action. The myth was literally busted.
Pavlovian gamification
I have argued for some time that the claims of gamification are exaggerated, and this study is the first I’ve seen that really puts this to the test, with a strong methodology, in a respected peer-reviewed journal. This is not to say that some aspects of gaming are not useful, for example its motivational effect, just that much of what passes for gamification is Pavlovian nonsense, backed up with spurious claims. I do think that gamification can be useful, as there are DOs and DON'Ts, but it is often counterproductive.
Conclusion
The problem here is that, in the case of Lumosity, tens of millions are being 'duped' into buying a subscription product that has no real extra efficacy over other methods. Similarly, in the e-learning market, people may be being duped into thinking that a gamified product is intrinsically superior to other forms of online learning - when it is not. You may be paying a premium price for a non-premium product that has no extra performance efficacy.


Friday, June 16, 2017

Fractious Guardian debate: Tech in schools – money saver or waster

7 reasons why ‘teacher research' is a really bad idea
The Guardian hosted an education debate last night. It was pretty fractious, with the panel split down the middle and the audience similarly split. On one side lay the professional lobby, who saw teachers as the only drivers of tech in schools, doing their own research and being the decision makers. On the other side were those who wanted a more professional approach to procurement, based on objective research and cost-effectiveness analysis. What I heard was what I often hear at these events: that teachers should be the researchers, experimenters, adopting an entrepreneurial method, making judgements and determining procurement. I challenged this - robustly. Don’t teachers have enough on their plate, without taking on several of these other professional roles? Do they have the time, never mind the skills, to play all of these roles? (Thanks to Brother UK for pic.)
1. Anecdote is not research
To be reasonably objective in research you need to define your hypothesis, design the trial, select your sample, have a control, isolate variables and be good at gathering and interpreting the data. Do teachers have the time and skills to do this properly? Some may, but the vast majority do not. It normally requires a post-graduate degree (not in teaching) and some real research practice before you become even half good at this. I wouldn’t expect my GP to mess around with untested drugs and treatments with anecdotal evidence based on the views of GPs. I want objective research by qualified medical researchers.
En passant, let me give a famous example. Learning styles (VAK or VARK) were promulgated by Neil Fleming, a teacher, who based them on little more than armchair theorising. They are still believed by the majority of teachers, despite oodles of evidence to the contrary. This is what happens when bad teacher research spreads like a meme. It is believed because teachers rely on themselves and not on objective evidence.
2. Not in job description
Being a ‘researcher’ is not in the job description. Teaching is hard, it needs energy, dedication and focus. By all means seek out the research and apply what is regarded as good practice, but the idea that good practice is what any individual deems it to be through their personal research is a conceit. A school is not a personal lab – it has a purpose.
3. Don’t experiment on other people’s children
There is also the ethical issue of experimenting on other people’s children. I, as a parent, resent the idea that teachers will experiment on my children. I assume they’re at school to learn, not be the subject of the teachers' ‘research’ projects in tech.
4. Category mistake
What qualifies a teacher to be a researcher? It’s like the word ‘Leader’, when anyone can simply call themselves a leader, it renders the word meaningless. I have no problem with teachers seeking out good research, even making judgements about what they regard as useful and practical in their school, but that’s very different from calling yourself a ‘researcher' and doing ‘research’ yourself. That’s a whole different ball park. This is a classic category mistake, shifting the meaning of a word to suit an agenda.
5. Entrepreneurial
This word came up a lot. We need more start-up companies in schools, they said. Now that's my world. I'm an investor and run an EdTech start-up, and, believe me, that's the last thing you need. Most start-ups fail, and you don't want failed projects crashing around in your school. But "it teaches the kids how to be entrepreneurs," said one of the panel. No it doesn't. Start-ups have agendas. Sure, they'll want to get into your school, but don't believe that this is about 'research'; it's about 'referral'. Wait, look, assess, analyse, then procure.
6. Teaching tech bias
Technology is an integral part of a school. But it is a mistake to focus solely on teacher tech. There are three types of technology in schools:
School tech – general stuff, website, admin, comms, internet access….
Teacher tech – teacher aids – whiteboards, assessment software…
Learner tech – autonomous learning software
The assumption is that the main issue is teacher tech. I'd argue that the other two categories are more important. Far better to get your basic infrastructure sorted than some blue-sky augmented reality project in the classroom.
7. Professional procurement
Procurement is difficult. Too often education suffers from 'device fetish', buying devices not solutions. The tablet debacle is the perfect example. Professional procurement means starting with the question 'to what problem is this a solution?', then assessing the options, doing your homework on background evidence and research, a detailed cost-effectiveness analysis (this is tricky) and a change management plan that includes training needs and solutions. This is a skilled job and few schools have professionals with these skills. Yet this is what Governors and senior managers should demand. Alternatively, procurement should be done at a higher level, for groups of schools, just as JISC has a defined product set which it recommends into Higher Education.
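To make the cost-effectiveness step concrete, here is a back-of-the-envelope sketch of the kind of arithmetic involved: comparing options on total cost of ownership per learner-hour rather than sticker price. All figures, option names and lifespans below are invented for illustration, not real procurement data.

```python
# Hypothetical cost-effectiveness sketch. Every number here is made up
# for illustration; a real analysis would use actual quotes, device
# lifespans, insurance, training and support costs.

def cost_per_learner_hour(purchase, annual_running, years, learners, hours_per_year):
    """Total cost of ownership divided by total learner-hours delivered."""
    total_cost = purchase + annual_running * years
    total_hours = learners * hours_per_year * years
    return total_cost / total_hours

# Two invented options for a school of 600 pupils, 40 hours of use per pupil per year
options = {
    "tablets": cost_per_learner_hour(90_000, 15_000, years=3, learners=600, hours_per_year=40),
    "laptops": cost_per_learner_hour(120_000, 10_000, years=5, learners=600, hours_per_year=40),
}

# Rank options from cheapest to most expensive per learner-hour
for name, cost in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name}: £{cost:.2f} per learner-hour")
```

The point of the exercise is that a higher upfront price can still win once lifespan and running costs are counted - which is exactly the judgement that device-led purchasing skips.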
Back to the debate
The title of the debate was Tech in schools - money saver or waster? The answer, of course, is 'both'. Technology is always ahead of sociology or culture, which is always ahead of pedagogy. This means that culture always trumps strategy. A school, almost by definition, is a difficult environment for technological change. Its funding, procurement, management structure, job roles, classroom structure and teaching culture can (though not always) work against the use of technology. Teachers deliver learning largely in classrooms, a one-to-many teaching environment largely unsuitable for technological disruption, so it is not surprising that technology is a difficult fit in schools: a circular, 'individualised' peg trying to get squeezed into the 'one-to-many' square that is the classroom.
Tech has always been in schools
Tech has always been in schools. From the dawn of civilisation, the earliest caches of clay tablets and pottery shards show people learning how to write and draw. Writing, pens, pencils, erasers, clay tablets, paper, books, bells, canes, leather straps, slates and blackboards; each of these has had a profound effect on what is taught and how it is taught. Writing is a skill that had to be taught in schools, with pencils and erasers mistakes could be erased and corrected, slates were sophisticated as assessment devices (Lancaster method), paper/papyrus/bark gave us the ability to publish and store our thoughts, books gave us fixed texts that could be taught, printing brought scale to resources, bells regulated the timetable, hideous instruments of punishment regulated behaviour and the blackboard made teachers turn their backs on learners and broadcast fixed knowledge. We forget that all of these have pedagogic affordances. Technology has always influenced teaching and learning. So the idea that it should not be used is ridiculous. But that doesn’t mean that all technology should be used.
Device fetish
Unfortunately, with the rise of more autonomous technology, the new causes friction when it rubs up against the old. The first computers were calculators. These changed the pedagogy of maths, in that the technology itself had agency and could do more than act as an aid - they could calculate faster and more accurately than a human. There was a great deal of angst when they were introduced, as it was thought that they would turn our children into unthinking idiots, unable to do mental arithmetic. What actually happened was a recognition that the tech was a feature of the real world and had to be accommodated.
Subsequent computer devices have, of course, been subject to the same charge, but this was not the main problem. With computers, tablets and mobiles, 'device fetish' took over. The device was everything, so education institutions bought them by the skip-load and parachuted them into schools. Procurement was too often about the device, not the delivery of teaching and learning. Just as counting bums on seats focuses on the wrong end of the learner, so devices focus on the wrong end of the problem. A device is a peripheral that hangs off a network, and as the internet and streaming have become the norm, devices have become less important. It was always the case that doing things mattered more than the delivery device, yet far too little attention, analysis and procurement effort went into the software, as opposed to the hardware.
Device fetish: Keep on taking the tablets
The tablet Taliban, led by Apple, insisted on kids being given what is essentially a consumer device. Tablets have poor affordances - it is difficult to write at length, code, create graphics and so on. There is even evidence that they slow progress in writing, as touch-screens make you write shorter sentences with a higher error rate. Poor procurement, higher than expected insurance costs, difficulty in networking, poor internet access and a paucity of teacher training and software meant that many did not last. Some were disastrous, especially in the US, and many schools swapped out tablets for laptops.
What is far more useful is a strategic look at technology across the school - school tech, teaching tech and learner tech. Too often the emphasis is on teaching tech, hence the huge spends on whiteboards, tablets and so on. Far less attention is paid to administration and learning tech, which is where, I believe, the real efficacy lies.
School tech
Your website is important as it represents the school to the outside world. Do you have email and comms so that parents and others can contact the school? Do you have a social media presence? Administratively, student support, finance, timetabling, absences and a host of other functions need software to function. Try writing policy documents without a word processor or doing the budget without a spreadsheet. I’d include here the use of tech in teacher, admin and governor training. Modern Governor is used in many schools as a 24/7 training tool for Governors. It’s one of the best spends on tech in school, as it brings all Governors up to speed on their roles and responsibilities. Teacher training should also be considered as should training for other staff. There’s even good online training for catering staff.
Teaching tech
Behind the scenes, lesson planning and sharing can use technology. Teacher training can be revolutionised by technology - from Twitter as CPD to VR as a feedback mechanism for inexperienced teachers. The blackboard and whiteboard are largely teaching technologies. But for me, tech is often best applied outside the classroom, in the hands of learners, not teachers. There is a natural bias towards teaching tech, such as whiteboards, but these have been shown to be of limited efficacy and value.
Learning tech
In the long term, this is by far the most important. Tools such as word processors, spreadsheets, PowerPoint, graphics packages (2D & 3D) and databases are a vital form of technology in schools, as they are mainstays in the real world. They must be made available to learners.
Learning resources, such as Wikipedia and a mountain of Open Educational Resources, are another category.
Corrective software is another: tools that identify errors in spelling, grammar, structure and style. This now includes adaptive software that personalises learning. Then there's assessment software that can set tests, formative and summative, as well as mark them. However, I'm not such a fan of marking. When, as a parent, did you ever set your child a test or mark them? It turns schools into unnecessarily competitive environments, where there are winners but also, more destructively, even more losers. The focus here should be on effortful learning - blogging, writing, doing things, projects, providing support for independent learning - what used to be called homework.
AI is here
The new tech kid on the block takes us away from devices, towards software that learns while it delivers. AI now helps us create, curate, consolidate, deliver and assess learning. Of course, it's not such a new kid, as every learner on the planet with internet access uses Google to search and find things - and Google is pure AI. In fact, AI is the new UI (User Interface), as most services you use online - Google, email, social media, Amazon, Netflix - are delivered using AI. AI is also revolutionising interfaces for learning. Siri, VIV, Cortana and Alexa are bringing voice and dialogue into play, reintroducing something that was lost in learning - Socratic dialogue. But this time Socrates is smart software.
Tech is transgressive
Lastly, tech sneaks into schools whether you like it or not. Kids will have smartphones - almost all of them. Kids will have laptops, games consoles, smart TVs. Tech is cool. School is not cool. They will game the system. This poses real challenges. Audrey Mullen made a name for herself while still a high school student by making some apposite and powerful recommendations for tech in schools. She abhorred iPads, told teachers to "save us from ourselves" and to ban mobiles from the classroom, and made an appeal for solid administrative software that delivered good services and content. Even teachers have been known to sneak tech into schools for predatory purposes - cameras in toilets, child porn and so on. I'm not against banning the use of tech in classrooms. Classrooms were designed for one-to-many teaching, not tech. Young people see technology as subversive and transgressive. They will game it.
Conclusion
Every school should have a digital strategy. This needs to cover school, governor, teacher and learner tech. There needs to be some reasonable effort made to define and plan for tech in schools then implement professional procurement. If we leave it to erratic, personal, teacher-led ‘research’ (which is not really research at all) we’ll continue to make the same mistakes. History will repeat itself and education will not benefit from technology in the way, I believe, it should.
