Why “#thedress” is the perfect perception tester.

I know, you’re all over the dress. You’ve moved on to (checks Twitter) “#HouseOfCards”, Boris Nemtsov and the new Samsung gadgets. I wanted to touch on some of the things I mentioned in yesterday’s post and why that dress picture was so useful.

The first reason is that issues of conflict caused by different perception are not new. You only have to look at the furore surrounding the introduction of Impressionism, the scandal of the colour palette of the Fauvists, the outrage over Marcel Duchamp’s readymades and Dada in general, to see that art is an area that is constantly generating debate and argument over what is, and what is not, art. One of the biggest changes has been the move away from representative art to abstract art, mainly because we are no longer capable of making the simple objective comparison of “that painting looks like the thing that it’s a painting of.” (Let’s not even start on the ongoing linguistic violence over ending sentences with prepositions.)

Once we move art into the abstract, suddenly we are asking a question beyond “does it look like something?” and move into the realm of “does it remind us of something?”, “does it make us feel something?” and “does it make us think about the original object in a different way?” You don’t have to go all the way to using body fluids and live otters in performance pieces to start running into the refrains so often heard in art galleries: “I don’t get it”, “I could have done that”, “It’s all a con”, “It doesn’t look like anything” and “I don’t like it.”

Kazimir Malevich's Suprematism with Blue Triangle and Black Square (1915).

This was a radical departure from the art of its time, part of the Suprematism movement that flourished briefly before Stalin suppressed it, heavily and brutally. Art like this was considered subversive, dangerous and a real threat to the morality of the citizenry. Not bad for two simple shapes, is it? And, yet, many people will look at this and use one of the above phrases. There is an enormous range of perception on this very simple (yet deeply complicated) piece of art.

The viewer is, of course, completely entitled to their subjective opinion on art but this is, in many cases, a perceptual issue caused by a lack of familiarity with the intentions, practices and goals of abstract art. When we were still painting pictures of houses and rich people, there were many pictures from the 16th to 18th centuries that contained really badly painted animals. It’s worth going to an historical art museum just to look at all the crap animals. Looking at early European artists trying to capture Australian fauna gives you the same experience – people weren’t painting what they were seeing, they were painting a reasonable approximation of the representation and putting that into the picture. Yet this was accepted, and it was accepted because it was a commonly held perception. This also explains offensive (and totally unrealistic) caricatures along racial, gender or religious lines: you accept the stereotype as a reasonable portrayal because of shared perception. (And, no, I’m not putting pictures of that up.)

But, when we talk about art or food, it’s easy to get caught up in things like cultural capital, the assets we have that aren’t money but allow us to be more socially mobile. “Knowing” about art, wine or food has real weight in certain social situations, so the background here matters. Thus, the fact that two people can look at the same abstract piece and have one be enraptured while the other wants their money back is not a clean perceptual distinction, free of outside influence. We can’t say “human perception is a very personal business” based on this alone because there are too many arguments to be made about prior knowledge, art appreciation, socioeconomic factors and cultural capital.

But let’s look at another argument starter, the dreaded Monty Hall Problem, where there are three doors, a good prize behind one, and you have to pick a door to try to win it. If the host then opens a different door, showing you where the prize isn’t, do you switch or not? (The correctly formulated problem is designed so that switching is the right thing to do but, again, so much argument.) This is, again, a perceptual issue because of how people think about probability, how much weight they invest in their decision-making process, how they feel when discussing it and so on. I’ve seen people get into serious arguments about this and this doesn’t even scratch the surface of the incredible abuse Marilyn vos Savant suffered when she had the audacity to publish the correct solution to the problem.
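For the sceptical, the asymmetry is easy to demonstrate empirically. Here is a minimal simulation sketch (not from vos Savant’s column; the door numbering and trial count are my own choices) showing that the switcher wins roughly two times in three:

```python
import random

def monty_hall_trial(switch):
    """Play one round of the Monty Hall game; return True if the player wins."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Move to the single remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

trials = 100_000
stay = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
swap = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
print(f"stay: {stay:.3f}, switch: {swap:.3f}")  # roughly 0.333 vs 0.667
```

The intuition the simulation backs up: your first pick is right one time in three, so switching wins exactly when your first pick was wrong – two times in three.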

This is another great example of what happens when the human perceptual system, environmental factors and facts get jammed together but… it’s also not clean, because you can start talking about previous mathematical experience, logical thinking approaches, textual analysis and so on. It’s easy to say “ah, this isn’t just a human perceptual thing, it’s everything else.”

This is why I love that stupid dress picture. You don’t need to have any prior knowledge of art, cultural capital, mathematical background, history of game shows or whatever. All you need are eyes and a relatively functional sense of colour. (The dress doesn’t even hit most of the colour blindness issues, interestingly.)

The dress is the clearest example we have that two people can look at the same thing and have inbuilt perceptual differences, beyond their control, cause a difference of opinion. We finally have a universal example of how being human means not being entirely sure of the world we live in – and one we can reproduce any time we want, with no more preparation than “have you seen this dress?”

What we do with it is, as always, the important question now. For me, it’s a reminder to think about issues of perception before I explode with rage across the Internet. Some things will still just be dumb, cruel or evil – the dress won’t heal the world but it does give us a new filter to apply. But it’s simple and clean, and that’s why I think the dress is one of the best things to happen recently to help to bring us together in our discussions so that we can sort out important things and get them done.


That’s not the smell of success, your brain is on fire.

Would you mind putting out the hippocampus when you have a chance?

I’ve written before about the issues of prolonged human workload leading to ethical problems and the fact that working more than 40 hours a week on a regular basis is downright unproductive because you get less efficient and more error-prone. This is not some 1968 French student revolutionary musing on what benefits the soul of a true human, this is industrial research by Henry Ford and the U.S. Army, neither of whom could be classified as Foucault-worshipping Situationist yurt-dwelling flower children, that shows that there are limits to how long you can work in a sustained weekly pattern and get useful things done, while maintaining your awareness of the world around you.

The myth won’t die, sadly, because physical presence and hours attending work are very easy to measure, while productive outputs and their origins in a useful process on a personal or group basis are much harder to measure. A cynic might note that the people who are around when there is credit to take may end up being the people who (reluctantly, of course) take the credit. But we know that it’s rubbish. And the people who’ve confirmed this are both philosophers and the commercial sector. One day, perhaps.

But anyone who has studied cognitive load issues, the way that the human thinking processes perform as they work and are stressed, will be aware that we have a finite amount of working memory. We can really only track so many things at one time and when we exceed that, we get issues like the helmet fire that I refer to in the first linked piece, where you can’t perform any task efficiently and you lose track of where you are.

So what about multi-tasking?

Ready for this?

We don’t.

There’s a ton of research on this but I’m going to link you to a recent article by Daniel Levitin in the Guardian Q&A. The article covers the fact that what we are really doing is switching quickly from one task to another, dumping one set of information from working memory and loading in another, which of course means that working on two things at once is less efficient than doing two things one after the other.

But it’s more poisonous than that. The sensation of multi-tasking is actually quite rewarding as we get a regular burst of the “oooh, shiny” rewards our brain gives us for finding something new and we enter a heightened state of task readiness (fight or flight) that also can make us feel, for want of a better word, more alive. But we’re burning up the brain’s fuel at a fearsome rate to be less efficient so we’re going to tire more quickly.

Get the idea? Multi-tasking is horribly inefficient task switching that feels good but makes us tired faster and does things less well. But when we achieve tiny tasks in this death spiral of activity, like replying to an e-mail, we get a burst of reward hormones. So if your multi-tasking includes something like checking e-mails when they come in, you’re going to get more and more distracted by that, to the detriment of every other task. But you’re going to keep doing them because multi-tasking.

I regularly get told, by parents, that their children are able to multi-task really well. They can do X, watch TV, do Y and it’s amazing. Well, your children are my students and everything I’ve seen confirms what the research tells me – no, they can’t but they can give a convincing impression when asked. When you dig into what gets produced, it’s a different story. If someone sits down and does the work as a single task, it will take them a shorter time and they will do a better job than if they juggle five things. The five things will take more than five times as long (up to 10, which really blows out time estimation) and will not be done as well, nor will the students learn about the work in the right way. (You can actually sabotage long term storage by multi-tasking in the wrong way.) The most successful study groups around the Uni are small, focused groups that stay on one task until it’s done and then move on. The ones with music and no focus will be sitting there for hours after the others are gone. Fun? Yes. Efficient? No. And most of my students need to be at least reasonably efficient to get everything done. Have some fun but try to get all the work done too – it’s educational, I hear. 🙂
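To make that blow-out concrete, here is a deliberately toy model – the 5-minute slice length and per-switch reload cost are invented for illustration, not taken from the research – of how the working-memory reload at every switch inflates total time:

```python
# Toy model (illustrative numbers only): interleaving tasks adds a context
# "reload" cost to working memory at every switch between tasks.
task_minutes = [30] * 5          # five tasks, 30 minutes of actual work each
switch_cost = 5                  # minutes lost re-loading context per switch
slice_len = 5                    # work is interleaved in 5-minute slices

# Done one at a time: you only pay for the work itself.
sequential = sum(task_minutes)

# Interleaved: every slice of every task incurs a switch back into context.
switches = sum(t // slice_len for t in task_minutes)  # 30 switches in total
interleaved = sequential + switches * switch_cost

print(sequential, interleaved)  # 150 vs 300 minutes for the same work
```

Even this gentle model doubles the elapsed time; steeper (and more realistic) switch costs, plus the quality losses the research describes, push the multiplier higher still.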

It’s really not a surprise that we haven’t changed humanity in one or two generations. Our brains are just not (yet) built in a way that can support the quite large amount of work required to perform genuine multi-tasking.

We can handle multiple tasks, no doubt at all, but we’ve just got to make sure, for our own well-being and overall ability to complete the task, that we don’t fall into the attractive, but deceptive, trap that we are some sort of parallel supercomputer.


I Am Self-righteous, You Are Loud, She is Ignored

If we’ve learned anything from recent Internet debates that have become almost Lovecraftian in the way that a single word uttered in the wrong place can cause an outbreak of chaos, it is that the establishment of a mutually acceptable tone is the only sensible way to manage any conversation that is conducted without body-language cues. Or, in short, we need to work out how to stop people screaming at each other when they’re safely behind their keyboards or (worse) anonymity.

As a scientist, I’m very familiar with the approach that says that all ideas can be questioned and it is only by ferocious interrogation of reality, ideas, theory and perception that we can arrive at a sound basis for moving forward.

But, as a human, I’m aware that conducting ourselves as if everyone is made of uncaring steel is, to put it mildly, a very poor way to educate and it’s a lousy way to arrive at complex consensus. In fact, while we claim such an approach is inherently meritocratic, as good ideas must flourish under such rigour, it’s more likely that we will only hear ideas from people who can endure the system, regardless of whether those people have the best ideas. A recent book, “The Tyranny of the Meritocracy” by Lani Guinier, looks at how supposedly meritocratic systems in education really measure privilege levels prior to entering education, and argues that education should be about cultivating merit rather than scoring a measure of “merit” that is actually something else.

This isn’t to say that face-to-face arguments are isolated from the effects caused by antagonists competing to see who can keep making their point for the longest time. If one person refuses to concede and the other can’t see any point in continuing, the (for want of a better term) stubborn party is likely to claim victory because the other person has “given up”. But this illustrates the key flaw that underlies many arguments – the idea that one “wins” or “loses”.

In scientific argument, in theory, we all get together in large rooms, put on our discussion togas and have at ignorance until we force it into knowledge. In reality, what happens is someone gets up and presents and the overall impression of competency is formed by:

  • The gender, age, rank, race and linguistic grasp of the speaker
  • Their status in the community
  • How familiar the audience are with the work
  • How attentive the audience are and whether they’re all working on grants or e-mail
  • How much they have invested in the speaker being right or wrong
  • Objective scientific assessment

We know about the first one because we keep doing studies that tell us that women are not assessed fairly by the majority of people, even in blind trials where all that changes on a CV is the name. We know that status has a terrible influence on how we perceive people. Dunning-Kruger (for all of its faults) and novelty effects influence how critical we can be. We can go through all of these and we come back to the fact that our pure discussion is tainted by the rituals and traditions of presentation, with our vaunted scientific objectivity coming in only after we’ve stripped off everything else.

It is still there, don’t get me wrong, but you stand a much better chance of getting a full critical hearing with a prepared, specialist audience who have come together with a clear intention to attempt to find out what is going on than an intention to destroy what is being presented. There is always going to be something wrong or unknown but, if you address the theory rather than the person, you’ll get somewhere.

I often refer to this as the difference between scientists and lawyers. If we’re trying to build a better science then we’re always trying to improve understanding through genuine discovery. Defence lawyers are trying to sow doubt in the minds of judges and juries, invalidating evidence for reasons that have nothing to do with the strength of the evidence, and preventing wider causal linkages from forming that would be to the detriment of their client. (Simplistic, I know.)

Any scientific theory must be able to stand up to scientific enquiry because that’s how it works. But the moment we turn such a process into an inquisition where the process becomes one that the person has to endure then we are no longer assessing the strength of the science – we are seeing if we can shout someone into giving up.

As I wrote in the title, when we are self-righteous, whether legitimately or not, we will be happy to yell from the rooftops. If someone else is doing it with us then we might think they are loud but how can someone else’s voice be heard if we have defined all exchange in terms of this exhausting primal scream? If that person comes from a traditionally under-represented or under-privileged group then they may have no way at all to break in.

The mutual establishment of tone is essential if we are to hear all of the voices who are able to contribute to the improvement and development of ideas and, right now, we are downright terrible at it. For all we know, the cure for cancer has been ignored because it had the audacity to show up in the mind of a shy, female, junior researcher in a traditionally hierarchical lab that will only let her have her own ideas investigated when she gets to be a professor.

Or it would have occurred to someone had she received an education, but she’s stuck in the fields and won’t ever get more than a grade 5 education. That’s not a meritocracy.

One of the reasons I think that we’re so bad at establishing tone and seeing past the illusion of meritocracy is the reason that we’ve always been bad at handling bullying: we are more likely to see a spill-over reaction from the target than the initial action except in the most obvious cases of physical bullying. Human language and body-assisted communication are subtle and words are more than words. Let’s look at this sentence:

“I’m sure he’s doing the best he can.”

You can adjust this sentence to be incredibly praising, condescending, downright insulting, dismissive or indifferent without touching its content. But, written like this, it is robbed of tone and context. If someone has been “needled” with statements like this for months, then a sudden outburst is increasingly likely, especially in stressful situations. This is the point at which someone says “But I only said … ” When our workplaces are innately rife with inter-privilege tension and high stress due to the collapse of the middle class, no wonder people blow up!

We have the same problem in the on-line community from an approach called Sea-Lioning, where persistent questioning is deployed in a way that, with each question isolated, appears innocuous but, as a whole, forms a bullying technique to undermine and intimidate the original writer. Now some of this is because there are people who honestly cannot tell what a mutually respectful tone looks like and really do want to know the answer. But, if you look at the cartoon I linked to, you can easily see how this can be abused and, in particular, how it can be used to shut down people who are expressing ideas in a new space. We also don’t get the warning signs of tone. Worse still, we often can’t or don’t walk away because we maintain a connection that the other person can jump on anytime they want to. (The best thing you can do sometimes on Facebook is to stop notifications, because you stop getting tapped on the shoulder by people trying to get up your nose. It is like a drink of cool water on a hot day, sometimes. I do, however, realise that this is easier to say than do.)

From XKCD #386 – “Duty Calls”

When students communicate over our on-line forums, we do keep an eye on them for behaviour that is disrespectful or downright rude so that we can step in and moderate the forum, but we don’t require moderation before comment. Again, we have the notion that all ideas can be questioned, because SCIENCE, but the moment we realise that some questions can be asked not to advance the debate but to undermine and intimidate, we have to look very carefully at the overall context and how we construct useful discussion, without being incredibly prescriptive about what form discussion takes.

I recently stepped into a discussion about some PhD research that was being carried out at my University because it became apparent that someone was acting in, if not bad faith, an aggressive manner that was not actually achieving any useful discussion. When questions were answered, the answers were dismissed, the argument recast and, to be blunt, a lot of random stuff was injected to discredit the researcher (for no good reason). When I stepped in to point out that this was off track, my points were side-stepped, a new argument came up and then I realised that I was dealing with a most amphibious mammal.

The reason I bring this up is that when I commented on the post, I immediately got positive feedback from a number of people on the forum who had been uncomfortable with what had been going on but didn’t know what to do about it. This is the worst thing about people who set a negative tone and hold it: social conventions of politeness stop other people from commenting or saying anything, because it’s always possible that the argument is being made in good faith. This is precisely the trap a bad faith actor wants to lock people into and, yet, it’s also the thing that keeps most discussions civil.

Thanks, Internet trolls. You’re really helping to make the world a better place.

These days my first action is to step in and ask people to clarify things, in the most non-confrontational way I can muster, because asking people “What do you mean?” can be incredibly hostile by itself! This quickly identifies the people who aren’t willing to engage properly, because they’ll start wriggling and the Sea-Lion effect kicks in – accusations of rudeness, of unwillingness to debate – which is really, when it comes down to it:

I WANT TO TALK AT YOU LIKE THIS HOW DARE YOU NOT LET ME DO IT!

This isn’t the open approach to science. This is thuggery. This is privilege. This is the same old rubbish that is currently destroying the world because we can’t seem to be able to work together without getting caught up in these stupid games. I dream of a better world where people can say any combination of “I use Mac/PC/Java/Python” without being insulted but I am, after all, an Idealist.

The summary? The merit of your argument is not determined by how loudly you shout and how many other people you silence.

I expect my students to engage with each other in good faith on the forums, be respectful and think about how their actions affect other people. I’m really beginning to wonder if that’s the best preparation for a world where a toxic on-line debate can break over into the real world, where SWAT team attacks and document revelations demonstrate what happens when people get too carried away in on-line forums.

We’re stopping people from being heard when they have something to say and that’s wrong, especially when it’s done maliciously by people who are demanding to say something and then say nothing. We should be better at this by now.


In Praise of the Beautiful Machines

Some mechanisms are more beautiful than others.

I posted recently about the increasingly negative reaction to the “sentient machines” that might arise in the future. Discussion continues, of course, because we love a drama. Bill Gates can’t understand why more people aren’t worried about the machine future.

…AI could grow too strong for people to control.

Scientists attending the recent AI conference (AAAI15) think that the fears are unfounded.

“The thing I would say is AI will empower us not exterminate us… It could set AI back if people took what some are saying literally and seriously.” Oren Etzioni, CEO of the Allen Institute for AI.

If you’ve read my previous post then you’ll know that I fall into the second camp. I think that we don’t have to be scared of the rise of the intelligent AI, and the people at AAAI15 are some of the best in the field, so it’s nice that they also think that we’re worrying about something that is far, far off in the future. I like to discuss these sorts of things in ethics classes because my students have a very different attitude to them than I do – twenty-five years is a large separation – and I value their perspective on things that will most likely happen during their stewardship.

I asked my students about the ethical scenario proposed by Philippa Foot, “The Trolley Problem“. To summarise, a runaway trolley is coming down the tracks and you have to decide whether to be passive and let five people die or be active and kill one person to save five. I put it to my students in terms of self-driving cars where you are in one car by yourself and there is another car with five people in it. Driving along a bridge, a truck jackknifes in front of you and your car has to decide whether to drive ahead and kill you or move to the side and drive the car containing five people off the cliff, saving you. (Other people have thought about it in the context of Google’s self-driving cars. What should the cars do?)

One of my students asked me why the car she was in wouldn’t just put on the brakes. I answered that it was too close and the road was slippery. Her answer was excellent:

Why wouldn’t a self-driving car have adjusted for the conditions and slowed down?

Of course! The trolley problem is predicated upon the condition that the trolley is running away and we have to make a decision where only two results can come out but there is no “runaway” scenario for any sensible model of a self-driving car, any more than planes flip upside down for no reason. Yes, the self-driving car may end up in a catastrophic situation due to something totally unexpected but the everyday events of “driving too fast in the wet” and “chain collision” are not issues that will affect the self-driving car.

But we’re just talking about vaguely smart cars, because the super-intelligent machine is some time away from us. What is more likely to happen soon is what has been happening since we developed machines: the ongoing integration of machines into human life to make things easier. Does this mean changes? Well, yes, most likely. Does this mean the annihilation of everything that we value? No, really not. Let me put this in context.

As I write this, I am listening to two compositions by Karlheinz Stockhausen, playing simultaneously but offset, “Kontakte” and “Telemusik“, works that combine musical instruments, electronic sounds, and tape recordings. I like both of them but I prefer to listen to the (intentionally sterile) Telemusik by starting Kontakte first for 2:49 and then kicking off Telemusik, blending the two and finishing on the longer Kontakte. These works, which are highly non-traditional and use sound in very different ways to traditional orchestral arrangement, may sound quite strange to an audience familiar with popular music, yet they were written in 1959 and 1966 respectively. These innovative works are now in their middle age. They are unusual works, certainly, and a number of you will peer at your speakers once they start playing but… did their production lead to the rejection of the popular, classic, rock or folk music output of the 1960s? No.

We now have a lot of electronic music – synthesisers, samplers, software-based music tools – but we still have musicians. It’s hard to measure the numbers (this link is very good) but electronic systems have allowed us to greatly increase the number of composers, although we seem to be seeing a slow drop in the number of musicians. In many ways, the electronic revolution has allowed more people to perform because your band can be (for some purposes) a band in a box. Jazz is a different beast, of course, as is classical, due to the level of training and study required. Jazz improvisation is a hard problem (you can find papers on it from 2009 onwards and now buy a so-so jazz improviser for your iPad) and hard problems with high variability are not easy to solve, even computationally.

So the increased portability of music via electronic means has an impact in some areas such as percussion, pop, rock, and electronic (duh) but it doesn’t replace the things where humans shine and, right now, a trained listener is going to know the difference.

I have some of these gadgets in my own (tiny) studio and they’re beautiful. They’re not as good as having the London Symphony Orchestra in your back room but they let me create, compose and put together pleasant sounding things. A small collection of beautiful machines make my life better by helping me to create.

Now think about growing older. About losing strength, balance, and muscular control. About trying to get out of bed five times before you succeed or losing your continence and having to deal with that on top of everything else.

Now think about a beautiful machine that is relatively smart. It is tuned to wrap itself gently around your limbs and body to support you, to help you keep muscle tone safely, to stop you from falling over, to let you walk at full speed, to take you home when you’re lost, and with a few controlling aspects to allow you to say when and where you go to the bathroom.

Isn’t that machine helping you to be yourself, rather than trapping you in the decaying organic machine that served you well until your telomeres ran out?

Think about quiet roads with 5% of the current traffic, where self-driving cars move from point to point and charge themselves in between journeys, where you can sit and read or work as you travel to and from the places you want to go, where there are no traffic lights most of the time because there is just a neat dance between aware vehicles, and where bad weather means everyone slows down or even deliberately links up with shock-absorbent bumper systems to ensure maximum road holding.

Which of these scenarios stops you being human? Do any of them stop you thinking? Some of you will still want to drive and I suppose that there could be roads set aside for people who insist upon maintaining their cars, but be prepared to pay for the additional insurance costs and public risk. From this article, and the enclosed U Texas report, if only 10% of the cars on the road were autonomous, reduced injuries and reclaimed time and fuel would save $37 billion a year. At 90%, it’s almost $450 billion a year. The World Food Programme estimates that $3.2 billion would feed the 66,000,000 hungry school-aged children in the world. A 90% autonomous vehicle rate in the US alone could probably feed the world. And that’s a side benefit. We’re talking about a massive reduction in accidents due to human error because (ta-dahh) no human control.
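As a sanity check on those quoted figures – the dollar amounts come from the linked article and the World Food Programme estimate; the comparison itself is just my back-of-envelope arithmetic:

```python
# Back-of-envelope check of the figures quoted above (US dollars per year).
savings_10_pct = 37e9   # estimated annual saving at 10% autonomous vehicles
savings_90_pct = 450e9  # estimated annual saving at 90% ("almost $450 billion")
feed_cost = 3.2e9       # WFP estimate to feed 66,000,000 school-aged children

# Even the 10% scenario covers the school-feeding bill more than ten times;
# the 90% scenario covers it roughly 140 times over.
print(savings_10_pct / feed_cost)  # about 11.6
print(savings_90_pct / feed_cost)  # about 140.6
```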

Most of us don’t actually drive our cars. They spend 5% of their time on the road, during which time we are stuck behind other people, breathing fumes and unable to do anything else. What we think about as the pleasurable experience of driving is not the majority experience for most drivers. It’s ripe for automation and, almost every way you slice it, it’s better for the individual and for society as a whole.

But we are always scared of the unknown. There’s a reason that the demons of myth used to live in caves and underground and come out at night. We hate the dark because we can’t see what’s going on. But increased machine autonomy, moving towards machine intelligence, doesn’t have to mean that we create monsters that want to destroy us. The far more likely outcome is a group of beautiful machines that make it easier and better for us to enjoy our lives and to have more time to be human.

We are not competing for food – machines don’t eat. We are not competing for space – machines are far more concentrated than we are. We are not even competing for energy – machines can operate in more hostile ranges than we can and are far more suited for direct hook-up to solar and wind power, with no intermediate feeding stage.

We don’t have to be in opposition unless we build machines that are as scared of the unknown as we are. We don’t have to be scared of something that might be as smart as we are.

If we can get it right, we stand to benefit greatly from the rise of the beautiful machine. But we’re not going to do that by starting from a basis of fear. That’s why I told you about that student. She’d realised that our older way of thinking about something was based on a fear of losing control when, if we handed over control properly, we would be able to achieve something very, very valuable.


5 Things: Necessary Assumptions of Truth

I’m (still) in the middle of writing a large summary of my thoughts on education and how we can develop a better way to provide education to as many students as possible. Unsurprisingly, this is a large undertaking and I’m expecting that the final document will be interesting and fairly controversial. I suspect that one of the major problems will stem from things that I believe we have to assume are true. This is always challenging, especially where evidence is lacking, but the reason I present for holding some of these things as true is that, if we hold them as false, then we make them false as a self-fulfilling prophecy. This may not be purely because of our theoretical framework; it may be because of what we do in implementation when we implicitly declare that something no longer needs to be worried about.

I am looking to build a better Machine for Education but such a thing is always built on the assumption that better is something that you can achieve.

"Machine". Mono print on lino with wooden tools. (C) Nick Falkner, 2014

“Machine”. Mono print on lino with wooden tools. (C) Nick Falkner, 2014

The reason for making these assumptions of truth is very simple. When I speak of a “Machine for Education”, I am not moving towards some cyberpunk dystopian future; I am recognising that we are already all embedded inside a framework that turns human energy into educational activity. It’s just that the current machine places stress upon its human components, rather than taking the strain in its mechanical, procedural and technological elements.

An aeroplane is a machine for flying and it works because it does not require constant human physical effort simply to keep it in the air. We have replaced the flapping wings of early designs with engines, hydraulics, computers and metal. An aeroplane is a good machine because the stress is taken on the machine itself, which can take it, with sensible constructions of human elements around it that make flying a manageable occupation. (When we place airline workers under undue stress, we see the effect on the machine through reduced efficiency in maintenance and decision making, so this isn’t a perfect system.)

Similarly, the development of the driverless car is a recognition of two key facts: firstly, that most cars spend most of their time not being driven and, secondly, that the activity of driving is, for many people, a chore that is neither enjoyable nor efficiently productive. The car is a good machine in that most of the wear happens in the machine, but we can make it better as a transport device by further removing the human being as a weak point, as a stress accumulator and as a part of the machine that gets worn down but is not easy to repair or rebuild. We also make the machine more efficient by potentially reducing the number of cars required, given the known usage patterns. (The driverless car may well be the ultimate micro-light urban transit system.)

So what are these assumptions of truth?

  1. That our educational system can always be improved and, hence, is ready for improvement now.

    It has always surprised me when some people look at dull and lifeless chalk-and-talk, based on notes from 20 years ago, and see no need for improvement, instead suggesting punitive measures to force students to sit and pretend to listen. We have more evidence from research as to what works than we have ever had before and, in conjunction with centuries of careful thought, have a great opportunity to make change.

  2. That everyone on the planet can benefit from an improved educational system.

    Yes, this means that you have to assume that, one day, we could reach everyone on the planet. We cannot assume that a certain group can be ignored and then move on. This, of course, doesn’t mean that it all has to happen tomorrow but it does mean that any planning for extending our systems must have the potential to reach everyone in the country of origin and, by extension, when we have every country, we have the world.

  3. That an educational system can develop students in terms of depth of knowledge and skills but also in terms of their scholarship, breadth of knowledge, and range of skills.

    We currently focus heavily on training for quite narrowly specified professions in the general case and we do this to the detriment of developing the student as a scholar, as a designer, as a thinker, as a philosopher, as an artist and as a citizen. This will vary from person to person but a rich educational grounding is the foundation for better things in later life, more flexibility in work and the potential for more creativity and autonomy in leisure. Ultimately, we want our graduates to be as free to create as they are to consume, rather than consigning them to work in tight constraint.

  4. That we can construct environments where all students can legitimately demonstrate that they have achieved the goals of the course.

    This is a very challenging one so I’ve worded it carefully. I have a problem with curve grading, as everyone probably knows, and it really bothers me that someone can fail because someone else passed. I also think that most of our constraints are highly artificial and they are in place because this is what we did before. If we start from the assumption that we can construct a system where everyone can legitimately pass then we change the nature of the system we build.

  5. That all outcomes in an educational system can be the combination of personal actions and systemic actions, thus all outcomes must be perceived and solutions developed through both lenses.

    So students are handing in their work late? This assumption requires us to look across all of their activity to work out why this is happening. This behaviour may have been set in place earlier on in their educational career so this is a combination of the student activity triggers of value, motivation and instrumentality and a feedback system that is part of an earlier component of the educational system. This does not absolve the student of questionable practices or ‘anti-educational’ behaviour but it requires us to not immediately assume that they are a ‘bad student’ as an easy out.

Ultimately, these are just some of the things I’m looking at and I’m sure that there will be discussion in the comments, but I have set these out to stop the shortcut thinking that does not lead to a solution because it pushes the problem to a space where it does not have to be solved. If we start from the assumption of no bad students, then we have to collect actual evidence to the contrary, evidence that survives analysis and peer review, to locate where the help needs to be given. And this is very much my focus – support and help to bring people back to a positive educational experience. It’s too easy to assume things are false when it makes the job easier – an understandable, very human response for an over-worked sector. I think it’s time to plant some flags of assumed truths to change the way we talk and think about these things.


Ending the Milling Mindset

This is the second in a set of posts that are critical of current approaches to education. In this post, I’m going to extend the idea of rejecting an Industrial Revolution model of student production and match our new model for manufacturing, additive processes, to a new way to produce students. (I note that this is already happening in a number of places, so I’m not claiming some sort of amazing vision here, but I wanted to share the idea more widely.)

Traditional statistics is often taught with an example where you try to estimate how well a manufacturing machine is performing by measuring its outputs. You determine the mean and variation of the output and then use some solid calculations to then determine if the machine is going to produce a sufficient number of accurately produced widgets to keep your employers at WidgetCo happy. This is an important measure for things such as getting the weight right across a number of bags of rice or correctly producing bottles that hold the correct volume of wine. (Consumers get cranky if some bags are relatively empty or they have lost a glass of wine due to fill variations.)

If we are measuring this ‘fill’ variation, then we are going to expect deviation from the mean in two directions: too empty and too full. Very few customers are going to complain about too much but the size of the variation can rarely be constrained in just one direction, so we need to limit how widely that fill needle swings. Obviously, it is better to be slightly too full (on average) than too empty (on average) although if we are too generous then the producer loses money. Oh, money, how you make us think in such scrubby, little ways.
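The fill example is easy to make concrete. A minimal sketch, assuming normally distributed fill volumes; the target, machine mean and spread below are invented for illustration, but they show the trade-off described above: set the mean slightly above target and you buy a lower under-fill rate at the cost of giving away a little product.

```python
# Simulate a filling machine with normally distributed fills and measure
# how often it under-fills. All numbers are illustrative, not real data.
import random
import statistics

random.seed(42)
target_ml = 750.0      # the volume printed on the bottle
machine_mean = 752.0   # deliberately set slightly above target
machine_sd = 1.5       # spread of the filling process

fills = [random.gauss(machine_mean, machine_sd) for _ in range(100_000)]

sample_mean = statistics.mean(fills)
sample_sd = statistics.stdev(fills)
under_filled = sum(f < target_ml for f in fills) / len(fills)
over_give = sample_mean - target_ml   # average give-away per bottle

print(f"mean = {sample_mean:.2f} ml, sd = {sample_sd:.2f} ml")
print(f"under-filled fraction: {under_filled:.1%}")
print(f"average give-away per bottle: {over_give:.2f} ml")
```

With these numbers, roughly 9% of bottles still fall under target; shrinking that further means either raising the mean (more give-away) or reducing the machine’s variation, which is exactly the constraint the paragraph above describes.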

When it comes to producing items, rather than filling, we often use a machine milling approach, where a block of something is etched away through mechanical or chemical processes until we are left with what we want. Here, our tolerance for variation will be set based on the accuracy of our mill to reproduce the template.

In both the fill and the mill cases, imagine a production line that makes a single pass through loading, activity (fill/mill) and then measurement, to determine how well each unit conforms to the desired level. What happens to those items that don’t meet requirements? Well, if we catch them early enough and it’s cost effective, we can empty the filled items back into a central store and pass them through again – but this is wasteful in terms of cost and energy, and some contents can’t survive being removed and put back in again. In the milling case, the most likely fault is that we’ve got the milling process wrong and taken away material in the wrong place or to the wrong extent. Realistically, while some rejects can be recycled, a lot of rejected product is simply thrown away.

If we run our students as if they are on a production line along these lines then, totally unsurprisingly, we start to set up a nice little reject pile of our own. The students have a single pass through a set of assignments, often without the ability to go back and retake a particular learning activity. If they fail enough of these tests, then they don’t meet our requirements and they are rejected from that course. Some students will outperform our expectations and, one small positive, they will then be recognised as students of distinction and not rejected. However, if we consider our student failure rate to reflect our production wastage, then failure rates of 20% or higher start to look a little… inefficient. These failure rates are only economically manageable (let us switch off our ethical brains for a moment) if we have enough students, or students are considered sufficiently cheap, that we can produce at 80% yield and still make money. (While some production lines would be crippled by a 10% failure rate – think electric drive trains for cars – there are some small and cheap items where a high failure rate is tolerated because the costing model still allows the business to stay economical.) Let us be honest: every University in the world is now concerned with its retention and progression rates, which is the official way of saying that we want students to stay in our degrees and pass our courses. Maybe the single-pass industrial line model is not the best one.
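The economics in that parenthetical can be sketched in a few lines. This is a toy break-even model; the costs, prices and the cost model itself are all invented for illustration:

```python
# Toy break-even model: when does a given reject rate make a production
# line uneconomic? All numbers here are invented for illustration.

def profit_per_unit_started(unit_cost, sale_price, failure_rate):
    """Expected profit per unit entering the line; rejects incur the
    full unit cost and earn nothing."""
    return (1 - failure_rate) * sale_price - unit_cost

# A cheap widget: 20% wastage still leaves a healthy margin.
cheap = profit_per_unit_started(unit_cost=1.00, sale_price=2.50,
                                failure_rate=0.20)

# An expensive drive train: the same 20% wastage wipes the margin out.
expensive = profit_per_unit_started(unit_cost=8_000, sale_price=9_500,
                                    failure_rate=0.20)

print(f"cheap item:  {cheap:+.2f} per unit started")
print(f"drive train: {expensive:+,.2f} per unit started")
```

The same 20% reject rate is profitable for the cheap item and loss-making for the expensive one, which is the “switch off our ethical brains” calculation made explicit – and students are not cheap items.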

Why carve back to try to reveal people, when we could build people up instead?

Enter the additive model, via the world of 3D printing. 3D printing works by laying down material from scratch, producing each item as a single item, from the ground up, with almost no wastage. Problems can still occur: the initial track of plastic/metal/material may not adhere to the plate, which means the item doesn’t have a solid base. However, we can observe this and stop printing as soon as we realise it is occurring, then try again, perhaps using a slightly different approach to get the base to stick. In student terms, this is poor transition from the school environment, because nothing is sticking to the established base! Perhaps the most important idea, especially as we develop 3D printing techniques that don’t require us to deposit in sequential layers but instead allow us to create points in space, is that we can identify those areas where a student is incomplete and then build up that area.

In an additive model, we identify a deficiency in order to correct rather than to reject. The growing area of learning analytics gives us the ability to more closely monitor where a student has a deficiency of knowledge or practice. However, such identification is useless unless we then act to address it. Here, a small failure has become something that we use to make things better, rather than a small indicator of the inescapable fate of failure later on. We can still identify those students who are excelling but, now, instead of just patting them on the back, we can build them up in additional interesting ways, should they wish to engage. We can stop them getting bored by altering the challenge as, if we can target knowledge deficiency and address that, then we must be able to identify extension areas as well – using the same analytics and response techniques.
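The identify-then-act loop described above can be sketched very simply. This is a hypothetical illustration, not any real learning-analytics system; the topic names, mastery scores and thresholds are all invented:

```python
# Sketch of an additive response to analytics data: per topic, flag
# remediation when mastery is low and extension when it is high,
# instead of issuing a single pass/fail verdict. All data is invented.

REMEDIATE_BELOW = 0.5
EXTEND_ABOVE = 0.85

def plan_for(mastery_by_topic):
    """Return an action per topic rather than a single grade."""
    plan = {}
    for topic, score in mastery_by_topic.items():
        if score < REMEDIATE_BELOW:
            plan[topic] = "build up: targeted remediation"
        elif score > EXTEND_ABOVE:
            plan[topic] = "extend: harder challenge"
        else:
            plan[topic] = "on track"
    return plan

student = {"recursion": 0.35, "loops": 0.92, "testing": 0.70}
for topic, action in plan_for(student).items():
    print(f"{topic}: {action}")
```

The design point is that the output is a plan per area, not a verdict per student – a small failure becomes a target for building up, and excellence becomes a target for extension, using the same mechanism.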

Additive manufacturing is going to change the way the world works because we no longer need to carve out what we want, we can build what we want, on demand, and stop when it’s done, rather than lamenting a big pile of wood shavings that never amounted to a table leg. A constructive educational focus rejects high failure rates as being indicative of missed opportunities to address knowledge deficiencies and focuses on a deep knowledge of the student to help the student to build themselves up. This does not make a course simpler or drop the quality, it merely reduces unnecessary (and uneconomical) wastage. There is as much room for excellence in an additive educational framework – if anything, you should get more out of your high achievers.

We stand at a very interesting point in history. It is time to revisit what we are doing and think about what we can learn from the other changes going on in the world, especially if it is going to lead to better educational results.


Thoughts on the colonising effect of education.

This is going to be longer than usual but these thoughts have been running around in my mind for a while and, rather than break them up, I thought I’d put them all together here. My apologies for the long read but, to help you, here’s the executive summary. Firstly, we’re not going to get anywhere until all of us truly accept that University students are not some sort of different species but that they are actually junior versions of ourselves – not inferior, just less advanced. Secondly, education is heavily colonising but what we often tend to pass on to our students are mechanisms for conformity rather than the important aspects of knowledge, creativity and confidence.

Let me start with some background and look at the primary and secondary schooling system. There is what we often refer to as traditional education: a classroom full of students sitting in rows, writing down the words spoken by the person at the front. Assignments test your ability to learn and repeat the words and apply this in well-defined ways to a set of problems. Then we have progressive education that, depending upon your socio-political alignment and philosophical bent, is either a way of engaging students and teachers in the process for better outcomes, more critical thought and a higher degree of creativity; or it is cats and dogs lying down together, panic in the streets, a descent into radicalism and anarchy. (There is, of course, a middle ground, where the cats and dogs sleep in different spots, in rows, but engage in discussions of Foucault.) Dewey wrote on the tension between these two approaches (seriously, is there anything he didn’t write on?) but, as we know, he was highly opposed to the lining up of students in ranks, like some sort of prison, so let’s examine why.

Simply put, the traditional model is an excellent way to prepare students for factory work but it’s not a great way to prepare them for a job that requires independence or creativity. You sit at your desk, the teacher reads out the instructions, you copy down the instructions, you are assigned piece work to do, you follow the instructions, your work is assessed to determine if it is acceptable, if not, you may have to redo it or it is just rejected. If enough of your work is deemed acceptable, then you are now a successful widget and may take your place in the community. Of course, it will help if your job is very similar to this. However, if your deviation from the norm is towards the unacceptable side then you may not be able to graduate until you conform.

Now, you might be able to argue this on accuracy, were it not for the constraining behavioural overtones in all of this. It’s not about doing the work, it’s about doing the work, quietly, while sitting for long stretches, without complaint and then handing back work that you had no part in defining for someone else to tell you what is acceptable. A pure model of this form cripples independence because there is no scope for independent creation as it must, by definition, deviate and thus be unacceptable.

Progressive models change this. They break up the structure of the classroom, change the way that work is assigned and, in many cases, change the power relationship between student and teacher. The teacher is still authoritative in terms of information but can potentially handle some (controlled for societal reasons) deviation and creativity from their student groups.

The great sad truth of University is that we have a lot more ability to be progressive because we don’t have to worry about many severe behavioural issues: there is enough traditional education going on below these levels (or too few management resources for children in need) that students with severe behavioural issues are highly unlikely to graduate from high school, let alone make it to University with the requisite grades.

But let’s return to the term ‘colonising’, because it is a loaded term. We colonise when we send a group of settlers to a new place and attempt to assert control over it; often implicit in this is the notion that the place we have colonised is now for our own use. Ultimately, those being colonised can fight or they can assimilate. The most likely outcome if the original inhabitants fight is that they are destroyed, if those colonising are technologically superior or greatly outnumber them. Far more likely, and as seen all around the world, is the requirement for the original inhabitants to assimilate to the now dominant colonist culture. Under assimilation, original cultures shrink to accommodate new rules, requirements, and taboos from the colonists.

In the case of education, students come to a University in order to obtain the benefits of the University culture, so they are seeking to be colonised by the rules and values of the University. But it’s very important to realise that any positive colonisation value (and this is a very rare case, it’s worth noting) comes with a large number of negatives. If students come from a non-Western pedagogical tradition, then many requirements at Universities in Australia, the UK and America will be at odds with the way that they have learned previously, whether it’s power distances, collectivism/individualism issues or even the way that work is assigned and assessed. If students come from a highly traditional educational background, then they will struggle if we break up the desks and expect them to be independent and creative. Their previous experiences define their educational culture and we would expect the same tensions between coloniser and colonised as we would see in any such encounter in the past.

I recently purchased a game called “Dog Eat Dog”, which is designed to allow you to explore the difficult power dynamics of the colonist/colonised relationship in the Pacific. Liam Burke, the author, is a second-generation half-Filipino who grew up in Hawaii and he developed the game while thinking about his experiences growing up and drawing on other resources from the local Filipino community.

The game is very simple. You have a number of players. One will play the colonist forces (all of them). Each other player will play a native. How do you select the colonist? Well, it’s a simple question: Which player at the table is the richest?

As you can tell, the game starts in uncomfortable territory and, from that point on, it can be very challenging as the native players try to run small scenarios that the colonist will continually interrupt, redirect and adjudicate to see how well the natives are playing by the colonist’s rules. And the first rule is:

The (Native people) are inferior to the (Occupation people).

After every scenario, more rules are added and the native population can either conform (for which they are rewarded) or deviate (for which they are punished). The colonist actually has the power to kill all the natives in the first turn, should they wish to do so; this happened often enough that Burke left it in the rules. At the end of the game, the colonists may be rebuffed but, in order to do that, the natives have had to become adept at following the rules and this is, of course, at the expense of their own culture.

This is a difficult game to explain in the short form but the PDF is only $10 and I think it’s an important read for just about anyone. It’s a short rule book, with a quick history of Pacific settlement and exemplars, produced from a successful Kickstarter.

Let’s move this into the educational sphere. It would be delightful if I couldn’t say this but, let’s be honest, our entire system is often built upon the premise that:

The students are inferior to the teachers.

Let’s play this out in a traditional model. Every time the students get together in order to do anything, we are there to assess how well they are following the rules. If they behave, they get grades (progress towards graduation). If they don’t conform, then they don’t progress and, because everyone has finite resources, eventually they will drop out, possibly doing something disastrous in the process. (In the original game, the native population can run amok if they are punished too much, which has far too many unpleasant historical precedents.) Every time that we have an encounter with the students, they have to internalise a new rule about how to avoid making the same mistake again. This new rule is one that they’re judged against.

When I realised how close a parallel this was, a very cold shiver went down my spine. But I also realised how much I’d been doing to break out of this system, by treating students as equals with mutual respect, by listening and trying to be more flexible, by interpreting a more rigid pedagogical structure through filters that met everyone’s requirements. But unless I change the system, I am merely one of the “good” overseers on a penal plantation. When the students leave my care, if I know they are being treated badly, I am still culpable.

As I started with, valuing knowledge, accuracy, being productive (in an academic sense), being curious and being creative are all things that we should be passing on from our culture, but these are very hard things to pass on with a punishment/reward modality as they are all cognitive in aspect. What is far easier to pass on is culture such as sitting silently, being bound by late penalties, conformity to the rules and the worst excesses of the Banking model of education (after Freire), where students are empty receiving objects that we, as teachers, fill up. There is no agency in such a model, nor room for creativity. The jug does not choose the liquid that fills it.

It is easy to see examples all around us of the level of disrespect levelled at colonised peoples, from the mindless (and well-repudiated) nonsense spouted in Australian newspapers about Aboriginal people to the racist stereotyping that persists despite the overwhelming evidence of equality between races and genders. It is also as easy to see how badly students can be treated by some staff. When we write off a group of students because they are ‘bad students’ then we have made them part of a group that we don’t respect – and this empowers us to not have to treat them as well as we treat ourselves.

We have to start from the basic premise that our students are at University because they want to be like us, but like the admirable parts of us, not the conformist, factory model, industrial revolution prison aspects. They are junior lawyers, young engineers, apprentice architects when they come to us – they do not have to prove their humanity in order to be treated with respect. However, this does have to be mutual and it’s important to reflect upon the role that we have as a mentor, someone who has greater knowledge in an area and can share it with a more junior associate to bring them up to the same level one day.

If we regard students as being worthy of respect, as being potential peers, then we are more likely to treat them with a respect that engenders a reciprocal relationship. Treat your students like idiots and we all know how that goes.

The colonial mindset is poisonous because of the inherent superiority and because of the value of conformity to imposed rules above the potential to be gained from incorporating new and useful aspects of other cultures. There are many positive aspects of University culture but they can happily coexist with other educational traditions and cultures – the New Zealand higher educational system is making great steps in this direction to be able to respect both Maori tradition and the desire of young people to work in a westernised society without compromising their traditions.

We have to start from the premise that all people are equal, because to do otherwise is to make people unequal. We then must regard our students as ourselves, just younger, less experienced and only slightly less occasionally confused than we were at that age. We must carefully examine how we expose students to our important cultural aspects and decide what is and what is not important. However, if all we turn out at the end of a 3-4 year degree is someone who can perform a better model of piece work and is so heavily intimidated into conformity that they cannot do anything else – then we have failed our students and ourselves.

The game I mentioned, “Dog Eat Dog”, starts with a quote by R. Zamora Linmark from his poem “They Like You Because You Eat Dog”. Linmark is a Filipino American poet, novelist, and playwright, who was educated in Honolulu. His challenging poem talks about the ways that a second-class citizenry are racially classified with positive and negative aspects (the exoticism is balanced against a ‘brutish’ sexuality, for example) but finishes with something that is even more challenging. Even when a native population fully assimilates, it is never enough for the coloniser, because they are still not quite them.

“They like you because you’re a copycat, want to be just like them. They like you because—give it a few more years—you’ll be just like them.
And when that time comes, will they like you more?”

R. Zamora Linmark, “They Like You Because You Eat Dog”, from “Rolling the R’s”

I had a discussion once with a remote colleague who said that he was worried the graduates of his own institution weren’t his first choice to supervise for PhDs as they weren’t good enough. I wonder whose fault he thought that was?


5 Things: Ethics, Morality and Truth

Sometimes the only exposure my students will have to the study of ethics is (sorry, ethical philosophers) me and my “freeze-dried, snap-frozen, instant peas” version of the study of ethical issues. (In the land of the unethical, the mono-principled man is king?)

Tasty, tasty, frozen peas. Hey, is that Diogenes?

Here are a quick five things that loosely summarise my loose summaries.

  1. Ethics, Morals and Truth are Different Things. Morals are a person’s standards of belief concerning acceptable behaviour (we often throw around words like good and bad here). Ethics are the set of moral principles that guide a person’s behaviour or that of a group. Truth is the set of things that are real and factual, or those things that are accepted as true. Does that clear it up? Things that are true can be part of an unethical set of beliefs put together by immoral people. Immoral people can behave ethically quite consistently while still appearing unethical and immoral from the standpoint of your group. Ethics often requires you to start juggling things to work out a best or most consistent course of action, which is a luxury that we generally don’t have with the truth.
  2. Being Good is Not the Same Thing as Trying to Do the Right Thing. Trying to do the right thing is the field where your actions are guided by your ethical principles. Trying to be the best person you can be (Hello, Captain America) is virtue ethics. Both being good and doing the right thing can be guided by rules or by looking at outcomes, but one is concerned with who you are trying to be and the other with what you are trying to do. Yes, this means you can be a total ratbag as long as you behave the right way in the face of every ethical dilemma. (My apologies to any rats with bags.)
  3. You Can Follow Rules Or You Can Aim For The Best Outcome (Or Do Both, Actually). There are two basic breakdowns I’ve mentioned before: one follows rules and by doing that then the outcome doesn’t matter, the other tries to get the best outcome and this excuses any rules you break on the way to your good outcome. Or you can mix them together and hybridise it, even throwing in virtue ethics, which is what we tend to do because very few of us are moral philosophers and most of us are human beings. 🙂
  4. Consistency is Important. If you make decisions one way when it’s you and another way when it’s someone else then there’s a very good chance that you’re not applying a consistent ethical framework, you’re rationalising. (Often referred to as special pleading because you are special and different.) If you treat one group of people one way, and another completely differently, then I think you can guess that your ethics are too heavily biased to actually be considered consistent – or all that ethical.
  5. Questioning Your Existing Frameworks Can Be Very Important. The chances that you managed to get everything right as you moved into adulthood are, really, surprisingly low, especially as most ethical and moral thinking is done in response to situations in your life. However, it’s important to think about how you can change your thinking in a way that forms a sound and consistent basis to build your ethical reasoning upon. This can be very, very challenging, especially when the situation you’re involved in is particularly painful or terrifying.

And that’s it. A rapid, shallow run through a deeply complex and rewarding area that everyone should delve into at some stage in their lives.


The Fragile Student Relationship (working from #Unstuck by Julie Felner @felner)

I was referred some time ago to a great site called “Unstuck”, which has some accompanying iPad software and helps you to think about how to move past those stuck moments in your life and career to get things going. They recently posted an interesting item on “How to work like a human” and I thought that a lot of what they talked about had direct relevance to how we treat students and how we work with them to achieve things. The article is by Julie Felner and I strongly suggest that you read it, but here are my thoughts on her headings, as they apply to education and students.

Ultimately, if we all work together like human beings, we’re going to get on better than if we treat our students as answer machines and they treat us as certification machines. Here’s what optimising for one thing, mechanistically, can get you:

This robot is the business at climbing hills. Dancing like a fool, not so much. It’s not human.

But if we’re going to be human, we need to be connected. Here are some signs that you’re not really connected to your students.

  1. Anything that’s not work gets a one-word response. A student comes to see you and you don’t have time to talk about anything but assignment X or project Y. I realise time is scarce but, if we’re trying to build people, we have to talk to people like people.
  2. You’re impatient when they take time to learn or adjust. Oh yeah, we’ve all done this. How can they not pick it up immediately? What’s wrong with them? Don’t they know I’m busy?
  3. Sleep and food are for the weak – and don’t get sick. There are no human-centred reasons for not getting something done. I’m scheduling all of these activities back-to-back for two months. If you want it, you’ll work for it.
  4. We never ask how the students are doing. By which I mean asking genuinely and drawing out a genuine response, even if some prodding is required. Not intrusively, but out of genuine interest. How are they doing with this course?
  5. We shut them down. Here’s the criticism. No, I don’t care about the response. No, that’s it. We’re done. End of discussion. There are times when we do have to draw an end to a discussion but there’s a big difference between closing off something that’s going nowhere and delivering everything as if no discussion is possible.

Here is my take on Julie’s suggestions for how we can be more human at work, which works for the Higher Ed community just as well.

  1. Treat every relationship as one that matters. The squeaky wheels and the high achievers get a lot of our time but all of our students are actually entitled to have the same level of relationship with us. Is it easy to get that balance? No. Is it a worthwhile goal? Yes.
  2. Generously and regularly express your gratitude. When students do something well, we should let them know, as soon as possible. I regularly thank my students for good attendance, handing things in on time, making good contributions and doing the prep work. Yes, they should be doing it, but let’s not get into how many things that should be done aren’t done. I believe in this strongly and it’s one of the easiest things to start doing straight away.
  3. Don’t be too rigid about your interactions. We all have time issues, but maybe you can talk to students when you pass them in the corridor, if both of you have time. If someone’s been trying to see you, can you grab them from a work area or make a few minutes before or after a lecture? Can you talk with them over lunch if you’re both really pressed for time? It’s one thing to have consulting hours but it’s another to make yourself totally unavailable outside of that time. When students are seeking help, that’s when they need it most. Always convenient? No. Always impossible to manage? No. Probably useful? Yes.
  4. Don’t pretend to be perfect. Firstly, students generally know when you’re lying to them and especially when you’re fudging your answers. Don’t know the answer? Let them know, look it up and respond when you do. Don’t know much about the course itself? Well, finding out before you start teaching is a really good idea because otherwise you’re going to be saying “I don’t know a lot” and there’s a big, big gap between showing your humanity and obviously not caring about your teaching. Fix problems when they arise and don’t try to make it appear that it wasn’t a problem. Be as honest as you can about that in your particular circumstances (some teaching environments have more disciplinary implications than others and I do get that).
  5. Make fewer assumptions about your students and ask more questions. The demographics of our student body have shifted. More of my students are in part-time or full-time work. More are older. More are married. Not all of them have gone through a particular elective path. Not every previous course contains the same materials it did 10 years ago. Every time a colleague starts a sentence with “I would have thought” or “Surely”, they are (almost always) projecting their assumptions onto the student body, rather than asking “Have you?”, “Did you?” or “Do you know?”

Julie made the final point that sometimes we can’t get things done to the deadline. In her words:

You sometimes have to sacrifice a deadline in order to preserve something far more important — a relationship, a person’s well-being, the quality of the work

I completely agree, because deadlines are a tool and, particularly in academia, the deadline is rarely as important as the people. If our goal is to provide a good learning environment, working our students to zombie status because “that’s what happened to us” borders on a cycle of abuse rather than a commitment to quality of education.

We all want to be human with our students because that’s how we’re most likely to get them to engage with us as humans too! I liked this article and I hope you enjoyed my take on it. Thank you, Julie Felner!


When Does Collaborative Work Fall Into This Trap?

A recent study has shown that crowdsourcing activities are prone to bringing out participants’ worst competitive instincts.

“[T]he openness makes crowdsourcing solutions vulnerable to malicious behaviour of other interested parties,” said one of the study’s authors, Victor Naroditskiy from the University of Southampton, in a release on the study. “Malicious behaviour can take many forms, ranging from sabotaging problem progress to submitting misinformation. This comes to the front in crowdsourcing contests where a single winner takes the prize.” (emphasis mine)

You can read more about it here but it’s not a pretty story. Looks like a pretty good reason to be very careful about how we construct competitive challenges in the classroom!

We both want to build this but I WILL DO IT WITH YOUR BONES!