The Big Picture and the Drug of Easy Understanding: Part I
Posted: July 9, 2012 Filed under: Education, Opinion | Tags: authenticity, education, educational problem, feedback, fiero, games, Generation Why, higher education, measurement, principles of design, reflection, student perspective, teaching, teaching approaches, thinking, tools, universal principles of design

There is a tendency to place artistic works such as films and books inside a larger frame. It’s hard to find a fantasy novel that isn’t “Book 1 of the Mallomarion Epistemology Cycle” or a certain type of mainstream film that doesn’t relate to a previous film (as II, III or higher) or as a re-interpretation of a film in the face of another canon (the re-re-reboot cycle). There are still independent artistic endeavours within this, certainly, but there is also a strong temptation to assess something’s critical success and then go on to make another version of it, in an attempt to make more money. Some things were always multi-part entities in the planning and early stages (such as the Lord of the Rings books and hence movies), some had multiplicity thrust upon them after unlikely success (yes, Star Wars, I’m looking at you, although you are strangely similar to Hidden Fortress so you aren’t even the start point of the cycle).
From a commercial viewpoint, selling something that only sells itself is nowhere near as interesting as selling something that draws you into a consumption cycle. This does, however, have a nasty habit of affecting the underlying works. You only have to look at the relative length of the Harry Potter books, and the quality of editing contained within, to realise that Rowling reached a point where people stopped cutting her books down – even if that led to chapters of aimless meandering in a tent in later books. Books one to three are, to me, far, far better than the later ones, where commercial influence, the desire to have a blockbuster and the pressure of producing works that would continue to bring in more consumers and potentially transfer better to the screen made some (at least for me) detrimental changes to the work.
This is the lure of the Big Picture – that we can place everything inside a grand plan, a scheme laid out from the beginning, and it will validate everything that has gone before, while including everything that is yet to come. Thus, all answers will be given, our confusion will turn to understanding and we will get that nice warm feeling from wrapping everything up. In many respects, however, the number of things that are actually developed within a frame like this, and remain consistent, is very small. Stephen King experimented with serial writing (short instalments released regularly) for a while, including the original version of “The Green Mile”. He is a very talented and experienced writer and he still found that he had made some errors in already published instalments that he had to either ignore or correct in later instalments. Although he had a clear plan for the work, he introduced errors to public view and he discovered them in later, fuller drafts of the writing. He makes a note in the book edition of The Green Mile that one of the most obvious, to him, was having someone scratch their nose with their hand while in a straitjacket. Not having all of the work to look at leaves you open to these kinds of errors, even where you do have a plan, unless you have implemented everything fully before you deploy it.
So it’s no surprise that we’re utterly confused by the prequels to Star Wars, because (despite Lucas’ protestations), it is obvious that there was not even a detailed sketch of what would happen. The same can be said of the series “Lost” where any consistency that was able to be salvaged from it was a happy accident, as the writers had no idea what half of the early things actually were – it just seemed cool. And, as far as I’m concerned, there is no movie called Highlander 2.
(I should note that this post is Part 1 of 2, but I am writing both parts side by side, to try and prevent myself from depending in Part 2 upon something that I got wrong in Part 1.)
To take this into an educational space, it is tempting to try and construct learning from a sequence of high-reward moments of understanding. Our students are both delighted and delightful when they “get” something – it’s a joy to behold and one of the great rewards of the teacher. But, much like watching TED talks every day won’t turn you into a genius, it is the total construction of the learning experience that provides something that is consistent throughout and does not have to endure any unexpected reversals or contradictions later on. We don’t have a commercial focus here to hook the students. Instead, we want to keep them going throughout the necessary, but occasionally less exciting, foundation work that will build them up to the point where they are ready to go, in Martin Gardner’s words, “A-ha!”
My problem arises if I teach something that, when I develop a later part of the course, turns out to not provide a complete basis, reinterprets the work in a way that doesn’t support a later point or places an emphasis upon the wrong aspect. Perhaps we are just making the students look at the wrong thing, only to realise later that had we looked at the details, rather than our overall plan, we would have noticed this error. But, now, it is too late and the wrong message is out there.
This is one of the problems of gamification, as I’ve referred to previously, in that we focus on the drug of understanding as a fiero (fierce joy) moment to the exclusion of the actual education experience that the game and reward elements should be reinforcing. This is one of the problems of stating that something is within a structure when it isn’t: any coincidence of aims or correlation of activities is a happy accident, serendipity rather than strategy.
In tomorrow’s post, I’ll discuss some more aspects of this and the implications that I believe it has for all of us as educators.
How Do We Recognise Mastery? What Is My Masterpiece?
Posted: July 8, 2012 Filed under: Education | Tags: education, higher education, identity, in the student's head, journeyman, mastery, reflection, resources, student perspective, teaching approaches, thinking, tools, universal principles of design

A few posts ago, and my goodness that’s a lot of words, I posted on issues of identity and examined the PhD in the light of it being a journeyman qualification, one that indicates the end of an apprenticeship and a readiness to go out into the world. That, however, is only half of the overall story of the apprentice, because there is a level above journeyman and that is, in all of its gendered glory, “master”. In the world of the trade and craft guilds, the designation of Mastery was only given when a journeyman applied to the guild and provided a piece of work that demonstrated their mastery of the appropriate craft. These works, if accepted, paved the way for journeyman to become Master, to become capable of training more apprentices and retaining their own journeymen, and were referred to as “Masterpieces”.
We use the term a bit more loosely these days, especially when coupled with the word “theatre”, but the sense remains. A Masterpiece is a piece of work that demonstrates your mastery of the craft and any sensible group of experts within your discipline would recognise it as such and declare you worthy to join them.
On reflection, after my last post on identity, I realised that I had placed the PhD into a very specific place, based on the PhD culture of my own discipline and my own experience. There are people who work their way up through a discipline for years, advancing steadily through their craft via diploma, recognition of prior learning and, eventually, degree. Finally, having functioned as practitioner, they move into the academy in order to make their definitive contribution, and it is as practitioner-academics that they create their final thesis which, in some regard, has more than a hint of the mastery of the craft about it and is far more likely to be a masterpiece than, say, my three-year musing on big systems and XML. I regard myself more as an academic-practitioner: while I have previous knowledge, my research work began afresh, and my PhD formed the basis of my qualification for entry into the profession of academic (journeyman), rather than the condensation of my life’s contribution as a practitioner, placed within the academic sphere to change teaching, research and policy (masterpiece).
However, this really doesn’t clear the issue up at all; all it does is emphasise that it is the recognition of the masterpiece that determines one’s mastery, which in turn requires that we have strong “guilds”, or their equivalent, that can clearly state when something has been produced to a level that meets this particular skill barrier.
In terms of supervising other PhD students, I can do that now but, until my first student completes successfully (fingers crossed for December), I cannot be a principal supervisor. I am apprenticed, again, in effect, until I have demonstrated sufficient mastery. So my PhD qualification is, again, rendered at the journeyman level. If I still had my network certifications from my previous life, I could instruct people in networking within certain corporate frameworks, but I (again) only had journeyman qualifications here. I have a friend who has achieved mastery in the networking discipline and the difference in our skill levels is amazing but, rather sadly, he has no masterpiece to show for his efforts. He worked to solve some difficult problems, and sat some very hard exams, and, provided that he repeats this performance every two years, he will make lots of money doing interesting things involving networks. There is not, however, a single artefact of his that he can point to which asserts that, from that point on, he had mastery of a certain set of skills.
And this is very much the way of modern mastery. Why does my friend have to resit his exams? Because things are changing very quickly these days and, because of the Internet, we can propagate those changes almost immediately. A master craftsman of the 17th Century would learn new techniques, certainly, but having achieved mastery, he would enjoy maybe 20-30 more years of relatively low change until he died of some unspeakable disease or a falling giraffe. These days, while master craftsmen certainly exist and are recognised as such, in many scientific disciplines we tend to award this towards the end of someone’s life, at a time when their practical life is relatively close to over, and I wonder if that is to stop the embarrassment of a recognised master who knows nothing about what has happened in the field because it has all moved on.
How do we recognise mastery in science, literature or academia? Well, there are significant Fellowships (the Royal Society springs to mind), important prizes (the Nobel, the Pulitzer) and awards (the Turing and the like). Of course, there is one award that recognises early achievement, the Fields Medal in mathematics, which may only be awarded to someone who is not yet 40, specifically to try and encourage the recipients to go further and do more. A lot of these awards and prizes, however, allow the luxury of a Masterpiece, especially those awards which are given for a specific piece of work. But which of J. M. Coetzee’s works was the definitive masterpiece that granted him the Nobel in Literature, the one that tipped the balance? Where is the specific masterpiece that I can pass to other guild members (not that I am one) and admire, wish that I had created, and learn from? Even where we have the books, we still don’t have a clear notion of what we are looking at. (I realise that Coetzee’s skills were clearly identified in the award, as well as his focus, and I am certainly not disputing the validity – but which is the book I give to someone to explain why he is a master?)
It is much harder to see where we give our students the ability to produce master works of any kind, even within our capstone courses. The works produced under capstone are more likely to be fit-for-purpose, complete but unremarkable, and therefore fit to judge for the end of apprenticeship, but no further. If they then progress to Honours, Masters or PhD, they do not so much have an opportunity to produce a masterpiece as conduct an apprenticeship for a new trade. (This varies by profession and intent. I can quite happily see that a PhD in Creative Writing has a masterpiece component attached to it, whereas a PhD in other disciplines may not.)
But, given that the international recognition of mastery is in a highly refined atmosphere and can, at most, accommodate a very small number of people, how do we even recognise those few masterpieces that will occur outside of the defining masterworks of a generation? For me, as a personal reflection, I am coming to terms with the fact that any masterpiece that I do produce, a work of great import or even a student (in some respects) that goes on to change the world, may have a very short shelf-life compared to other crafts. I also have to accept that the guild that accepts it as master work may never even contact me to tell me what they think – I’ll just have to watch my citation index go up and use it to get myself promoted.
I don’t have a complete answer to this, and I know that there’s a lot more thinking to do, but are we looking at the end of masterpieces or do we just have to adopt a different lens for seeing them, as well as a different group for judging them?
When the Stakes are High, the Tests Had Better Be Up to It.
Posted: July 2, 2012 Filed under: Education, Opinion | Tags: advocacy, authenticity, blogging, curriculum, design, education, educational problem, ethics, feedback, Generation Why, higher education, identity, plagiarism, reflection, resources, student perspective, teaching, teaching approaches, testing, thinking, universal principles of design, work/life balance

(This is on the stronger opinion side but, in the case of standardised testing as it is currently practised, this will be a polarising issue. Please feel free to read the next article and not this one.)

If you make a mistake, please erase everything from the worksheet, and then leave the room, as you have just wasted 12 years of education.
A friend on FB (thanks, Julie!) linked me to an article in the Washington Post that some of you may have seen. The article is called “The Complete List of Problems with High-Stakes Standardised Tests” by Marion Brady, in the words of the article, a “teacher, administrator, curriculum designer and author”. (That’s attribution, not scare quotes.)
Brady provides a (rather long but highly interesting) list of problems with the now very widespread standardised testing regime that is an integral part of student assessment in some countries. Here, Brady focuses on the US but there is little doubt that the same problems would exist in other areas. From my readings and discussions with US teachers, he is discussing issues that are well-known problems in the area, but they are slightly intimidating when presented as a block.
So many problems are covered here, from an incorrect focus on simplistic repetition of knowledge because it’s easier to assess, to the way that it encourages extrinsic motivations (bribery or punishment in the simplest form), to the focus on test providers as the stewards and guides of knowledge rather than the teachers. There are some key problems, and phrases, that I found most disturbing, and I quote some of them here:
[Teachers oppose the tests because they]
“unfairly advantage those who can afford test prep; hide problems created by margin-of-error computations in scoring; penalize test-takers who think in non-standard ways”
“wrongly assume that what the young will need to know in the future is already known; emphasize minimum achievement to the neglect of maximum performance; create unreasonable pressures to cheat.”
“are open to massive scoring errors with life-changing consequences”
“because they provide minimal to no useful feedback”
This is completely at odds with what we would consider to be reasonable education practice in any other area. If I had comments from students that identified that I was practising 10% of this, I would be having a most interesting discussion with my Head of School concerning what I was doing – and a carpeting would be completely fair! This isn’t how we should teach and we know it.
I spoke yesterday about an assault on critical thinking as being an assault on our civilisation, short-sightedly stabbing away at helping people to think as if it will really achieve what (those trying to undermine critical thinking) actually wanted. I don’t think that anyone can actually permanently stop information spreading, when that information can be observed in the natural world, but short-sightedness, malign manipulation of the truth and ignorance can certainly prevent individuals from gaining access to information – especially if we are peddling the lie that “everything which needs to be discovered is already known.”
We can, we have and we probably (I hope) always will work around these obstacles in information, these dark ages as I referred to them yesterday, but at what cost of the great minds who cannot be applied to important problems because they were born to poor families, in the ‘wrong’ state, in a district with no budget for schools, or had to compete against a system that never encouraged them to actually think?
The child who would have developed free safe power, starship drives, applicable zero-inflation stable economic models, or the “cure for cancer” may be sitting at the back of a poorly maintained, un-airconditioned, classroom somewhere, doodling away, and slowly drifting from us. When he or she encounters the standardised test, unprepared, untrained, and tries to answer it to the extent of his or her prodigious intellect, what will happen? Are you sufficiently happy with the system that you think that this child will receive a fair hearing?
We know that students learn from us, in every way. If we teach something in one way but we reward them for doing something else in a test, is it any surprise that they learn for the test and come to distrust what we talk about outside of these tests? I loathe the question “will this be in the exam” as much as the next teacher but, of course, if that is how we have prioritised learning and rewarded the student, then they would be foolish not to ask this question. If the standardised test is the one that decides your future, then, without doubt, this is the one that you must set as your goal, whether student, teacher, district or state!
Of course, it is the future of the child that is most threatened by all of this, as well as the future of the teaching profession. Poor results on a standardised test for a student may mean significantly reduced opportunity, and reduced opportunity, unless your redemptive mechanisms are first class, means limited pathways into the future. The most insidious thread through all of this is the idea that a standardised test can be easily manipulated through a strategy of learning what the answer to a test question should be, rather than what it is within the body of knowledge. We now have the disadvantaged student, whose future is being restricted, competing against the privileged student, who has been heavily channelled into a mode that allows them to excel artificially, with no guarantee that they have the requisite aptitude to enjoy or take advantage of the increased opportunities. This means that both groups are equally in trouble, as far as realising their ambitions goes, because one cannot even see the opportunity while the other may have no real means of transforming opportunity into achievement.
The desire to control the world, to change the perception of inconvenient facts, to avoid hard questions, to never be challenged – all of these desires appear to be on the rise. This is the desire to make the world bend to our will, the real world’s actual composition and nature apparently not mattering much. It always helps me to remember that Cnut stood in the waves and commanded them not to come in order to prove that he could not control the waves – many people think that Cnut was defeated in his arrogance, when he was attempting to demonstrate his mortality and humility, in the face of his courtiers telling him that he had power above that of mortal men.
How unsurprising that so many people misrepresent this.
Who Knew That the Slippery Slope Was Real?
Posted: June 26, 2012 Filed under: Education, Opinion | Tags: advocacy, blogging, curriculum, design, education, educational problem, feedback, higher education, in the student's head, motivation, plagiarism, resources, student perspective, thinking, time banking, tools, universal principles of design

Take a look at this picture.
One thing you might have noticed, if you’ve looked carefully, is that this man appears to have had some reconstructive surgery on the right side of his face and there is a colour difference, which is slightly accentuated by the lack of beard stubble. What if I were to tell you that this man was offered the chance to have fake stubble tattooed onto that section and, when he declined because he felt strange about it, received a higher level of pressure and, in his words, guilt trip than for any other procedure during the extensive time he spent in hospital receiving skin grafts and burn treatments. Why was the doctor pressuring him?
Because he had already performed the tattooing remediation on two people and needed a third for the paper. In Dan’s words, again, the doctor was a fantastic physician, thoughtful and caring, but he had a conflict of interest that moved him to a different mode of behaviour. For me, I had to look a couple of times because the asymmetry that the doctor referred to is not that apparent at first glance. Yet the doctor felt compelled, by interests that were not Dan’s, to make Dan self-conscious about the perceived problem.
A friend on Facebook (thanks, Bill!) posted a link to an excellent article in Wired, entitled “Why We Lie, Cheat, Go to Prison and Eat Chocolate Cake” by Dan Ariely, the man pictured above. Dan is a professor of behavioural economics and psychology at Duke and his new book explores the reasons that we lie to each other. I was interested in this because I’m always looking for explanations of student behaviour and I want to understand their motivations. I know that my students will rationalise and do some strange things but, if I’m forewarned, maybe I can construct activities and courses in a way that heads this off at the pass.
There were several points of interest to me. The first was the question whether a cost/benefit analysis of dishonesty – do something bad, go to prison – actually has the effect that we intend. As Ariely points out, if you talk to the people who got caught, the long-term outcome of their actions was never something that they thought about. He also discusses the notion of someone taking small steps, a little each time, that move them from law abiding, for want of a better word, to dishonest. Rather than set out to do bad things in one giant leap, people tend to take small steps, rationalising each one, and after each step opening up a range of darker and darker options.
Welcome to the slippery slope – beloved argument of rubicund conservative politicians since time immemorial. Except that, in this case, it appears that the slope is piecewise composed of tiny little steps. Yes, each step requires a decision, so there isn’t the momentum that we commonly associate with the slope, but each step, in some sense, takes you further and further from the honest place from which you started.
Ariely discusses an experiment where he gave two groups designer sunglasses and told one group that they had the real thing, and the other that they had fakes, and then asked them to complete a test and then gave them a chance to cheat. The people who had been randomly assigned into the ‘fake sunglasses’ group cheated more than the others. Now there are many possible reasons for this. One of them is the idea that if you know that you are signalling your status deceptively to the world, which is Ariely’s argument, you are in a mindset where you have taken a step towards dishonesty. Cheating a little more is an easier step. I can see many interpretations of this, because of the nature of the cheating, which is in reporting how many questions you completed on the test, where self-esteem issues caused by being in the ‘fake’ group may lead to you over-promoting yourself in the reporting of your success on the quiz – but it’s still cheating. Ultimately, whatever is motivating people to take that step, the step appears to be easier if you are already inside the dishonest space, even to a degree.
[Note: Previous paragraph was edited slightly after initial publication due to terrible auto-correcting slipping by me. Thanks, Gary!]
Where does something like copying software or illicitly downloading music come into this? Does this constant reminder of your small, well-rationalised, step into low-level lawlessness have any impact on the other decisions that you make? It’s an interesting question because, according to the outline in Ariely’s sunglasses experiment, we would expect it to be more of a problem if the products became part of your projected image. We know that having developed a systematic technological solution for downloading is the first hurdle in terms of achieving downloads, but is it also the first hurdle in making steadily less legitimate decisions? I actually have no idea but would be very interested to see some research in this area. I feel it’s too glib to assume a relationship, because it is such a ‘slippery slope’ argument, but Ariely’s work now makes me wonder. Is it possible that, after downloading enough music or software, you could actually rationalise the theft of a car? Especially if you were only ‘borrowing’ it? (Personally, I doubt it because I think that there are several steps in between.) I don’t have a stake in this fight – I have a personal code for behaviour in this sphere that I can live with – but I see some benefits in asking and trying to answer these questions from something other than personal experience.
Returning to the article, of particular interest to me was the discussion of an honour code, such as Princeton’s, where students sign a pledge. Ariely sees its benefit as a reminder that stays active for some time but that, ultimately, would have little value over several years because, as we’ve already discussed, people rationalise in small increments over the short term rather than constructing long-term models where the pledge would make a difference. Sign a pledge at the start of 2012 and it may have no impact on you by the middle of 2012, let alone at the end of 2015 when you’re trying to graduate. Potentially, at almost any cost.
In terms of ongoing reminders, and a signature on a piece of work saying (in effect) “I didn’t cheat”, Ariely asks what happens if you have to sign the honour clause after you’ve finished a test – well, if you’ve finished then any cheating has already occurred so the honour clause is useless then. If you remind people at the start of every assignment, every test, and get them to pledge at the beginning then this should have an impact – a halo effect to an extent, or a reminder of expectation that will make it harder for you to rationalise your dishonesty.
In our school we have an electronic submission system that students are required to use to submit their assignments. It has boilerplate ‘anti-plagiarism’ text and you must accept the conditions to submit. However, this is your final act before submission and you have already finished the code, which falls immediately into the trap mentioned in the previous paragraph. Dan Ariely’s answers have made me think about how we can change this to make it more of an upfront reminder, rather than an ‘after the fact – oh, it may be too late now’ auto-accept at the end of the activity. And, yes, reminder structures and behaviour modifiers in time banking are also being reviewed and added in the light of these new ideas.
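As a sketch only (this is not our actual submission system; every name here is invented for illustration), the change amounts to making the pledge the gateway to the assignment itself, rather than a checkbox at the end:

```python
# Hypothetical workflow sketch: the honesty pledge gates access to the
# assignment spec, so it is signed before any work begins rather than
# after the code is already finished. All names are invented.

class Assignment:
    def __init__(self, title):
        self.title = title
        self.pledged = set()      # students who signed before starting
        self.submissions = {}

    def open_for(self, student):
        # Signing the pledge is what releases the spec to the student --
        # the reminder arrives while cheating is still preventable.
        self.pledged.add(student)
        return f"Spec for {self.title} released to {student}"

    def submit(self, student, work):
        # By contrast, a pledge checked only here would arrive after
        # any dishonesty had already happened.
        if student not in self.pledged:
            raise PermissionError("pledge must be signed before work begins")
        self.submissions[student] = work


prac = Assignment("Prac 3 (hypothetical)")
prac.open_for("a1234567")
prac.submit("a1234567", "solution.zip")
```

The design choice is simply about ordering: the same acceptance text, moved from the last click to the first, is the version Ariely's findings suggest would actually change behaviour.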
The Wired Q&A is very interesting and covers a lot of ground but, realistically, I think I have to go and buy Dan Ariely’s book(s), prepare myself for some harsh reflection and thought, and plan for a long weekend of reading.
Time Banking: Aiming for the 40 hour week.
Posted: June 24, 2012 Filed under: Education | Tags: education, educational problem, higher education, in the student's head, learning, measurement, MIKE, principles of design, resources, student perspective, teaching, teaching approaches, time banking, tools, universal principles of design, work/life balance

I was reading an article on metafilter on the perception of future leisure from earlier last century and one of the commenters linked to a great article on “Why Crunch Mode Doesn’t Work: Six Lessons” via the International Game Designers Association. This article was partially in response to the quality of life discussions that ensued after ea_spouse outed the lifestyle (LiveJournal link) caused by her spouse’s ludicrous hours working for Electronic Arts, a game company. One of the key quotes from ea_spouse was this:
Now, it seems, is the “real” crunch, the one that the producers of this title so wisely prepared their team for by running them into the ground ahead of time. The current mandatory hours are 9am to 10pm — seven days a week — with the occasional Saturday evening off for good behavior (at 6:30pm). This averages out to an eighty-five hour work week. Complaints that these once more extended hours combined with the team’s existing fatigue would result in a greater number of mistakes made and an even greater amount of wasted energy were ignored.
This is an incredible workload and, as Evan Robinson notes in the “Crunch Mode” article, this is not only incredible but it’s downright stupid because every serious investigation into the effect of working more than 40 hours a week, for extended periods, and for reducing sleep and accumulating sleep deficit has come to the same conclusion: hours worked after a certain point are not just worthless, they reduce worth from hours already worked.
Robinson cites studies and practices from industrialists such as Henry Ford, who reduced shift length to a 40-hour work week in 1926, attracting huge criticism, because 12 years of research had shown that the shorter work week meant more output, not less. These studies have been going on since the 18th century and well into the 1960s at least, and they all show the same thing: working eight hours a day, five days a week gives you more productivity because you get fewer mistakes, you get less fatigue accumulation and you have workers who are producing during their optimal production times (the first 4-6 hours of work) without sliding into their negatively productive zones.
As Robinson notes, the games industry doesn’t seem to have got the memo. The crunch is a common feature in many software production facilities and the ability to work such back-breaking and soul-destroying shifts is often seen as a badge of honour or mark of toughness. The fact that you can get fired for having the audacity to try and work otherwise also helps a great deal in motivating people to adopt the strategy.
Why spend so many hours in the office? Remember when I said that it’s sometimes hard for people to see what I’m doing because, when I’m thinking or planning, I can look like I’m sitting in the office doing nothing? Imagine what it looks like if, two weeks before a big deadline, someone walks into the office at 5:30pm and everyone’s gone home. What does this look like? Because of our conditioning, which I’ll talk about shortly, it looks like we’ve all decided to put our lives before the work – it looks like less than total commitment.
As a manager, if you can tell everyone above you that you have people at their desks 80+ hours a week, and will have for the next three months, then you’re saying “this work is important and we couldn’t be doing any more.” The fact that people were probably only useful for the first 6 hours of every day, and even then only for the first couple of months, doesn’t matter, because it’s hard to see what someone is doing if all you focus on is the output. Those 80+ hour weeks are probably only now necessary because everyone is so tired, so overworked and so cognitively impaired that they are taking four times as long to achieve anything.
Yes, that’s right. All the evidence says that after more than two months of overtime, you would have been better off staying at 40 hours/week in terms of measurable output and quality of productivity.
Robinson lists six lessons, which I’ll summarise here because I want to talk about them in terms of students and why forward planning for assignments is good practice for smoother time management in the future. Here are the six lessons:
- Productivity varies over the course of the workday, with greatest productivity in the first 4-6 hours. After enough hours, you become unproductive and, eventually, destructive in terms of your output.
- Productivity is hard to quantify for knowledge workers.
- Five-day weeks of eight-hour days maximise long-term output in every industry that has been studied in the past century.
- At 60 hours per week, the loss of productivity caused by working longer hours overwhelms the extra hours worked within a couple of months.
- Continuous work reduces cognitive function 25% for every 24 hours. Multiple consecutive overnighters have a severe cumulative effect.
- Error rates climb with hours worked and especially with loss of sleep.
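Treating the fourth and fifth lessons as simple arithmetic (a toy model of my own, not Robinson’s – I’m assuming the 25% loss compounds per 24 hours of continuous work):

```python
# Toy model: cognitive function falls ~25% for every 24 hours of continuous work.
# The compounding interpretation is my own illustrative assumption.

def cognitive_function(hours_awake: float) -> float:
    """Fraction of baseline cognitive function remaining after continuous work."""
    return 0.75 ** (hours_awake / 24)

for nights in (1, 2, 3):
    remaining = cognitive_function(nights * 24)
    print(f"After {nights} consecutive all-nighter(s): {remaining:.0%} of baseline")
```

On this crude model, two overnighters leave you at just over half of baseline – consistent with the ‘worthless hours’ conclusion above.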
My students have approximately 40 hours of assigned work a week, consisting of contact time and assignments, but many of them never really think about that. Most plan other things around their ‘free time’ (they may need to work, they may play in a band, they may be looking after families or they may have an active social life) and they fit the assignment work and other study into the gaps that are left. Immediately, they will be over the 40-hour mark for work. If they have a part-time job, the three months of one of my semesters will, if not managed correctly, give them a lumpy time schedule alternating between some work and far too much work.
Many of my students don’t know how they are spending their time. They switch on the computer, look at the assignment, Skype, browse, try something, compile, walk away, grab a bite, web surf, try something else – wow, three hours of programming! This assignment is really hard! That’s not all of them but it’s enough of them that we spend time on process awareness: working out what you do so you know how to improve it.
Many of my students see sports drinks, energy drinks and caffeine as a licence to not sleep. It doesn’t work long term, as most of us know, for exactly the reasons that long-term overwork and sleeplessness don’t work. Stimulants can keep you awake but you will still be carrying most, if not all, of your cognitive impairment.
Finally, and most importantly, enough of my students don’t realise that everything I’ve said up until now means that they are trying to sit my course with half a brain after about the halfway point – or sooner, if they didn’t rest much between semesters.
I’ve talked about the theoretical basis for time banking and the pedagogical basis for time banking: this is the industrial basis for time banking. One day, I hope, at least some of my students will be running parts of their industries, and we will have taught them enough about sensible time management and work/life balance that, as people in control of a company, they look at real measures of productivity, look at the masses of data supporting sensible ongoing work rates, and champion and adopt these practices.
As Robinson says towards the end of the article:
Managers decide to crunch because they want to be able to tell their bosses “I did everything I could.” They crunch because they value the butts in the chairs more than the brains creating games. They crunch because they haven’t really thought about the job being done or the people doing it. They crunch because they have learned only the importance of appearing to do their best instead of really doing their best. And they crunch because, back when they were programmers or artists or testers or assistant producers or associate producers, that was the way they were taught to get things done. (Emphasis mine.)
If my students can see all of their requirements ahead of time, know what is expected, have been given enough process awareness, and have the will and the skill to undertake the activities, then we can potentially teach them a better way to get things done if we focus on time management in a self-regulated framework, rather than imposed deadlines in a rigid authority-based framework. Of course, I still have a lot of work to do to demonstrate that this will work but, from industrial experience, we have yet another very good reason to try.
Flow, Happiness and the Pursuit of Significance
Posted: June 22, 2012 Filed under: Education | Tags: Csíkszentmihályi, curriculum, education, educational research, flow, higher education, learning, measurement, MIKE, reflection, resources, student perspective, teaching, teaching approaches, time banking, tools, universal principles of design, vygotsky, Zone of proximal development Leave a comment

I’ve just been reading Deirdre McCloskey’s article on “Happyism” in The New Republic. While there are a number of points I could pick at in the article (I question her specific example of statistical significance, and I think she’s oversimplified a number of the philosophical points), there are a lot of interesting thoughts and arguments within it.
One of my challenges in connecting with my students is that of making them understand what the benefit is to them of adopting, or accepting, suggestions from me as to how to become better as discipline practitioners, as students and, to some extent, as people. It would be nice if doing the right thing in this regard could give the students a tangible and measurable benefit that they could accumulate on some sort of meter – I have performed well, my “success” meter has gone up by three units. As McCloskey points out, this effectively requires us to have a meter for something that we could call happiness, but it is then tied directly to events that give us pleasure, rather than a sequence of events that could give us happiness. Workflows (chains of actions that lead to an eventual outcome) can be assessed for accuracy and then the outcome measured, but it is only when the workflow is complete that we can assess the ‘success’ of the workflow and then derive pleasure, and hence happiness, from the completion of the workflow. Yes, we can compose a workflow from sub-workflows but we will hit the same problem if we focus on an outcome-based model – at some stage, we are likely to be carrying out an action that can lead to an event from which we can derive a notion of success, but this requires us to be foresighted and see the events as a chain that results in this outcome.
And this is very hard to meter and display in a way that says anything other than “Keep going!” Unsurprisingly, this is not really the best way to provide useful feedback, reward or fodder for self-actualisation.
I have a standing joke that, as a runner, I go to a sports doctor because if I go to a General Practitioner and say “My leg hurts after I run”, the GP will just say “Stop running.” I am enough of a doctor to say that to myself – so I seek someone who is trained to deal with my specific problems and who can give me a range of feedback that may include “stop running” because my injuries are serious or chronic, but can provide me with far more useful information from which I can make an informed choice. The happiness meter must be able to work with workflow in some way that is useful – keep going is not enough. We therefore need to look at the happiness meter.
McCloskey identifies Bentham, the founder of utilitarianism, as the original “pleasure meter” proponent, and implicitly criticises his felicific calculus as subverting our assessment of “happiness units” (utils) into a form that assumes both that we can reasonably compare utils between different people and that we can assemble all of our life’s experiences in a meaningful way in terms of utils in the first place!
To address the issue of workflow itself, McCloskey refers to the work of Mihály Csíkszentmihályi on flow: “the absorption in a task just within our competence”. I have talked about this before, in terms of Vygotsky’s zone of proximal development and the use of a group to assist people who are just outside of the zone of flow. The string of activities can now be measured in terms of satisfaction or immersion, as well as the outcomes of this process. Of course, we have the outcomes of the process in terms of direct products and we have outcomes in terms of personal achievement at producing those products. Which of these go onto the util meter, given that they are utterly self-assessed, subjective and, arguably, orthogonal in some cases? (If you have ever done your best, been proud of what you did, but failed in your objective, you know what I’m talking about.)
My reading of McCloskey is probably a little generous because I find her overall argument appealing. I believe that her argument may be distilled to two points:
- If we are going to measure, we must measure sensibly and be very clear in our context and the interpretation of significance.
- If we are going to base any activity on our measurement, then the activity we create or change must be related to the field of measurement.
Looking at the student experience in this light, asking students if they are happy with something is, ultimately, a pointless activity unless I either provide well-defined training in my measurement system and scale, or I am looking for a measurement of better or worse. This is confounded by simple cognitive biases including, but not limited to, the Hawthorne Effect and confirmation bias. However, measuring what my students are doing, as Csíkszentmihályi did in the flow experiments, will show me if they are so engaged with their activities that they are staying in the flow zone. Similarly, looking at participation and measuring outputs in collaborative activities where I would expect the zone of proximal development to be in effect is going to be far more revealing than asking students if they liked something or not.
As McCloskey discusses, there is a point at which we don’t seem to get any happier, but it is very hard to tell whether this is a fault in our measurement and in our presumption of a three-point non-interval scale; the discussion then often degenerates into a form of intellectual snobbery that, unsurprisingly, favours the elites who will be studying the non-elites. (As an aside, I learnt a new word. Clerisy: “A distinct class of learned or literary people”. If you’re going to talk about the literate elites, it’s nice to have a single word to do so!) In student terms, does this mean that there is a point at which even the most keen of our best and brightest will not try some of our new approaches? The question, of course, is whether the pursuit of happiness is paralleling the quest for knowledge, or whether this is all one long endured workflow that results in a pleasure quantum labelled ‘graduation’.
As I said, I found it to be an interesting and thoughtful piece, despite some problems, and I recommend it to you, even if we must then start a large debate in the comments on how much I misled you!
Your love is like bad measurement.
Posted: June 19, 2012 Filed under: Education, Opinion | Tags: advocacy, data visualisation, education, educational problem, ethics, higher education, learning, measurement, MIKE, teaching, teaching approaches, thinking, universal principles of design, workload Leave a comment

(This is my 200th post. I’ve allowed myself a little more latitude on the opinionated scale. Educational content is still present but you may find some of the content slightly more confronting than usual. I’ve also allowed myself an awful pun in the title.)
People like numbers. They like solid figures, percentages, clear statements and certainty. It’s a great shame that mis-measurement is so easy to do, when you search for these figures, and so much a part of our lives. Today, I’m going to discuss precision and recall, because I eventually want to talk about bad measurement. It’s very easy to get measurement wrong but, even when it’s conducted correctly, the way that we measure or the reasons that we have for measuring can make even the most precise and delicate measurements useless to us for an objective scientific purpose. This is still bad measurement.
I’m going to give you a big bag of stones. Some of the stones have diamonds hidden inside them. Some of the stones are red on the outside. Let’s say that you decide to assume that all stones that have been coloured red contain diamonds. You pull out all of the red stones, but what you actually want is diamonds. The number of red stones is referred to as the number of retrieved instances – the things that you have selected out of that original bag of stones. Now, you get to crack them open and find out how many of them have diamonds. Let’s say you have R red stones and D1 diamonds that you found once you opened up the red stones. The precision is the fraction D1/R: what percentage of the stones that you selected (Red) were actually the ones that you wanted (Diamonds). Now let’s say that there are D2 diamonds (where D2 is greater than or equal to zero) left back in the bag. The total number of diamonds in that original bag was D1+D2, right? The recall is the fraction of the total number of things that you wanted (Diamonds, given by D1+D2) that you actually got (Diamonds that were also painted Red, which is D1). So this fraction is D1/(D1+D2), the number you got divided by the number that was there for you to actually get.
If I don’t have any other mechanism that I can rely upon for picking diamonds out of the bag (assuming no-one has conveniently painted them red), and I want all of the diamonds, then I need to take all of them out. This will give me a recall of 100% (D2 will be 0 as there will be nothing left in the bag and the fraction will be D1/D1). Hooray! I have all of the diamonds! There’s only one problem – there are still only so many diamonds in that bag and (maybe) a lot more stones, so my precision may be terrible. More importantly, my technique sucks (to use an official term) and I have no actual way of finding diamonds. I just happen to have used a mechanism that gets me everything so it must, as a side effect, get me all of the diamonds. I haven’t actually done anything except move everything from one bag to another.
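The stones-and-diamonds arithmetic is small enough to sketch in a few lines of Python (purely illustrative; the parameter names mirror the R, D1 and D2 above):

```python
def precision_recall(selected: int, relevant_selected: int, relevant_missed: int):
    """Precision and recall for a selection.

    selected          -- stones pulled out of the bag (R, the red ones)
    relevant_selected -- diamonds among them (D1)
    relevant_missed   -- diamonds left behind in the bag (D2)
    """
    precision = relevant_selected / selected
    recall = relevant_selected / (relevant_selected + relevant_missed)
    return precision, recall

# Picking only the red stones: 10 pulled out, 4 hold diamonds, 1 diamond missed.
print(precision_recall(10, 4, 1))   # (0.4, 0.8)

# "Take everything": 100 stones pulled out, 5 diamonds, none missed.
print(precision_recall(100, 5, 0))  # (0.05, 1.0)
```

The “take everything” strategy is the second call: recall is perfect, precision is dreadful, and no actual diamond-finding has occurred.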
One of the things about selection mechanisms is that people often seem happy to talk about one side of the precision/recall issue. “I got all of them” is fine but not if you haven’t actually reduced your problem at all. “All the ones I picked were the right ones” sounds fantastic until you realise that you don’t know how many were left behind that were also the ones that you wanted. If we can specify solutions (or selection strategies) in terms of their precision and their recall, we can start to compare them. This is an example of how something that appears to be straightforward can actually be a bad measurement – leave out one side of precision or recall and you have no real way of assessing the utility of what it is that you’re talking about, despite having some concrete numbers to fall back on.
You may have heard this expressed in another way. Let’s assume that you have a mechanism for determining if people are innocent or guilty of a crime. If it were a perfect mechanism, then only innocent people would go free and only guilty people would go to jail. (Let’s assume it’s a crime for which a custodial sentence is appropriate.) Now, let’s assume that we don’t have a perfect mechanism, so we have to make a choice – either we set up our system so that no innocent person goes to jail, or we set up our system so that no guilty person is set free. It’s fairly easy to see how our interpretation of the presumption of innocence, the notion of reasonable doubt and even evidentiary laws would be constructed in different ways under either of these assumptions. Ultimately, this is an issue of precision and recall, and by understanding these concepts we can define what we are actually trying to achieve. (The foundation of most modern law is that innocent people don’t go to jail. A number of changes in certain areas are moving more towards a ‘no one who may be guilty of crimes of a certain type will escape us’ model and, unsurprisingly, this is causing problems due to inconsistent applications of our simple definitions from above.)
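To make the tradeoff concrete, here is a toy ‘court’ sketched as a threshold classifier (the evidence numbers are entirely invented): a strict threshold jails no innocents – perfect precision on the ‘guilty’ verdict – but lets some guilty people walk, while a lenient threshold reverses the tradeoff.

```python
# Toy population: (evidence_strength, actually_guilty). Entirely made-up numbers.
people = [(0.2, False), (0.4, False), (0.5, True),
          (0.7, False), (0.8, True), (0.9, True)]

def convict(threshold: float):
    """Convict everyone whose evidence strength meets the threshold;
    return (precision, recall) of the resulting 'guilty' verdicts."""
    convicted = [(e, g) for e, g in people if e >= threshold]
    true_convictions = sum(g for _, g in convicted)
    precision = true_convictions / len(convicted) if convicted else 1.0
    recall = true_convictions / sum(g for _, g in people)
    return precision, recall

print(convict(0.75))  # strict court: no innocents jailed, some guilty walk
print(convict(0.45))  # lenient court: every guilty person jailed, one innocent too
```

Neither pair of numbers alone tells you whether the system is any good; you need both sides.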
The reason that I brought all of this up was to talk about bad measurement, where we measure things and then over-interpret (torture the data) or over-assume (the only way that this could have happened was…) or over-claim (this always means that). It is possible to have a precise measurement of something and still be completely wrong about why it is occurring. It is possible that all of the data that we collect is the wrong data – collected because our fundamental hypothesis is in error. Data gives us information but our interpretative framework is crucial in determining what use we can make of this data. I talked about this yesterday and stressed the importance of having enough data, but you really have to know what your data means in order to be sure that you can even start to understand what ‘enough data’ means.
One example is the miasma theory of disease – the idea that bad smells caused disease outbreaks. You could construct a gadget that measured smells and then, say in 19th Century England, correlate this with disease outbreaks – and get quite a good correlation. This is still a bad measurement because we’re actually measuring two effects, rather than a cause (dead mammals introducing decaying matter/faecal bacteria etc into water or food pathways) and its effects (the smell of decomposition, and diseases like cholera, E. coli contamination, and so on). We can collect as much ‘smell’ data as we like, but we’re unlikely to learn much more, because any techniques that focus on the smell and reducing it will only work if we do things like remove the odiferous elements, rather than just using scent bags and pomanders to mask the smell.
To look at another example, let’s talk about the number of women in Computer Science at the tertiary level. In Australia, it’s certainly pretty low in many Universities. Now, we can measure the number of women in Computer Science and we can tell you exactly how many are in a given class, what their average marks are, and all sorts of statistical data about them. The risk here is that, from the measurements alone, I may have no real idea of what has led to the low enrolments for women in Computer Science.
I have heard, far too many times, that there are too few women in Computer Science because women are ‘not good at maths/computer science/non-humanities courses’ and, as I also mentioned recently when talking about the work of Professor Seron, this doesn’t appear to be the reason at all. When we look at female academic performance and reasons for doing the degree, and try to separate men and women, we don’t get the clear separation that would support this assertion. In fact, what we see is that the representation of women in Computer Science is far lower than we would expect to see from the (marginally small) difference that does appear at the very top end of the data. Interesting. Once we actually start measuring, we have to question our hypothesis.
Or we can abandon our principles and our heritage as scientists and just measure something else that agrees with us.
You don’t have to get your measurement methods wrong to conduct bad measurement. You can also be looking for the wrong thing and measuring it precisely because you are attempting to find data that verifies your hypothesis: rather than being open to change if you find a contradiction, you can twist your measurements to meet your hypothesis, collect only the data that supports your assumptions, or over-generalise from a small scale, or from another area.
When we look at the data, and survey people to find out the reasons behind the numbers, we reduce the risk that our measurements don’t actually serve a clear scientific purpose. For example, and as I’ve mentioned before, the reason that there are too few women studying Computer Science appears to be unpleasantly circular and relates to the fact that there are too few women in the discipline overall, reducing support in the workplace, reducing development opportunities and producing a two-speed system that excludes the ‘newcomers’. Sorry, Ada and Grace (to name but two), it turns out that we seem to have very short memories.
Too often, measurement is conducted to reassure ourselves of our confirmed and immutable beliefs – people measure to say that ‘this race of people are all criminals/cheats/have this characteristic’ or ‘women cannot carry out this action’ or ‘poor people always perform this set of actions’ without necessarily asking themselves if the measurement is going to be useful, or if this is a useful pursuit as part of something larger. Measuring in a way that really doesn’t provide any more information is just an empty and disingenuous confirmation. This is forcing people into a ghetto, then declaring that “all of these people live in a ghetto so they must like living in a ghetto”.
Presented a certain way, poor and misleading measurement can only lead to questionable interpretation, usually to serve a less than noble and utterly non-scientific goal. It’s bad enough when the media does it but it’s terrible when scientists, educators and academics do it.
Without valid data, collected on the understanding that a world-changing piece of data could actually change our minds, all our work is worthless. A world based on data collection purely for the sake of propping up existing positions, with no possibility of discovery and adaptation, is a world of very bad measurement.
What are the Fiction and Non-Fiction Equivalents of Computer Science?
Posted: June 9, 2012 Filed under: Education, Opinion | Tags: data visualisation, design, education, educational problem, herdsa, higher education, icer, learning, principles of design, reflection, student perspective, teaching, teaching approaches, thinking, universal principles of design 2 Comments

I commented yesterday that I wanted to talk about something covered in Mark’s blog, namely whether it is possible to create an analogy between Common Core standards in different disciplines, with English Language Arts and CS as the two exemplars. In particular, Mark pondered, and I quote him verbatim:
“Students should read as much nonfiction as fiction.” What does that mean in terms of the notations of computing? Students should read as many program proofs as programs? Students should read as much code as comments?
This is a great question and I’m not sure that I have much of an answer, but I’ve been enjoying thinking about it. We bandy the terms syntax and semantics around in Computer Science a lot: the legal structures of the programs we write and the meanings of the components and the programs. Is it even meaningful to talk about fiction and non-fiction in these terms and, if so, where do they fit? I’ve gone in a slightly different direction from Mark but I hope to bring it back to his suggestions later on.
I’m not an English specialist, so please forgive me or provide constructive guidance as you need to, but both fiction and non-fiction rely upon the same syntactic elements and the same semantic elements in linguistic terms – so the fact that we must have legal programs with well-defined syntax and semantics poses no obstacle to a fictional/non-fictional interpretation.
Forgive me as I go to Wikipedia for definitions for fiction and non-fiction for a moment:
“Non-fiction (or nonfiction) is the form of any narrative, account, or other communicative work whose assertions and descriptions are understood to be factual.” (Warning, embedded Wikipedia links)
“Fiction is the form of any narrative or informative work that deals, in part or in whole, with information or events that are not factual, but rather, imaginary—that is, invented by the author” (Again, beware Wikipedia).
Now here we can start to see something that we can get our teeth into. Many computer programs model reality and are computerised representations of concrete systems, while others may have no physical analogue at all, or model a system that has never existed or may never exist. Are our simulations and emulations of large-scale systems non-fiction? If so, is a virtual reality fictional because it has never existed, or non-fictional because we are simulating realistic gravity? (But, of course, fiction is often written in a real-world setting but with imaginary elements.)
From a software engineering perspective, I can see an advantage to making statements regarding abstract representations and concrete analogues, much as I can see a separation in graphics and game design between narrative/event engine construction and the physics engine underneath.
Is this enough of a separation? Mark’s comment on proof versus program is an interesting one: if we have an idea (an author’s creation) then it is a fiction until we can determine that it exists, and a proof or implementation provides that proof of existence. In my mind, a proof and a program are both non-fiction in terms of their reification, but the idea that they embody may still be fictional. Comments versus code is also very interesting – comments do not change the behaviour of code but explain, from the author’s mind, what has happened. (Given some student code and comment combinations, I can happily see a code-as-non-fiction, comment-as-fiction modality – or even comment as magical realism!)
Of course, this is all an enjoyable mental exercise, but what can I take from this and use in my teaching? Is there a particular set of code or comments that students should read for maximum benefit, and can we make a separation that, even if not partitioned so neatly across two sets, gives us the idea of what constitutes a balanced diet of the products of our discipline?
I’d love to see some discussion on this but, if nothing else, I’m happy to buy the first round of drinks at HERDSA or ICER to get a really good conversation going!
Learning from other people – Academic Summer Camp (except in winter???)
Posted: June 3, 2012 Filed under: Education | Tags: data visualisation, education, grand challenge, higher education, in the student's head, learning, principles of design, R, reflection, resources, summer camp, text analysis, tools, universal principles of design, work/life balance, workload Leave a comment

I’ve just signed up for the Digital Humanities Winter Institute course on “Large-scale text analysis with R”. K read about it on ProfHacker and passed it on to me thinking I’d be interested. Of course, I was, but it goes well beyond learning R itself. R is a statistically focused programming package that is available for free for most platforms. It’s the statistical (and free, did I mention that?) cousin to the mathematically inclined Matlab.
I’ve spoken about R before and I’ve done a bit of work in it but, and here’s why I’m going, I’ve done all of it from within a heavily quantitative Computer Science framework. What excites me about this course is that I will be working with people from a completely different spectrum and with a set of text analyses with which I’m not very familiar at all. Let me post the text of the course here (from this website) [my bold]:
Large-Scale Text Analysis with R
Instructor: Matt Jockers, Assistant Professor of Digital Humanities, Department of English, University of Nebraska, Lincoln

Text collections such as the HathiTrust Digital Library and Google Books have provided scholars in many fields with convenient access to their materials in digital form, but text analysis at the scale of millions or billions of words still requires the use of tools and methods that may initially seem complex or esoteric to researchers in the humanities. Large-Scale Text Analysis with R will provide a practical introduction to a range of text analysis tools and methods. The course will include units on data extraction, stylistic analysis, authorship attribution, genre detection, gender detection, unsupervised clustering, supervised classification, topic modeling, and sentiment analysis. The main computing environment for the course will be R, “the open source programming language and software environment for statistical computing and graphics.” While no programming experience is required, students should have basic computer skills and be familiar with their computer’s file system and comfortable with the command line. The course will cover best practices in data gathering and preparation, as well as addressing some of the theoretical questions that arise when employing a quantitative methodology for the study of literature. Participants will be given a “sample corpus” to use in class exercises, but some class time will be available for independent work and participants are encouraged to bring their own text corpora and research questions so they may apply their newly learned skills to projects of their own.
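For the flavour of the simplest of those analyses – relative word frequencies, one of the basic features used in authorship and style work – here is a toy sketch in Python rather than R (nothing to do with the course materials themselves):

```python
import re
from collections import Counter

def word_frequencies(text: str, top: int = 5):
    """Relative frequencies of the most common words -- a crude stylometric feature."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words)
    return [(word, count / total) for word, count in counts.most_common(top)]

sample = "the cat sat on the mat and the dog sat on the rug"
print(word_frequencies(sample, top=3))
```

Scaled up to millions of words, per-author or per-genre frequency profiles like this are what clustering and classification methods work on.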
There are two things I like about this: firstly, that I will be exposed to such a different type of, and approach to, analysis – one that is going to be immediately useful in the corpus analyses that we’re planning to carry out on our own corpora; and secondly, that I will have an intensive, dedicated block of time in which to pursue it. January is often a time to take leave (as it’s Summer in Australia) – instead, I’ll be rugged up in the Maryland chill, sitting with like-minded people and indulging myself in data analysis and learning, learning, learning, to bring knowledge home for my own students and my research group.
So, this is my Summer Camp. My time to really indulge myself in my coding and just hack away at analyses and see what happens.
I’ve also signed up to a group who are going to work on the “Million Syllabi Project Hack-a-thon”, where “we explore new ways of using the million syllabi dataset gathered by Dan Cohen’s Syllabus Finder Tool” (from the web site). 10 years’ worth of syllabi to explore, at a time when my school is looking for ways to be able to teach into more areas, to meet more needs, to create a clear and attractive identity for our discipline? A community of hackers looking at ways of recomposing, reinterpreting and understanding what is in this corpus?
How can I not go? I hope to see some of you there! I’ll be the one who sounds Australian and shivers a lot.
Proscription and Prescription: Bitter Medicine for Teachers
Posted: May 24, 2012 Filed under: Education | Tags: advocacy, blogging, curriculum, design, education, educational problem, higher education, learning, measurement, principles of design, reflection, resources, teaching, teaching approaches, tools, universal principles of design, workload Leave a comment

Australia is a big country. A very big country. Despite being the size of the continental USA, it has only 22,000,000 people, scattered across the country and concentrated in large cities. This allows for a great deal of regional variation in terms of local culture, accents (yes, there is more than one Australian accent) and local industry requirements. Because of this, despite having national educational standards and shared ideas of what constitutes acceptable entry levels for University, there are understandable regional differences in the primary, secondary and tertiary studies.
Maintaining standards is hard, especially when you start to consider regional issues: whose standards are you maintaining? How do you set those standards? Are they prescriptions (a list of things that you must do) or proscriptions (a list of things that you mustn’t do)? There’s a big difference in course and program definition depending upon which you choose. If you prescribe a set textbook, then everyone has to teach with it, but staff can still bring in other materials. If you proscribe unauthorised textbooks, then you have suddenly reduced the amount of initiative and independence that your staff can display.
As always, I’m going to draw an analogy with our students to think about how we guide them. Do we tell them what we want and identify the aspects that we want them to use, or do we tell them what not to do, limit their options, and then look surprised when they don’t explore the space and hand in something that conforms in a dull and lifeless manner?
I’m a big fan of combining prescription, in terms of desirable characteristics, with proscription, in terms of pitfalls and traps, within an oversight model that presents the desirable aspects first and monitors the situation to see if behaviour is straying towards the proscribed. Having said that, the frequent flyers of the proscription world, plagiarism and cheating, always get mentioned up front – but as the weak twin of the appropriate techniques of independent research, thoughtful summarisation, correct attribution and doing your own work. Rather than just saying “DO NOT CHEAT”, I try to frame it in terms of what the correct behaviour is and how we classify it if someone goes off that path.
However, any compulsory inclusions or unarguable exclusions must be justified for the situation at hand – and should be both defensible and absolutely necessary. When we start looking at a higher level, above the individual school to the district, to the region, to the state, to the country, any complex set of prescriptions and proscriptions is very likely to start causing regional problems. Why? Because not all regions are the same. Because not all districts have the money to meet your prescriptions. Because not all cultures may agree with your proscriptions.
This post was triggered by a post from a great teacher I know, to whom I am also related, who talked about having to take everything unofficial out of her class. Her frustration with this, the way it made her feel, the way it would restrict her – an award-winning teacher – made me realise how privileged I am to work in a place where nobody really ever tells me what to do or how to teach. While it’s good for me to remember that I am privileged in this regard, perhaps it’s also good to think about the constant clash between state, bureaucracy and education that exists in some other places.