Identity: Who am I?

The theme of last week’s HERDSA conference was supposed to be ‘connections’ and, while we certainly discussed that a lot, the fundamental requirement for a connection is that there is something similar between two points that allows them to connect in the first place. The underpinning of all of the connections was knowing enough about yourself or your area to work out who you could or should connect with. Even where we talked about inter-disciplinary issues, we established a commonality in our desire to learn from others, a need to educate. This was a discussion of greater identity – what we were beyond the basic statements of “I am from discipline X” and an affirmation of our desire to be seen as educators.

I have some more posts to make on the final talk at HERDSA, which moved me a great deal and gave me some very interesting pathways along which to think, but today I’m going to restrict myself to musing on identity and how we establish it.

Indigenous identity is an important part of life in Australia, whether those who wish to ignore the issue like it or not. The traditional owners of the land had ways very different from those of the white colonists and the clash of cultures has caused a great deal of sorrow and loss over the years, but it has also given rise to a great many meaningful and valuable opportunities where two cultures sit down and attempt to view each other. Something I find interesting, as someone who works with knowledge, is the care and attention given to statements of who people are, within their own culture.

When the speaker, about whom I will write much more, stood up to give the final keynote of HERDSA, she had already been identified by her people and her place in New Zealand, and she began by speaking in another tongue, as the indigenous peoples here also often do. Because she is indigenous to another land, she then apologised for her pronunciation of local words because, of course, it is not as if this is a British film of the 30s in which every foreigner speaks the same ‘foreign lingo’. Her identity, her cultural locale, her zone of expertise and stewardship were clearly identified before she spoke but, as she immediately acknowledged, her status and place did not grant her mystical insights into the issues of local people. I found this very respectful but, of course, I look in from outside, without a clear notion of my own identity, and therefore I cannot speak for the traditional people of the region of Tasmania that I was visiting.

When I am introduced to people, my bio says something like this: “Nick Falkner is the Associate Dean of Information Technology for the Faculty of…” and then goes on to mention my linkage to the school of Computer Science as a lecturer, and I will probably mention my PhD if it hasn’t been put into the title. Why? Because it helps people to place me in context, to value the weight of my words, to determine if the knowledge that I speak comes from a point of authority.

But is this my identity?

My PhD is not the same as an initiation into sacred knowledge. We accept that gaining the PhD is the first step along the road, and not more than that. It is at best the journeyman qualification: apprenticeship complete and trade competent, but not yet a master. Journeyman comes from the French journée (day) and refers to the fact that you can charge a wage for a day’s work – you have established your value. However, with increasing pressures on PhD completion time, tied to funding and available resources, you can no longer chip away at your task of knowledge until your apprenticeship is complete, regardless of the time it takes. Now your supervisor, in the role of master, examines your works, guides you towards crafting that can be completed in time and then stamps you ready (with the help of many others), with a possible burden to be incurred as you pick up the additional skills.

There is so much disparity in what a PhD means, by discipline, by country, even by University within a state, that it is the loosest possible description of a journeyman. No trades body would certify an electrician under such a rubbery and relaxed definition without reserving the right to assess their skill at the trade.

So, in my bio, when I recite my list of the symbols and people that I come from, I list one that is either highly meaningful or absolutely meaningless, depending on where it comes from. But this is the line of my academic knowledge – my descent. And, on writing this, I realise that I have no idea who the supervisors of my supervisors were. I can tell you that I was the student of Dr Andrew Wendelborn and Dr Paul Coddington but there, it stops. Of course, I realise that there are people who can, and do, trace their thesis path back to Isaac Newton but, in our culture, where knowledge is largely mutated in transmission, rather than held sacred in one form as immutable knowledge, a grand truth handed down from the ancient and unknowable entities of the past, such a descent is an accident of structure and geography.

When someone calls themselves a man of a tribe, they are saying much more than “I live in this area”; they are identifying themselves as someone who is linked to a tradition and carries on the essential knowledge of that tradition. This is part of their fabric.

When I call myself an Associate Dean, a lecturer, or a Computer Scientist, I’m telling you what I do, rather than addressing my fundamental identity. Now, I’m certainly not saying that I need to move myself to a place where I only pass on immutable knowledge or that I need to start some connection with the sacred. I have no real connection to the mystic and never really have, despite some false starts. But I have always been a thinker and now I’m thinking more about why I work the way I do – mostly it’s because I don’t know how to quantify myself or assess my worth except in terms of the things that I produce, the jobs that I do, the titles that I can present when asked a question that I loathe: “what do you do?”

For me, the answer for many years has always, implicitly, been “Never enough” and, on reflection, this is the answer of a man who really doesn’t understand his own nature enough to know when he can rest, or when he is done. My apprenticeship is long over and I am becoming more and more expert every day. But before I can claim mastery, I have to have knowledge and part of that knowledge is knowledge of myself.

Calling myself Nick Falkner is a label – it says nothing about who I am. In many senses, knowing what I am means identifying those aspects to which I could apply a modifier such as ‘better’ or ‘worse’ with a real sense of being able to achieve a change. I am the only Nickolas (middle names suppressed to avoid identity theft) Falkner that there is, so I am both the best and the worst. I seek a functional model of my identity that allows me to improve myself and to identify when I am heading the other way.

It is, of course, possible to be a disrespectful man of a tribe, a betraying man, a gluttonous woman, an untrustworthy elder, an ungrateful child. But this says nothing about the group as a whole, and less about actual identity, and, frankly, falling back on those accidents of birth that define physical and spatial characteristics (including the location of birth or residence) seems rather weak in terms of the nature of identity.

Looking back through this blog, my identity becomes more clear to me but, because I am still unsure, I will probably spend some time working on this – drawing diagrams, thinking and discussing my thoughts with my wife. My partnership, my relationship, my bond with my wife is a core part of my identity but, of course, it is a shared component and describing myself in terms of this alone is like calling San Francisco “the place between the bridges”. We lose the point inside the connections.

Am I an educator? Am I a teacher? There is a subtle difference in that educators can plan and direct education, whereas teachers teach, but this is an empty sophistry for a busy century so let us establish a rough equivalence. Am I more than this? Am I a transformer? A creator?

I am leaving my journeyman days behind me. I am now on the path to mastery but the real question is, always, “Master of what?”

I look at the questions of identity that I have been posing, in reaction to HERDSA, in reaction to my increasing exposure to the nature of people and self that now surrounds me as I learn more about the Australasian indigenous cultures. It is time to look at everything I’ve been doing, behind the name and the titles and the qualifications that I have put in my biography for so long (as if they told anyone who I was), and think about who and what I actually am.

Because, of course, once I have mastery of that, then I can help other people. And, if I know anything about myself, it is that helping people is and will hopefully always be one of the best pieces of my identity.


HERDSA 2012: Session 1 notes – Student Wellbeing

I won’t be giving detailed comments on all sessions – firstly, I can’t attend everything and, secondly, I don’t want you all to die of word poisoning – but I’ve been to a number of talks and thought I’d discuss those here that really made me think. (My apologies for the delay. I seem to be coming down with a cold/flu and it’s slowing me down.)

In Session 1, I went to a talk entitled “Integrating teaching, learning, support and wellbeing in Universities”, presented by Dr Helen Stallman from the University of Queensland. The core of this talk was that, if we want to support our students academically, we have to support them in every other way as well. The more distressed students are, the less well they do academically. If we want good outcomes, we have to be able to support students’ wellbeing and mental health. We already provide counselling and support-skill workshops but very few students will go and access these resources until they actually need them.

This is a problem. Tell a student at the start of the course, when they are fine, where they can find help and they won’t remember it when they actually need to know where that resource is. We have low participation in many of the counselling and support-skill workshop activities – it is not on the student’s agenda to go to one of these courses; their agenda is to get a good mark. Pressured for time and facing competing demands, anything ‘optional’ is not a priority.

The student needs to identify that they have a problem, and then they have to be able to find the solution! Many University web pages are not actually useful in this regard, although they contain a lot of marketing information on the front page.

What if we have an at-risk profile that we can use to identify students? It’s not 100% accurate. Students who are ‘at risk’ may not have problems, but students who don’t fit the profile may still have problems! We don’t necessarily know what’s going on with our students. Where we have hundreds of students, how can we know all of them? (This is one of the big drivers for my work in submission management and elastic time – identifying students who are at risk as soon as they may be at risk.)
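As a rough sketch of what early identification from submission behaviour might look like, here is a toy heuristic: flag anyone who has made very few attempts well before a deadline. Every name, threshold and data shape here is a hypothetical illustration for this post, not the actual submission-management system.

```python
from datetime import datetime, timedelta

def flag_at_risk(submission_times, deadline, window_hours=24, min_attempts=2):
    """Flag a student as potentially at risk if they have made fewer than
    min_attempts submissions by window_hours before the deadline.
    Purely illustrative: a real at-risk profile would be far more nuanced."""
    cutoff = deadline - timedelta(hours=window_hours)
    early_attempts = [t for t in submission_times if t <= cutoff]
    return len(early_attempts) < min_attempts

deadline = datetime(2012, 7, 13, 17, 0)
steady_student = [deadline - timedelta(days=3), deadline - timedelta(days=2)]
late_student = [deadline - timedelta(hours=2)]

print(flag_at_risk(steady_student, deadline))  # False: steady early activity
print(flag_at_risk(late_student, deadline))    # True: only a last-minute attempt
```

The point is not this particular heuristic but the timing: the flag is raised while there is still time to point the student towards help, rather than after the marks are in.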

So let me reiterate the problem with the timing of information: we tend to mention support services once, at the start. People don’t access resources unless they’re relevant and useful at that particular time. If we talk to people when they don’t have a problem, they’ll forget it.

So what are the characteristics of interventions that promote student success:

  • Inclusive of all students (and you can find it)
  • Encourages self-management skills (Don’t smother them! Our goal is not dependency, it’s self-regulation)
  • Promotes academic achievement (highest potential for each of our students)
  • Promotes wellbeing (not just professional capabilities but personal capabilities and competencies)
  • Minimally sufficient (students/academics/unis are not doing more work than they need to, and only providing the level of input that is required to achieve this goal.)
  • Sustainable (easy for students and academics)

Dr Stallman then talked about two tools – the Learning Thermometer and The Desk. Student reflection plus a system interface gives us the Learning Thermometer; automated and personalised student feedback is then added, put in by the academic. Support and intervention, web-based, form a loop around student feedback. Student privacy is maintained and the student gets to choose the intervention that is appropriate. Effectively, the Learning Thermometer tells the student which services are available, as and when they are needed, based on their results, their feedback and the lecturer’s input.

This is designed to promote self-management skills and makes the student think “What can I do? What are the things that I can do?” It gives students knowledge of which resources they can access (and this resource is called “The Desk”) and of who the people are who can help them.
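To make the loop concrete, here is my own toy sketch of how a thermometer-style score might be routed to resource suggestions. To be clear, this is purely my illustration of the idea; the scores, thresholds and resource names are invented and this is not how the Learning Thermometer or The Desk actually work internally.

```python
def suggest_resources(distress_score):
    """Map a self-reported 0-10 distress score to suggested next steps.
    Thresholds and resource names are invented for illustration."""
    if not 0 <= distress_score <= 10:
        raise ValueError("score must be between 0 and 10")
    if distress_score <= 3:
        return ["self-management modules"]
    if distress_score <= 7:
        return ["self-management modules", "study-skills workshop"]
    return ["study-skills workshop", "counselling service"]

# A student reporting mild difficulty is nudged towards self-help first;
# a highly distressed student is pointed straight at people who can help.
print(suggest_resources(2))
print(suggest_resources(9))
```

The design point is the self-regulation goal mentioned above: the system suggests, the student chooses, and escalation to human help happens only where the student’s own reporting warrants it.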

What is being asked is: What are the issues that get in the way of achieving academic success?

About “The Desk”: it contains quizzes related to all parts of The Desk that give students personalised feedback and suggest modules as appropriate. There is a summary sheet of what you’ve done so you can always remember it, a Tools section to give you short tips on how to fix things, and a Coffee House social media centre to share information and pictures (recipes and anything, really).

To allow teachers to work out what is going on, an addition to the Learning Thermometer can give the teacher feedback based on the student reflection and the interface. Early feedback to academics allows us to improve learning outcomes and drives improvements in teaching practices. (Student satisfaction correlates poorly with final mark; this is more than satisfaction.)

The final items in the talk focussed on:

  • A universal model of prevention
  • All students can be resilient
  • Resources need to be timely, relevant and useful
  • Multiple access points
  • Integrated within the learning environment

What are the implications?

  • Focus on prevention
  • Close the loop between learning, teaching, wellbeing and support
  • More resilient students
  • Better student graduate outcomes.

Overall, a very interesting talk, with a lot of things to think about. How can I position my support resources so that students know where to go as and when they need them? Is ‘resiliency’ an implicit or explicit goal inside my outcomes and syllabus structure? Do the mechanisms that I provide for assessment work within this framework?

With my Time Banking hat on, I am always thinking about how I can be fair but flexible, consistent but compassionate, and maintain quality while maintaining humanity. This talk is yet more information to consider as I look at alternative ways to work with students for their own benefit, while improving their performance at the same time.

Contact details and information on tools discussed:

h.stallman@uq.edu.au
http://www.thelearningthermometer.org.au
http://www.thedesk.org.au
thedesk@uq.edu.au


Dewey Defeats Truman – again!

The US Presidential race in 1948 was apparently decided when the Chicago Tribune published their now infamous headline “Dewey Defeats Truman” (Wikipedia link). As it happened, Truman had defeated Dewey in an upset victory. The rather embarrassing mistake was a combination of an early press deadline, early polls and dependence upon someone who had been reliable in their predictions previously. What was worse was that the early editions had predicted not just the wrong result but a sweeping victory for Dewey. Even as other results came in indicating that this wasn’t so, the paper stuck to the headline while watering down the story.

Ultimately, roughly 150,000 papers were printed that were, effectively, utter and total nonsense.

Because he’s a President, I doubt that Truman actually used the phrase “neener, neener”. (Associated Press, photo by Byron Rollins, via Wikipedia)

This is a famous story in media reporting and, in many ways, it gives us a pretty simple lesson: Don’t Run The Story Until You Have the Facts. Which brings me to the reporting on the US Supreme Court regarding the constitutionality of the controversial health care bill.

Students have to understand how knowledge is constructed, if they are to assist in their own development, and the construction of what is accepted to be fact is strongly influenced by the media, both traditional and new. We’ve moved to a highly dynamic form of media that can have a direct influence on events as they unfold. Forty years ago, you’d read about an earthquake that killed hundreds. Today, dynamic reporting of earthquakes on social media save lives because people can actually get out of the way or get help to people faster.

I’m a great fan of watching new media reporting because the way that it is reported is so fluid and dynamic. An earthquake happens somewhere and the twitter reporting of it shows up as a corresponding twitter quake. People then react and spread the news, editing starts to happen and, boom, you have an emergent news feed built by hundreds of thousands of people. Traditional media, however, which has a higher level of information access and paid staff to support it, does not necessarily work the same way. Trad media builds stories on facts, produces them, has time to edit, commits them to press or air and has a well-paid set of information providers and presenters to make it all happen. (Yes, I know there are degrees in here and there are ‘indy’ trad media groups, but I think you get my point.)

It was very interesting, therefore, to see a number of trad news sources get the decision on the health care bill completely and utterly wrong. When the court’s decision was being read out – an event that I watched through many eyes, as I was monitoring both the feed and the reaction to the feed – CNN threw up a headline, before the decision had been announced, saying that the bill had been defeated.

And FOX news reported the same thing.

Only one problem. It wasn’t true.

As this fact became apparent, first of all, the main stories changed, then the feeds published from the main stories changed and then, because nobody had printed a paper yet, some of the more egregious errors disappeared from web sites and feeds – never to be seen again.

Oh wait, the Internet is Forever, so those ‘disappeared’ feeds had already been copied, pictured and cached.

Now, because of the way that the presenting Justice was actually speaking, you could be forgiven for thinking that he was about to say that the bill had been defeated. Except for the fact that there were no actual print deadlines in play here – what tripped up CNN and FOX appears to have been their desire to report a certain type of story first. In the case of FOX, had the bill been defeated, it’s not hard to imagine them actually ringing up President Obama to say “neener, neener”. (FOX news is not the President, so is not held to the same standards of decorum.)

The final comment on this story, which should tell you volumes about traditional news-gathering mechanisms in the 21st century, is that there was an error in a twitter/blog feed reporting on the decision, which made an erroneous claim about the tax liability of citizens who wished to opt out of the program. So, just to be clear, we’re talking about a non-fact-checked social media side feed, and there’s a mistake in it – which a very large number of traditional news sources then presented as fact, because it appears that a large amount of their expensive resource gathering and fact checking amounts to “Get Brad and Janet to check out what’s happening on Twitter”. Then they all had to fix and edit (AGAIN) once they discovered that they had effectively reported an error made by someone sitting in the room, typing onto a social media feed, as if it had gone through any kind of informational hygiene process.

Here are my final thoughts. As an experiment, for about a week, read Fark, Metafilter and The Register. Then see how many days it is before the same stories show up on your television, radio and print news. See how much change the stories have gone through, if any. Then look for stories that go the other way around. You may find it interesting when you work out which sources you trust as authorities, especially those that appear more trustworthy because they are traditional.

(Note: Apologies for the delay in posting. As part of my new work routine, I rearranged some time and I realised that posting 6 hours late wouldn’t hurt anyone.)


You’re Welcome On My Lawn But Leaf Blowers Are Not

I was looking at a piece of software the other day and, despite it being a well-used piece of code with a large user base, I was musing that I had never found it to be particularly fit for purpose. (No, I won’t tell you what it is – I’m allergic to defamation suits.) However, my real objections to it, in simple terms, sound a bit trivial to my own ears and I’ve never really had the words or metaphors to describe it to other people.

Until today.

My wife and I were walking in to work today and saw, in the distance, a haze of yellow dust rising up in front of three men who were walking towards us, line abreast, as a street-sweeping unit slowly accompanied them along the road. Each of the men had a leaf blower that they were swinging around, kicking up all of the Plane Tree pollen/dust (which is highly irritating) and pushing it towards us in a cloud. They did stop when they saw us coming but, given how much dust was in the air, it’s 8 hours later and I’m still getting grit out of my eyes.

Weirdly enough, this image comes from a gaming site, discussing mecha formations. The Internet constantly amazes me.

Now, I have no problem with streets being kept clean and free of debris and I have a lot of respect for the sweepers, cleaners and garbage removal people who stop us from dying in a MegaCholera outbreak from living in cities – but I really don’t like leaf blowers. On reflection, there are a number of things that I don’t like for similar reasons so let me refer back to the piece of software I was complaining about and call it a leaf blower.

Why? Well, primarily, it’s because leaf blowers are a noisy and inefficient way to not actually solve the problem. Leaf blowers move the problem to someone else. Leaf blowers are the socially acceptable face of picking up a bag of garbage and throwing it on your neighbour’s front porch. Today was a great example – all of the dust and street debris was being blown out of the city towards the Park lands where, presumably, this would become someone else’s problem. The fact that a public thoroughfare was a pollen-ridden nightmare for 30 minutes or so was also, apparently, collateral damage.

Now, of course, there are people who use leaf blowers to push leaves into big piles that they then pick up, but there are leaf vacuums and brooms and things like that which will do a more effective job with either less noise or more efficiently. (And a lot of people just blow it off their property as if it will magically disappear.) The catch is, of course, better solutions generally require more effort.

The problem with a broom is that pushing a broom is a laborious and tiring task, and it’s quite reasonable for large-scale tasks like this that we have mechanical alternatives. For brief tidy up and small spaces, however, the broom is king. The problem with the leaf vacuum is that it has to be emptied and they are, because of their size and nature, often more expensive than the leaf blower. You probably couldn’t afford to have as many of these on your cleanup crew’s equipment roster. So brooms are cheap but hard manual labour compared to expensive leaf vacuums which fulfil the social contract but require regular emptying.

Enter the leaf blower – low effort, relatively low cost, no need to empty the bag, just blow it off the property. It is, however, an easy way to not actually solve the problem.

And this, funnily enough, describes the software that I didn’t like (and many other things in a similar vein). Cost-wise, it’s a sensible decision compared to building it yourself, both up front and in terms of maintenance. It’s pretty easy to use. There’s no need to worry about being sensible or parsimonious with resources. You just do stuff in it with a small amount of time and you’re done.

The only problem is that what you are encouraged to produce by default, the affordance of the software, is not actually the solution to the problem that the software theoretically solves. It is an approximation to the answer but, in effect, you’ve handed the real problem to someone else – in my case, the student, because it’s software of an educational nature. This then feeds load straight back to you, your teaching assistants and support staff. Any effort you’ve expended is wasted and you didn’t even solve the problem.

I’ve talked before about trying to assess what knowledge workers are doing, rather than concentrating on the number of hours that they are spending at their desk, and the ‘desk hours’ metric is yet another example of leaf blowing. Cheap and easy metric, neither effective nor useful, and realistically any sensible interpretation requires you to go back and work out what people are actually doing during those hours – problem not solved, just shunted along, with a bit of wasted effort and a false sense of achievement.

Solving problems is sometimes difficult and it regularly requires careful thought and effort. There may be a cost involved. If we try to come up with something that looks like a solution, but all it does is blow the leaves around, then we probably haven’t actually solved anything.


Student Reflections – The End of Semester Process Report

I’ve mentioned before that I have two process awareness reports in one of my first-year courses. One comes just after the monster “Library” prac, and one is right at the end of the course. These encourage the students to reflect on their assignment work and think about their software development process. I’ve just finished marking the final one and, as last year, it’s a predominantly positive and rewarding experience.

When faced with 2-4 pages of text to produce, most of my students sit down and write several, fairly densely packed pages telling me about the things that they’ve discovered along the way: lessons learned, pit traps avoided and (interestingly) the holes that they did fall into. It’s rare that I get cynical replies and for this course, from over 100 responses, I think that I had about 5 disappointing ones.

The disappointing ones included those that claimed I had to give them marks for something that was rubbish (uh, no I didn’t; read the assignment spec and the forum carefully), those that were thrown together in about a minute and said nothing, and those that were the outpourings of someone who wasn’t really happy with where they were, rather than something I could easily fix. Let’s move on from these.

I want to talk about the ones who had crafted beautiful diagrams where they proudly displayed their software process. The ones who shared great ideas about how to help students in the next offering. The ones who shared the links that they found useful with me, in case other students would like them. The ones who were quietly proud of mastering their areas of difficulty and welcomed the opportunity to tell someone about it. The one who used this quote from Confucius:

“A man without distant care must have near sorrow”

(人无远虑 必有近忧)

To explain why you had to look into the future when you did software design – don’t leave your assignments to the last minute, he was saying, look ahead! (I am, obviously, going to use that for teaching next semester!)

The Confucian Symbol. Something else to put in my lecture slides for Semester 2, 2012.

Overall, I find these reports to be a resolutely uplifting experience. The vast majority of my students have learnt what I wanted them to learn and have improved their professional skills but, as well, a large number of them have realised that the assignments, together with the lectures, develop their knowledge. Here is one of my favourite student quotes about the assignments themselves, which tells me that we’re starting to get the design right:

The real payoff was towards the end of the assignment. Often it would be possible to “just type code” and earn at least half the marks fairly easily. However there was always a more complex final part to the assignment, one that I could not complete unless I approached it in a systematic, well thought out way. The assignments made it easy to see that a program of any real complexity would be nearly impossible to build without a well-defined design.

But students were also thinking about how they were going to take more general lessons out of this. Here’s another quote I like:

Three improvements that I am aiming to take on board for future subjects are: putting together a study timetable early on in the game; taking the time to read and understand the problem I’ve been given; and put enough time aside to produce a concise design which includes testing strategies.

The exam for this course has just been held and we’re assembling the final marks for inspection on Friday, which will tell us how this new offering has gone. But, at this stage, I have an incredibly valuable resource of student feedback to draw on when I have to do any minor adjustments to make this course better for the next offering.

From a load perspective, yes, having two essays in an otherwise computationally based course does put load on the lecturer/marker but I am very happy to pay that price. It’s such a good way to find out what my students are thinking and, from a personal perspective, be a little more confident that my co-teaching staff and I are making a positive change in these students’ lives. Better still, by sharing comments from cohort to cohort, we provide an authenticity to the advice that I would be hard pressed to achieve.

I think that this course – the first one I’ve really designed from the ground up, and I’m aware of how rare that opportunity is – is actually turning into something good. And that, unsurprisingly, makes me very happy.


Time Banking: Aiming for the 40 hour week.

I was reading an article on Metafilter about early-last-century perceptions of future leisure, and one of the commenters linked to a great article on “Why Crunch Mode Doesn’t Work: Six Lessons” via the International Game Developers Association. This article was partially in response to the quality-of-life discussions that ensued after ea_spouse outed the lifestyle (LiveJournal link) caused by her spouse’s ludicrous hours working for Electronic Arts, a game company. One of the key quotes from ea_spouse was this:

Now, it seems, is the “real” crunch, the one that the producers of this title so wisely prepared their team for by running them into the ground ahead of time. The current mandatory hours are 9am to 10pm — seven days a week — with the occasional Saturday evening off for good behavior (at 6:30pm). This averages out to an eighty-five hour work week. Complaints that these once more extended hours combined with the team’s existing fatigue would result in a greater number of mistakes made and an even greater amount of wasted energy were ignored.

The badge is fastened with two pins that go straight into your chest.

This is an incredible workload and, as Evan Robinson notes in the “Crunch Mode” article, this is not only incredible but it’s downright stupid because every serious investigation into the effect of working more than 40 hours a week, for extended periods, and for reducing sleep and accumulating sleep deficit has come to the same conclusion: hours worked after a certain point are not just worthless, they reduce worth from hours already worked.

Robinson cites studies and practices from industrialists such as Henry Ford, who moved to a 40-hour work week in 1926, attracting huge criticism, because 12 years of research had shown him that the shorter work week meant more output, not less. These studies have been running since the 18th century and well into the 1960s at least, and they all show the same thing: working eight hours a day, five days a week, gives you more productivity, because you make fewer mistakes, you accumulate less fatigue, and your workers are producing during their optimal production time (the first 4-6 hours of work) without sliding into their negatively productive zones.

As Robinson notes, the games industry doesn’t seem to have got the memo. The crunch is a common feature in many software production facilities and the ability to work such back-breaking and soul-destroying shifts is often seen as a badge of honour or mark of toughness. The fact that you can get fired for having the audacity to try and work otherwise also helps a great deal in motivating people to adopt the strategy.

Why spend so many hours in the office? Remember when I said that it’s sometimes hard for people to see what I’m doing because, when I’m thinking or planning, I can look like I’m sitting in the office doing nothing? Imagine what it looks like if, two weeks before a big deadline, someone walks into the office at 5:30pm and everyone’s gone home. What does this look like? Because of our conditioning, which I’ll talk about shortly, it looks like we’ve all decided to put our lives before the work – it looks like less than total commitment.

As a manager, if you can tell everyone above you that you have people at their desks 80+ hours a week and will have for the next three months, then you’re saying that “this work is important and we can’t do any more.” The fact that people were probably only useful for the first 6 hours of every day, and even then only for the first couple of months, doesn’t matter because it’s hard to see what someone is doing if all you focus on is the output. Those 80+ hour weeks are probably only now necessary because everyone is so tired, so overworked and so cognitively impaired, that they are taking 4 times as long to achieve anything.

Yes, that’s right. All the evidence says that more than 2 months of overtime and you would have been better off staying at 40 hours/week in terms of measurable output and quality of productivity.

Robinson lists six lessons, which I’ll summarise here because I want to talk about them in terms of students, and about why forward planning for assignments is good practice for smoother time management in the future. Here are the six lessons:

  1. Productivity varies over the course of the workday, with greatest productivity in the first 4-6 hours. After enough hours, you become unproductive and, eventually, destructive in terms of your output.
  2. Productivity is hard to quantify for knowledge workers.
  3. Five-day weeks of eight-hour days maximise long-term output in every industry that has been studied in the past century.
  4. At 60 hours per week, the loss of productivity caused by working longer hours overwhelms the extra hours worked within a couple of months.
  5. Continuous work reduces cognitive function 25% for every 24 hours. Multiple consecutive overnighters have a severe cumulative effect.
  6. Error rates climb with hours worked and especially with loss of sleep.
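
Lesson 4 is easy to sanity-check with a toy model. The sketch below assumes hourly productivity decays by a fixed number of percentage points for each week of sustained crunch; the 10-point weekly decay is an invented illustrative figure, not one taken from Robinson’s article.

```python
# A toy model of Lesson 4: sustained overtime erodes hourly productivity
# until the extra hours stop paying for themselves.

def cumulative_output(hours_per_week, weeks, decay_per_week=0.0):
    """Total output in 'productive hours', where productivity falls by
    decay_per_week (a fraction of full productivity) each week worked."""
    total = 0.0
    productivity = 1.0
    for _ in range(weeks):
        total += hours_per_week * productivity
        productivity = max(0.0, productivity - decay_per_week)
    return total

# Assume a steady 40-hour week sustains full productivity, while a
# 60-hour crunch week costs 10 percentage points of productivity weekly
# (an assumed rate, chosen only to show the shape of the effect).
for weeks in (4, 8, 12):
    steady = cumulative_output(40, weeks)
    crunch = cumulative_output(60, weeks, decay_per_week=0.10)
    print(weeks, round(steady), round(crunch))
# Under these assumptions the crunch team is ahead at week 4 but has
# already fallen behind the steady team by week 8.
```

Change the decay rate and the crossover point moves, but with any sustained decay the crunch line eventually drops below the steady one, which is exactly the point of the lesson.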

My students have approximately 40 hours of assigned work a week, consisting of contact time and assignments, but many of them never really think about that. Most plan other things around their ‘free time’ (they may need to work, they may play in a band, they may be looking after families or they may have an active social life) and then fit the assignment work and other study into the gaps that are left. Immediately, they will be over the 40-hour mark for work. If they have a part-time job, the three months of one of my semesters will, if not managed correctly, give them a lumpy time schedule, alternating between some work and far too much work.

Many of my students don’t know how they are spending their time. They switch on the computer, look at the assignment, Skype, browse, try something, compile, walk away, grab a bite, web surf, try something else – wow, three hours of programming! This assignment is really hard! That’s not all of them but it’s enough of them that we spend time on process awareness: working out what you do so you know how to improve it.
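That kind of process awareness can start with something as crude as a tally of where a session actually went. A minimal sketch, using an entirely invented three-hour ‘programming’ session:

```python
from collections import Counter

# An invented log of a three-hour "programming" session, as
# (activity, minutes) pairs: illustrative numbers only.
session = [
    ("reading assignment", 15), ("Skype", 25), ("browsing", 40),
    ("coding", 20), ("compile/debug", 10), ("snack", 15),
    ("browsing", 35), ("coding", 20),
]

# Tally time per activity, then separate out the on-task portion.
totals = Counter()
for activity, minutes in session:
    totals[activity] += minutes

on_task = totals["reading assignment"] + totals["coding"] + totals["compile/debug"]
print(sum(totals.values()), "minutes at the desk,", on_task, "minutes on task")
```

Three hours at the desk, barely over an hour on task: the point is not the exact numbers but that the gap is invisible until it is logged.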

Many of my students see sports drinks, energy drinks and caffeine as a licence to not sleep. It doesn’t work long term as most of us know, for exactly the reasons that long term overwork and sleeplessness don’t work. Stimulants can keep you awake but you will still be carrying most if not all of your cognitive impairment.

Finally, and most importantly, enough of my students don’t realise that everything I’ve said up until now means that they are trying to sit my course with half a brain after about the halfway point, or sooner if they didn’t rest much between semesters.

I’ve talked about the theoretical basis for time banking and the pedagogical basis for time banking: this is the industrial basis for time banking. One day, I hope, at least some of my students will be running parts of their industries, and we will have taught them enough about sensible time management and work/life balance that, as people in control of a company, they look at real measures of productivity, they look at the mass of data supporting sensible ongoing work rates, and they champion and adopt these practices.

As Robinson says towards the end of the article:

Managers decide to crunch because they want to be able to tell their bosses “I did everything I could.” They crunch because they value the butts in the chairs more than the brains creating games. They crunch because they haven’t really thought about the job being done or the people doing it. They crunch because they have learned only the importance of appearing to do their best instead of really doing their best. And they crunch because, back when they were programmers or artists or testers or assistant producers or associate producers, that was the way they were taught to get things done. (Emphasis mine.)

If my students can see all of their requirements ahead of time, know what is expected, have been given enough process awareness, and have the will and the skill to undertake the activities, then we can potentially teach them a better way to get things done if we focus on time management in a self-regulated framework, rather than imposed deadlines in a rigid authority-based framework. Of course, I still have a lot of work to do to demonstrate that this will work but, from industrial experience, we have yet another very good reason to try.


Flow, Happiness and the Pursuit of Significance

I’ve just been reading Deirdre McCloskey’s article on “Happyism” in The New Republic. While there are a number of points I could pick at in the article (I question her specific example of statistical significance, and I think she’s oversimplified a number of the philosophical points), there are a lot of interesting thoughts and arguments within it.

One of my challenges in connecting with my students is that of making them understand what the benefit is to them of adopting, or accepting, suggestions from me as to how to become better as discipline practitioners, as students and, to some extent, as people. It would be nice if doing the right thing in this regard could give the students a tangible and measurable benefit that they could accumulate on some sort of meter – I have performed well, my “success” meter has gone up by three units. As McCloskey points out, this effectively requires us to have a meter for something that we could call happiness, but it is then tied directly to events that give us pleasure, rather than a sequence of events that could give us happiness. Workflows (chains of actions that lead to an eventual outcome) can be assessed for accuracy and then the outcome measured, but it is only when the workflow is complete that we can assess the ‘success’ of the workflow and then derive pleasure, and hence happiness, from the completion of the workflow. Yes, we can compose a workflow from sub-workflows but we will hit the same problem if we focus on an outcome-based model – at some stage, we are likely to be carrying out an action that can lead to an event from which we can derive a notion of success, but this requires us to be foresighted and see the events as a chain that results in this outcome.

And this is very hard to meter and display in a way that says anything other than “Keep going!” Unsurprisingly, this is not really the best way to provide useful feedback, reward or fodder for self-actualisation.

I have a standing joke that, as a runner, I go to a sports doctor because if I go to a General Practitioner and say “My leg hurts after I run”, the GP will just say “Stop running.” I am enough of a doctor to say that to myself – so I seek someone who is trained to deal with my specific problems and who can give me a range of feedback that may include “stop running” because my injuries are serious or chronic, but can provide me with far more useful information from which I can make an informed choice. The happiness meter must be able to work with workflow in some way that is useful – keep going is not enough. We therefore need to look at the happiness meter.

McCloskey identifies Bentham, the founder of utilitarianism, as the original “pleasure meter” proponent, and implicitly criticises his felicific calculus for subverting our assessment of “happiness units” (utils) into a form that assumes we can reasonably compare utils between different people, and that we can assemble all of our life’s experiences in a meaningful way in terms of utils in the first place!

To address the issue of workflow itself, McCloskey refers to the work of Mihály Csíkszentmihályi on flow: “the absorption in a task just within our competence”. I have talked about this before, in terms of Vygotsky’s zone of proximal development and the use of a group to assist people who are just outside of the zone of flow. The string of activities can now be measured in terms of satisfaction or immersion, as well as the outcomes of the process. Of course, we have the outcomes of the process in terms of direct products, and we have outcomes in terms of personal achievement at producing those products. Which of these go onto the util meter, given that they are utterly self-assessed, subjective and, arguably, orthogonal in some cases? (If you have ever done your best, been proud of what you did, but failed in your objective, you know what I’m talking about.)

My reading of McCloskey is probably a little generous because I find her overall argument appealing. I believe that her argument may be distilled to two points:

  • If we are going to measure, we must measure sensibly and be very clear in our context and the interpretation of significance.
  • If we are going to base any activity on our measurement, then the activity we create or change must be related to the field of measurement.

Looking at the student experience in this light, asking students if they are happy with something is, ultimately, a pointless activity unless I either provide well-defined training in my measurement system and scale, or I am looking for a measurement of better or worse. This is confounded by simple cognitive biases including, but not limited to, the Hawthorne Effect and confirmation bias. However, measuring what my students are doing, as Csíkszentmihályi did in the flow experiments, will show me if they are so engaged with their activities that they are staying in the flow zone. Similarly, looking at participation and measuring outputs in collaborative activities, where I would expect the zone of proximal development to be in effect, is going to be far more revealing than asking students if they liked something or not.

As McCloskey discusses, there is a point at which we don’t seem to get any happier, but it is very hard to tell whether this is a fault in our measurement and our presumption of a three-point, non-interval scale; the discussion then often degenerates into a form of intellectual snobbery that, unsurprisingly, favours the elites who will be studying the non-elites. (As an aside, I learnt a new word. Clerisy: “a distinct class of learned or literary people”. If you’re going to talk about the literate elites, it’s nice to have a single word to do so!) In student terms, does this mean that there is a point at which even the most keen of our best and brightest will not try some of our new approaches? The question, of course, is whether the pursuit of happiness parallels the quest for knowledge, or whether this is all one long endured workflow that results in a pleasure quantum labelled ‘graduation’.

As I said, I found it to be an interesting and thoughtful piece, despite some problems, and I recommend it to you, even if we must then start a large debate in the comments on how much I misled you!


Speaking of measurement

In a delightfully serendipitous alignment of the planets, today marks my 200th post and my 10,000th view. Given that posting something new every day (something that strives, even if it does not always succeed, to be useful and interesting) is sometimes a very demanding commitment, the knowledge that people are reading does help me to keep it going. However, it’s the comments, both here and on FB, showing that people can sometimes actually make use of what I’m talking about, that are the real motivator for me.

via http://10000.brisseaux.com/ (This looked smaller in preview but I really liked its solidity so didn’t want to scale it)

Thank you, everyone, for your continued reading and support, and to everyone else out there blogging who is showing me how it can be done better (and there are a lot of people who are doing it much better than I am).

Have a great day, wherever you are!


Your love is like bad measurement.

(This is my 200th post. I’ve allowed myself a little more latitude on the opinionated scale. Educational content is still present but you may find some of the content slightly more confronting than usual. I’ve also allowed myself an awful pun in the title.)

People like numbers. They like solid figures, percentages, clear statements and certainty. It’s a great shame that mis-measurement is so easy to do, when you search for these figures, and so much a part of our lives. Today, I’m going to discuss precision and recall, because I eventually want to talk about bad measurement. It’s very easy to get measurement wrong but, even when it’s conducted correctly, the way that we measure or the reasons that we have for measuring can make even the most precise and delicate measurements useless to us for an objective scientific purpose. This is still bad measurement.

I’m going to give you a big bag of stones. Some of the stones have diamonds hidden inside them. Some of the stones are red on the outside. Let’s say that you decide to assume that all stones that have been coloured red contain diamonds. You pull out all of the red stones, but what you actually want is diamonds. The number of red stones is referred to as the number of retrieved instances – the things that you have selected out of that original bag of stones. Now, you get to crack them open and find out how many of them have diamonds. Let’s say you have R red stones and D1 diamonds that you found once you opened up the red stones. The precision is the fraction D1/R: the percentage of the stones that you selected (red) that were actually the ones that you wanted (diamonds). Now let’s say that there are D2 diamonds (where D2 is greater than or equal to zero) left back in the bag. The total number of diamonds in that original bag was D1+D2, right? The recall is the fraction of the total number of things that you wanted (diamonds, given by D1+D2) that you actually got (diamonds that were also painted red, which is D1). So this fraction is D1/(D1+D2), the number you got divided by the number that was there for you to actually get.
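
The red-stone arithmetic translates directly into code. A minimal sketch, with invented counts:

```python
# Precision and recall for the red-stone rule.
# r: red stones pulled out; d1: of those, stones holding diamonds;
# d2: diamonds left behind in the bag.

def precision(d1, r):
    """Fraction of the retrieved (red) stones that held diamonds."""
    return d1 / r

def recall(d1, d2):
    """Fraction of all the diamonds that the rule actually found."""
    return d1 / (d1 + d2)

# Say we pulled out 20 red stones, 5 held diamonds, and another 5
# diamonds stayed in the bag (invented counts for illustration).
print(precision(5, 20))  # 5/20 = 0.25
print(recall(5, 5))      # 5/10 = 0.5

# The "empty the whole bag" strategy: nothing is left behind, so
# recall is perfect, but precision is just diamonds over all stones.
print(recall(10, 0))       # 10/10 = 1.0
print(precision(10, 100))  # 10/100 = 0.1
```

The last pair is the point of the next paragraph: a strategy can score perfectly on one measure while telling you nothing at all unless you report both.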

Sorry, Logan5, your time is up.

If I don’t have any other mechanism that I can rely upon for picking diamonds out of the bag (assuming no-one has conveniently painted them red), and I want all of the diamonds, then I need to take all of them out. This will give me a recall of 100% (D2 will be 0 as there will be nothing left in the bag and the fraction will be D1/D1). Hooray! I have all of the diamonds! There’s only one problem – there are still only so many diamonds in that bag and (maybe) a lot more stones, so my precision may be terrible. More importantly, my technique sucks (to use an official term) and I have no actual way of finding diamonds. I just happen to have used a mechanism that gets me everything so it must, as a side effect, get me all of the diamonds. I haven’t actually done anything except move everything from one bag to another.

One of the things about selection mechanisms is that people often seem happy to talk about one side of the precision/recall issue. “I got all of them” is fine but not if you haven’t actually reduced your problem at all. “All the ones I picked were the right ones” sounds fantastic until you realise that you don’t know how many were left behind that were also the ones that you wanted. If we can specify solutions (or selection strategies) in terms of their precision and their recall, we can start to compare them. This is an example of how something that appears to be straightforward can actually be a bad measurement – leave out one side of precision or recall and you have no real way of assessing the utility of what it is that you’re talking about, despite having some concrete numbers to fall back on.

You may have heard this expressed in another way. Let’s assume that you have a mechanism for determining whether people are innocent or guilty of a crime. If it were a perfect mechanism, then only innocent people would go free and only guilty people would go to jail. (Let’s assume it’s a crime for which a custodial sentence is appropriate.) Now, let’s assume that we don’t have a perfect mechanism, so we have to make a choice – either we set up our system so that no innocent person goes to jail, or we set up our system so that no guilty person is set free. It’s fairly easy to see how our interpretation of the presumption of innocence, the notion of reasonable doubt and even evidentiary laws would be constructed in different ways under either of these assumptions. Ultimately, this is an issue of precision and recall, and by understanding these concepts we can define what we are actually trying to achieve. (The foundation of most modern law is that innocent people don’t go to jail. A number of changes in certain areas are moving more towards a ‘no one who may be guilty of crimes of a certain type will escape us’ model and, unsurprisingly, this is causing problems due to inconsistent applications of our simple definitions from above.)

The reason that I brought all of this up was to talk about bad measurement, where we measure things and then over-interpret (torture the data) or over-assume (the only way that this could have happened was…) or over-claim (this always means that). It is possible to have a precise measurement of something and still be completely wrong about why it is occurring. It is possible that all of the data that we collect is the wrong data – collected because our fundamental hypothesis is in error. Data gives us information but our interpretative framework is crucial in determining what use we can make of this data. I talked about this yesterday and stressed the importance of having enough data, but you really have to know what your data means in order to be sure that you can even start to understand what ‘enough data’ means.

One example is the miasma theory of disease – the idea that bad smells caused disease outbreaks. You could construct a gadget that measured smells and then, say in 19th-century England, correlate this with disease outbreaks – and get quite a good correlation. This is still a bad measurement, because we’re actually measuring two effects of a common cause (dead mammals introducing decaying matter and faecal bacteria into water or food pathways): the smell of decomposition, and diseases like cholera, E. coli contamination, and so on. We can collect as much ‘smell’ data as we like, but we’re unlikely to learn much more, because any technique that focuses on the smell and reducing it will only work if we do things like remove the odiferous elements, rather than just using scent bags and pomanders to mask the smell.
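
The miasma case is the classic common-cause trap, and it is easy to reproduce in a toy simulation. In the sketch below, contamination is the hidden cause and smell and disease are both effects; all of the probabilities are invented for illustration.

```python
import random

random.seed(1)
n = 10_000
smell, disease = [], []
for _ in range(n):
    # Contamination (the real cause) drives both observed effects.
    contaminated = random.random() < 0.3
    smell.append(random.random() < (0.9 if contaminated else 0.1))
    disease.append(random.random() < (0.6 if contaminated else 0.05))

# Disease rates among smelly and fresh-smelling locations:
smelly = [d for s, d in zip(smell, disease) if s]
fresh = [d for s, d in zip(smell, disease) if not s]
print(sum(smelly) / len(smelly))  # about 0.49 in expectation
print(sum(fresh) / len(fresh))    # about 0.08 in expectation
```

Smell ‘predicts’ disease strongly, but deodorising the air (forcing smell to False without touching contamination) would change nothing: intervening on an effect does not move the cause, which is why the pomander never cured cholera.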

To look at another example, let’s talk about the number of women in Computer Science at the tertiary level. In Australia, it’s certainly pretty low in many Universities. Now, we can measure the number of women in Computer Science and we can tell you exactly how many are in a given class, what their average marks are, and all sorts of statistical data about them. The risk here is that, from the measurements alone, I may have no real idea of what has led to the low enrolments for women in Computer Science.

I have heard, far too many times, that there are too few women in Computer Science because women are ‘not good at maths/computer science/non-humanities courses’ and, as I also mentioned recently when talking about the work of Professor Seron, this doesn’t appear to be the reason at all. When we look at female academic performance and reasons for doing the degree, and try to separate men and women, we don’t get the clear separation that would support this assertion. In fact, what we see is that the representation of women in Computer Science is far lower than we would expect from the (marginally small) difference that does appear at the very top end of the data. Interesting. Once we actually start measuring, we have to question our hypothesis.

Or we can abandon our principles and our heritage as scientists and just measure something else that agrees with us.

You don’t have to get your measurement methods wrong to conduct bad measurement. You can also be looking for the wrong thing and measure it precisely: because you are attempting to find data that verifies your hypothesis, rather than being open to change if you find contradiction, you can twist your measurements to fit your hypothesis, collect only the data that supports your assumptions, and over-generalise from a small scale, or from another area.

When we look at the data, and survey people to find out the reasons behind the numbers, we reduce the risk that our measurements don’t actually serve a clear scientific purpose. For example, and as I’ve mentioned before, the reason that there are too few women studying Computer Science appears to be unpleasantly circular: there are too few women in the discipline overall, reducing support in the workplace and development opportunities, and producing a two-speed system that excludes the ‘newcomers’. Sorry, Ada and Grace (to name but two), it turns out that we seem to have very short memories.

Too often, measurement is conducted to reassure ourselves of our confirmed and immutable beliefs – people measure to say that ‘this race of people are all criminals/cheats/have this characteristic’ or ‘women cannot carry out this action’ or ‘poor people always perform this set of actions’ without necessarily asking themselves if the measurement is going to be useful, or if this is a useful pursuit as part of something larger. Measuring in a way that really doesn’t provide any more information is just an empty and disingenuous confirmation. This is forcing people into a ghetto, then declaring that “all of these people live in a ghetto, so they must like living in a ghetto”.

Presented a certain way, poor and misleading measurement can only lead to questionable interpretation, usually to serve a less than noble and utterly non-scientific goal. It’s bad enough when the media does it but it’s terrible when scientists, educators and academics do it.

Without valid data, collected on the understanding that a world-changing piece of data could actually change our minds, all our work is worthless. A world based on data collection purely for the sake of propping up existing beliefs, with no possibility of discovery and adaptation, is a world of very bad measurement.


The Many Types of Failure: What Does Zero Mean When Nothing Is Handed Up?

You may have read about the Edmonton, Canada, teacher who expected to be sacked for handing out zeros. It’s been linked to sites as diverse as Metafilter, where a long and interesting debate ensued, and Cracked, where it was labelled one of the ongoing ‘pussifications’ of schools. (Seriously? I know you’re a humour site but was there some other way you could have put that? Very disappointed.)

Basically, the Edmonton Public School Board decided that, rather than just give a zero for a missed assignment, the missing work would be used as a cue for follow-up work and additional classes at school or home. Their argument: you can’t mark work that hasn’t been submitted, so let’s use this as a trigger to try to get submission, in case the cause is external or behavioural. This, of course, puts the onus on the school to track the students, get the additional work completed, and then mark out of sequence. Lynden Dorval, the high school teacher who is at the centre of this, believes that there is too much manpower involved in doing this and that giving the student a zero forces them to come to you instead.

Some of you may never have seen one of these before. This is a zero, which is the lowest mark you can be awarded for any activity. (I hope!)

Now, of course, this has split people into two fairly neat camps – those who believe that Dorval is the “hero of zero” and those who can see the benefit of the approach, including taking into account that students still can fail if they don’t do enough work. (Where do I stand? I’d like to know a lot more than one news story before I ‘pick a side’.) I would note that a lot of tired argument and pejorative terminology has also come to the fore – you can read most of the buzzwords used against ‘progressives’ in this article, if you really want to. (I can probably summarise it for you but I wouldn’t do it objectively. This is just one example of those who are feting Dorval.)

Of course, rather than get into a heated debate where I really don’t have enough information to contribute, I’d rather talk about the basic concept – what exactly does a zero mean? If you hand something in and it meets none of my requirements, then a zero is the correct and obvious mark. But what happens if you don’t hand anything in?

With the marking approach that I practice and advertise, which uses time-based mark penalties for late submission, students are awarded marks for what they get right, rather than having marks deducted for what they do wrong. Under this scheme, “no submission” gives me nothing to mark, which means that I cannot legitimately give you any marks – so is this a straightforward zero situation? The time penalties are in place as part of the professional skill requirements, and are clearly advertised and consistently policed. I note that I am still happy to give students the same level of feedback on late work, including their final mark without penalty, which meets all of the pedagogical requirements, but the time management issues can cost a student some, most or all of their marks. (Obviously, I’m actively working on improving engagement with time management through mechanisms that are not penalty based, but that’s for other posts.)
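
A scheme of this general shape can be sketched as follows; note that the 10%-per-day penalty rate and its cap are made-up figures for illustration, not the actual schedule from my course.

```python
# A sketch of marks-earned-then-penalised grading. The 10%-per-day
# penalty rate is an illustrative assumption, not the real schedule.

def final_mark(raw_mark, days_late, penalty_per_day=0.10):
    """Marks are awarded for what is right; lateness then removes a
    fraction of the earned mark per day, never dropping below zero."""
    if raw_mark is None:  # no submission: there is nothing to mark
        return 0.0
    penalty = min(1.0, days_late * penalty_per_day)
    return raw_mark * (1.0 - penalty)

print(final_mark(80, 0))    # on time: full earned mark
print(final_mark(80, 3))    # three days late
print(final_mark(None, 0))  # nothing handed in: the zero question
```

The raw mark still exists for feedback purposes even when the penalty wipes it out, which is the pedagogical point: ‘no marks’ and ‘no feedback’ are kept separate.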

As an aside, we have three distinct fail grades for courses at my University:

  • Withdraw Fail (WF), where a student has dropped the course but after the census date. They pay the money, it stays on their record, but as a WF.
  • Fail (F), student did something but not enough to pass.
  • Fail No Submission (FNS), student submitted no work for assessment throughout the course.

Interestingly, for my Uni, FNS has a numerical grade of 0, although this is not shown on the transcript. Zero, in the course sense, means that you did absolutely nothing. In many senses, this represents the nadir of student engagement, given that many courses have somewhere from 1-5%, maybe even 10%, of their marks available for very simple activities that require very little effort.

My biggest problem with late work, or no submission, is that one of the strongest messages from that enormous corpus of student submission data that I keep talking about is that starting a pattern of late or no submission is an excellent indicator of reduced overall performance and, with recent analysis, of a sharply decreased likelihood of making it to third year (final year) in your college studies. So I really want students to hand something in – which brings me to the crux of the way that we deal with poor submission patterns.

Whichever approach I take should be the one that is most likely to bring students back into a regular submission pattern. 

If the Public School Board’s approach is increasing completion rates and this has a knock-on effect which increases completion rates in the future? Maybe it’s time to look at that resourcing profile and put the required money into this project. If it’s a transient peak that falls off because we’re just passing people who should be failing? Fuhgeddaboutit.

To quote Sherlock Holmes (Conan Doyle, naturally): 

It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts. (A Scandal in Bohemia)

“Data! Data! Data!” he cried impatiently. “I can’t make bricks without clay.” (The Adventure of the Copper Beeches)

It is very easy to take a side on this and it is very easy to see how both sides could have merit. The issue, however, is what each of these approaches actually does to encourage students to submit their assignment work in a more timely fashion. Experiments, experimental design, surveys, longitudinal analysis, data, data, data!

If I may end by waxing lyrical for a moment (and you will see why I stick to technical writing):

If zeroes make Heroes, then zeroes they must have! If nulls make for dulls, then we must seek other ways!