Heading to SIGCSE!

Snowed under – get it?

I’m pretty snowed under for the rest of the week and, while I dig myself out of a giant pile of papers on teaching first-year programmers (apparently it’s harder than throwing Cay’s book at them and yelling “LEARN!”), I thought I’d talk about some of the things that are going on in our Computer Science Education Research Group. The first thing to mention is, of course, that the group is still pretty new – it’s not quite “new car smell” territory but we are certainly still finding out exactly which direction we’re going to take and, while that’s exciting, it also makes for bitten fingernails at paper acceptance notification time.

We submitted a number of papers to SIGCSE, along with a proposal for a special session on Contributing Student Pedagogy and collaboration, following up on our multi-year study in this area and our Computer Science Education paper. One of the papers and the special session have been accepted, which is fantastic news for the group. Two other papers weren’t accepted. While one was a slightly unfortunate near-miss (but very well done, lead author who shall remain nameless [LAWSRN]), the other was a crowd-splitter. The feedback on both was excellent and it’s given me a lot to think about, as I was lead on the paper that really didn’t meet the bar. As always, it’s a juggling act to work out what to put into a paper in order to support the argument to someone outside the group and, in hindsight quite rightly, the reviewers thought that I’d missed the mark and needed to try a different tack. However, with one exception, the reviewers thought that there was something there worth pursuing and that is, really, such an important piece of knowledge that it justifies the price of admission.

Yes, I’d have preferred to have got it right the first time but the argument is crucial here and I know that I’m proposing something that is a little unorthodox. The messenger has to be able to deliver the message. Marathons are not about messengers who run three steps and drop dead before they’ve done anything useful!

The acceptances are great news for the group and will help to shape what we do for the next 12-18 months. We also now have some papers that, with some improvement, can be sent to another appropriate conference. I always tell my students that academic writing is almost never wasted because if it’s not used here, or published there, the least that you can learn is not to write like that or not about that topic. Usually, however, rewriting and reevaluation makes work stronger and more likely to find a place where you can share it with the world.

We’re already planning follow-up studies in November on some of the work that will be published at SIGCSE, and the nature of our investigations is to try to turn our findings into practically applicable steps that any teacher can take to improve participation and knowledge transfer. These are just some of the useful ideas that we hope to have ready for March but, as always, we’ll see how much we get done. We’re coming up to the busy end of semester with final marking, exams and all of that, as well as the descent into admin madness as we lose the excuse of “hey, I’d love to do that but I’m teaching.” I have to make sure that I wrestle enough research time into my calendar to pursue some of the exciting work that we have planned.

I look forward to seeing some of you in Colorado in March to talk about how it went!

Things to do in Denver when you’re Ed?


Students and Programming: A stroll through the archives in the contemplation of self-regulation.

I’ve been digging back into the foundations of Computer Science Education to develop some more breadth in the area and trying to fill in some of the reading holes that have developed as I’ve chased certain ideas forward. I’ve been looking at Mayer’s “The Psychology of How Novices Learn Computer Programming” from 1981, following it forward to a number of papers, including McCracken (Chair) et al.’s “A multi-national, multi-institutional study of assessment of programming skills of first-year CS students”. Among the many interesting items presented in this paper was a measure of Degree of Closeness (DoC): a quantification of how close the student had come to providing a correct solution, assessed on their source code. The DoC is rated on a five-point scale, with 1 being the furthest from a correct solution. These “DoC 1” students are of a great deal of interest to me because they include those students who submitted nothing – possible evidence of disengagement or just the student being overwhelmed. In fact, the DoC 1 students were classified into three types:

  • Type 1: The student handed up an empty file.
  • Type 2: The student’s work showed no evidence of a plan.
  • Type 3: The student appeared to have a plan but didn’t carry it out.

Why did the students do something without a plan? The authors hypothesise that the student may have been following a heuristic approach, doing what they could until they could go no further. Type 3 was further subdivided into 3a (the student had a good plan or structure) and 3b (the student had a poor plan or structure). All of these, however, have one thing in common: they can indicate a lack of resource organisation, which may be identified as a shortfall in metacognition. On reflection, however, many of these students blamed external factors for their problems. The Type 1 students blamed the time that they had to undertake the task, the lab machines, and their lack of familiarity with the language. The DoC 5 students (from the same school) described their difficulties in terms of the process of creating a solution. Other comments from DoC 1 and 2 students included insufficient time, students “not being good” at whatever this question was asking and, in one case, “Too cold environment, problem was too hard.” The most frequent complaint among the low-performing students was that they had not had enough time, the presumption being that, had enough time been available, a solution was possible. Combine this with the students who handed up nothing or had no plan and we must start to question this assertion. (It is worth noting that some low-performing students had taken this test as their first ever solo lab-based examination, so we cannot just dismiss all of these comments!)
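
To keep the typology straight for my own analysis, here’s a minimal sketch of how it might be encoded (the feature names and the mapping function are my own, hypothetical ones – the paper itself classified submissions by hand):

    from enum import Enum

    class Doc1Type(Enum):
        EMPTY_FILE = 1       # Type 1: an empty file was handed up
        NO_PLAN = 2          # Type 2: code, but no evidence of a plan
        UNREALISED_PLAN = 3  # Type 3: a plan is visible but not carried out

    def classify_doc1(file_is_empty: bool, plan_is_evident: bool) -> Doc1Type:
        """Map two observable features of a submission onto the three types."""
        if file_is_empty:
            return Doc1Type.EMPTY_FILE
        return Doc1Type.UNREALISED_PLAN if plan_is_evident else Doc1Type.NO_PLAN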

The paper discusses a lot more and is rather critical of its own procedure (perhaps the time pressure was too high, the specifications a little cluttered, highly procedural rather than OO) and I would not argue with the authors on any of this but, from my perspective, I am zooming in on the issue of time because, if you’ve read any of my stuff before, you’ll know that I am working in self-regulation and time management. I look at the Types of DoC 1 students and I can see exactly what I saw in my own student timeliness data and reflection reports: a lack of ability to organise resources. This is now, apparently, combined with a persistent belief that fixing this was beyond the student’s control. It’s unsurprising that handing up nothing suddenly became a valid option.

The null submission could be a clear indicator of a failure of organisational ability, where the student can’t muster any kind of solution to the problem at all. Not one line of code or approximate solution. What is puzzling about this is that the activity was, in fact, heavily scheduled. Students sat in a lab and undertook it. There was no other task for them to perform except to do this code in either 1 or 1.5 hours. To not do anything at all may be a reaction to time pressure (as the authors raised) or it could be complete ignorance of how to solve the problem. There’s too much uncertainty here for me to say much more about this.

The “no plan” solution can likely be explained by the heuristic focus and I’ve certainly seen evidence of it. One of the most unforgiving aspects of the heuristic solution is that, without a design, it is easy to end up in a place where you are running out of time and have no idea of where to go to solve unforeseen problems that have arisen. These students are the ones who I would expect to start on the last day that something is due and throw together a solution, working later and panicking more as they realise that their code isn’t working. Having done a bit here and a piece there, they may cobble something together and hand it up but it is unlikely to work and is never robust.

The “I planned it but I couldn’t do it” group fall heavily into the problem space of self-regulation, because they had managed to organise their resources – so why didn’t anything come out? Did they procrastinate? Was their meta-planning process deficient, in that they spent most of their time perfecting a plan and not leaving enough time to make it happen? I have a number of students who have a tendency to go down the rabbit hole when chasing design issues and I sometimes have to reach down, grab them by the ears and haul them out. The reality of time constraints is that you have to work out what you can do and then do as much as you can with that time.

This is fascinating because I’m really trying to work out at which point students will give up, and DoC 1 basically amounts to an “I didn’t manage it” mark in my local system. I have data that shows the marks students get from automated marking (immediate assessment) so I can look to see how long people will keep trying to get above what would, effectively, be DoC 1 – and probably up to around DoC 3. (The paper defines DoC 3 as “In reading the source code, the outline of a viable solution was apparent, including meaningful comments, stub code, or a good start on the code.” This would be enough to meet our assessment requirements, although the mark wouldn’t be great.) DoC 1 would, I suspect, amount to “no submission” in many cases, so my DoC 1 students are those who stayed enrolled (and sat the exam) but never created a repository or submission. (There are so many degrees of disengagement!)
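
As a rough sketch of the sort of query I have in mind – the data layout below is invented for illustration and is not our actual marking system:

    from datetime import datetime

    # One student's automated-marking history: (submission time, mark out of 100).
    # Invented values; the real system records far more than this.
    history = [
        (datetime(2012, 9, 3, 14, 0), 10),
        (datetime(2012, 9, 3, 16, 30), 35),
        (datetime(2012, 9, 4, 11, 15), 55),
    ]

    def time_spent_reaching(history, threshold):
        """How long did the student keep resubmitting before first reaching
        `threshold`? None means they stopped below it: a possible give-up."""
        start = history[0][0]
        for when, mark in history:
            if mark >= threshold:
                return when - start
        return None

    print(time_spent_reaching(history, 50))  # 21:15:00 of persistence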

I, of course, now have to move further forward along this paper line and I will hopefully intersect with my ‘contemporary’ reading into student programming activity. I will be reading pretty solidly on all of this for the upcoming months as we try to refine the time management and self-regulation strategies that we’ll be employing next year.


Thoughts on Overloading: I Still Appear to be Ignoring My Own Advice

The delicate art of Highway Jenga(TM)

I was musing recently on the inherent issues with giving students more work to do, if they are already overloaded to a point where they start doing questionable things (like cheating). A friend of mine is also going through a contemplation of how he seems to be so busy that fitting in everything that he wants to do keeps him up until midnight. My answer to him, which includes some previous comments from other people, is revealing – not least because I am talking through my own lens, and I appear to still feel that I am doing too much.

Because I am a little too busy, I am going to repost (with some editing to remove personal detail and clarify) what I wrote to him, which distils a lot of my thoughts over the past few months on overloading. This was all in answer to the question: “How do people fit everything in?”

You have deliberately committed to a large number of things and you wish to perform all of them at a high standard. However, to do this requires that you spend a very large amount of time, including those things that you need to do for your work.

Most people do one of three things:

    1. they do not commit to as much,
    2. they do commit to as much but do it badly, or
    3. they lie about what they are doing because claiming to be a work powerhouse is a status symbol.

A very, very small group of people can buck the well-documented long-term effects of overwork but these people are in the minority. I would like to tell you what generally happens to people who over-commit, while readily admitting that this might not apply to you. Most of this is based on research, informed by bitter personal experience.

The long-term effects of overwork (as a result of over-commitment) are sinister and self-defeating. As fatigue increases, errors increase. The introduction of errors requires you to spend more time to achieve tasks because you are now doing the original task AND fixing errors, whether the errors are being injected by you or they are actually just unforeseen events because your metacognitive skills (resource organisation) are being impaired by fatigue.

However, it’s worse than that because you start to lose situational awareness as well. You start to perform tasks because they are there to perform, without necessarily worrying about why or how you’re doing it. Suddenly, not only are you tired and risking the introduction of errors, you start to lose the ability to question whether you should be carrying out a certain action in the first place.

Then it gets worse again because not only do obstacles now appear to be thrown up with more regularity (because your error rates are going up, your frustration levels are high and you’re losing resource organisational ability) but even the completion of goals merely becomes something that facilitates more work. Having completed job X, because you’re over-committed, you must immediately commence job X+1. Goal completion, which should be a time for celebration and reflection, now becomes a way to open more gateways of burden. Goals delayed become a source of frustration. The likely outcome is diminished enjoyment and an encroaching sense of work, work, work.

[I have removed a paragraph here that contained too much personal detail of my friend.]

So, the question is whether your work is too much, given everything else that you want to do, and only you can answer that: are you frustrated by it most of the time? Are you enjoying achieving goals, or are they merely opening more doors of work? I don’t expect you to reply on this one but it’s an important question – how do you feel when you open your eyes in the morning? How often are you angry at things? Is this something that you want to continue for the foreseeable future?

Would you still do it, if you didn’t have to pay the rent and eat?

Regrettably, one of the biggest problems with over-commitment is not having time to adequately reflect. However, long term over-commitment is clearly demonstrated (through research) to be bad for manual labourers, soldiers, professionals, and knowledge workers. The loss of situational awareness and cognitive function are not good for anyone. 

My belief is that an approach based on listening to your body and working within sensible and sustainable limits is possible for all aspects of life, but I readily acknowledge that the transition away from over-commitment to sustainable commitment can be very, very hard. I’m facing that challenge at the moment and know that it is anything but easy. I’m not trying to lecture you, I’m trying to share my own take on it, which may or may not apply. However, you should always feel free to drop by for a coffee to chat, if you like, and I hope that you have some easier and less committed times ahead.

Reading through this, I am reminded of how much work I have left to do in order to reduce my overall commitments to sensible levels. It’s hard, sometimes, because there are so many things that I want to do but I can easily point to a couple of indicators that tell me that I still don’t quite have the balance right. For example, I’m managing my time at the moment, but that’s probably because being unable to run has given me roughly 8 hours a week back to spend elsewhere. I am getting things done because I am using up almost all of that running time by working in it instead. And that, put simply, means I’m regularly working longer hours than I should.

Looking back at the advice, I am projecting my own problems with goals: completing something merely unlocks new burdens, and there is very little feeling of finalisation. I am very careful to try and give my students closure points, guidance and a knowledge of when to stop. Time to take a weekend and reflect on how I can get that back for myself – and still do everything cool that I want to do! 🙂


Workshop report: ALTC Workshop “Assessing student learning against the Engineering Accreditation Competency Standards: A practical approach”

I was fortunate to be able to attend a 3-hour workshop today, presented by Professor Wageeh Boles (Queensland University of Technology) and Professor Jeffrey (Jeff) Froyd (Texas A&M), on how we could assess student learning against the accreditation competency standards in Engineering. I’ve seen Wageeh present before in his capacity as an Australian Learning and Teaching Council (ALTC) National Teaching Fellow and greatly enjoyed it, so I was looking forward to today. (Note: the ALTC has been replaced by the Office for Learning and Teaching, OLT, but a number of schemes are still labelled under the old title. Fortunately, I speak acronym.)

Both Wageeh and Jeff spoke at length about why we were undertaking assessment and we started by looking at the big picture: University graduate capabilities and the Engineers Australia accreditation criteria. Like it or not, we live in a world where people expect our students to be able to achieve well-defined things and to demonstrate certain skills. To focus on the course, unit, teaching and learning objectives and assessment alone, without framing these in the national and University expectations, is to risk not producing the students that are expected or desired. Ultimately, if the high-level and local requirements aren’t linked then they should be, because otherwise we’re probably not pursuing the right objectives. (Is it too soon to mention pedagogical luck again?)

We then discussed three types of assessment:

  • Assessment FOR Learning: for teachers, allowing them to determine the next steps in advancing learning.
  • Assessment AS Learning: for students, allowing them to monitor and reflect upon their own progress (effectively formative).
  • Assessment OF Learning: used to assess what the students have learned, most often characterised as summative assessment.

But, after being asked about the formative/summative approach, this was recast into a decision-making framework. We carry out assessment of all kinds to allow people to make better decisions and the people, in this situation, are Educators and Students. When we see the results of the summative assessment we, as teachers, can then ask “What decisions do we need to make for this class?” to improve the levels of knowledge demonstrated in the summative. When the students see the result of formative assessment, we then have the question “What decisions do students need to make?” to improve their own understanding. The final aspect, Assessment FOR Learning, covers those areas of assessment that help both educators and students to make better decisions by making changes to the overall course in response to what we’re seeing.

This is a powerful concept as it identifies assessment in terms of responsible groups: this assessment involves one group, the other or both and this is why you need to think about the results. (As an aside, this is why I strongly subscribe to the idea that formative assessment should never have an extrinsic motivating aspect, like empty or easy submission marks, because it stops the student focussing on the feedback, which will help their decisions, and makes it look summative, which suddenly starts to look like the educator’s problem.)

One point that came out repeatedly was that our assessment methods should be varied. If your entire assessment is based on a single exam, of one type of question, at the end of the semester then you really only have a single point of data. Anyone who has ever drawn a line on a graph knows that a single point tells you nothing about the shape of the line and, ultimately, the more points that you can plot accurately, the more you can work out what is actually happening. However, varying assessment methods doesn’t mean replicating or proxying the exam; it means providing different assessment types, varying questions and changing assessment over time. (Yes, this was stressed: changing assessment from offering to offering is important and is as much a part of varying assessment as any other component.)

All delightful music to my ears, which was just as well as we all worked very hard, talking, discussing and sharing ideas throughout the groups. We had a range of people who were mostly from within the Faculty and, while it was a small group and full of the usual faces, we all worked well, had an open discussion and there were some first-timers who obviously learned a lot.

What I found great about this was that it was very strongly practical. We worked on our own courses, looked for points to improve, and I took away four improvements that I’m currently working on: a fantastic result for a three-hour investment. Our students don’t just need to have done assessment that makes it look like they know their stuff, they have to actually know their stuff and be confident with it. Job ready. Able to stand up and demonstrate their skills. Ready for reality.

As was discussed in the workshop, assessment of learning occurs when Lecturers:

  • Use evidence of student learning
  • to make judgements on student achievement
  • against goals and standards

And this identifies some of our key problems. We often gather all of the evidence, whether it’s final grades or Student Evaluations, at a point when the students have left, or are just about to leave, the course. How can we change this course for that student? We are always working one step in the past. Even if we do have the data, do we have the time and the knowledge to make the right judgement? If so, is it defensible, fair and meeting the standards that we should be meeting? We can’t apply standards from 20 years ago just because that’s what we’re used to. The future, in Australia, is death by educational acronyms (AQF, TEQSA, EA, ACS, OLT…) but these are the standards by which we are accredited and these are the yardsticks by which our students will be judged. If we want to change those then, sure, we can argue this at the Government level but, until then, these have to be taken into account, along with all of our discipline, faculty and University requirements.

I think that this will probably spill over in a second post but, in short, if you get a chance to see Wageeh and Jeff on the road with this workshop then, please, set aside the time to go and leave time for a chat afterwards. This is one of the most rewarding and useful activities that I’ve done this year – and I’ve had a very good year for thinking about CS Education.


Howdy, Partner

I am giving a talk on Friday about the partnership relationship between teacher and student and, in my opinion, why we often accidentally attack this through a less-than-optimal approach to assessment and deadlines. I’ve spoken before about how an arbitrary deadline that is convenient for administrative reasons is effectively pedagogically and ethically indefensible. For all that we disparage our students, if we do, for focusing on marks and sometimes resorting to cheating rather than focusing on educational goals, we leave ourselves open to valid accusations of hypocrisy if we have the same ‘ends justify the means’ approach to setting deadlines.

Consistency and authenticity are vital if we are going to build solid relationships, but let me go further. We’re not just building a relationship, we’re building an expectation of continuity over time. If students know that their interests are being considered, that what we are teaching is necessary and that we will always try to deal with them fairly, they are far more likely to invest the effort that we wish them to invest and to develop the knowledge. More importantly, a good relationship is resilient, in that the occasional hiccup doesn’t destroy the whole thing. If we have been consistent and fair, and forces beyond our control affect something that we’ve tried to do, my experience is that students tolerate it quite well. If, however, you have been arbitrary, unprepared, inconsistent and indifferent, then you will (fairly or not) be blamed for anything else that goes wrong.

We cannot apply one rule to ourselves and a different one to our students and expect them to take us seriously. If you accept no work that is even one second late but keep showing up to lectures late and unprepared, then your students have every right to roll their eyes and not take you seriously. This doesn’t excuse them if they cheat, but you have certainly not laid the groundwork for a solid partnership. Why partnership? Because the students in higher education should graduate as your professional peers, even if they are not yet your peers in academia. I do not teach in the school system and I do not have to deal with developmental stages of the child (although I’m up to my armpits in neo-Piagetian development in the knowledge areas, of course).

We return to the scaffolding argument again. Much as I should be able to remove the supports for their coding and writing development over their degree, I should also be able to remove the supports for their professional skills, team-based activities and deadlines because, in a few short months, they will be out in the work force and they will need these skills! If I take a strictly hierarchical approach where a student is innately subordinate to me, I do not prepare them for a number of their work experiences and I risk limiting their development. If I combine my expertise and my oversight requirements with a notion of partnership, then I can work with the student for some things and prepare the student for a realistic workplace. Yes, there are rules and genuine deadlines but the majority experience in the professional workplace relies upon autonomy and self-regulation, if we are to get useful and creative output from these new graduates.

If I demand compliance, I may achieve it, but we are more than well aware that extrinsic motivating factors stifle creativity and it is only at those jobs where almost no cognitive function is required that the carrot and the stick show any impact. Partnership requires me to explain what I want and why I need it – why it’s useful. This, in turn, requires me to actually know this and to have designed a course where I can give a genuine answer that illustrates these points!

“Because I said so,” is the last resort of the tired parent and it shouldn’t be the backbone of an entire deadline methodology. Yes, there are deadlines and they are important but this does not mean that every single requirement falls into the same category or should be treated in the same way. By being honest about this, by allowing for exchange at the peer-level where possible and appropriate, and by trying to be consistent about the application of necessary rules to both parties, rather than applying them arbitrarily, we actually are making our students work harder but for a more personal benefit. It is easy to react to blind authority and be resentful, to excuse bad behaviour because you’re attending a ‘bad course’. It is much harder for the student to come up with comfortable false rationalisations when they have a more equal say, when they are informed in advance as to what is and what is not important, and when the deadlines are set by necessity rather than fiat.

I think a lot of people miss one of the key aspects of fixing assessment: we’re not trying to give students an easier ride, we’re trying to get them to do better work. Better work usually requires more effort but this additional effort is now directed along the lines that should develop better knowledge. Partnership is not some way for students to negotiate their way out of submissions, it’s a way that, among other things, allows me to get students to recognise how much work they actually have to do in order to achieve useful things.

If I can’t answer the question “Why do my students have to do this?” when I ask it of myself, I should immediately revisit the activity and learning design to fix things so that I either have an answer or I have a brand new piece of work for them to do.


Conference Blogging! (Redux)

I’m about to head off to another conference and I’ve taken a new approach to my blogging. Rather than my traditional “Pre-load the queue with posts” activity, which tends to feel a little stilted even when I blog other things around it, I’ll be blogging in direct response to the conference and not using my standard posting time.

I’m off to ICER, which is only my second educational research conference, and I’m very excited. It’s a small but highly regarded conference and I’m getting ready for a lot of very smart people to turn their considerably weighty gaze upon the work that I’m presenting. My paper concerns the early detection of at-risk students, based on our analysis of over 200,000 student submissions. In a nutshell, our investigations indicate that paying attention to a student’s initial behaviour gives you some idea of future performance, as you’d expect, but it is the negative (late) behaviour that is the most telling. While there are no astounding revelations in this work, if you’ve read across the area, putting it all together with a large data corpus allows us to approach some myths and gently deflate them.

Our metric is timeliness, or how reliably a student submitted their work on time. Given that late penalties apply (without exception, usually) across the assignments in our school, late submission amounts to an expensive and self-defeating behaviour. We tracked over 1,900 students across all years of the undergraduate program and looked at all of their electronic submissions (all programming code is submitted this way, as are most other assignments.) A lot of the results were not that unexpected – students display hyperbolic temporal discounting, for example – but some things were slightly less expected.

For example, while 39% of my students hand in everything on time, 30% of people who hand in their first assignment late then go on to have a blemish-free future record. However, students who hand up that first assignment late are approximately twice as likely to have problems – which moves this group into a weakly classified at-risk category. Now, I note that this is before any marking has taken place, which means that, if you’re tracking submissions, one very quick and easy way to detect people who might be having problems is to look at the first assignment submission time. This inspection takes about a second and can easily be automated, so it’s a very low-burden scheme for picking up people with problems. A personalised response, with constructive feedback or a gentle question, in the zone where the student should have submitted (but didn’t), can be very effective here. You’ll note that I’m working with late submitters, not non-submitters. Late submitters are trying to stay engaged but aren’t judging their time or allocating resources well. Non-submitters have decided that effort is no longer worth allocating to this. (One of the things I’m investigating is whether a reminder in the ‘late submission’ area can turn non-submitters into submitters, but this is a long way from any outcomes.)
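
A minimal sketch of what that automated check could look like – the identifiers and data layout here are invented for illustration, not our production system:

    # First-assignment lateness per student: True means it was late.
    # Invented sample data.
    first_submission_late = {
        "a1234567": True,
        "a7654321": False,
    }

    def flag_at_risk(first_submission_late):
        """Weakly classify students as at-risk from first-assignment lateness.

        In our data, a late first submission roughly doubles the chance of
        later problems, so it earns a flag and a personalised, constructive
        nudge - not a penalty."""
        return {sid for sid, late in first_submission_late.items() if late}

    for student in sorted(flag_at_risk(first_submission_late)):
        print(f"Check in with {student}: first assignment was late.")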

I should note that the type of assignment work is important here. Computer programs, at least in the assignments that we set, are not just copied in from text. The students are not remembering material or demonstrating understanding; they are using the information in new ways to construct solutions to problems. In Bloom’s revised taxonomic terms, this is the “Applying” phase and it requires that the student be sufficiently familiar with the work to be able to understand how to apply it.

Bloom’s Revised Taxonomy

I’m not measuring my students’ timeliness in terms of their ability to show up to a lecture and sleep or to hand up an essay of three paragraphs that barely meets my requirements because it’s been Frankenwritten from a variety of sources. The programming task requires them to look at a problem, design a solution, implement it and then demonstrate that it works. Their code won’t even compile (turn into a form that a machine can execute) unless they understand enough about the programming language and the problem, so this is a very useful indication of how well the student is keeping up with the demands of the course. By focusing on an “Applying” task, we require the student to undertake a task that is going to take time and the way in which they assess this resource and decide on its management tells us a lot about their metacognitive skills, how they are situated in the course and, ultimately, how at-risk they actually are.

Looking at assignment submission patterns is a crude measure, unashamedly, but it’s a cheap measure, as well, with a reasonable degree of accuracy. I can determine, with 100% accuracy, if a student is at-risk by waiting until the end of the course to see if they fail. I have accuracy but no utility, or agency, in this model. I can assume everyone is at risk at the start and then have the inevitable problem of people not identifying themselves as being in this area until it’s too late. By identifying a behaviour that can lead to problems, I can use this as part of my feedback to illustrate a concrete issue that the student needs to address. I now have the statistical evidence to back up why I should invest effort into this approach.

Yes, you get a lot of excuses as to why something happened, but I have derived a great deal of value from asking students questions like “Why did you submit this late?” and then, when they give me their excuse, asking them “How are you going to avoid it next time?” I am no longer surprised at the slightly puzzled look on the student’s face as they realise that this is a valid and necessary question – I’m not interested in punishing them, I want them to not make the same mistake again. How can we do that?

I’ll leave the rest of this discussion for after my talk on Monday.


The Precipice of “Everything’s Late”

I spent most of today working on the paper that I alluded to earlier where, after over a year of trying to work on it, I hadn’t made any progress. Having finally managed to dig myself out of the pit I was in, I had the mental capacity, and the room in my timeline, to sit down for the 6 hours it required and go through it all.

Climbers, eh?

In thinking about procrastination, you have to take into account something important: most of us work in a hyperbolic model where we expend no effort until the deadline is right upon us and then we put everything in. This is temporal discounting: essentially, we place less importance on things in the future than on the things that are important to us now. For complex, multi-stage tasks spread over time this is an exceedingly bad strategy, especially if we focus on the deadline of delivery rather than the starting point. If we underestimate the time a task requires and we construct our ‘panic now’ strategy based on our proximity to the deadline, then we are at serious risk of missing the starting point because, when it arrives, it just won’t be that important.
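
For the record, the standard hyperbolic form (a textbook formulation, not something fitted to my data) values a reward of size A that is D days away as:

    V = \frac{A}{1 + kD}

where k sets how steeply we discount the future: while D is large the task feels nearly worthless, and as D collapses towards zero its value shoots up – which is exactly the ‘panic now’ behaviour described above.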

Now, let’s increase the difficulty of the whole thing and remember that the more things we have to think about in the present, the greater the risk that we’re going to exceed our capacity for cognitive load and hit the ‘helmet fire’ point – we will be unable to do anything because we’ve run out of the ability to choose what to do effectively. Of course, because we suffer from a hyperbolic discounting problem, we might do things now that are easy to do (because we can see both the beginning and end points inside our window of visibility) and this runs the risk that the things we leave to do later are far more complicated.

This is one of the nastiest implications of poor time management: you might actually not be procrastinating in terms of doing nothing, you might be working constantly but doing the wrong things. Combine this with the pressures of life, the influence of mood and mental state, and we have a pit that can open very wide – and you disappear into it wondering what happened because you thought you were doing so much!

This is a terrible problem for students because, let’s be honest, in your teens there are a lot of important things that are not quite assignments or studying for exams. (Hey, it’s true later too, we just have to pretend to be grownups.) Some of my students are absolutely flat out with activities, a lot of which are actually quite useful, but, because they haven’t worked out which ones have to be done now, they do the ones that can be done now – the pit opens and looms.

One of the big advantages of reviewing large tasks to break them into components is that you start to see how many ‘time units’ have to be carried out in order to reach your goal. Putting it into any kind of tracking system (even if it’s as simple as an Excel spreadsheet) allows you to see it compared to other things: it reduces the effect of temporal discounting.
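
Even a throwaway script will do the job; here’s a sketch under invented numbers (the tasks, estimates and free hours are all placeholders):

    # Each task: (name, estimated hours of work, days until it is due).
    # Placeholder values for illustration.
    tasks = [
        ("Draft paper section", 6, 10),
        ("Mark assignment 2", 8, 3),
        ("Prepare lecture", 4, 2),
    ]

    HOURS_FREE_PER_DAY = 2  # honest free time, not wishful thinking

    # Comparing estimates against the hours actually available counters
    # temporal discounting: a far-off deadline can still demand work today.
    for name, estimate, days_left in sorted(tasks, key=lambda t: t[2]):
        budget = HOURS_FREE_PER_DAY * days_left
        status = "OK" if estimate <= budget else "START NOW"
        print(f"{name}: needs {estimate}h, have {budget}h -> {status}")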

When I first put in everything that I had to do as appointments in my calendar, I assumed that I had made a mistake because I had run out of time in the week and was, in some cases, triple booked, even after I spilled over to weekends. This wasn’t a mistake in assembling the calendar, this was an indication that I’d overcommitted and, over the past few months, I’ve been streamlining down so that my worst week still has a few hours free. (Yeah, yeah, not perfect, but there you go.) However, there was this little problem that anything that had been pushed into the late queue got later and later – the whole ‘deal with it soon’ became ‘deal with it now’ or ‘I should have dealt with that by now’.

Like students, my overcommitment wasn’t an obvious “Yes, I want to work too hard” commitment, it snuck in as bits and pieces. A commitment here, a commitment there, a ‘yes’, a ‘sure, I can do that’, and, because you sometimes have to make decisions on the fly, you suddenly look around and think “What happened?” The last thing I want to do here is lecture; I want to understand how I can take my experience, learn from it, and pass something useful on. The basic message is that we all work very hard and sometimes don’t make the best decisions. For me, the challenge now, knowing this, is how I can construct something that tries to defeat this self-destructive behaviour in my students.

This week marks the time where I hope to have cleared everything on the ‘now/by now’ queue and finally be ahead. My friends know that I’ve said that a lot this year but it’s hard to read and think in the area of time management without learning something. (Some people might argue but I don’t write here to tell you that I have everything sorted, I write here to think and hopefully pass something on through the processes I’m going through.)


Time Banking: More and more reading.

I’ve spent most of the last week putting together the ideas of time banking, reviewing my reading list and then digging for more papers to read and integrate. It’s always a bit of a worry when you go to see if what you’ve been thinking about for 12 months has just been published by someone else but, fortunately, most people are still using traditional deadlines so I’m safe. I read a lot of papers but none more than when I’m planning or writing a paper: I need to know what else has happened if I’m to frame my work correctly and not accidentally re-invent the wheel. Especially if it’s a triangular wheel that never worked.

My focus is Time Banking so that’s what I’ve been searching for – concepts, names, similarities – to make sure that what I’m doing will make an additional contribution. This isn’t to say that Time Banking hasn’t been used before as a term or even a concept. I’m aware of several universities that allow students to draw on a fixed number of extra days (Stanford being the obvious example) and the concept of banking your time is certainly not new – there’s even a Dilbert cartoon for it! There are papers on time banking, at low granularity and with little student control – it’s more of a convenient deadline extender than a mechanism for developing metacognition in order to promote self-regulating learning strategies in the student. Which is good, because the latter is the approach I’m taking.
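
Mechanically, the ledger at the heart of what I mean is tiny – this is a design sketch of my proposal rather than an implemented system, and the numbers are placeholders:

    class TimeBank:
        """A per-student ledger of flexible extension hours.

        The ledger is trivial on purpose: the pedagogy lives in the student
        consciously deciding to spend banked time, which is the
        metacognitive act we want to develop.
        """
        def __init__(self, opening_balance_hours=72):  # placeholder allowance
            self.balance = opening_balance_hours

        def request_extension(self, hours):
            """Spend banked hours on a deadline; refuse an overdraft."""
            if hours > self.balance:
                return False  # prompts a planning conversation, not a penalty
            self.balance -= hours
            return True

    bank = TimeBank()
    print(bank.request_extension(24), bank.balance)  # True 48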

The reasoning and methodology that I’m using does appear to be relatively novel and it encompasses a whole range of issues: pedagogy, self-regulation, ethics and evidence-based analysis of how deadlines are currently working for us. It’s a lot to fit into one paper but I have hope that I can at least cover the philosophical background of why what I’m doing is a good idea, not just because I want to convince my peers but because I want volunteers for when pilot schemes start to occur.

It’s not enough that something is a good idea, or that it reads well; it has to work. It has to be able to be deployed, we have to be able to measure it, collect evidence and say “Yes, this is what we wanted.” Then we publish lots more papers and win major awards – Profit! (Actually, if it’s a really good idea then we want everyone to do it. Widespread adoption that enhances education is the real profit.)

Like this but with less underpants collecting and more revolutionising education.

More seriously, I love writing papers because I really have to think deeply about what I’m saying. How does it fit with existing research? Has this been tried before? If so, did it work? Did it fail? What am I doing that is different? What am I really trying to achieve?

How can I convince another educator that this is actually a good idea?

The first draft of the paper is written and now my co-authors are scouring it, playing Devil’s advocate, and seeing how many useful and repairable holes they can tear in it in order to make it worthy of publication. Then it will go off at some point and a number of nice people will push it out to sea and shoot at it with large weapons to see if it sinks or swims. Then I get feedback (and hopefully a publication) and everyone learns something.

I’m really looking forward to seeing the first actual submission draft – I want to see what the polished ideas look like!


Musing on scaffolding: Why Do We Keep Needing Deadlines?

One of the things about being a Computer Science researcher who is on the way to becoming a Computer Science Education Researcher is the sheer volume of educational literature that you have to read up on. There’s nothing more embarrassing than having an “A-ha!” moment that turns out to have been covered 50 years ago – the equivalent of saying “Water – when it freezes – becomes this new solid form I call Falkneranium!”

Ahem. So my apologies to all who read my ravings and think “You know, X said that … and a little better, if truth be told.” However, a great way to pick up on other things is to read other people’s blogs because they reinforce and develop your knowledge, as well as giving you links to interesting papers. Even when you’ve seen a concept before, unsurprisingly, watching experts work with that concept can be highly informative.

I was reading Mark Guzdial’s blog some time ago and his post on the Khan Academy’s take on Computer Science appealed to me for a number of reasons, not least for his discussion of scaffolding; in this case, a tutor-guided exploration of a space with students that is based upon modelling, coaching and exploration. Importantly, however, this scaffolding fades over time as the student develops their own expertise and needs our help less. It’s like learning to ride a bike – start with trainer wheels, progress to a running-alongside parent, aspire to free wheeling! (But call a parent if you fall over or it’s too wet to ride home.)

One of my key areas of interest is self-regulation in students – producing students who no longer need me because they are self-aware, reflective, critical thinkers, conscious of how they fit into the discipline and (sufficiently) expert to be able to go out into the world. My thinking around Time Banking is one of the ways that students can become self-regulating – they manage their own time in a mature and aware fashion without me having to waggle a finger at them to get them to do something.

Today, R (a postdoc in the Computer Science Education Research Group) and I spent about two hours brainstorming ideas for upcoming papers. I love a good brainstorm because, for some time afterwards, ideas and phrases come to me that allow me to really think about what I’m doing. Combining my reading of Mark’s blog and the associated links, especially about the deliberate reduction of scaffolding over time, with my thoughts on time management and pedagogy, I had this thought:

If imposed deadlines have any impact upon the development of student timeliness, why do we continue to need them into the final year of undergraduate and beyond? When do the trainer wheels come off?

Now, of course, the first response is that they are an administrative requirement, a necessary evil, so they are (somehow) exempt from a pedagogical critique. Hmm. For detailed reasons that will go into the paper I’m writing, I don’t really buy that. Yes, every course (and program) has a final administrative requirement. Yes, we need time to mark and return assignments (or to provide feedback on those assignments, depending on the nature of the assessment obviously). But all of the data I have says that not only do the majority of students hand up on the last day (if not later), but that they continue to do so into later years – getting later and later as they progress, rather than earlier and earlier. Our administrative requirement appears to have no pedagogical analogue.
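
That claim is straightforward to test once submissions are timestamped; here is a sketch of the comparison (the records below are invented – the real analysis controls for assignment type and course):

    from collections import defaultdict
    from statistics import median

    # Invented records: (year level, submission offset in hours;
    # positive means late, negative means early).
    submissions = [(1, -30), (1, -2), (2, 1), (2, -1), (3, 5), (3, 12)]

    by_year = defaultdict(list)
    for year, offset in submissions:
        by_year[year].append(offset)

    # If deadlines taught timeliness, the median should drift earlier
    # with year level; our data shows the opposite drift.
    for year in sorted(by_year):
        print(f"Year {year}: median submission {median(by_year[year]):+}h "
              "relative to deadline")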

So here is another reason to look at these deadlines, or at least at the way that we impose them in my institution. If an entry test didn’t correlate at all with performance, we’d change it. If a degree turned out students who couldn’t function in the world, industry consultation would pretty smartly suggest that we change it. Yet deadlines, which we accept with little comment most of the time, only appear to work while they are imposed and, over time, appear to show no development of the related skill that they supposedly practise – timeliness. Instead, we appear to enforce compliance and, as we would expect from behavioural training on external factors, we must continue to apply the external stimulus in order to elicit the appropriate compliance.

Scaffolding works. Is it possible to apply a deadline system that also fades out over time as our students become more expert in their own time management?

I have two days of paper writing on Thursday and Friday and I’m very much looking forward to the further exploration of these ideas, especially as I continue to delve into the deep literature pile that I’ve accumulated!


More Thoughts on Partnership: Teacher/Student

I’ve just received some feedback on an abstract that is going into a local educational research conference. I talked about the issues with the arbitrary allocation of deadlines outside the framing of sound educational design and about how it fundamentally undermines any notion of partnership between teacher and student. The responses were very positive, although I’m always wary when people start using phrases like “should generate vigorous debate around expectations of academics” and “It may be controversial, but [probably] in a good way”. What interests me is how I got to the point of presenting something that might be considered heretical – I started by just looking at the data and, as I uncovered unexpected features, I started to ask ‘why’ and that’s how I got here.

When the data doesn’t fit your hypothesis, it’s time to look at your data collection, your analysis, your hypothesis and the body of evidence supporting your hypothesis. Fortunately, Bayes’ Theorem nicely sums it up for us: your belief in your hypothesis after you collect your evidence is proportional to how strongly your hypothesis was originally supported, modified by the chances of seeing what you did given the existing hypothesis. If your data cannot be supported under your hypothesis – something is wrong. We, of course, should never just ignore the evidence as it is in the exploration that we are truly scientists. Similarly, it is in the exploration of our learning and teaching, and thinking about and working on our relationship with our students, that I feel that we are truly teachers.
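
In symbols, this is just Bayes’ Theorem written out (the standard statement, included for completeness):

    P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}

The posterior belief in the hypothesis H is the prior P(H), reweighted by how likely the evidence E is under that hypothesis. If P(E | H) is vanishingly small, no strength of prior attachment rescues H – which is the probabilistic way of saying that something is wrong.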

All your Bayes are belong to us. (Sorry.)

Once I accepted that I wasn’t in competition with my students and that my role was not to guard the world from them, but to prepare them for the world, my job got easier in many ways and infinitely more enjoyable. However, I am well aware that any decisions I make in terms of changing how I teach, what I teach or why I teach have to be based in sound evidence and not just any warm and fuzzy feelings about partnership. Partnership, of course, implies negotiation from both sides – if I want to turn out students who will be able to work without me, I have to teach them how and when to negotiate. When can we discuss terms and when do we just have to do things?

My concern with the phrase “everything is negotiable” is that, to me, it subsumes the notions that “everything is equivalent” and “every notion is of equal worth”, neither of which I hold to be true from a scientific or educational perspective. I believe that many things that we hold to be non-negotiable, for reasons of convenience, are actually negotiable, but it’s an inaccurate slippery-slope argument to assume that this means we must then immediately devolve to an “everything is acceptable” mode.

Once again we return to authenticity. There’s no point in someone saying “we value your feedback” if it never shows up in final documents or isn’t recorded. There’s no point in me talking about partnership if what I mean is that you are a partner to me but I am a boss to you – this asymmetry immediately reveals the lack of depth in my commitment. And, be in no doubt, a partnership is a commitment, whether it’s 1:1 or 1:360. It requires effort, maintenance, mutual respect, understanding and a commitment from both sides. For me, it makes my life easier because my students are less likely to frame me in a way that gets in the way of the teaching process and, more importantly, allows them to believe that their role is not just as passive receivers of what I deign to transmit. This, I hope, will allow them to continue their transition to self-regulation more easily and will make them less dependent on just trying to make me happy – because I want them to focus on their own learning and development, not what pleases me!

One of the best definitions of science for me is that it doesn’t just explain, it predicts. Post-hoc explanation, with no predictive power, has questionable value as there is no requirement for an evidentiary standard or framing ontology to give us logical consistency. Seeing the data that set me on this course made me realise that I could come up with many explanations but I needed a solid framework for the discussion, one that would give me enough to be able to construct the next set of analyses or experiments that would start to give me a ‘why’ and, therefore, a ‘what will happen next’ aspect.