Time Banking: Foundations

Short post today because I’ve spent so much time looking at research, fixing papers and catching up on things that I haven’t left myself much time to blog. Sorry about that! Today’s post covers one of the most vital aspects of time banking, and one that I’ve been working on slightly under the radar – the theoretical underpinnings based on work in education, psychology and economics.

Foundations: More complex than they first appear.

Today we’ve been looking at key papers in educational psychology on motivation – but the one that stood out today was Zimmerman (1990), “Self-regulated learning and academic achievement: An overview”, in Educational Psychologist, 25. I want my students to become their own time managers, but that’s really just a facet of self-regulation. It’s important to place all of this “let’s get rubbery with time” into context and build on the good science that has gone before. I want my students to have the will to learn and practice, and the skill to do so competently – one without the other is no good to me.

This is, of course, just one of the aspects that we have to look at. Do I even know how I’m planning to address the students? Within an operant framework of punishment and reward or a phenomenological framework of self-esteem? How am I expecting them to think? These seem like rather theoretical matters but I need to know how existing timeliness issues are being perceived. If students think that they’re working in a reward/punishment framework then my solution has to take that into account. Of course, this takes us well into the world of surveying and qualitative analysis, but to design this survey we need sound theory and good hypotheses so that we can start in the ballpark of the right answer and iteratively improve.

We’re looking at motivation as the key driver here. Yes, we’re interested in student resilience and performance, but it’s the motivation to move to self-regulation that is what we’re trying to maximise. Today’s readings and sketching will be just one day out of many more to come as we further refine our search from the current broader fan to a more concentrated beam.

What of the economic factors? There is no doubt that the time bank forms a primitive economy out of ‘hours’ of a student’s time, but it’s one where the budget doesn’t have to balance across the class, just across an individual student. This makes things easier to an extent, as I don’t have to consider a multi-agent market beyond two people: the student and me. However, the student still holds private information that I can’t see – the quality, progress and provenance of their work – and I hold private information that they can’t see, in the form of the final mark. Can I make the system strategy-proof, so that students have no incentive to lie about how much work they’ve done, or to present their private information in an inherently non-truthful way? Can I also ensure that I don’t overly manipulate the system through the construction of the oracle or my support mechanisms? There’s a lot of great work out there on markets and economies, so I have a great deal of reading to do here as well.

So, short post – but a long and fascinating day.


Your love is like bad measurement.

(This is my 200th post. I’ve allowed myself a little more latitude on the opinionated scale. Educational content is still present but you may find some of the content slightly more confronting than usual. I’ve also allowed myself an awful pun in the title.)

People like numbers. They like solid figures, percentages, clear statements and certainty. It’s a great shame that mis-measurement is so easy to commit when you chase these figures, and so much a part of our lives. Today, I’m going to discuss precision and recall, because I eventually want to talk about bad measurement. It’s very easy to get measurement wrong but, even when it’s conducted correctly, the way that we measure or the reasons that we have for measuring can make even the most precise and delicate measurements useless for an objective scientific purpose. This is still bad measurement.

I’m going to give you a big bag of stones. Some of the stones have diamonds hidden inside them. Some of the stones are red on the outside. Let’s say that you decide to assume that all stones that have been coloured red contain diamonds. You pull out all of the red stones, but what you actually want is diamonds. The number of red stones is referred to as the number of retrieved instances – the things that you have selected out of that original bag of stones. Now, you get to crack them open and find out how many of them have diamonds. Let’s say you have R red stones and D1 diamonds found once you opened up the red stones. The precision is the fraction D1/R: what percentage of the stones that you selected (red) were actually the ones that you wanted (diamonds). Now let’s say that there are D2 diamonds (where D2 is greater than or equal to zero) left back in the bag. The total number of diamonds in that original bag was D1+D2, right? The recall is the fraction of the total number of things that you wanted (diamonds, given by D1+D2) that you actually got (diamonds that were also painted red, which is D1). So this fraction is D1/(D1+D2), the number you got divided by the number that was actually there for you to get.

Sorry, Logan5, your time is up.

If I don’t have any other mechanism that I can rely upon for picking diamonds out of the bag (assuming no-one has conveniently painted them red), and I want all of the diamonds, then I need to take all of them out. This will give me a recall of 100% (D2 will be 0 as there will be nothing left in the bag and the fraction will be D1/D1). Hooray! I have all of the diamonds! There’s only one problem – there are still only so many diamonds in that bag and (maybe) a lot more stones, so my precision may be terrible. More importantly, my technique sucks (to use an official term) and I have no actual way of finding diamonds. I just happen to have used a mechanism that gets me everything so it must, as a side effect, get me all of the diamonds. I haven’t actually done anything except move everything from one bag to another.
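If it helps to see this concretely, here’s a minimal Python sketch of the stones-and-diamonds arithmetic – the stone names and counts are invented for illustration, and the last line is the ‘take everything out of the bag’ strategy from above:

```python
def precision_recall(selected, wanted):
    """Precision and recall for a selection from the bag.

    selected: the stones we pulled out (the red ones).
    wanted:   the stones we actually want (the ones with diamonds).
    """
    d1 = len(selected & wanted)        # diamonds among the selected stones
    precision = d1 / len(selected)     # D1 / R
    recall = d1 / len(wanted)          # D1 / (D1 + D2)
    return precision, recall

stones = {f"s{i}" for i in range(1, 11)}      # ten stones in the bag
diamonds = {"s1", "s2", "s9"}                 # three hold diamonds
red = {"s1", "s2", "s3", "s4"}                # four were painted red

print(precision_recall(red, diamonds))        # (0.5, ~0.67): half our picks paid off
print(precision_recall(stones, diamonds))     # (0.3, 1.0): 'take everything'
```

Note that the ‘take everything’ strategy wins perfect recall while telling us nothing about how to find diamonds – which is exactly the problem with quoting one number without the other.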

One of the things about selection mechanisms is that people often seem happy to talk about one side of the precision/recall issue. “I got all of them” is fine but not if you haven’t actually reduced your problem at all. “All the ones I picked were the right ones” sounds fantastic until you realise that you don’t know how many were left behind that were also the ones that you wanted. If we can specify solutions (or selection strategies) in terms of their precision and their recall, we can start to compare them. This is an example of how something that appears to be straightforward can actually be a bad measurement – leave out one side of precision or recall and you have no real way of assessing the utility of what it is that you’re talking about, despite having some concrete numbers to fall back on.

You may have heard this expressed in another way. Let’s assume that you can have a mechanism for determining if people are innocent or guilty of a crime. If it was a perfect mechanism, then only innocent people would go free and only guilty people would go to jail. (Let’s assume it’s a crime for which a custodial sentence is appropriate.) Now, let’s assume that we don’t have a perfect mechanism, so we have to make a choice – either we set up our system so that no innocent person goes to jail, or we set up our system so that no guilty person is set free. It’s fairly easy to see how our interpretation of the presumption of innocence, the notion of reasonable doubt and even evidentiary laws would be constructed in different ways under either of these assumptions. Ultimately, this is an issue of precision and recall, and by understanding these concepts we can define what we are actually trying to achieve. (The foundation of most modern law is that innocent people don’t go to jail. A number of changes in certain areas are moving more towards a ‘no one who may be guilty of crimes of a certain type will escape us’ model and, unsurprisingly, this is causing problems due to inconsistent applications of our simple definitions from above.)

The reason that I brought all of this up was to talk about bad measurement, where we measure things and then over-interpret (torture the data) or over-assume (the only way that this could have happened was…) or over-claim (this always means that). It is possible to have a precise measurement of something and still be completely wrong about why it is occurring. It is possible that all of the data that we collect is the wrong data – collected because our fundamental hypothesis is in error. Data gives us information, but our interpretative framework is crucial in determining what use we can make of this data. I talked about this yesterday and stressed the importance of having enough data, but you really have to know what your data means in order to be sure that you can even start to understand what ‘enough data’ means.

One example is the miasma theory of disease – the idea that bad smells caused disease outbreaks. You could construct a gadget that measured smells and then, say in 18th-century England, correlate this with disease outbreaks – and get quite a good correlation. This is still a bad measurement, because we’re actually measuring two effects (the smell of decomposition, and diseases like cholera or E. coli contamination) rather than the common cause (dead mammals introducing decaying matter and faecal bacteria into water or food pathways). We can collect as much ‘smell’ data as we like, but we’re unlikely to learn much more, because any technique that focuses on the smell and reducing it will only work if we do things like remove the odiferous elements, rather than just using scent bags and pomanders to mask the smell.

To look at another example, let’s talk about the number of women in Computer Science at the tertiary level. In Australia, it’s certainly pretty low in many Universities. Now, we can measure the number of women in Computer Science and we can tell you exactly how many are in a given class, what their average marks are, and all sorts of statistical data about them. The risk here is that, from the measurements alone, I may have no real idea of what has led to the low enrolments for women in Computer Science.

I have heard, far too many times, that there are too few women in Computer Science because women are ‘not good at maths/computer science/non-humanities courses’ and, as I also mentioned recently when talking about the work of Professor Seron, this doesn’t appear to be the reason at all. When we look at female academic performance and reasons for doing the degree, and try to separate men and women, we don’t get the clear separation that would support this assertion. In fact, what we see is that the representation of women in Computer Science is far lower than we would expect from the (marginally small) difference that does appear at the very top end of the data. Interesting. Once we actually start measuring, we have to question our hypothesis.

Or we can abandon our principles and our heritage as scientists and just measure something else that agrees with us.

You don’t have to get your measurement methods wrong to conduct bad measurement. You can also be looking for the wrong thing and measure it precisely, because you are attempting to find data that verifies your hypothesis. Rather than being open to change if you find a contradiction, you can twist your measurements to meet your hypothesis, collect only the data that supports your assumptions, and over-generalise from a small sample, or from another area.

When we look at the data, and survey people to find out the reasons behind the numbers, we reduce the risk that our measurements don’t actually serve a clear scientific purpose. For example, and as I’ve mentioned before, the reason that there are too few women studying Computer Science appears to be unpleasantly circular: there are too few women in the discipline overall, which reduces support in the workplace and development opportunities, and produces a two-speed system that excludes the ‘newcomers’. Sorry, Ada and Grace (to name but two), it turns out that we seem to have very short memories.

Too often, measurement is conducted to reassure ourselves of our confirmed and immutable beliefs – people measure to say that ‘this race of people are all criminals/cheats/have this characteristic’ or ‘women cannot carry out this action’ or ‘poor people always perform this set of actions’ without necessarily asking themselves if the measurement is going to be useful, or if this is a useful pursuit as part of something larger. Measuring in a way that really doesn’t provide any more information is just an empty and disingenuous confirmation. This is forcing people into a ghetto, then declaring that “all of these people live in a ghetto so they must like living in a ghetto”.

Presented a certain way, poor and misleading measurement can only lead to questionable interpretation, usually to serve a less than noble and utterly non-scientific goal. It’s bad enough when the media does it but it’s terrible when scientists, educators and academics do it.

Without valid data, collected on the understanding that a world-changing piece of evidence could actually change our minds, all our work is worthless. A world based on data collection purely for the sake of propping up existing beliefs, with no possibility of discovery and adaptation, is a world of very bad measurement.


The Many Types of Failure: What Does Zero Mean When Nothing Is Handed Up?

You may have read about the Edmonton, Canada, teacher who expected to be sacked for handing out zeros. It’s been linked to sites as diverse as Metafilter, where a long and interesting debate ensued, and Cracked, where it was labelled one of the ongoing ‘pussifications’ of schools. (Seriously? I know you’re a humour site but was there some other way you could have put that? Very disappointed.)

Basically, the Edmonton Public School Board decided that, rather than just giving a zero for a missed assignment, a missed assignment would be used as a cue for follow-up work and additional classes at school or home. Their argument – you can’t mark work that hasn’t been submitted, so let’s use this as a trigger to try and get submission, in case the cause is external or behavioural. This, of course, puts the onus on the school to track the students, get the additional work completed, and then mark out of sequence. Lynden Dorval, the high school teacher at the centre of this, believes that there is too much manpower involved in doing this and that giving the student a zero forces them to come to you instead.

Some of you may never have seen one of these before. This is a zero, which is the lowest mark you can be awarded for any activity. (I hope!)

Now, of course, this has split people into two fairly neat camps – those who believe that Dorval is the “hero of zero” and those who can see the benefit of the approach, including taking into account that students still can fail if they don’t do enough work. (Where do I stand? I’d like to know a lot more than one news story before I ‘pick a side’.) I would note that a lot of tired argument and pejorative terminology has also come to the fore – you can read most of the buzzwords used against ‘progressives’ in this article, if you really want to. (I can probably summarise it for you but I wouldn’t do it objectively. This is just one example of those who are feting Dorval.)

Of course, rather than get into a heated debate where I really don’t have enough information to contribute, I’d rather talk about the basic concept – what exactly does a zero mean? If you hand something in and it meets none of my requirements, then a zero is the correct and obvious mark. But what happens if you don’t hand anything in?

With the marking approach that I practice and advertise, which uses time-based mark penalties for late submission, students are awarded marks for what they get right, rather than having marks deducted for what they do wrong. Under this scheme, “no submission” gives me nothing to mark, which means that I cannot legitimately give you any marks – so is this a straightforward zero situation? The time penalties are in place as part of the professional skills requirements, and are clearly advertised and consistently policed. I note that I am still happy to give students the same level of feedback on late work, including their final mark without penalty, which meets all of the pedagogical requirements, but the time management issues can cost a student some, most or all of their marks. (Obviously, I’m actively working on improving engagement with time management through mechanisms that are not penalty based, but that’s for other posts.)
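To make that distinction concrete, here’s a small sketch of how awarded marks and time penalties might interact under a scheme like mine. The ten-marks-per-day figure is purely hypothetical, not my actual schedule, and note that ‘no submission’ is represented as something other than a zero raw mark:

```python
def awarded_mark(raw_mark, days_late, penalty_per_day=10):
    """Award marks for what's right, then apply the advertised time penalty.

    penalty_per_day (marks lost per day late) is a placeholder figure, not
    my real schedule. A missing submission is None, not a zero: there is
    simply nothing to mark.
    """
    if raw_mark is None:
        return None                               # nothing submitted, nothing to mark
    penalty = penalty_per_day * max(0, days_late)
    return max(0, raw_mark - penalty)

print(awarded_mark(85, 0))     # 85: on time, full feedback and full marks
print(awarded_mark(85, 3))     # 55: same feedback, fewer marks
print(awarded_mark(85, 9))     # 0: time management cost every mark
print(awarded_mark(None, 2))   # None: a different beast from a zero
```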

As an aside, we have three distinct fail grades for courses at my University:

  • Withdraw Fail (WF), where a student has dropped the course but after the census date. They pay the money, it stays on their record, but as a WF.
  • Fail (F), student did something but not enough to pass.
  • Fail No Submission (FNS), student submitted no work for assessment throughout the course.

Interestingly, for my Uni, FNS has a numerical grade of 0, although this is not shown on the transcript. Zero, in the course sense, means that you did absolutely nothing. In many senses, this represents the nadir of student engagement, given that many courses have somewhere from 1–5%, maybe even 10%, of their marks available for very simple activities that require very little effort.

My biggest problem with late work, or no submission, is that one of the strongest messages from that enormous corpus of student submission data that I keep talking about is that starting a pattern of late or no submission is an excellent indicator of reduced overall performance and, in recent analysis, a sharply decreased likelihood of making it to third year (final year) in your college studies. So I really want students to hand something in – which brings me to the crux of the way that we deal with poor submission patterns.

Whichever approach I take should be the one that is most likely to bring students back into a regular submission pattern. 

If the Public School Board’s approach is increasing completion rates, and this has a knock-on effect that increases completion in later years? Maybe it’s time to look at that resourcing profile and put the required money into this project. If it’s a transient peak that falls off because we’re just passing people who should be failing? Fuhgeddaboutit.

To quote Sherlock Holmes (Conan Doyle, naturally): 

It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts. (A Scandal in Bohemia)

“Data! Data! Data!” he cried impatiently. “I can’t make bricks without clay.” (The Adventure of the Copper Beeches)

It is very easy to take a side on this and it is very easy to see how both sides could have merit. The issue, however, is what each of these approaches actually does to encourage students to submit their assignment work in a more timely fashion. Experiments, experimental design, surveys, longitudinal analysis, data, data, data!

If I may end by waxing lyrical for a moment (and you will see why I stick to technical writing):

If zeroes make Heroes, then zeroes they must have! If nulls make for dulls, then we must seek other ways!


Time Banking IV: The Role of the Oracle

I’ve never really gone into much detail on how I would make a system like Time Banking work. If a student can meet my requirements and submit their work early then, obviously, I have to provide some sort of mechanism that allows the students to know that my requirements have been met. The first option is that I mark everything as it comes in and then give the student their mark, allowing them to resubmit until they get 100%.

That’s not going to work, unfortunately, as, like so many people, I don’t have the time to mark every student’s assignment over and over again. I wait until all assignments have been submitted, review them as a group, mark them as a group and get the best use out of staying in the same contextual framework while working on the same assignments. If I took a piecemeal approach to marking, it would take me longer and, especially if the student still had some work to do, I could end up marking the same assignment three, four, however many times, multiplying my load in an unsupportable way.

Now, of course I can come up with simple measures that the students can check for themselves. The problem here is setting something that a student can mis-measure as easily as they measure. If I say “You must have at least three pages for an essay”, I risk getting three pages of rubbish or triple-spaced 18-point print. It’s the same for any measure of quantity (number of words, number of citations, length of comments and so on) instead of quality. The problem is, once again, that if the students were capable of determining the quality of their own work, and the effort and quality required to pass, they wouldn’t need time banking because their processes would already be mature!

So I’m looking for an indicator of quality that a student can use to check their work and that costs me only (at most) a small amount of effort. In Computer Science, I can ask the students to test their work against a set of known inputs, running their program to see what outputs we get. There is then the immediate problem of students hacking their code and just throwing it against the testing suite to see if they can fluke their way to a solution. So, even when I have an idea of how my oracle, my measure of meeting requirements, is going to work, there are still many implementation details to sort out.
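As a sketch of the idea – this is not our actual gateway, just an illustration built around an invented ‘add two numbers’ assignment – an oracle of this kind might look like the following. Reporting only a pass count, rather than the expected outputs, is one small defence against students pattern-matching the test suite:

```python
import subprocess

# Invented test cases for a hypothetical 'add two numbers' assignment:
# (text fed to stdin, exact expected stdout).
TEST_CASES = [
    ("3 4\n", "7\n"),
    ("10 -2\n", "8\n"),
    ("0 0\n", "0\n"),
]

def run_oracle(program_path, cases=TEST_CASES):
    """Run a submission against known inputs, reporting only a pass count."""
    passed = 0
    for stdin_text, expected in cases:
        result = subprocess.run(
            ["python3", program_path],
            input=stdin_text, capture_output=True, text=True, timeout=5,
        )
        if result.stdout == expected:
            passed += 1
    return passed, len(cases)

passed, total = run_oracle("student_submission.py")
print(f"Requirements check: {passed}/{total} tests passed")
```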

Fortunately, to help me, I have over five years’ worth of student data from our automated assignment submission gateway, where some assignments have an oracle, some have a detailed oracle, some have a limited oracle and some just say “Thanks for your submission.” The next stage in the design of the oracle is to go back and see what impact these indications of progress and completeness had on the students. Most important, for me, is the indication of how many marks a student had to achieve before they stopped making fresh submissions. If before the due date, did they always strive for 100%? If late, did they tend to stop at more than 50% of achieved marks, or more than 40% in the case of trying to avoid a failing grade based on low assignment submission?

Are there significant and measurable differences between assignments with an oracle and those that have none (or a ‘stub’, so to speak)? I know what many people expect to find in the data, but now I have the data and I can go and interrogate that!
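As a sketch of how that interrogation might start – the file and column names here are invented, assuming a hypothetical log schema with one row per submission event:

```python
import pandas as pd

# Hypothetical schema: one row per submission event in the gateway logs.
log = pd.read_csv("submissions.csv", parse_dates=["submitted_at", "due_at"])

# Final submission per (student, assignment), split by whether it was late.
final = (log.sort_values("submitted_at")
            .groupby(["student_id", "assignment_id"], as_index=False)
            .last())
final["late"] = final["submitted_at"] > final["due_at"]

# If late students stop resubmitting once a pass is secured, their final
# oracle marks should cluster just above the pass threshold; on-time
# submissions should keep climbing towards 100%.
print(final.groupby("late")["oracle_mark"].describe())
```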

Every time that I have questions like this about the implementation, I have a large advantage in that I already have a large control body of data, before any attempts were made to introduce time banking. I can look at this to see what student behaviour is like and try to extract these elements and use them to assist students in smoothing out their application of effort and develop more mature time management approaches.

Now to see what the data actually says – I hope to post more on this particular aspect in the next week or so.


Time Banking III: Cheating and Meta-Cheating

One of the problems with setting up any new marking system is that, especially when you’re trying to do something a bit out of the ordinary, you have to make sure that you don’t produce a system that can be gamed or manipulated to give people an unfair advantage. (Students are very resourceful when it comes to this – anyone who has received a mysteriously corrupted Word document of precisely the right length, with enough relevant strings to look convincing, on more than one occasion from the same student, who is then able to hand up a working one the next Monday, knows exactly what I’m talking about.)

As part of my design, I have to be clear to the students what I do and don’t consider to be reasonable behaviour (returning to Dickinson and McIntyre, I need to be clear in my origination and leadership role). Let me illustrate this with an anecdote from decades ago.

In the early 90s, I helped to write and run a number of Multi User Dungeons (MUDs) – the text-based forerunners of Massively Multiplayer Online Role-Playing Games such as World of Warcraft. The games had very little graphical complexity and we spent most of our time writing the code that drove things like hitting orcs with swords or allowing people to cast spells. Because of the many interactions between the software components in the code, it was possible for unexpected things to happen – not just bugs where code stopped working, but strange ‘features’ where things kept working, in an odd way. I knew a guy, let’s call him K, who was a long-term player of MUDs. If the MUD was any good, he’d not only played it, he’d effectively beaten it. He knew every trick, every lurk, the best way to attack a monster but, more interestingly, he had a nose for spotting errors in the code and taking advantage of them. One time, in a game we were writing, we spotted K walking around with something like 20-30 ’empty’ water bottles on him. (As game writers, wizards, we could examine any object in the game, which included seeing what players were carrying.)

A bit like this, but all on one person’s shoulders and no wheels.

This was weird. Players had a limited amount of stuff that they could carry, and K should have had no reason to carry those bottles. When we examined him, we discovered that we’d made an error in the code so that, when you drank from a bottle and emptied it, the bottle ended up weighing LESS THAN NOTHING. (It was a text game and our testing wasn’t always fantastic – I learnt!) So K was carrying around the in-game equivalent of helium balloons that allowed him to carry a lot more than he usually would.

Of course, once we detected it, we fixed the code and K stopped carrying so many empty bottles. (Although, I have no doubt that he personally checked each and every container we put into the game from that point on to see if he could get it to happen again.) Did we punish him? No. We knew that K would need some ‘flexibility’ in his exploration of the game, knowing that he would press hard against the rubber sheet to see how much he could bend reality, but also knowing that he would spot problems that would take us weeks or months to find on our own. We took him into our new and vulnerable game knowing that if he tried to actually break or crash the game, or share the things he’d learned, we’d close off his access. And he knew that too.

Had I placed a limit in play that said “Cheating detected = immediate booting from the game”, K would have left immediately. I suspect he would have taken umbrage at the term ‘cheating’, as he generally saw it as “this is the way the world works – it’s not my fault that your world behaves strangely”. (Let’s not get into that debate; we’re not in the educational plagiarism/cheating space right now.)

We gave K some exploration space, more than many people would feel comfortable with, but we maintained some hard pragmatic limits to keep things working and we maintained the authority required to exercise these limits. In return, K helped us although, of course, he played for the fun of the game and, I suspect, the joy of discovering crazy bugs. However, overall, this approach saved us effort and load, and allowed us to focus on other things with our limited resources. Of course, to make this work required careful orientation and monitoring on our behalf. Nothing, after all, comes for free.

If I’d asked K to fill out forms describing the bugs he’d found, he’d never have done it. If I’d had to write detailed test documents for him, I wouldn’t have had time to do anything else. But it also illustrates something that I have to be very cautious of, which I’ve embodied as the ‘no cheating/gaming’ guideline for Time Banking. One of the problems with students at early development stages is that they can assume that their approach is right, or even assert that their approach is the correct one, when it is not aligned with our goals or intentions at all. Therefore, we have to be clear on the goals and open about our intentions. Given that the goal of Time Banking is to develop a mature approach to time management, using the team approach I’ve already discussed, I need to be very clear in the guidance I give to students.

However, I also need to be realistic. There is a possibility that, especially on the first run, I introduce a feature in either the design or the supporting system that allows students to do something that they shouldn’t. So here’s my plan for dealing with this:

  1. There is a clear no-cheating policy. Get caught doing anything that tries to subvert the system or get you more hours in any other way than submitting your own work early and it’s treated as a cheating incident and you’re removed from the time bank.
  2. Reporting a significant fault in the system, one that you have either deduced or observed, is worth 24 hours of time to the first person who reports it. (‘Significant’ needs definition, but it’s more than typos.)

I need the stick. Some of my students need to know that the stick is there, even if the stick is never needed, but I really can’t stand the stick. I have always preferred the carrot. Find me a problem and you get an automatic one-day extension, good for any assignment in the bank. Heck, I could even see my way clear to making these ‘giftable’ hours – 24 hours you can hand on to a friend if you want. If part of your team thinking extends to other people and, instead of a gifted student handing out their assignment, they hand out some hours, I have no problem with that. (Mr Pragmatism, of course, places a limit on the number of unearned hours you can do this with, from the recipient’s, not the donor’s, perspective. If I want behaviour to change, then people have to act to change themselves.)

My design needs to keep the load down, the rewards up but, most importantly, the rewards have to move the students towards the same goals as the primary activity or I will cause off-task optimisation and I really don’t want to do that.

I’m working on a discussion document to go out to people who think this is a great idea, a terrible idea, the worst idea ever, or something that they’d like to do, so that I can bring all of the thoughts back together and, as a group of people dedicated to education, come up with something that might be useful – OR, and it’s a big or, come up with the dragon-slaying notion that kills time banking stone dead and provides the sound theoretical and evidence-based support as to why we must and always should use deadlines. I’m prepared for one, the other, both or neither to be true, along with degrees along the axis.

 


Time Banking II: We Are a Team

In between getting my camera-ready copy together for ICER – and I’m still pumped that our paper got into ICER – I’ve been delving deep into the literature and the psychological and pedagogical background that I need to confirm before I go too much further with Time Banking. (I first mentioned this concept here. The term is already used in a general sense to talk about an exchange of services based on time as a currency; I use it here within the framework of student assignment submission.) I’m not just reading in CS Ed, of course, but across Ed, sociology, psychology and just about anywhere else that people have started to consider time as a manageable or tradable asset. I thought I’d take this post to outline some of the most important concepts behind it and provide some rationale for decisions that have already been made. I’ve already posted the guidelines for this, which can be distilled down to “not all events can be banked”, “additional load must be low”, “pragmatic limits apply”, “bad (cheating or gaming) behaviour is actively discouraged” and “it must integrate with our existing systems”.

Time/Bank currency design by Lawrence Weiner. Photo by Julieta Aranda. (Question for Nick – do I need something like this for my students?)

Our goal, of course, is to get students to think about their time management in a more holistic fashion and to start thinking about their future activities sometime sooner than the 24 hours before the due date. Rather than students being receivers and storers of deadlines, can we allow them to construct their own timelines, within a set of limits? (Ben-Ari, 1998, “Constructivism in Computer Science Education”, SIGCSE, although Ben-Ari referred to knowledge in this context and I’m adapting it to a knowledge of temporal requirements, which depends upon a mature assessment of the work involved and a sound knowledge of your own skill level.) The model that I am working with is effectively a team-based model, drawing on Dickinson and McIntyre’s 1997 work “Team Performance Assessment and Measurement: Theory, Methods and Applications”, but where the team consists of a given student, my marking team and me. Ultimately our product is the submitted artefact and we are all trying to facilitate its timely production, but if I want students to be constructive and participative, rather than merely compliant and receptive, I have to involve them in the process. Dickinson and McIntyre identified seven roles in their model: orientation, leadership, monitoring, feedback, back-up (assisting/supporting), coordination and communication. Some of these roles are obviously mine, as the lecturer, such as orientation (establishing norms and keeping the group cohesive) and monitoring (observing performance and recognising correct contribution). However, a number of these can easily be shared between lecturer and student, although we must be clear as to who holds each role at a given time. In particular, if I hold onto deadlines and make them completely immutable, then I have taken the coordination role and handed over only a very small fragment of it to the student. By holding onto that authority, whether it makes sense or not, I’m forcing the student into an authority-dependent mode.

(We could, of course, get into quite a discussion as to whether the benefit is primarily Piagetian, because we are connecting new experiences with established ideas, or Vygotskian, because of the contact with the More Knowledgeable Other and time spent in the Zone of Proximal Development. Let’s just say that either approach supports the importance of me working with a student in a more fluid and interactive manner than a more rigid and authoritarian relationship allows.)

Yes, I know, some deadlines are actually fixed and I accept that. I’m not saying that we abandon all deadlines or any notion of immutability. What I am saying, however, is that we want our students to function in working teams, to collaborate, to produce good work, to know when to work harder earlier to make it easier for themselves later on. Rather than give them a tiny sandpit in which to play, I propose that we give them a larger space to work with. It’s still a space with edges, limits and defined acceptable behaviour – our monitoring and feedback roles are among our most important contributions to our students, after all – but it is a space in which a student can have more freedom of action and, for certain roles including coordination, start to construct their own successful framework for achievement.

Much as reading Vygotsky gives you useful information and theoretical background, without necessarily telling you how to teach, reading through all of these ideas doesn’t immediately give me a fully-formed implementation. This is why the guidelines were the first things I developed once I had some grip on the ideas, because I needed to place some pragmatic limits that would allow me to think about this within a teaching framework.  The goal is to get students to use the process to improve their time management and process awareness and we need to set limits on possible behaviour to make sure that they are meeting the goal. “Hacks” to their own production process, such as those that allow them to legitimately reduce their development time (such as starting the work early, or going through an early prototype design) are the point of the exercise. “Hacks” that allow them to artificially generate extra hours in the time bank are not the point at all. So this places a requirement on the design to be robust and not susceptible to gaming, and on the orientation, leadership and monitoring roles as practiced by me and my staff. But it also requires the participants to enter into the spirit of it or choose not to participate, rather than attempting to undermine it or act to spite it.

The spontaneous generation of hours was something that I really wanted to avoid. When I sketched out my first solution, I realised that I had made the system far too complex by granting time credits immediately, when a ‘qualifying’ submission was made, so that later submissions required retraction of the original grant, followed by a subsequent addition operation. In fact, I had set up a potential race condition that made it much more difficult to guarantee that a student was using genuine extension credit time. The current solution? Students don’t get credit added to their account until a fixed point has passed, beyond which no further submissions can take place. This was the first of the pragmatic limits – there does exist a ‘no more submissions’ point, but we are relatively elastic up to that point. (It also stops students using credit obtained for assignment X to hand up an improved version of X after the due date. We’re not being picky here, but this isn’t the behaviour we want – we want students to think more than a week in advance, because that is the skill that, if practised correctly, will really improve their time management.)
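As a sketch of that rule – the dates and the one-hour granularity are illustrative only, not a commitment – the grant logic looks something like this:

```python
from datetime import datetime, timedelta

def banked_hours(submitted_at, due_at, cutoff_at, now):
    """Credit early submission, releasing the grant only after the cutoff.

    Holding the credit until cutoff_at avoids the race condition described
    above: resubmissions before the cutoff simply change the eventual grant,
    because nothing has been banked or spent yet.
    """
    if now < cutoff_at:
        return 0                       # nothing is banked until the cutoff passes
    margin = due_at - submitted_at
    if margin <= timedelta(0):
        return 0                       # on-time or late earns nothing
    return int(margin.total_seconds() // 3600)

due = datetime(2012, 6, 1, 17, 0)
cutoff = datetime(2012, 6, 8, 17, 0)        # the 'no more submissions' point
handed_up = datetime(2012, 5, 30, 17, 0)    # two days early

print(banked_hours(handed_up, due, cutoff, now=datetime(2012, 6, 2)))  # 0: too soon
print(banked_hours(handed_up, due, cutoff, now=datetime(2012, 6, 9)))  # 48 hours
```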

My first and most immediate concern was that students might adapt to this ‘last hand-in barrier’, but our collected data doesn’t support this hypothesis, although there are some concerning subgroups that we are currently tearing apart to see if we can get more evidence on the small group of students who do seem to head for a final marks barrier that occurs after the main submission date.

I hope to write more on this over the next few days, discussing in more detail my support for requiring a ‘no more submissions’ point at all. As always, discussion is very welcome!


Let’s Turn Down the Stupid (Ignorance is Our Enemy)

(This is a longish opinion piece that has very little educational discussion. I leave it to you as to whether you wish to read it or not.)

I realise that a number of you may read my blog posts and think “Well, how nice for him. He has tenure in a ‘good’ University, has none of his own kids to worry about and is obviously socially mobile and affluent.” Some of you may even have looked up my public record salary when I talk about underpaying teachers and wondered why I don’t just shut up and enjoy my life, rather than blathering on here. It would be easy to cast me as some kind of Mr Happy/Pollyanna figure, always seeing the positive and rushing out onto the sports field with a rousing “We’re all winners, children” attitude.

Nothing could be further from the truth. I get up every day knowing that the chances are that I will not make a difference, that all of my work may be undone by a scare campaign in a newspaper, that I may catch a completely preventable disease because too few people got vaccinated, that my family and I may not have enough food, or may lose our house, because people ignore science, and that anti-scientific behaviour is clawing back many of the victories that we have already achieved.

I’m no Pollyanna. I get up every day ready to fight ignorance and to try to bring knowledge to places where ignorance reigns. Sometimes I manage it – those are good days. But I can’t just talk to my own students; I have to reach out into the community, because I see such a small percentage of a small percentage as my students. If I want lasting change, and I believe that most educators are trying to change the world for the better, then I have to deal with the fact that my message, and my students, have to be visible outside of our very small and isolated community.

This morning, while out running, we had gone a bit over 14 kilometres (about 9 miles) when I saw a cyclist up ahead of us, stopped on a little wooden ramp that went under one of the bridges. He heard us coming and waved us down, very quickly.

Someone had strung fishing line across the path, carefully anchored on both sides, at around mid-chest height for adult runners and walkers, or neck/head height for children.

Of course, the moment we realised this we looked around for the utter idiots who were no doubt waiting to film or watch it, but they showed a modicum of sense in that we couldn’t see them. (Of course, what could we have done even if we had seen them? They were most likely children, and the police aren’t likely to get involved in a ‘fishing line’ related incident.) What irritated me most about this was that I was running with someone who was worried about the future, and I had been solemnly telling her that I had great hope for the future, that the problems could be solved if we worked at them, and that this is what I always tried to get across to my students.

And then we nearly got garrotted by an utterly thoughtless act of stupidity. Even a second’s thought would lead you to the understanding that this was more than a joke – it was potentially deadly. And yet the people who put this up, who I have no doubt waited to watch or film it, were incapable of even that second’s thought. I can only hope that they were too young, or too mentally incapacitated, to know better. Because when someone knowingly does this, it takes them from ignorance to evil. Fortunately, the number of truly evil people, people who do these things in full knowledge and delight, is small. At least, that’s what I tell myself to get to sleep at night. We must always be watchful for evil, but in the same way that we watch for the infrequent bad storm – when we see the signs, we batten down, but we don’t live our lives in the storm cellar. Ignorance, for me, is far more prevalent and influential than evil – and often has very similar effects, as it can take people from us, by killing or injuring them or by placing them in so much mental or physical pain that they can no longer do what they could have done with their lives.

The biggest obstacle we face is ignorance and acts taken in ignorance, whether accidentally or wilfully so. There’s no point me training up the greatest mind in the history of the world, only for that person to be killed by someone throwing a rock off a bridge for fun. Today, I could easily have been seriously injured because someone thought it was funny to watch people run into an obstacle at speed. Yes, the line probably would have broken and I was unlikely to have suffered too much harm. Unless it didn’t. Unless it took out an eye.

But I’m not giving up. I say, mostly joking, when I run across things like this “This is why we fight.” and I mean it. This is exactly why education is important. This is why teachers are important. This is why knowledge is important. Because, without all of these, ignorance will win and it will eventually kill us.

I am sick of stupid, ignorant and evil people. I’m sick of grown men getting away with disgraceful behaviour because “boys will be boys”. I’m sick of any ignorant or thoughtless act being tolerated with “Oh well, these things happen”. However, me being sick of this does nothing unless I act to stop it. Me acting to stop it may do nothing. Me doing nothing to stop it definitely does nothing.


Today, As I Was Crawling Across the Floor…

As I believe I’ve already mentioned, I play a number of board games but, before you think “Oh no, not Monopoly!”, these are along the lines of the German-style board games: games that place some emphasis on strategy, don’t depend too heavily on luck, may have collaborative elements (or an entirely collaborative theme), tend not to be straight war games and manage to keep all the players in the game until the end. Notably, German-style board games don’t have to be German! While some of the ones that I enjoy (Settlers of Catan, Ticket to Ride and Shadows Over Camelot) are European, a number are not (Arkham Horror, Battlestar Galactica and Lords of Waterdeep). A number of these require cooperative and collaborative play to succeed – and some have a traitor within.

I have discussed these games with students on a number of occasions, as many students have no idea that such games exist. The idea of players working together against a common enemy (Arkham Horror) appeals to a lot of people, especially as it allows you to share success. One of the best things about games that are well designed to reward player action and keep everyone in the game is that the tension builds to a point where a final victory gives everyone fiero – that powerful surge of joy.

Now, while there are many games out there, I decided to go through the full game design process to get my head around the components required to achieve a playable game. I’ve designed some games before and, after a brief time spent playing them, I’ve left most of them on the shelf. Why? Poor game design, generally. As a writer, I have a tendency to think of stories and to run narrative in my head – in game terms, this is only one possible passage through the game. One of the strengths of computer games such as Deus Ex is the ability to play multiple times and get something new out of it: to shake up the events and run them in your own order, forming a new narrative. (In DE, technically, you were on rails the whole time; the strength of the game is in the illusion of free will.)

Why is it important for me to try and design a good game? Because it requires a sound assessment of what is required, reflection upon how I can model a situation in a game, good design, solid prototyping, testing, feedback, revision, modification, re-testing, thought, evaluation and then more and more refinement. From a methodological point of view, my question to myself is “Can I build a game that is worth playing based on a general sketch of the problem, a few good ideas and then a solid process to allow me to build game features in the way that I would build code features?”

Right now I’m in the requirements gathering phase and this is proving to be very interesting. I’m working on a Zombie game (oh no, not another one) but I want to have a three-stage game where the options available to players – resources and freedom of action – change dramatically during each stage. I want it to be based in London. I want to allow players to develop their characters as they play through a given game. I want player actions to have a lasting impact in the game, for decisions to matter. I want the game to generate a different board and base scenario set every time, to prevent players learning a default strategy. I want the whole thing to run, as a board game, in the German style. I want the instructions to fit onto 8 A4 pages – with pictures.

(I should note that I’ve been playing games for a long time and made a lot of notes about rules and mechanics that I like, so this has all formed part of my requirements gathering, but I’m not trying to put a new skin on an old game – I’m trying to create something interesting that is also not a blatant rip-off. Also, yes, I know that there are already a lot of zombie games out there. That isn’t the point.)

I’ve been crawling the web for pictures of London, layouts, property values, building types and other things to get London into my head. Because the board has to change every time, and I can’t use computer generation, I need a modular board structure. That, of course, requires that the modules make sense and feel like London, and that the composition of these modules also makes sense. I need the size of the board to make the players work for their victories and not make victory too easy or too hard to attain. (So, I’m building bounds into the modularity and composition that I can tune based on what happens in play testing.)

I knew this already, but my research nailed it as a requirement: London is about as far away from a grid layout as you can get, with a river snaking through it. Because of this, and my randomisation and modularity requirements, I had to think about a system that allowed me to put the elements together without making London look like New York. Instead, I’ve opted for a tiled layout based on hexagons. They tessellate nicely, you can’t run in straight lines, and you can’t see further than the side of one hex, which reflects the problems of working in London without having to force someone to copy out a section of the London map with all of its terrible twists and turns.
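To give a flavour of the modular board idea, here’s a minimal sketch of random tile placement over axial hex coordinates – the tile names are invented, and a real generator would need composition constraints on top of this:

```python
import random

# Invented tile modules - each hex is one fragment of 'London feel'.
TILE_TYPES = ["terrace streets", "high street", "park", "riverbank",
              "tube station", "market", "warehouses"]

def generate_board(radius=3, seed=None):
    """Lay random tiles over a hexagonal board in axial (q, r) coordinates.

    This only shows the coordinate scheme and random placement; a real
    generator would add composition constraints (e.g. riverbank tiles must
    chain, so the Thames reads as one snaking line, not scattered puddles).
    """
    rng = random.Random(seed)
    board = {}
    for q in range(-radius, radius + 1):
        r_min = max(-radius, -q - radius)
        r_max = min(radius, -q + radius)
        for r in range(r_min, r_max + 1):
            board[(q, r)] = rng.choice(TILE_TYPES)
    return board

for coord, tile in sorted(generate_board(radius=1, seed=7).items()):
    print(coord, tile)   # seven hexes: a centre and one ring
```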

The other thing I really wanted to know was “How fast do zombies move?” and, rather than just look it up, I’ve spent a bit of this afternoon shambling around the house and timing myself to see what the traditional “slow” zombie does. Standard walking and running are easy (I have a good feel for those figures) but then I thought about that stalwart of zombie movies – the legless crawler. So, in the interests of research, I measured off a 10m course and dragged myself across the floor only using my arms. Then I added a fudge factor to account for the smoothness of the floor and, voila, a range of speeds that tell me how long zombies will take to move across my maps.

Why do I need to do this? Because I’ve never done it before. From now on, if someone asks me what the estimated speed of a legless zombie is on a level surface, I can say “Oh, about 0.25m/s” and really stop the conversation at the Vice Chancellor’s cocktail party.
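For the curious, the back-of-envelope conversion from crawl speed to board terms looks something like this – the hex width and turn length are invented numbers that I’d tune in play testing, and only the ~0.25 m/s crawl figure comes from the lounge-room course:

```python
# Back-of-envelope zombie pacing from the 10 m lounge-room course.
course_m = 10.0
crawl_time_s = 40.0                        # 10 m in 40 s, after the fudge factor
crawler_speed = course_m / crawl_time_s    # the quoted ~0.25 m/s

speeds_m_per_s = {
    "runner": 3.0,        # illustrative figures only
    "walker": 1.2,
    "shambler": 0.6,
    "crawler": crawler_speed,
}

hex_width_m = 60.0    # hypothetical real-world width of one board hex
turn_s = 30.0         # hypothetical real-time length of one game turn

for kind, v in speeds_m_per_s.items():
    turns = hex_width_m / (v * turn_s)
    print(f"{kind}: {turns:.1f} turns to cross a hex")
```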

Requirements gathering, around a problem specification, is a vital activity because if it’s done properly then you gain more and more understanding of the problem and, even though initially the questions seem to explode, you can move to a point that you have answered most of the important questions. By the time I’ve finished this stage, I should have refined my problem statement into a form that allows me to write the proper design and then build the first prototype without too many further questions. I should have the base rules down in a form that I can give to somebody and see what they do.

By doing this, I’m practising my own Software Engineering skills in a very different way, which makes me think about them outside of the comfortable framework of a programming language. Students often head off to start writing code because it’s easier to sit and write code that might work than to spend the time doing the far more difficult activities of problem specification, requirements gathering, specification refinement and full design. I don’t get much of a chance to work on commercial software these days, so a zombie game on the weekends is an unusual, if rewarding, way to practise these skills.

Sliding across the floor is murder on the knees, though…


What are the Fiction and Non-Fiction Equivalents of Computer Science?

I commented yesterday that I wanted to talk about something covered in Mark’s blog, namely whether it is possible to create an analogy between the Common Core standards in different disciplines, with English Language Arts and CS as the two exemplars. In particular, Mark pondered, and I quote him verbatim:

“Students should read as much nonfiction as fiction.” What does that mean in terms of the notations of computing? Students should read as many program proofs as programs? Students should read as much code as comments?

This is a great question and I’m not sure that I have much of an answer, but I’ve been enjoying thinking about it. We bandy the terms syntax and semantics around in Computer Science a lot: the legal structures of the programs we write and the meanings of the components and the programs. Is it even meaningful to talk about fiction and non-fiction in these terms and, if so, where do they fit? I’ve gone in a slightly different direction from Mark but I hope to bring it back to his suggestions later on.

I’m not an English specialist, so please forgive me or provide constructive guidance as needed, but both fiction and non-fiction rely upon the same syntactic elements and the same semantic elements in linguistic terms – so the fact that we must have legal programs with well-defined syntax and semantics poses no obstacle to a fictional/non-fictional interpretation.

Forgive me as I go to Wikipedia for definitions for fiction and non-fiction for a moment:

“Non-fiction (or nonfiction) is the form of any narrative, account, or other communicative work whose assertions and descriptions are understood to be factual.” (Warning, embedded Wikipedia links)

“Fiction is the form of any narrative or informative work that deals, in part or in whole, with information or events that are not factual, but rather, imaginary—that is, invented by the author” (Again, beware Wikipedia).

Now here we can start to see something that we can get our teeth into. Many computer programs model reality and are computerised representations of concrete systems, while others may have no physical analogue at all, or model a system that has never existed and may never exist. Are our simulations and emulations of large-scale systems non-fiction? If so, is a virtual reality fictional because it has never existed, or non-fictional because we are simulating realistic gravity? (But, of course, fiction is often written in a real-world setting with imaginary elements.)

From a software engineering perspective, I can see an advantage to making statements regarding abstract representations and concrete analogues, much as I can see a separation in graphics and game design between narrative/event engine construction and the physics engine underneath.

Is this enough of a separation? Mark’s comment on proof versus program is an interesting one: if we have an idea (an author’s creation), then it is a fiction until we can determine that it exists, and a proof or implementation provides that evidence of existence. In my mind, a proof and a program are both non-fiction in terms of their reification, but the idea that they span may still be fictional. Comments versus code is also very interesting – comments do not change the behaviour of code but explain, from the author’s mind, what has happened. (Given some student code and comment combinations, I can happily see a code-as-non-fiction, comment-as-fiction modality – or even comment as magical realism!)

Of course, this is all an enjoyable mental exercise, but what can I take from it and use in my teaching? Is there a particular set of code or comments that students should read for maximum benefit, and can we make a separation that, even if not partitioned so neatly across two sets, gives us an idea of what constitutes a balanced diet of the products of our discipline?

I’d love to see some discussion on this but, if nothing else, I’m happy to buy the first round of drinks at HERDSA or ICER to get a really good conversation going!


What’s the Big Idea?

I was reading Mark Guzdial’s blog just before sitting down to write tonight and came across this post. Mark was musing about the parallels between the Common Core standards of English Language Arts and those of Computing Literacy. He also mentioned the CS:Principles program – an AP course designed to give an understanding of fundamental principles, the breadth of application and the way that computing can change the world.

I want to talk more about the parallels that Mark mentioned but I’ll do that in another post because I read through the CS:Principles Big Ideas and wanted to share them with you. There are seven big ideas:

  1. Creativity, recognising the innately creative nature of computing;
  2. Abstraction, where we rise above detail to allow us to focus on the right things;
  3. Data, where data is the foundation of the creation of knowledge;
  4. Algorithms, to develop solutions to computational problems;
  5. Programming, the enabler of our dreams of solutions and the way that we turn algorithms into solutions – the basis of our expression;
  6. Internet, the ties that bind all modern computing together; and
  7. Impact, the fact that Computing can, and regularly does, change the world.

I think that I’m going to use these alongside the NSF Grand Challenges as part of my new Grand Challenges course, because there is a lot of similarity. I’ve nearly got the design finished, so it’s not too late to incorporate new material. (I don’t like trying to rearrange courses too late in the process because I use a lot of linked assessment and scaffolding; it gets very tricky, and easy to make mistakes, if I try to insert a late design change.)

For me, the first and the last ideas are among the most important. Yes, you may be able to plod your way through simple work in computing, but really good solutions require skill, practice and creativity. When you get a really good solution or approach to a problem, you are going to change things – possibly even the world. It looks like someone took the fundamentals of computing and jammed them between two pieces of amazing stuff, framing the discipline inside the right context for a change. Instead of putting computing in a nerd sandwich, it’s in an awesome sandwich. I like that a lot.

It turns out that there are a lot of images when you search for “Awesome Sandwich”.

Allowing yourself to be creative, understanding abstraction, knowing how to put data together, working out how to move the data around in the right ways and then coding it correctly, using all of the resources that you have to hand and that you can reach out and touch through the Internet – that’s how to change the world.