Recursive Tutorial: A tutorial on writing a tutorial

I assigned the Grand Challenge students a slightly strange problem for yesterday’s tutorial: “How would you write an R tutorial for Year 11 High School Students?” R is an open-source statistics package that is incredibly powerful and versatile, but it is nowhere near as friendly or accessible as traditional GUI tools such as Microsoft Excel. R has some menus and buttons, but most of these control the environment rather than apply the statistical and mathematical functions. RStudio is an associated Integrated Development Environment (IDE) that makes working with R easier but, at its core, R relies upon you knowing enough R to type the right commands.

Discussing this with students, we compared Excel and R to find out what the core differences were; some of them are not important early on but become more important later. Excel, for example, allows you to quickly paste and move around data, apply some functions, draw some graphs and come to a result quickly, mostly by pushing buttons and using online help with a little typing. But, and it’s an important but, unless you write a program in Excel (and not that many people do), re-applying all of that manipulation to a new data source requires you to click and push and move across the screen all over again. You have to recreate a long and complicated combination of mechanical and cognitive steps. R, by contrast, requires you to type commands to get things to happen, but it remembers them by default and you can easily extract them. Because of how R works, you drag in data (from a file, say) and then execute a set of manipulation steps. If you’re familiar with R then this is straightforward; if not, the learning curve is steep. However, re-using these instructions and manipulations on a new data source is trivial: you change the file and re-run all of the steps.

Why am I talking about new data sources? Because it’s often the case that you want to do the same thing with new data, or you realise that the data you were working with was incomplete or in error. Unless you write a lot of Visual Basic in Excel (and that no longer works on Macs, so it’s not a transferable option), changing the data in your Excel spreadsheet potentially requires you to reapply or check everything in the spreadsheet, especially if there is any sorting of data, creation of new columns or summary data – and let’s not even start talking about pivot tables! But for a single run, for finance, for counting stuff, Excel is almost always going to be easier to teach people to use than R. For scientists, however, R is better for two very important reasons: it is less likely to do something irreversible to your data, and the vast majority of its default choices are sensible.

The students came up with a list of things that Excel does (good and bad): it’s strongly visual, lay-user friendly, tells you what you can do, does what it damn well wants to, and data changes may require manual reapplication. There’s a corresponding list for R: steep learning curve, visual display for the R environment but a command-line interface for commands, does what you tell it to do (except when it’s too smart). I surveyed the class to find out who was using R rather than Excel and the majority of students were using R for their analysis but, and again it’s an important but, only because they had to. In situations where Excel was enough (simple manipulation, straightforward analysis), Excel got used, because Excel is far easier to use and far friendlier.

The big question for the students was “How do I start doing something?” In Excel, you type numbers into the spreadsheet and can then just start selecting things, helped by a relatively good online help system. In R, you are faced with a blinking prompt and you have to know enough to type streams of commands like this:

# Read a single column of values from a text file with no header row;
# R auto-names the column V1.
newtab <- read.csv("~/days.txt", header = FALSE)
# Scatter plot of the values against their position in the file
plot(seq(1, nrow(newtab)), newtab$V1)
# Box plot of the same data (this replaces the scatter plot)
boxplot(newtab)
# Horizontal reference line at y = 1500 (intercept 1500, slope 0)
abline(a = 1500, b = 0)
# Mean of the first (and only) column
mean(newtab$V1)

And, with a whole set of other commands, you can get graphs like this. (I realise that this is not a box plot!)

Once you’re used to it, this is meaningful, powerful and re-applicable. I can update the data and re-run this to my heart’s content, analysing vast quantities of data without having to keep mouse clicking into cells. But let’s remember our context. I’m not talking about higher education students, I’m talking about school students and it’s important to remember that teaching people something before they’re ready to use it or before they have an opportunity to use it is potentially not the best use of effort.
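To make that re-applicability concrete, here is a minimal sketch (the analyse function and the second file name are mine, purely for illustration) of how the earlier steps can be wrapped up so that the data source is the only thing that changes between runs:

# Hypothetical helper: wrap the analysis steps in a function so that
# the input file is the only moving part.
analyse <- function(path) {
  newtab <- read.csv(path, header = FALSE)
  plot(seq(1, nrow(newtab)), newtab$V1)
  abline(a = 1500, b = 0)
  mean(newtab$V1)
}

analyse("~/days.txt")      # the original data
analyse("~/new_days.txt")  # corrected or extended data: same steps, re-run

Nothing about the analysis changes between the two calls, which is exactly what makes the R approach so cheap to repeat.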

My students pointed out that the school students of today are all learning how to use graphing calculators, with giant user manuals, and (in some cases) the students switch on their calculators to see a menu rather than the traditional single-line calculator display. But the syntax and input modes for calculators vary widely. Some use ( ) for operations like sin, so a student will see sin(30) when they start doing trig, whereas some don’t. This means that some of the students I might want to teach R to have not necessarily got their heads around the fact that functions exist, except as something that Excel requires them to do. Let’s go to the why here, because it’s important. Why are students learning how to use these graphing calculators? So they can pass their exams, where the competent and efficient use of these things will help them. Yes, it appears that students may be carrying out the kind of operations I would like them to put into a more powerful tool, but why should they?

If I teach a high school student about Excel then there are many places where they might use this kind of software: micro-budgeting, keeping track of things, the ‘simple’ approximation of a database storing books or things like that. More generally, practice with Excel is familiarisation with a GUI interface that is very, very common and that most students need experience with. If I teach them R then I might be extending their knowledge but (a) the majority are probably not yet ready for it and (b) they are highly unlikely to need to use it for anything in the near future.

The conclusion that my students reached was that, if we really wanted to provide exposure to an industry-like scientific or engineering tool at the earlier stage, then why not use one that was friendlier and more helpful but still had a scientific focus? They suggested Matlab (as a number of them had been exposed to it) or Mathematica. Now, this whole exercise was designed to get them to practise their thinking about outreach, community, communication and sharing knowledge, so I wasn’t ever actually planning to run an R tutorial at Year 11. But these students thought it through and asked the very important questions:

  • Who is this aimed at?
  • What do they already know?
  • What do they need to know?
  • Why are we doing this?

Of course, I have learned a great deal from this as well – I had no idea that the calculators had quite got to this point, nor that there were schools where students would have to select through a graphical menu to get to the simple “3+3 EXE” section of the calculator! Don’t tell my Grand Challenge students but I think I’m learning roughly as much as they are!


Students and Programming: A stroll through the archives in the contemplation of self-regulation.

I’ve been digging back into the foundations of Computer Science Education to develop some more breadth in the area, trying to fill in some of the reading holes that have developed as I’ve chased certain ideas forward. I’ve been looking at Mayer’s “The Psychology of How Novices Learn Computer Programming” from 1981, following it forward to a number of papers including McCracken (Chair) et al.’s “A multi-national, multi-institutional study of assessment of programming skills of first-year CS students”. Among the many interesting items presented in this paper was a measure of Degree of Closeness (DoC): a quantification of how close the student had come to providing a correct solution, assessed on their source code. The DoC is rated on a five-point scale, with 1 being the furthest from a correct solution. These “DoC 1” students are of a great deal of interest to me because they include those students who submitted nothing – possible evidence of disengagement, or just of the student being overwhelmed. In fact, the DoC 1 students were classified into three types:

  • Type 1: The student handed up an empty file.
  • Type 2: The student’s work showed no evidence of a plan.
  • Type 3: The student appeared to have a plan but didn’t carry it out.

Why did the students do something without a plan? The authors hypothesise that the student may have been following a heuristic approach, doing what they could until they could go no further. Type 3 was further subdivided into 3a (the student had a good plan or structure) and 3b (the student had a poor plan or structure). All of these, however, have one thing in common: they can indicate a lack of resource organisation, which may be identified as a shortfall in metacognition. On reflection, however, many of these students blamed external factors for their problems. The Type 1 students blamed the time that they had to undertake the task, the lab machines, or their lack of familiarity with the language. The DoC 5 students (from the same school) described their difficulties in terms of the process of creating a solution. Other comments from DoC 1 and 2 students included insufficient time, students “not being good” at whatever the question was asking and, in one case, “Too cold environment, problem was too hard.” The most frequent complaint among the low-performing students was that they had not had enough time, the presumption being that, had enough time been available, a solution was possible. Combine this with the students who handed up nothing or had no plan and we must start to question this assertion. (It is worth noting that some low-performing students had taken this test as their first ever solo lab-based examination, so we cannot just dismiss all of these comments!)

The paper discusses a lot more and is rather critical of its own procedure (perhaps the time pressure was too high, the specifications a little cluttered, highly procedural rather than OO) and I would not argue with the authors on any of this but, from my perspective, I am zooming in on the issue of time because, if you’ve read any of my stuff before, you’ll know that I am working in self-regulation and time management. I look at the Types of DoC 1 students and I can see exactly what I saw in my own student timeliness data and reflection reports: a lack of ability to organise resources. This is now, apparently, combined with a persistent belief that fixing this was beyond the student’s control. It’s unsurprising that handing up nothing suddenly became a valid option.

The null submission could be a clear indicator of a shortfall in organisational ability, where the student can’t muster any kind of solution to the problem at all: not one line of code or approximate solution. What is puzzling about this is that the activity was, in fact, heavily scheduled. Students sat in a lab and undertook it. There was no other task for them to perform except to write this code in either 1 or 1.5 hours. To not do anything at all may be a reaction to time pressure (as the authors raised) or it could be complete ignorance of how to solve the problem. There’s too much uncertainty here for me to say much more about this.

The “no plan” solution can likely be explained by the heuristic focus, and I’ve certainly seen evidence of it. One of the most unforgiving aspects of the heuristic approach is that, without a design, it is easy to end up in a place where you are running out of time and have no idea of how to solve the unforeseen problems that have arisen. These students are the ones who I would expect to start on the last day that something is due and throw together a solution, working later and panicking more as they realise that their code isn’t working. Having done a bit here and a piece there, they may cobble something together and hand it up, but it is unlikely to work and is never robust.

The “I planned it but I couldn’t do it” group fall heavily into the problem space of self-regulation, because they had managed to organise their resources – so why didn’t anything come out? Did they procrastinate? Was their meta-planning process deficient, in that they spent most of their time perfecting a plan and not leaving enough time to make it happen? I have a number of students who have a tendency to go down the rabbit hole when chasing design issues and I sometimes have to reach down, grab them by the ears and haul them out. The reality of time constraints is that you have to work out what you can do and then do as much as you can with that time.

This is fascinating because I’m really trying to work out at which point students will give up, and DoC 1 basically amounts to an “I didn’t manage it” mark in my local system. I have data that shows the marks students get from automated marking (immediate assessment), so I can look to see how long people will keep trying to get above what would effectively be DoC 1, and probably up around DoC 3. (The paper defines DoC 3 as “In reading the source code, the outline of a viable solution was apparent, including meaningful comments, stub code, or a good start on the code.” This would be enough to meet our assessment requirements, although the mark wouldn’t be great.) DoC 1 would, I suspect, amount to “no submission” in many cases, so my DoC 1 students are those who stayed enrolled (and sat the exam) but never created a repository or submission. (There are so many degrees of disengagement!)
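As a rough sketch of what that analysis might look like in R (the submissions.csv file, its columns and the mark threshold of 50 are all hypothetical, purely for illustration):

# Hypothetical submission log: one row per automated-marking run, with
# columns student, timestamp and mark (0-100), sorted by timestamp.
subs <- read.csv("~/submissions.csv")

# For each student, count the attempts made before the mark first rose
# above a notional 'DoC 3' threshold; if it never did, count them all.
attempts_to_doc3 <- function(marks, threshold = 50) {
  first_pass <- which(marks > threshold)[1]
  if (is.na(first_pass)) length(marks) else first_pass
}

tapply(subs$mark, subs$student, attempts_to_doc3)

The interesting students sit at the extremes: one attempt and a low final mark (the give-up group) versus many attempts before crossing the threshold (the keep-trying group).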

I, of course, now have to move further forward along this line of papers and will hopefully intersect with my ‘contemporary’ reading into student programming activity. I will be reading pretty solidly on all of this for the upcoming months as we try to refine the time management and self-regulation strategies that we’ll be employing next year.


Polymaths, Philomaths and Teaching Philosophy: Why we can’t have the first without the second, and the second should be the goal of the third.

You may have heard the term polymath, a person who possesses knowledge across multiple fields, or, if you’re particularly unlucky, you’ve been at one of those cocktail parties where someone hands you a business card that says, simply, “Firstname Surname, Polymath” and you have formed a very interesting idea of what a polymath is. We normally reserve this term for people who excel across multiple fields such as, to draw examples from this Harvard Business Review blog by Kyle Wiens, Leonardo da Vinci (artist and inventor), Benjamin Franklin, Paul Robeson or Steve Jobs. (Let me start to address the article’s gender imbalance with Hypatia of Alexandria, Natalie Portman, Maya Angelou and Mayim Bialik, to name a small group of multidisciplinary women, admittedly focussing on the Erdős–Bacon intersection.) By focusing on those who excel, we automatically associate a higher degree of assumed depth of knowledge across these multiple fields. The term “Renaissance [person]” is often bandied about as well.

Da Vinci, seen here inventing the cell phone. Sadly, it was to be over 500 years before the cell phone tower was invented so he never received a call. His monthly bill was still enormous.

Now, I have worked as a system administrator and programmer and a winemaker, and I’m now an academic in Computer Science, being slowly migrated into some aspects of managerialism, who hopes shortly to start a PhD in Creative Writing. Do I consider myself to be a polymath? No, absolutely not, and I struggle to think of anyone who would think of me that way, either. I have a lot of interests but, while I have had different areas of expertise over the years, I’ve never managed the highly parallel depth of expertise that would be required to be considered a polymath of any standing. I have academic recognition of some of these interests, but this changes neither the value of them (to me or others) nor the fact that being well-lettered has never been a requirement for membership of the group mentioned above.

I describe myself, if I have to, as a philomath, someone who is a lover of learning. (For both of the words, the -math suffix comes from the Greek and means to learn, but poly means much/many and philo means loving, so a polymath is ‘many learnéd’.) The immediate pejorative for someone who learns lots of things across areas is the infamous “Jack of all trades” and its companion “master of none”. I love to learn new things; I like studying but I also like applying it. I am confident that the time I spent in each discipline was valuable and that I knew my stuff. However, the main point I’d like to make here is that you cannot be a polymath without first having been a philomath – I don’t see how you can develop good depth in many areas unless you have a genuine love of learning. So every polymath was first a philomath.

Now let’s talk about my students. If they are at all interested in anything I’m teaching them, and let’s assume that at least some of them love various parts of a course at some stage, then they are looking to develop more knowledge in one area of learning. However, looking at my students as mono-cultural beings who only exist when they are studying, say, the use of the linked list in programming, is to sell them very, very short indeed. My students love doing a wide range of things. Yes, those who love learning in my higher educational context will probably do better but I guarantee you that every single student you have loves doing something, and most likely that’s more than one thing! So every single one of my students is inherently a philomath – but the problems arise when what they love to learn is not what I want to teach!

This leads me to the philosophy of learning and teaching: how we frame, study and solve the problems of trying to construct knowledge and transform it to allow its successful transfer to other people, as well as how we prepare students to receive, use and develop it. It makes sense that the state we wish to develop in our students is philomathy. Students are already learning from, interested in and loving their lives and the important affairs of the world as they see them, so to get them interested in what we want to teach them requires us to acknowledge that we are only one part of their lives. I rarely meet a student who cannot provide a deep, accurate and informative discourse on something in their lives. If we accept this then, rather than demanding an unnatural automaton who rewrites their entire being to only accept our words on some sort of diabolical Turing Tape of compliance, we now have a much easier path, in some respects, because accepting this means that our students will spend time on something in the depth that we want – it is now a matter of finding out how to tap into this. At this point, the yellow rag of populism is often raised, unfairly in most cases, because it is assumed that students will only study things which are ‘pop’ or ‘easy’. There is nothing ‘easy’ about most of the pastimes at which our students excel, and they will expend vast amounts of effort on tasks if they can see a clear reason to do so, it appears to be a fair return on investment, and they feel that they have reasonable autonomy in the process. Most of my students work harder for themselves than they ever will for me: all I do is provide a framework that allows them to achieve something and this, in turn, allows them to develop a love. Once the love has been generated, the philomathic wheel turns and knowledge (most of the time) develops.

Whether you agree on the nature of the tasks or not, I hope that you can see why the love of learning should be a core focus of our philosophy. Our students should engage because they want to and not just because we force them to do so. Only one of these approaches will persist when you remove the rewards and the punishments and, while Skinner may disagree, we appear to be more than rats, especially when we engage our delightfully odd brains to try and solve tasks that are not simply rote-learned. Inspiring the love of learning in any one of our disciplines puts a student on the philomathic path, but this requires us to accept that their love of learning may have manifested in many other areas, areas that may be carelessly dismissed as being without worth, and that all we are doing is trying to get them to bring their love to something that will be of benefit to them in their studies and, assuming we’ve set the course up correctly, their lives in our profession.


Sources of Knowledge: Stickiness and the Chasm Between Theory and Practice.

Like all sources, it helps to know the origin and the purity.

My head is still full of my current crop of research papers and, while I can’t go into details, I can discuss something that I’m noticing more and more as I read into the area of Computer Science Education: firstly, how much I have left to learn and, secondly, how difficult it is sometimes to track down ideas and establish novelty, provenance and worth. I read Mark Guzdial’s blog a lot because Mark has spent a lot of time being very clever in this area (sorry, Mark, it’s true) but he is also an excellent connector of the reader to good sources of information, as well as a reminder when something pops up that is effectively a rehash of an old idea. This level of knowledge and ability to discuss ideas is handy when we keep seeing some of the same old ideas pop up, from one source or another, over time. I’ve spoken before about how the development of the mass-accessible library didn’t end the importance of the university or school, and Mark makes a similar note in a recent post on MOOCs when he points us to an article on mail delivery lessons from a hundred years before and how this didn’t lead to the dissolution of the education system. Face-to-face continues to be important, as do bricks and mortar, so while the MOOC is a fascinating new tool and methodology with great promise, the predicted demise of the school and college may (once again) turn out to be premature.

If you’ve read Malcolm Gladwell’s “The Tipping Point”, you’ll be familiar with the notion that ideas need to have certain characteristics, and certain human agents, before they become truly persuasive and widely adopted. If you’ve read Dawkins’ “The Selfish Gene” (published more than two decades before), then you’ll understand that Gladwell’s book would be stronger if it recognised a debt to Dawkins’ coining of the term meme, for self-replicating beliefs and behaviours. Gladwell’s book, as a source, is a fairly unscientific restatement of some existing ideas with a useful narrative structure, despite depending on some now questionable case studies. In many ways, it is an example of itself, because Gladwell turned existing published information into a form where, with his own additions, he has identified a useful way to discuss certain systems of behaviour. Better still, people do (still) read it.

(A quick use of Google Trends shows me that people search for “The Tipping Point” roughly twice as much as “The Selfish Gene” but for “Richard Dawkins” twice as much as “Malcolm Gladwell”. Given Dawkins’ very high profile in belligerent atheism, this is not overly surprising.)

Gladwell identified the following three rules of epidemics (in terms of the spread of ideas):

  1. The Law of the Few: There is a small group of people who make a big difference to the proliferation of an idea. The mavens accumulate knowledge and know a lot about the area. The connectors are the gregarious and sociable people who know a lot of other people and, in Gladwell’s words, “have a gift for bringing the world together”. The final type of people are salespeople or (more palatably) persuaders, the people who convince us that something is a good idea. Gladwell’s thesis is that it is not just about the message: the messenger matters.
  2. The Stickiness Factor: Ideas have to be memorable in order to spread effectively so there is something about the specific content of the message that will determine its impact. Content matters.
  3. The Power of Context: We are all heavily influenced by and sensitive to our environment. Context matters.

Dawkins’ meme is a very sticky idea and, while there’s a lot of discussion about The Selfish Gene, we now have the field of memetics and the fact that the word ‘meme’ is used (almost correctly) thousands, if not millions, of times a day. Every time you’ve seen a prawn running on a treadmill while Yakety Sax plays, you can think of Richard Dawkins and thank him for giving you a word to describe it.

My early impressions of the problem of earlier ideas being represented in CS Ed as if they are new make me wonder if there is a fundamental problem with the stickiness of some of these ideas. I would argue that the most successful educational researchers, and I’ve had the privilege to see some of them, are in fact strong combinations of Gladwell’s few. Academics must be, by definition, mavens: information specialists in our domains. We must be able to reach out to our communities and spread our knowledge – is this enough for us to be called connectors? We have to survive peer review, formal discussions and criticism, and we have to be able to argue our ideas, on the reasonable understanding that it is our ideas and not ourselves that are potentially at fault. Does this also make us persuaders? If we can find all of these “few” in our community, and we are already a community of the few, where does that leave us in terms of explaining why we, in at least some areas, keep rehashing the same old ideas? Do we fail to appreciate the context of those colleagues we seek to reach, or are our ideas just not sticky enough? (Context is crucial here, in my opinion, because it is very easy to explain a new idea in a way that effectively says “You’ve been doing it wrong all these years. Now fix it or you’re a bad person.” This is going to create a hostile environment. Once again, context matters, but this time in terms of how we establish it.)

I wonder if this is compounded in Computer Science by the ability to separate theory from practice, and to draw in new practice from both an educational research focus and an industrial focus. To explain why teamwork actually works, we move into social constructivism and to Vygotsky (via Ben-Ari in many cases), Bandura and cognitive apprenticeship – that’s an educational research focus. To say that teamwork works because we’ve got some good results from industry, supported by figures such as Brooks, Boehm and Humphrey and their case studies in large-scale development – that’s an industrial focus. The practice of teamwork is sticky, that ship has sailed in software development, but does the stickiness of the practice transfer to the stickiness of the underlying why? The answer, I believe, is ‘no’, and I’m beginning to wonder if a very sticky “what” is actually acting against the stickiness of the “why”. Why ask “why?” when you know that it works? This seems to be a running together of the importance of stickiness and the environment of the CS Ed researcher as a theoretical educationalist, working in a field that has a strong industrial focus, with practitioner feedback and accreditation demands pushing a large stream of “what to do”.

It has been a thoughtful week and, once again, I admit my novice status here. Is this the real problem? If so, how can we fix it?


Making Time For Students

I was reminded of my slightly overloaded calendar today as students came and went throughout the day, as I raced in and out of project meetings, and as RV and I worked on some papers that we’re trying to get together for an upcoming submission date in the next few months. I wish I could talk about the research but, given that it will all have to go into peer review and some of the people reading this may end up being on those panels, it will all have to wait until we get accepted or it comes back on fire with a note written in blood saying “Don’t call us…”

For those following the Australian Research scene, you might know that the Australian Federal Government had put a hold on releasing information on key research funding schemes and that this has led to uncertainty for those people whose salaries are paid by research grants. Why is this important in a learning and teaching blog? Because the majority of Higher Education academics are involved in research, teaching and administration but it’s not too much of a generalisation to say that those who are the most successful have substantial help on the research front from well-established groups and staff who are paid to do research full-time.

Right now, as I write this, our postdoc (RV) is reviewing the terminology of certain aspects of the discipline to allow us to continue our research. RV is running citation analyses, digging through papers, peering at my scrawl on the whiteboard and providing a vital aspect to the project: uninterrupted dedication to the research question. I’m seeing students, holding meetings, dealing with technical problems, worrying about my own grants, preparing for a new course roll-out on Monday… and writing this. RV’s role is rapidly becoming critical to my ability to work.

There are thousands of dedicated researchers like RV across Australia and it is easy to quantify their contribution to research, but easy to overlook their implicit benefit in terms of learning and teaching. Every senior academic who is involved in research and teaching will most likely only still be teaching because they have someone to carry on the research and maintain the focus and continuity that only comes from having one major area to work on.

I think of it in terms of gearing. When I’m talking to other researchers, I use one set of mental gears. Inside my own group, I use another because we are all much more closely aligned. I use a completely different set when I talk to students and this set varies by year level, course and student! Making time for students is not just a case of having an hour in my calendar. Making time for students is a matter of making the mental space for a discussion that will be at the appropriate level. It’s having enough time to have a chat rather than a rapid-fire exchange. I don’t always succeed at this because far too many of my students apologise to me for taking up my time. Argh! My time is student time! It’s what I get a good 40% of my salary for! (Not that we’re counting. Like most academics, when asked what percentage of my time I spent on the three areas of research, teaching and admin, I say 50,50,40. 🙂 )

Now I am not, by any means, a senior academic and I am very early on in this process, so you can imagine how important those research staff are going to be in keeping projects going for senior staff who are having to make those gear changes at a very rapid speed across much larger domains. Knowledge workers need the time and headspace to think and switching context takes up valuable time, as well as tiring you out if you do it often enough.

On that basis, the recent news that the Government is unfreezing the medical research schemes and at least some of the major awards for everyone else is good news. My own grant in this area is highly unlikely to get up – my relief is not actually for myself, here – but we are already worried about an increased rate of departure for those researchers who are concerned about having a job next year and are, because of their skills and experience, highly mobile. The impact of these people leaving will not just be felt in terms of research output, which has a multi-year lag, but will be felt immediately wherever learning and teaching depended upon someone having the time and mental space to do it, because they had a member of the research staff supporting their other work. Universities are a complex ecosystem and there are very important connections between staff in different areas and areas of focus that are not immediately apparent when you make the simplistic distinction of staff (professional and academic) and, for academics, research/teaching/admin, research/admin, teaching/admin, pure research and pure teaching. The number of courses that I have to teach depends upon the number of staff available to teach, as well as the number of courses and students, and the number of staff (or their available hours) is directly affected by the number of people who help them.

It’s good news that the research funds are starting to unfreeze because it will say to the people who are depending upon grant money that an answer is coming soon. It’s also saying to the rest of us that we can start to think about planning and allocation for 2013 with more certainty, because the monies will be coming at some point.

This, in turn, stops me having to worry about things like contingency plans, who is going to be working with me, and how I will fund research assistants into 2014, because now I have a possibility of a grant rather than a placeholder in a frozen scheme. This reduces my current overheads (for a while) and frees up some headspace. With any luck, the next student who walks into my office will not realise exactly how busy I am – and that’s the way that I like it.


A Study in Ethics: Lance Armstrong and Why You Shouldn’t Burn Your Bracelet.

If you haven’t heard about the recent USADA release of new evidence against Lance Armstrong, former star of cycling and Chairman of his own LIVESTRONG Cancer Foundation, then let me summarise it: it’s pretty damning. After reviewing this and other evidence, I have little doubt that Lance Armstrong systematically and deliberately engaged in the procurement, distribution, promotion and consumption of banned substances while he was engaged in an activity that explicitly prohibited this. I also have very little doubt that he engaged in practices such as blood transfusions, intimidation and the manipulation of colleagues and competitors, again in a way that contravened the rules of his sport and brought the sport into disrepute. The USADA report contains a lot of the missing detail, witness reports, accounts and evidence that, up until now, has allowed Lance Armstrong to maintain that delightful state of grace that is plausible deniability. He has now been banned for life, although he can appeal; his sponsors are leaving him and he has stepped down as the Chairman of his charity.

I plan to use Armstrong in my discussions of ethics over the next year for a number of reasons and this is an early musing, so it’ll be raw and I welcome discussion. Here are my initial reasons and thoughts:

  1. It’s general knowledge and everyone knows enough about this case to have formed an opinion. Many of the other case studies I use refer to the past or situations that are not as widely distributed.
  2. It’s a scenario that (either way) is easy to believe and grounded in the experience of my students.
  3. Lance Armstrong appears to have been making decisions that impacted his team, his competitors, his entire sport. His area of influence is large.
  4. There is an associated entity that is heavily linked with Lance’s personal profile, the LIVESTRONG Cancer Charity.

Points 1 and 2 allow me to talk about Lance Armstrong and have everyone say “Oh, yeah!”, as opposed to other classic discussions such as Tuskegee, the Monster Study, Zimbardo, etc., where I first have to explain the situation, then the scenario, and then try to make people believe that this could happen! Believing that a professional sportsperson may have taken drugs is, in many ways, far easier to get across than complicated stories of making children stutter. Point 3 allows me to get away from the “So what if someone decides to do X to themselves?” argument – which is a red herring anyway in a competitive situation based (even in theory) on a level playing field. Rationalisations of the actions taken by an individual do not apply when those actions are imposed on another group, so many of the “my right to swing my arm ends at your nose” arguments that students effectively bring up in discussing moral and ethical behaviour will not stand up against the large body of evidence that Armstrong intimidated other riders, forced their silence, and required team members to follow the same regime. I expect that we’ll still have to have the “So what if everyone dopes?” argument, in terms of the “are people choosing?” and “what are the ethical implications if generalised?” approaches.

But it is this last theme that I really wish to explore. I read a Gawker article telling everyone to rip off their yellow wristbands, and it is that article I strongly disagree with. Lance Armstrong is, most likely, a systematic cheat who has been, and still is, lying about his ongoing cheating in order to continue as many of his activities as possible, as well as to maintain some sense of a fan base. The time when he could have apologised for his actions, stood up and taken a stand, is pretty much over. Sponsors who have stood by other athletes at difficult times have left him, because the evidence is so overwhelming.

But to say that this has anything to do with LIVESTRONG is an excellent example of the Genetic Fallacy – that is, because something came from Lance Armstrong, it is now somehow automatically bad. Would I drink from a Coke he gave me? Probably not. Do I still wish his large and influential cancer charity all the success in the world? Yes, of course. LIVESTRONG gave out roughly $30,000,000 last year across its programs and that’s a good thing.

It’s a terrible shame that, for so many years, Armstrong’s work with the charity was, more than slightly cynically, used to say what a good person he was despite the allegations. (There’s a great Onion piece from a couple of years ago that now seems bizarrely prescient). Much as LIVESTRONG is not guaranteed to be bad because Armstrong is a doper, running and setting up LIVESTRONG doesn’t absolve Armstrong from actions in other spheres. A Yahoo sports article describes his charity as being used as a ‘moral cloak’, although smokescreen might be the better word. But we need to look further.

To what does LIVESTRONG owe its success? Would it be as popular and successful if Armstrong hadn’t come back from cancer (he continues to be a cancer survivor) and then hadn’t won all of those tours? Given that his success was, apparently, completely dependent upon illegal activity, aren’t we now indebted to Armstrong’s illegal activity for the millions of dollars that have gone to help people with cancer?

We can talk about moral luck, false dichotomy and false antecedent/consequent (depending on which way around you wish to frame it) here, and this leads us into all sorts of weird and wonderful discussions, from a well-known and much-discussed current affairs issue. But the core is quite simple: Armstrong’s actions had a significantly negative effect upon his world, but at least one of the actions that he took has had a positive outcome. Whatever his motivation and intention, the outcome is beneficial. LIVESTRONG now has a challenge to see if it is big enough to survive this reversal of fortune but this is, most definitely, not the time to burn the bracelet. Turn it around, if you want, but, until it turns out that LIVESTRONG is some sort of giant front for clubbing baby harp seals, we can’t just lump this in with the unethical actions of one man.

I was thinking about what Armstrong could do now and, while I believe that he will never be able to do many of the things that he used to do (pro cycling/speaking arrangements/public figure), we know that he is quite good at two things:

  1. Riding a bike
  2. Getting drugs into difficult places.

One of the major problems in the world is getting the right pharmaceuticals to the right people, because of government issues, instability and poverty. There are probably worse things for Armstrong to do than cycle from point to point, sneaking medicine past border guards, shinning down drain pipes to provide antiretrovirals to the poor in the slums of a poor city and hiking miles so that someone doesn’t die today. (I know, that’s all a bit hair shirt – I’m not suggesting that seeking atonement is either required or sensible.) More seriously, the end of my ethical study in Armstrong will only be written when he works out what he wants to do next. Then my students can look at it, scratch their heads and try to work out where that now places him in terms of morality and ethics.


Thoughts on Overloading: I Still Appear to be Ignoring My Own Advice

The delicate art of Highway Jenga(TM)

I was musing recently on the inherent issues with giving students more work to do, if they are already overloaded to a point where they start doing questionable things (like cheating). A friend of mine is also going through a contemplation of how he seems to be so busy that fitting in everything that he wants to do keeps him up until midnight. My answer to him, which includes some previous comments from other people, is revealing – not least because I am talking through my own lens, and I appear to still feel that I am doing too much.

Because I am a little too busy, I am going to repost (with some editing to remove personal detail and clarify) what I wrote to him, which distils a lot of my thoughts over the past few months on overloading. This was all in answer to the question: “How do people fit everything in?”

You have deliberately committed to a large number of things and you wish to perform all of them at a high standard. However, to do this requires that you spend a very large amount of time, including on those things that you need to do for your work.

Most people do one of three things:

    1. they do not commit to as much,
    2. they do commit to as much but do it badly, or
    3. they lie about what they are doing because claiming to be a work powerhouse is a status symbol.

A very, very small group of people can buck the well-documented long-term effects of overwork, but these people are in the minority. I would like to tell you what generally happens to people who over-commit, while readily admitting that this might not apply to you. Most of this is based on research, informed by bitter personal experience.

The long-term effects of overwork (as a result of over-commitment) are sinister and self-defeating. As fatigue increases, errors increase. The introduction of errors requires you to spend more time to achieve tasks because you are now doing the original task AND fixing errors, whether the errors are being injected by you or they are actually just unforeseen events because your metacognitive skills (resource organisation) are being impaired by fatigue.

However, it’s worse than that because you start to lose situational awareness as well. You start to perform tasks because they are there to perform, without necessarily worrying about why or how you’re doing it. Suddenly, not only are you tired and risking the introduction of errors, you start to lose the ability to question whether you should be carrying out a certain action in the first place.

Then it gets worse again, because not only do obstacles now appear to be thrown up with more regularity (because your error rates are going up, your frustration levels are high and you’re losing resource organisational ability) but even the completion of goals merely becomes something that facilitates more work. Having completed job X, because you’re over-committed, you must immediately commence job X+1. Goal completion, which should be a time for celebration and reflection, now becomes a way to open more gateways of burden. Goals delayed become a source of frustration. The likely outcome is diminished enjoyment and an encroaching sense of work, work, work.

[I have removed a paragraph here that contained too much personal detail of my friend.]

So, the question is whether your work is too much, given everything else that you want to do, and only you can answer that: are you frustrated by it most of the time, and are you enjoying achieving goals, or are they merely opening more doors of work? I don’t expect you to reply on this one but it’s an important question – how do you feel when you open your eyes in the morning? How often are you angry at things? Is this something that you want to continue for the foreseeable future?

Would you still do it, if you didn’t have to pay the rent and eat?

Regrettably, one of the biggest problems with over-commitment is not having time to adequately reflect. However, long-term over-commitment is clearly demonstrated (through research) to be bad for manual labourers, soldiers, professionals, and knowledge workers. The loss of situational awareness and cognitive function is not good for anyone.

My belief is that an approach based on listening to your body and working within sensible and sustainable limits is possible for all aspects of life, but I readily acknowledge that the transition away from over-commitment to sustainable commitment can be very, very hard. I’m facing that challenge at the moment and know that it is anything but easy. I’m not trying to lecture you, I’m trying to share my own take on it, which may or may not apply. However, you should always feel free to drop by for a coffee to chat, if you like, and I hope that you have some easier and less committed times ahead.

Reading through this, I am reminded of how much work I have left to do in order to reduce my overall commitments to sensible levels. It’s hard, sometimes, because there are so many things that I want to do, but I can easily point to a couple of indicators that tell me that I still don’t quite have the balance right. For example, I’m managing my time at the moment, but that’s probably because being unable to run has given me roughly 8 hours a week back to spend elsewhere. I am getting things done because I am using up almost all of that running time by working in it instead. And that, put simply, means I’m regularly working longer hours than I should.

Looking back at the advice, I am projecting my own problems with goals: completing something merely unlocks new burdens, and there is very little feeling of finalisation. I am very careful to try and give my students closure points, guidance and a knowledge of when to stop. Time to take a weekend and reflect on how I can get that back for myself – and still do everything cool that I want to do! 🙂


Authenticity and Challenge: Software Engineering Projects Where Failure is an Option

It’s nearly the end of semester and that means that a lot of projects are coming to fruition – or, in a few cases, are still on fire as people run around desperately trying to put them out. I wrote a while ago about seeing Fred Brooks at a conference (SIGCSE) and his keynote on building student projects that work. The first four of his eleven basic guidelines were:

  1. Have real projects for real clients.
  2. Groups of 3-5.
  3. Have lots of project choices.
  4. Groups must be allowed to fail.

We’ve done this for some time in our fourth year Software Engineering option but, as part of a “Dammit, we’re Computer Science, people should be coming to ask about getting CS projects done” initiative, we’ve now changed our third year SE Group Project offering from a parallel version of an existing project to real projects for real clients, although I must confess that I have acted as a proxy in some of them. However, the client need is real, the brief is real, there are a lot of projects on the go and the projects are so large and complex that:

  1. Failure is an option.
  2. Groups have to work out which part they will be able to achieve in the 12 weeks that they have.

For the most part, this approach has been a resounding success. The groups have developed their team maturity faster, they have delivered useful and evolving prototypes, they have started to develop entire tool suites and solve quite complex side problems because they’ve run across areas that no-one else is working in and, most of all, the pride that they are taking in their work is evident. We have lit the blue touch paper and some of these students are skyrocketing upwards. However, let me not lose sight of one of our biggest objectives: that we be confident that these students will be able to work with clients. In the vast majority of cases, I am very happy to say that I am confident that these students can make a useful, practical and informed contribution to a software engineering project – and they still have another year of projects and development to go.

The freedom that comes with being open with a client about the possibility of failure cannot be overstated. This gives both you and the client a clear understanding of what is involved – we do not need to shield the students, nor does the client have to worry about how their satisfaction with the software will influence things. We scaffold carefully but we have to allow for the full range of outcomes. We, of course, expect the vast majority of projects to succeed, but this experience will not be authentic unless we start to pull away the scaffolding over time and see how the students stand by themselves. We are not, by any stretch, leaving these students in the wilderness. I’m fulfilling several roles here: proxying for some clients, sharing systems knowledge, giving advice, mentoring and, every so often, giving a well-needed hairy eyeball to a bad idea or practice. There is also the main project manager and supervisor, who is working a very busy week to keep track of all of these groups and provide all of what I am and much, much more. But, despite this, sometimes we just have to leave the students to themselves and it will, almost always, dawn on them that problem solving requires them to solve the problem.

I’m really pleased to see this actually working because it started as a brainstorm from my “Why aren’t we being asked to get involved in more local software projects?” question, bounced off the main project supervisor, who was desperate for more authentic and diverse software projects. Here is a distillation of our experience so far:

  1. The students are taking more ownership of the projects.
  2. The students are producing a lot of high quality work, using aggressive prototyping and regular consultation, staged across the whole development time.
  3. The students are responsive and open to criticism.
  4. The students have a better understanding of Software Engineering as a discipline and a practice.
  5. The students are proud of what they have achieved.

None of this should come as much of a surprise but, in a 25,000+ person University, there are a lot of little software projects on the 3-person-team, 12-month scale, which are perfect for two half-year project slots because students have to design for the whole and then decide which parts to implement. We hope to give these projects back to them (or similar groups) for further development in the future because that is the way of many, many software engineers: the completion, extension and refactoring of other people’s codebases. (Something most students don’t realise is that it only takes a very short time for a codebase you knew like the back of your hand to resemble the product of alien invaders.)

I am quietly confident, and hopeful, that this bodes well for our Software Engineers and that we will start to see them all closely bunched towards the high-achieving side of the spectrum in terms of their ability to practise. We’re planning to keep running this in the future because the early results have been so promising. I suppose the only problem now is that I have to go and find a huge number of new projects for people to start on for 2013.

As problems go, I can certainly live with that one!


Industry Speaks! (May The Better Idea Win)

Alan Noble, Director of Engineering for Google Australia and an Adjunct Professor with my Uni, generously gave up a day today to give a two-hour lecture on distributed systems and scale to our third-year Distributed Systems course, and another two-hour lecture on entrepreneurship to my Grand Challenge students. Industry contact is crucial for my students because the world inside the Uni and the world outside the Uni can be very, very different. While we try to keep industry contact high in later years, and we’re very keen on authentic assignments that tackle real-world problems, we really need the people who are working for the bigger companies to come in and tell our students what life would be like working for Google, Microsoft, Saab, IBM…

My GC students have had a weird mix of lectures that have been designed to advance their maturity in the community and as scientists, rather than their programming skills (although that’s an indirect requirement), but I’ve been talking from a position of social benefit and community-focused ethics. It is essential that they be exposed to companies, commercialisation and entrepreneurship as it is not my job to tell them who to be. I can give them skills and knowledge but the places that they take those are part of an intensely personal journey and so it’s great to have an opportunity for Alan, a man with well-established industry and research credentials, to talk to them about how to make things happen in business terms.

The students I spoke to afterwards were very excited and definitely saw the value of it. (Alan, if they all leave at the end of this year and go to Google, you’re off the Christmas Card list.) Alan focused on three things: problems, users and people.

Problems: Most great companies find a problem and solve it but, first, you have to recognise that there is a problem. This sometimes just requires putting the right people in front of something to find out what these new users see as a problem. You have to be attentive to the world around you but being inventive can be just as important. Something Alan said really resonated with me in that people in the engineering (and CS) world tend to solve the problems that they encounter (do it once manually and then set things up so it’s automatic thereafter) and don’t necessarily think “Oh, I could solve this for everyone”. There are problems everywhere but, unless we’re looking for them, we may just adapt and move on, instead of fixing the problem.

Users: Users don’t always know what they want yet (the classic Steve Jobs approach), they may not ask for it or, if they do ask for something, what they want may not yet be available to them. We talked here about a lot of current solutions to problems, but there are so many problems still to fix that would help users: simultaneous translation over the telephone, for example, or 100% accurate OCR (while we’re at it). The risk is always that when you offer the users the idea of a car, all they ask for is a faster horse (after Henry Ford). The best thing for you is a happy user because they’re the best form of marketing – but they’re also fickle. So it’s a balancing act between genuine user focus and telling them what they need.

People: Surround yourself with people who are equally passionate! Strive for a culture of innovation and getting things done. Treasure your agility as a company and foster it if you get too big. Keep your units of work (teams) smaller if you can and match work to the team size. Use structures that encourage a short distance from the top to the bottom of the hierarchy, which allows ideas to move up, down and sideways. Be meritocratic and encourage people to contest ideas, using facts and articulating their ideas well. May the Better Idea Win! Motivating people is easier when you’re open and transparent about what they’re doing and what you want.

Alan then went on to speak a lot about execution, the crucial step in taking an idea and having a successful outcome. Alan had two key tips.

Experiment: Experiment, experiment, experiment. Measure, measure, measure. Analyse. Take it into account. Change what you’re doing if you need to. It’s ok to fail but it’s better to fail earlier. Learn to recognise when your experiment is failing – and don’t guess, experiment! Here’s a quote that I really liked:

When you fail a little every day, it’s not failing, it’s learning.

Risk goes hand-in-hand with failure and success. Entrepreneurs have to learn when to call an experiment and change direction (pivot). Pivot too soon, you might miss out on something good. Pivot too late, you’re in trouble. Learning how to be agile is crucial.

Data: Collect and scrutinise all of the data that you get – your data will keep you honest if you measure the right things. Be smart about your data and never copy it when you can analyse it in situ.

(Alan said a lot more than this over 2 hours but I’m trying to give you the core.)

Alan finished by summarising all of this as his Three As of Entrepreneurship, and then by explaining why we seem to be hitting an entrepreneurship growth spurt in Australia at the moment. The Three As are:

  • Audit your data
  • Having Audited, Admit when things aren’t working
  • Once admitted, you can Adapt (or pivot)

As to why we’re seeing this growth in entrepreneurship: Australia has a population of some of the keenest early adopters on the planet. We have high technology penetration, over 20,000,000 potential users and a high GDP, and we love tech. 52% of Australians have smart phones and, pre-smartphone, we owned so many mobile phones that it was just plain crazy. Get the tech right and we will buy it. Good tech, however, is hardware + software + user requirement + getting it all right.

It’s always a pleasure to host Alan because he communicates his passion for the area well, and he also puts a passionate and committed face onto industry, which is what my students need to see in order to understand where they could sit in their soon-to-be professional community.


Dealing with Plagiarism: Punishment or Remediation?

I have written previously about classifying plagiarists into three groups (accidental, panicked and systematic), trying to get the student to focus on the journey rather than the objective, and how overwork can produce situations in which human beings do very strange things. Recently, I was asked to sit in on another plagiarism hearing and, because I’ve been away from the role of Assessment Coordinator for a while, I was able to look at the process with an outsider’s eye, a slightly more critical view, to see how it measures up.

Our policy is now called an Academic Honesty Policy and is designed to support one of our graduate attributes: “An awareness of ethical, social and cultural issues within a global context and their importance in the exercise of professional skills and responsibilities”. The principles of the policy are pretty straight-forward:

  • Assessment is an aid to learning and involves obligations on the part of students to make it effective.
  • Academic honesty is an essential component of teaching, learning and research and is fundamental to the very nature of universities.
  • Academic writing is evidence-based, and the ideas and work of others must be acknowledged and not claimed or presented as one’s own, either deliberately or unintentionally.

The policy goes on to describe what student responsibilities are and why students should do the right thing to get the maximum benefit from assessment, and it provides some handy links to our Writing Centre and to information on applying for modified arrangements. There’s also a clear statement of what not to do, followed by lists of clarifications of various terms.

Sitting in on a hearing and watching the process unfold, I can see that the overall thrust of this policy has been clearly communicated: students know that they must do their own work. But, reading through the policy and its implementation guide, I don’t really see what it provides to sufficiently scaffold the retraining or re-education of students who are detected doing the wrong thing.

There are many possible outcomes from the application of this policy, starting with “Oh, we detected something but we turned out to be wrong”, going through “Well, you apparently didn’t realise, so we’ll record your name for next time; now submit something new” (misunderstanding), “You knew what you were doing, so we’re going to give you zero for the assignment and (will/won’t) let you resubmit it (with a possible mark cap)” (first offence), “You appear to make a habit of this, so we’re giving you zero for the course” (second offence) and “It’s time to go” (much later on in the process, after several confirmed breaches).

Let me return to my discussions on load and the impact on people from those earlier posts. If you accept my contention that the majority of plagiarism and cheating comes from minor omission or last-minute ‘helmet fire’ thinking under pressure, then we have to look at what requiring students to resubmit will do. In the case of the ‘misunderstanding’, students may also be referred to relevant workshops or resources in order to improve their practices. But consider that the original problem may have occurred because the student was under time pressure: we have just added more work and a possible requirement to go and attend extra training. There’s an old saying from software development called Brooks’ Law:

“…adding manpower to a late software project makes it later.” (Brooks, The Mythical Man-Month, 1975)

In software this is generally because of ramp-up time (the time required for new people to become productive) and communication overheads (which grow with the square of the number of people). Every assignment we set requires time that effectively stands in for that ramp-up and, as plagiarising or cheating students have probably not done the requisite work beforehand (otherwise they could simply have completed the assignment), we have just added extra ramp-up into their lives for any re-issued assignments and/or additional improvement training. We have also greatly increased the communication burden, because communication between lecturers and peers carries implicit context based on where we are in the semester. While an assignment is live, student discussion (on-line or face-to-face) will be based around the work in that zone and all lecturing staff will have that assignment in their heads. A significantly out-of-sequence assignment not only isolates the student from their community, it increases the level of context switching required by the staff, decreasing the effective time that they have with the student and increasing the wall-clock time. Once again, we have increased the potential burden on a student who, we suspect, is already acting this way because of over-burdening or poor time management!
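
(To make the “square of the number of people” point concrete: under the usual reading of Brooks’ argument, a team of n people has n(n-1)/2 possible pairwise communication channels, so coordination overhead grows quadratically. A quick sketch in R – a purely illustrative calculation, not anything from the policy or the talk:)

  # Pairwise communication channels in a team of n people: n(n-1)/2
  channels <- function(n) n * (n - 1) / 2
  data.frame(people   = c(2, 5, 10, 20),
             channels = channels(c(2, 5, 10, 20)))
  #   people channels
  # 1      2        1
  # 2      5       10
  # 3     10       45
  # 4     20      190

(Doubling a team from 10 to 20 people takes you from 45 channels to 190, which is one reason adding people to a late project adds so much coordination cost.)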

Later stages in the policy increase the burden on students further: a zero on the assignment reduces the marks still available, requiring them to perform at a higher level elsewhere, while a zero for the course removes it from their progress entirely, requiring them to overload or spend an additional semester (at least) if they wish to complete their degree.

My question here is, as always: are any of these outcomes actually going to stop the student from cheating, or do they risk increasing the likelihood of the student either cheating again or dropping out? I completely agree with the principles and focus of our policy, and I also don’t believe that people should get marks for work that they haven’t done, but I don’t see how increasing the burden is actually going to lead to the behaviour that we want. (Dan Pink on TED can tell you many interesting things about motivation, extrinsic factors and cognitive tasks, far more effectively than I can.)

This is, to many people, not an issue, because this kind of policy is really treated as punitive rather than remedial. There are some excellent parts of our policy that talk about helping students but, once we get beyond the misunderstanding stage, this language of support drops away and we head swiftly into the punitive, with the possibility of controlled resubmission. The problem, however, is that we have evidence that light punishment is interpreted as a licence to repeat the action, because it doesn’t discourage the behaviour. This does not surprise me, because our current policy frames the whole issue as a risk/reward calculation. We have resorted to a punishment modality and, as a result, we have people studying the punishments to optimise their behaviour rather than changing their behaviour to achieve our actual goals.

This policy is a strange beast: there’s almost no way that I can take an action under the current approach without causing additional work for students at a time when it is their inability to handle pressure that is likely to have led them here. Even if the policy is working, and it appears to be, it does so by enforcing compliance rather than actually leading people to change the way that they think about their work.

My conjecture is that we cannot isolate the problems to just this policy. They spill over into our academic assessment policies, our staff training and our student support, and into the key difference between teaching ethics and training students in ethical behaviour. There may not be a solution in this space that meets all of our requirements but, if we are going to operate punitively, then let us be honest about it and not over-burden the student with remedial work for which they may not be supported. If we are aiming for remediation, then let us scaffold it properly. I think that our policy, as it stands, could support this, but I’m not sure that I’ve seen the broad spread of policy and practice required to achieve the desirable, but incredibly challenging, goal of students actually changing their behaviour because they realise that cheating is detrimental to their learning.