Thoughts on Overloading: I Still Appear to be Ignoring My Own Advice

The delicate art of Highway Jenga(TM)

I was musing recently on the inherent issues with giving students more work to do, if they are already overloaded to a point where they start doing questionable things (like cheating). A friend of mine is also contemplating how he has become so busy that fitting in everything he wants to do keeps him up until midnight. My answer to him, which includes some previous comments from other people, is revealing – not least because I am talking through my own lens, and I appear to still feel that I am doing too much.

Because I am a little too busy, I am going to repost (with some editing to remove personal detail and clarify) what I wrote to him, which distils a lot of my thoughts over the past few months on overloading. This was all in answer to the question: “How do people fit everything in?”

You have deliberately committed to a large number of things and you wish to perform all of them at a high standard. However, to do this requires that you spend a very large amount of time, including those things that you need to do for your work.

Most people do one of three things:

    1. they do not commit to as much,
    2. they do commit to as much but do it badly, or
    3. they lie about what they are doing because claiming to be a work powerhouse is a status symbol.

A very, very small group of people can buck the well-documented long-term effects of overwork but these people are in the minority. I would like to tell you what generally happens to people who over-commit, while readily admitting that this might not apply to you. Most of this is based on research, informed by bitter personal experience.

The long-term effects of overwork (as a result of over-commitment) are sinister and self-defeating. As fatigue increases, errors increase. The introduction of errors requires you to spend more time to achieve tasks because you are now doing the original task AND fixing errors, whether the errors are being injected by you or they are actually just unforeseen events because your metacognitive skills (resource organisation) are being impaired by fatigue.

However, it’s worse than that because you start to lose situational awareness as well. You start to perform tasks because they are there to perform, without necessarily worrying about why or how you’re doing it. Suddenly, not only are you tired and risking the introduction of errors, you start to lose the ability to question whether you should be carrying out a certain action in the first place.

Then it gets worse again because not only do obstacles now appear to be thrown up with more regularity (because your error rates are going up, your frustration levels are high and you’re losing resource organisational ability) but even the completion of goals merely becomes something that facilitates more work. Having completed job X, because you’re over-committed, you must immediately commence job X+1. Goal completion, which should be a time for celebration and reflection, now becomes a way to open more gateways of burden. Goals delayed become a source of frustration. The likely outcome is diminished enjoyment and an encroaching sense of work, work, work.

[I have removed a paragraph here that contained too much personal detail of my friend.]

So, the question is whether your work is too much, given everything else that you want to do, and only you can answer this question as to whether you are frustrated by it most of the time and whether you are enjoying achieving goals, or if they are merely opening more doors of work. I don’t expect you to reply on this one but it’s an important question – how do you feel when you open your eyes in the morning? How often are you angry at things? Is this something that you want to continue for the foreseeable future? 

Would you still do it, if you didn’t have to pay the rent and eat?

Regrettably, one of the biggest problems with over-commitment is not having time to adequately reflect. However, long term over-commitment is clearly demonstrated (through research) to be bad for manual labourers, soldiers, professionals, and knowledge workers. The loss of situational awareness and cognitive function are not good for anyone. 

My belief is that an approach based on listening to your body and working within sensible and sustainable limits is possible for all aspects of life, but I readily acknowledge that the transition away from over-commitment to sustainable commitment can be very, very hard. I’m facing that challenge at the moment and know that it is anything but easy. I’m not trying to lecture you, I’m trying to share my own take on it, which may or may not apply. However, you should always feel free to drop by for a coffee to chat, if you like, and I hope that you have some easier and less committed times ahead.

Reading through this, I am reminded of how much work I have left to do in order to reduce my overall commitments to sensible levels. It’s hard, sometimes, because there are so many things that I want to do but I can easily point to a couple of indicators that tell me that I still don’t quite have the balance right. For example, I’m managing my time at the moment, but that’s probably because being unable to run has given me roughly 8 hours a week back to spend elsewhere. I am getting things done because I am using up almost all of that running time by working instead. And that, put simply, means I’m regularly working longer hours than I should.

Looking back at the advice, I am projecting my own problems with goals: completing something merely unlocks new burdens, and there is very little feeling of finalisation. I am very careful to try and give my students closure points, guidance and a knowledge of when to stop. Time to take a weekend and reflect on how I can get that back for myself – and still do everything cool that I want to do! 🙂


Authenticity and Challenge: Software Engineering Projects Where Failure is an Option

It’s nearly the end of semester and that means that a lot of projects are coming to fruition – or, in a few cases, are still on fire as people run around desperately trying to put them out. I wrote a while ago about seeing Fred Brooks at a conference (SIGCSE) and his keynote on building student projects that work. The first four of his eleven basic guidelines were:

  1. Have real projects for real clients.
  2. Groups of 3-5.
  3. Have lots of project choices.
  4. Groups must be allowed to fail.

We’ve done this for some time in our fourth year Software Engineering option but, as part of a “Dammit, we’re Computer Science, people should be coming to ask about getting CS projects done” initiative, we’ve now changed our third year SE Group Project offering from a parallel version of an existing project to real projects for real clients, although I must confess that I have acted as a proxy in some of them. However, the client need is real, the brief is real, there are a lot of projects on the go and the projects are so large and complex that:

  1. Failure is an option.
  2. Groups have to work out which part they will be able to achieve in the 12 weeks that they have.

For the most part, this approach has been a resounding success. The groups have developed their team maturity faster, they have delivered useful and evolving prototypes, they have started to develop entire tool suites and solve quite complex side problems because they’ve run across areas that no-one else is working in and, most of all, the pride that they are taking in their work is evident. We have lit the blue touch paper and some of these students are skyrocketing upwards. However, let me not lose sight of one of our biggest objectives, that we be confident that these students will be able to work with clients. In the vast majority of cases, I am very happy to say that I am confident that these students can make a useful, practical and informed contribution to a software engineering project – and they still have another year of projects and development to go.

The freedom that comes with being open with a client about the possibility of failure cannot be overstated. This gives both you and the client a clear understanding of what is involved – we do not need to shield the students, nor does the client have to worry about how their satisfaction with the software will influence things. We scaffold carefully but we have to allow for the full range of outcomes. We, of course, expect the vast majority of projects to succeed but this experience will not be authentic unless we start to pull away the scaffolding over time and see how the students stand by themselves. We are not, by any stretch, leaving these students in the wilderness. I’m fulfilling several roles here: proxying for some clients, sharing systems knowledge, giving advice, mentoring and, every so often, giving a well-needed hairy eyeball to a bad idea or practice. There is also the main project manager and supervisor who is working a very busy week to keep track of all of these groups and provide all of what I am and much, much more. But, despite this, sometimes we just have to leave the students to themselves and it will, almost always, dawn on them that problem solving requires them to solve the problem.

I’m really pleased to see this actually working because it started as a brainstorm of my “Why aren’t we being asked to get involved in more local software projects” question and bouncing it off the main project supervisor, who was desperate for more authentic and diverse software projects. Here is a distillation of our experience so far:

  1. The students are taking more ownership of the projects.
  2. The students are producing a lot of high quality work, using aggressive prototyping and regular consultation, staged across the whole development time.
  3. The students are responsive and open to criticism.
  4. The students have a better understanding of Software Engineering as a discipline and a practice.
  5. The students are proud of what they have achieved.

None of this should come as much of a surprise but, in a 25,000+ person University, there are a lot of little software projects on the 3-person team 12 month scale, which are perfect for two half-year project slots because students have to design for the whole and then decide which parts to implement. We hope to give these projects back to them (or similar groups) for further development in the future because that is the way of many, many software engineers: the completion, extension and refactoring of other people’s codebases. (Something most students don’t realise is that it only takes a very short time for a codebase you knew like the back of your hand to resemble the product of alien invaders.)

I am quietly confident, and hopeful, that this bodes well for our Software Engineers and that we will start to see them all closely bunched towards the high-achieving side of the spectrum in terms of their ability to practise. We’re planning to keep running this in the future because the early results have been so promising. I suppose the only problem now is that I have to go and find a huge number of new projects for people to start on for 2013.

As problems go, I can certainly live with that one!


Workshop report: ALTC Workshop “Assessing student learning against the Engineering Accreditation Competency Standards: A practical approach”, Part 2

Continuing on from yesterday’s post, I was discussing the workshop that I went to and what I’d learned from it. I finished on the point that assessment of learning occurs when Lecturers:

  • Use evidence of student learning
  • to make judgements on student achievement
  • against goals and standards

but we have so many other questions to ask at this stage. What were our initial learning objectives? What were we trying to achieve? The learning outcome is effectively a contract between educator and student so we plan to achieve them, but how do they fit in the context of our accreditation and overall requirements? One of the things stressed in the workshop was that we need a range of assessment tasks to achieve our objectives:

  • We need a wide variety
  • These should be open-entry where students can begin the tasks from a range of previous learning levels and we cater for different learning preferences and interests
  • They should be open-ended, where we don’t railroad the students towards a looming and monolithic single right answer, and multiple pathways or products are possible
  • We should be building students’ capabilities by building on the standards
  • Finally, we should provide space for student ownership and decision making.

Effectively, we need to be able to get to the solution in a variety of ways. If we straitjacket students into a fixed solution we risk stifling their ability to actually learn and, as I’ve mentioned before, we risk enforcing compliance to a doctrine rather than developing knowledgeable, self-regulated learners. If we design these activities properly then we should find that they reduce student complaints about fairness, or about incorrect assumptions regarding their preparation. However, these sorts of changes take time and, a point so important that I’ll give it its own line:

You can’t expect to change all of your assessment in one semester!

The advice from Wageeh and Jeff was to focus on an aspect, monitor it, make your change, assess it, reflect and then extend what you’ve learned to other aspects. I like this because, of course, it sounds a lot like a methodical scientific approach to me. Because it is. As to which assessment methods you should choose, the presenters recognised that working out how to make a positive change to your assessment can be hard so they suggested generating a set of alternative approaches and then picking one. They then introduced Prus and Johnson’s 1994 paper “A critical review of Student Assessment Options”, which provides twelve different assessment methods with their drawbacks and advantages. One of the best things about this paper is that there is no ‘must’ or ‘right’, there is always ‘plus’ and ‘minus’.

Want to mine archival data to look at student performance? As I’ve discussed before, archival data gives you detailed knowledge but at a time when it’s too late to do anything for that student or a particular cohort in that class. Archival data analysis is, however, a fantastic tool for checking to see if your prerequisites are set correctly. Does their grade in this course correlate with grades in the prereqs? Jeff mentioned a course where performance should have depended upon Physics and Maths but, while the students’ Physics marks correlated with their final Statics marks, Mathematics didn’t. (A study at Baldwin-Wallace presented at SIGCSE 2012 asked the more general question: what are the actual dependencies if we carry out a Bayesian Network Analysis? I’m still meaning to do this for our courses as well.)
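The correlation check described above is small enough to sketch. Everything here is invented for illustration – the marks, the course names and the cohort size are mine, not the workshop’s – but it shows the shape of the analysis: compute Pearson’s r between each prerequisite’s marks and the final course marks.

```python
# A sketch of the prerequisite sanity check: does performance in a
# prerequisite correlate with performance in the later course?
# All marks below are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length mark lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented marks for the same ten students in each course.
physics = [55, 60, 72, 48, 90, 65, 70, 81, 58, 77]
maths   = [70, 52, 88, 61, 59, 75, 66, 80, 73, 57]
statics = [58, 62, 75, 50, 88, 68, 71, 83, 60, 79]

print(f"Physics vs Statics: r = {pearson(physics, statics):.2f}")
print(f"Maths   vs Statics: r = {pearson(maths, statics):.2f}")
```

With data like this, a high r for Physics and a low one for Maths would echo Jeff’s observation: the stated prerequisite isn’t pulling its weight. (A full Bayesian Network Analysis, as in the Baldwin-Wallace study, goes further by modelling the dependencies between all courses at once.)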

Other approaches, such as Surveys, are quick and immediate but are all perceptual. Asking a student how they did on a quiz should never be used as their actual mark! The availability of time will change the methods you choose. If you have a really big group then you can statistically sample to get an indication but this starts to make your design and tolerance for possible error very important.
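On that last point, the standard worst-case sample-size formula gives a feel for how quickly the required sample grows as your tolerance for error shrinks. This is a generic statistical sketch, not something presented at the workshop.

```python
import math

def sample_size(margin, z=1.96, p=0.5):
    """Minimum sample size to estimate a proportion within +/- margin at
    roughly 95% confidence (z = 1.96), using the worst case p = 0.5.
    Infinite-population approximation: n = z^2 * p * (1 - p) / margin^2."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

for m in (0.10, 0.05, 0.02):
    print(f"±{m:.0%} margin of error → sample at least {sample_size(m)} students")
```

Halving the margin of error roughly quadruples the sample, which is exactly why your design and your tolerance for possible error matter so much once you start sampling instead of assessing everyone.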

Jeff stressed that, in all of this assessment, it was essential to never give students an opportunity to gain marks in areas that are not the core focus. (Regular readers know that this is one of my design and operational mantras, as it encourages bad behaviour, by which I mean incorrect optimisation.)

There were so many other things covered in this workshop and, sadly, we only had three hours. I suggested that the next time it was run that they allow more time because I believe I could happily have spent a day going through this. And I would still have had questions.

We discussed the issue of subjectivity and objectivity and the distinction between setting and assessment. Any way that I set a multiple choice quiz is going to be subjective, because I will choose the questions based on my perception of the course and assessment requirements, but it is scored completely objectively.

We also discussed data collection as well because there are so many options here. When will we collect the data? If we collect continuously, can we analyse and react continuously? What changes are we making in response? This is another important point:

If you collect data in order to determine which changes are to be made, tie your changes to your data driven reasons!

There’s little point in saying “We collected all student submission data for three years and then we went to multiple choice questions” unless you can provide a reason from the data, which will both validate your effort in collection and give you a better basis for change. When do I need data to see if someone is clearing the bar? If they’re not, what needs to be fixed? What do I, as a lecturer, need to collect during the process to see what needs to be fixed, rather than the data we collect at the end to determine if they’ve met the bar?

How do I, as a student, determine if I’m making progress along the way? Can I put all of the summative data onto one point? Can I evaluate everything on a two-hour final exam?

While I’m teaching the course, are the students making progress? Do they need something else? How do I (and should I) collect data throughout the course? A lot of what we actually collect is driven by the mechanisms that we already have. We need to work out what we actually require and this means that we may need to work beyond the systems that we have.

Again, a very enjoyable workshop! It’s always nice to be able to talk to people and get some really useful suggestions for improvement.


Let the Denial Begin

It is an awful fact that women are very underrepresented in my discipline, Computer Science, and as an aggregate across my faculty, which includes Engineering and Mathematics (so we’re the Technology, Engineering and Mathematics of STEM). I have heard almost every tired and discredited excuse for why this is the case but what has always angered me is the sheer weight of resistance to any research that (a) clearly demonstrates that bias exists to explain why this occurs, (b) identifies how performance can be manipulated through preconceptions and (c) requires people to consider that we are all more similar than current representation would indicate.

Yes, if I were to look around and say “Women are not going to graduate in large numbers because I see so few of them” then I would be accurate and yet, at the same time, completely missing the point. If I were to turn that around and ask “Why are so few women coming in to my degree?” then I have a useful question and, from various branches of research, the more rocks we turn over, the more we seem to find bias (conscious or otherwise) in both industry and academia that discourages women from participation in STEM.

A paper was recently published in the Proceedings of the National Academy of Sciences of the United States of America (PNAS, to its friends), entitled “Science faculty’s subtle gender biases favor male students”. (PNAS has an open access option but the key graphs and content are also covered in a Scientific American blog article.) The study was simple. Take a job application for a lab manager position. Assign a name where half of the names are a recognisably male name, the other half are female. (The names John and Jennifer were chosen for this purpose as they had been pre-tested to be equivalent in terms of likability and recognisability.) Get people to rate the application, including aspects like degree of mentoring offered and salary.

Let me summarise that: the name John or Jennifer is assigned to the same application materials. What we would expect, if there is no bias, is that we would see a similar ranking and equivalent salary offering. (All figures from the original paper, via the SciAm link.)

Oh. It appears that the mere presence of a woman’s name somehow altered reality so that an objective assessment of ability was warped through some sort of … I give up. Humour has escaped me. The name change has resulted in a systematic and significant downgrading of perceived ability. Let me get the next graph out of the way which is the salary offer.

And, equally mysteriously, having the name John is worth over $3,500 more than having the name Jennifer.

I should leap to note that it was both male and female scientists making this classification – which starts to lead us away from outright misogyny and towards ingrained and subtler prejudices. Did people resort to explicitly sexist reasoning to downgrade the candidates? No, they used sound reasoning to argue against the applicant’s competency. Except, of course, we draw back the curtain and suddenly reveal that our sound reasoning works one way when the applicant is a man, another if they are a woman.

Before you think “Oh, they must have targeted a given field, age group or gone after people who do or don’t have tenure”, the field, age and tenure status of the rating professors had no significant effect. This bias is pervasive among faculty, field, age, gender and status. The report also looked at mentoring and, regardless of the rater’s gender, they offered less mentoring to women.

Let’s be blunt. Study after study shows that if there are any gender differences at all, they are so small that they cannot even vaguely explain what we see in the representation of female students in certain fields, and they completely fail to explain women’s reduced progress in later life. However, the bias and stereotypes that people are operating under do not so much predict what will happen as shape what will happen. We are now aware of effects such as Stereotype Threat (Wiki link), where the framing of an important situation in someone’s life leaves members of a stereotyped group reinforcing the negative stereotype, because of higher anxiety relative to a non-stereotyped group. As an example, look at Osborne, Linking Stereotype Threat and Anxiety, where you can actually reduce the performance of girls on a maths test by reminding them that they are girls and that girls tend to do worse on tests than boys. Osborne then compared this with a group where the difference was identified but a far more positive statement was made (the participants were told that, despite the difference, there were situations where girls performed as well or better). The first scenario (girls do worse) was a high Stereotype Threat scenario (high ST), the second is low ST. Here’s the graph from Wikipedia, a redrawing of the one in the paper, that shows the results.

The effect of Stereotype Threat (ST) on math test scores for girls and boys. Data from Osborne (2007) (via Wikipedia)

That is the impact of an explicit stereotype in action – suddenly, when framed fairly and without an explicit stereotype or implicit bias, we see that people are far more similar than we thought. If anything, we have partially inverted the stereotype.

To return to my first paragraph, I said:

what has always angered me is the sheer weight of resistance to any research that (a) clearly demonstrates that bias exists to explain why this occurs, (b) identifies how performance can be manipulated through preconceptions and (c) requires people to consider that we are all more similar than current representation would indicate.

The PNAS paper, among others, clearly shows that the biases exist. A simple name change is enough, as long as it’s a woman’s name. The demonstrated existence of stereotype threat shows us how performance can be manipulated through preconception. (And it’s important to note that stereotype threat is as powerful against minorities as it is against women – anyone who is part of a stereotype can be manipulated through their own increased or reduced anxiety.) So let me finally discuss the consideration of all of this and the title of this post.

I am expecting to get at least one person howling me down. Someone who will tear apart all of this because this cannot, possibly, under any circumstances be true. Someone who will start talking about our “African ancestors” to start arguing the Savanna-distribution of roles, as if our hominid predecessors ever had to apply to be a lab manager anywhere. Most of you, I hope, will read this and know all of this far too well. Some of you will reflect on this and, like me, examine yourself very carefully to find out if you have been using this bias or if you have been framing things, while trying to help, in a way that really didn’t help at all.

Some of you, who are my students, will read this and will see that research that you have done is reflected in these figures. Yes, we treat women differently and we appear, in these circumstances, to treat them less well. This does not, under any circumstances, mean that we have to accept this or, in any way, respect this as an established tradition or a desirable status quo. But the detection of an insidious and pervasive bias, that spans a community, shows us how hard my point (c) actually is.

We must first accept that there is a problem. There is a problem. Denying it will achieve nothing. Arguing minutiae will achieve nothing. We have to change the way that we react and be honest with ourselves that, sometimes, our treasured objectivity is actually nothing of the kind.


The Future of the Text Book: A Printbook, an eText and a Custom walk into a bar.

Well, it’s still Banned Books Week so I thought I’d follow up on this and talk about text books. I’ve just come from a meeting with a Leading Publishing House (LPH) who, in this fine age of diversification, have made some serious moves into electronic publishing and learning systems. This really doesn’t identify any of the major players because they’re all doing it, we just happen to have a long term relationship with LPH. My students are not the largest purchasers of text books, a fact that LPH’s agent confirmed. While Engineers buy a lot of books, Computer Scientists tend not to buy many and will, maybe, buy one serious text if they think it will be of use to them.

It’s not hard to see why. Many programming language or application books are obsolete within weeks or months, sometimes even before they arrive, and when the books cost upwards of $100 – why buy them when you can download all of the documentation for free? Unlike Humanities, where core texts can remain the same from year to year, or Engineering and Physics, where the principles are effectively established, my discipline’s principles are generally taught by exposure to languages and contextualisation in programming. There are obvious exceptions. Bentley’s Programming Pearls, almost anything by Knuth and certain key texts on algorithms or principles (hello, Dragon Book!) all deal with fundamentals and the things that don’t change from year to year – however, this is not the majority of recommended texts in CS, which tend to head towards programming language guides and manuals. With very few exceptions, any book on a specific programming language has a shelf-life and, if we are updating the course to reflect new content, then we really shouldn’t be surprised if students don’t feel the need to keep buying the new book.

Ah, the Dragon Book.

In other disciplines, the real text book is still being sold extensively and, interestingly, in Australia the eBook is generally sold in a bundle with the real text, even when we know that the student has some form of eBook reader. The model appears to be “work at home from the book and have the eCopy for skimming at Uni”. Both of these forms are still the text book and, if we’re talking about the text book, it appears that if students see the need, they’ll buy it. However, the price is becoming more and more important. Is there a widespread model where students can only buy the chapters they need, much as you can buy individual songs from the iTunes Store, and wait until later to see if they want to buy the whole thing? Well, yes, but it’s not widespread in the text book world and, as far as LPH is concerned, it’s not something that they do. Yet.

What is interesting is the growing market in textbook mash-ups. It is now possible to pick a selection of chapters from a range of a publisher’s offerings, add some of your own content, get it checked for copyright issues and then *voila* you have your own custom printed book with only the chapters that you need. All thriller, no filler. Of course, any costs involved in this, especially costly copyright issues, get passed on to the people who buy it. (The students.) This, fairly obviously, restricts the mash-ups to easy-to-mash materials – books from a single publisher where the IP issues are sorted, open-source images and the like. One problem that surfaces occasionally is people who put their own work in to be included in such a custom run, only for it to turn out that some of the content is not actually original. This can be an oversight, and sometimes even a matter of inheritance. Suppose that Person A created a course from a text, then B inherited the course, made some changes, and continued to change it over the years. It’s a Ship of Theseus problem because the final work is the work of A and B but probably retains enough of the original text source to cause copyright issues when combined back into a new book. Copyright issues can often be overcome but it increases costs and, as stated, that costs the student more.

Given how expensive text books (still) are and that the custom market still operates at a high-ish price point, I’m still waiting for one of the LPHs to take the radical step of providing books at a price point that makes them effectively irresistible. Look at the Orange Penguin reprints (which I do often, because I own a million of them): they cost $10 (cheaper in a bundle) and you can pick them up anywhere. Yes, there is an amelioration of the editing costs because these are all reprints of previous versions. Yes, there are no cover art costs and they are using relatively mainline stock for the printing. But, hang on, isn’t this exactly what we can do in the custom sense, if we stick to jamming together existing chapters? Yet my early research indicates that there is no large market of custom textbooks that are anywhere near this cost.

I’m going to put up the naïve and relatively ignorant flags here as I’m sure that LPH actuaries have been all over this so, rather than say “Surely…” (and have to kick myself), let me make this a wish.

“I wish that I could assemble a useful book for my students from key chapters of available works and, with low presentation costs, get a book together for under $40 that really nailed the content required for a year level.”  I’d be even happier if that $40 was $20. Or even free. There are some seriously successful free text book initiatives but, as always, there is that spectre of reimbursement for the effort expended by the author. I’m certainly not advocating doing authors out of their entitlements but I am wondering how we can do that and, with minimal overhead, make all of these books as useful and widespread as they need to be. 

There are some books and sets of chapters that I’d love my students to have, while respecting the author’s right to receive their entitlements for the work and setting a fair price. To be honest, it really seems like I’m expecting too much. What do you think?


Banned Books Week: Time to Hit the Library!

It’s Banned Books Week until October the 6th so what better time to talk about the freedom to read and go off and subversively read some banned or challenged books? There’s a great link on the American Library Association’s site with the top 100 Banned/Challenged Books 2000-2009. Some of them are completely predictable and some of them are more surprising. The reasons given for withdrawing books are, in the words of the ALA site:

Books usually are challenged with the best intentions—to protect others, frequently children, from difficult ideas and information.

However, it is all too easy to see where such noble intentions have been subverted and politics or other overtones have come into play. Let’s look at the Top 10 from 1990-1999 as an example:

  1. Scary Stories (series), by Alvin Schwartz (7)
  2. Daddy’s Roommate, by Michael Willhoite (-)
  3. I Know Why the Caged Bird Sings, by Maya Angelou (6)
  4. The Chocolate War, by Robert Cormier (3)
  5. The Adventures of Huckleberry Finn, by Mark Twain (14)
  6. Of Mice and Men, by John Steinbeck (5)
  7. Forever, by Judy Blume (16)
  8. Bridge to Terabithia, by Katherine Paterson (28)
  9. Heather Has Two Mommies, by Leslea Newman (-)
  10. The Catcher in the Rye, by J.D. Salinger (19)

Book 1 is scary and has gruesome illustrations. Book 2 deals with homosexual parents. Book 3 contains a rape involving an eight-year-old girl. Book 4 is about bullying and also contains a masturbation scene. Book 5 is … book 5 is Huckleberry Finn!!! Of course, HF is probably in here because of the fairly extensive use of racial pejoratives and stereotypes, even if an argument can be made that the book itself is anti-racist. Book 6 is a magnificent book but, between the deaths and a dead puppy, it’s not exactly an easy book. Book 7 has teen sex in it but nowhere near the same tone or difficulty as some of the previous. Book 8 is a surprisingly depressing book that manages to balance a fantasy world with death and disappointment. Book 9, well, what a surprise, another book on homosexuality has made the list. Finally, we have Catcher, full of profanity and sexual depiction.

Looking at this list, we see sex, racism, homosexual relationships and death as the major themes. (Notably, to be banned for sex, depictions that range to the explicit are required for heterosexual activity, but the mere existence of the relationship can suffice for homosexual relationships.) Those numbers at the end are, by the way, where the books feature in the top 100 of 2000-2009. Let’s look at that list to see what appals, or is deemed too complicated for, children or library users in the first decade of the 21st century. The new entries are easy to spot:

1. Harry Potter (series), by J.K. Rowling
2. Alice series, by Phyllis Reynolds Naylor
3. The Chocolate War, by Robert Cormier
4. And Tango Makes Three, by Justin Richardson/Peter Parnell
5. Of Mice and Men, by John Steinbeck
6. I Know Why the Caged Bird Sings, by Maya Angelou
7. Scary Stories (series), by Alvin Schwartz
8. His Dark Materials (series), by Philip Pullman
9. ttyl; ttfn; l8r g8r (series), by Lauren Myracle
10. The Perks of Being a Wallflower, by Stephen Chbosky

I’ll come back to Harry Potter in a moment. Number 2, the Alice series, covers a wide range of topics, including our old friend sex, so it’s for the sexual content that it made the list – topping the list in 2003. Number 4 is a children’s book based on the observed behaviour of two male penguins who became a couple and raised a hatchling. (You can read about Roy and Silo here.) Number 8 has some nasty moments across the trilogy but, in the main, has drawn most of its criticism because of a negative portrayal of religion in general, and Christianity specifically. Number 9 is on the list because, from a Banned Books story in 2010, “Preoccupied with sex and college, the teen girls encounter realistic situations that feature foul language, drugs and alcohol in a less than casual way.” Finally, the new number 10 contains references to suicide and death, as well as the usual teen cocktail of drugs, alcohol and sex that guarantee requests for banning. Oh, and there’s also a gay friend and there is a reference to child molestation. But none of it is graphic and it’s written up as a series of letters to a friend.

So the themes are now sex, drugs, bad language, homosexual penguins (Penguin Lust!), discussion of real teenagers and… fantasy novels? Let me return to Harry Potter, which contains teens who are so heavily plasticised that they appear to have no real functioning genitalia, never smoke or take drugs, don’t swear seriously even when being threatened with death, and are laughably vanilla in so many ways that the dominant fantasy conceit of the HP universe is not the magic – it’s that teenagers would actually function this way! This, and the inclusion of His Dark Materials, shows the direction that book banning has taken over the last decade: removing a point of view for reasons that have little to do with protecting children from difficult ideas and information, and much to do with keeping them from ideas that some organised body has declared unacceptable.

I strongly suggest looking at both lists, side by side, so that you too can have the moments that I had of cocking your head to one side and thinking “why is that on there?” Then comes the slow and unpleasant realisation that the answer is not “because it’s too dark or encourages drug use” but “because of an organised campaign by a group who are trying to orchestrate the removal of a book that, ultimately, is a fairy tale and of no more harm to children than any other”.

There are sometimes good reasons to restrict access, by age or maturity, to certain materials and, definitely, there are lines that you can’t cross and expect to show up on the shelves of a public library – but this is a far cry from completely removing or destroying a work. What appears to be happening now is that political reasons for banning are starting to dominate, with the Internet and local organisation allowing a majority to form that can request a book’s withdrawal. Fortunately, the Internet can bring books to anyone but, with existing models, e-Books may not be as widely available as we often think, so the local and school library remains a valuable point of access for students. I read voraciously when I was younger and, despite reading many of the banned books on the lists, I don’t appear to have turned out too badly. (I know, I know, anecdotal evidence from a sample of one doesn’t count. But I can say that not everyone who reads The Chocolate War turns into a psychopath, so why is it always in the top 5? If anything, it made me aware that the adult advice on bullying was generally an empty mechanism that never dealt with the real problem: bullies are not always cowards, don’t fear the same type of repercussions and, sometimes, are in charge. I know – how subversive!)

Let me leave you with an example of how things have changed in the last two decades. One inclusion on the banned book list only showed up in the last decade, despite being published decades earlier, and it’s number 69 on the 2000-2009 list. I’m scared how high it will be driven in the 2010-2019 list and it is yet another example of why we have to be very careful about how we construct any list of books that we wish to treat differently. Or ‘sanction’. You might have heard of it.

It’s called Fahrenheit 451, by Ray Bradbury.


October Reflection: Planning for 2013

When I was younger, I used to play a science fiction role-playing game that was based in a near-ish future, where humans had widely adopted the use of electronic implants and computers were everywhere in a corporate-dominated world. The game was called “Cyberpunk 2013” and was heavily influenced by the work of William Gibson (“Neuromancer” and many other works), Bruce Sterling (“Mirrorshades” anthology and far too many to list), Walter Jon Williams (“Hardwired” among others) and many others who had written of a grim, depressing, and above all stylish near future. It was a product of the 80s and, much like other fashion crimes of the time, some of the ideas that emerged were conceits rather than concepts, styles rather than structures. But, of course, back in the 1980s, setting it in 2013 made it far away and yet close enough. This was not a far future setting like Star Trek but it was just around the corner.

The game had some serious issues but was a great deal of fun. Don’t start me talking about it or we’ll be here all night.

And now it is here. My plans for the near future, the imminent and the inevitable, now include planning calendars for a year that was once a science fiction dream. In that dark dream, 2013 was a world of human/machine synthesis, of unfeeling and mercenary corporate control, of mindless pleasure and stylish control of a population that seeks to float as lotus eaters rather than continue to exist in the dirty and poor reality of their actual world.

Well, we haven’t yet got the cybernetics working… and, joking aside, the future is not perfect but it is far less gloomy and dramatic, in the main, than the authors envisioned. Yes, there are lots of places to fix, but the majority of our culture is still working to the extent that it can be developed and bettered. The catastrophic failures and disasters of the world of 2013 have not yet occurred. We can’t relax, of course, and some things are looking bleak, but this is not the world of Night City.

In the middle of all of this musing on having caught up to the future that I envisioned as a boy, I am now faced with the mundane questions such as:

  • What do I want to be doing in 2020? (The next Cyberpunk release was set in this year, incidentally.)
  • Therefore, what do I want to be doing in 2013 that will lead me towards 2020?
  • What is the place of this blog in 2013?

I won’t bore you with the details of my career musings (if my boss is reading this, I’m planning to stay at work, okay?) but I had always planned that the beginning of October would be a good time to muse about the blog and work out what would happen once 2012 ended. I committed to writing the blog every day, focussed on learning and teaching to some extent, but it was always going to be for one year and then see what happened.

I encourage my students to reflect on what they’ve done, not in a ‘nostalgic’ manner (ah, what a great assignment) but in a way that lets them identify what worked, what didn’t work and how they could improve. So let me once again trot out the dog food and the can opener and give it a try.

What has worked

I think my blog has been most successful when I’ve had a single point to make, I’ve covered it in depth and then I’ve ducked out. Presenting it with humour, humility, and an accurate assessment of the time that people have to read makes it better. I think some of my best blogs present information and then let people make up their own minds. The goal was always to present my thought processes, not harangue people.

What hasn’t worked

I’m very prone to being opinionated and, sometimes, I think I’ve blogged too much opinion and too little fact. I also think that there are tangents I’ve taken, when I’ve become more editorial, and I’m not sure that this is the blog for that. Any blog over about 1,100 words is probably too long for people to read, which is why I strive to keep each one at or under 1,000 words.

Having to blog every day has also been a real challenge. While it keeps a flow of information going, the requirement to come up with something every, single, day regardless of how I’m feeling or what is going on is always going to have an impact on quality. For example, I recently had a medical condition that required my doctor to prescribe some serious anti-inflammatory drugs and painkillers for weeks and this had a severe impact on me. I have spent the last 10 days shaking off the effects of these drugs that, among other effects, make me about half as fast at writing and reduce my ability to concentrate. The load of the blog on top of this has been pretty severe and I’m open about some of the mistakes that I’ve made during this time. Today is the first day that I feel pretty reasonable and, by my own standards, fit for fair, complex marking of large student submissions (which is my true gauge of my mental agility).

How to improve

Wow, good question. This is where the thinking process starts, not stops, after such an inventory. The assessment above indicates that I am mostly happy with what came out (and my readership/like figures indicate this as well), but that I really want to focus on quality over quantity and give myself the ability to take a day off if I need to. But I should also be focused on solid, single-issue posts that address something useful and important in learning and teaching – and this requires more in-depth reading and work than I can often muster on a day-to-day basis.

In short, I’m looking to change my blog style for next year to something shorter and punchier that still delivers real depth, maintains an overall high standard, but allows me to get sick or put my feet up occasionally. What is the advice that I would give a student? Make a plan that includes space for the real world and that still allows you to do your best work. Content matters more than frequency, as long as you meet your real deadline. So, early notice for 2013: expect a little less regularity but a much more consistent output.

It’s a work in progress. More as I think of it.

 


Fragile Relationships: Networks and Support

I’ve been working with a large professional and technical network organisation for the past couple of days and, while I’m not going to go into too much detail, it’s an organisation that has been around for 28 years and, because of a major change in funding, is now having to look at what the future holds. What’s interesting about this organisation is that it doesn’t have a silo problem in terms of its membership across Australia and New Zealand, which makes it almost unique among technology networks in this neck of the woods. There’s no division between academic and professional staff; there are representatives from both. The same goes for tech and non-tech, traditional and new universities, big and small players. It’s a bizarrely egalitarian and functional organisation that has been developing for 28 pretty good years.

Now, for some quite understandable reasons, the original funds provider is withdrawing and we have to look at the future and decide what we’re going to do. I’ve been out talking to possible organisation sponsors or affiliates but, until we decide what form we’re going to take, I’m trying to sell a beast behind a curtain by offering a dowry. This is not a great foundation for a future direction. As it turns out, trying to find a parent organisation that will be a good host is challenging because there’s nothing quite like us in the region. So, we’re looking at other alternatives. I have, however, just moved on to the executive of the organisation to try and help steer it through the next couple of years and, with any luck, into a form that will be self-sustaining and continue to give the valuable contribution to the ANZ community that it has been making for so many years.

The problem is that it takes 28 years to produce a network this strong and, if we get it wrong, it cannot simply be rebuilt: relationships are inherently fragile, and the disintegration of a group is far easier (and requires zero effort) than its formation. I have one of those composite stone benches in my house and I often ponder the amount of work it took to produce that particular shape and get it up onto my bench top.

And how easily it could be broken, irrevocably, with one strike of a sledgehammer.

Knock knock!

(This is why my wife won’t let me use the sledgehammer to cook with.)

Human networks don’t need a sledgehammer strike to fall apart; they just need neglect. There are many examples of good low-cost networks that manage to keep people linked up, regardless of their level of resource. I often think of the computing education community in the US, made up of the regional committees and overarching groups like SIGCSE, where the regional groups provide sustenance and a focal point, with the large conference coming into town every so often to bring everyone together.

2012 is an interesting year in so many ways and, every time I turn around, there seems to be a new challenge, something to look at, something to review to see if it’s worth keeping and, in many cases, something new to steward or assist. But I suppose that it’s important to remember that all of these things take energy and, at some stage, I’m going to have to sit down and organise how all of these tasks will go together in a way that I can make this work effectively for 2013.


Our Obligations: Moral and Legal?

Mark Guzdial raises an interesting point over at a BLOG@CACM article: if we don’t keep up to date with contemporary practice in learning and teaching, can we be considered unprofessional, or even negligent or unethical? If we were surgeons who had not bothered to stay up to date, then our patients, and certifying bodies, would be rightly upset. If we are teachers – then what?

The other issue Mark discusses is that of the legal requirement. The US has Title IX, which should extend the same participation rights to all genders for any education program or activity that attracts federal funding. If we do not construct activities that are inclusive (or we design activities that, by their nature, are exclusive) would we be liable under US law?

Mark’s final question is: If we know a better way to teach computing, are we professionally (and even legally) required to use it?

That is a spectacularly good question and, of course, it has no easy answer. Let me extend the idea of the surgeon by building on the doctors’ credo: primum non nocere (first, do no harm). Ultimately, it requires us to accept that all of our actions have outcomes and, as in the case of medical intervention, that we must always weigh the harm that will be caused by an intervention before we make it.

Let us consider that there are two approaches that we could take in our pursuit of knowledge of learning and teaching: that of true scholarship of learning and teaching, and that of ignorance of new techniques of learning and teaching. (We’ll leave enthusiasm and ability to the side for the time being.) While this is a false dichotomy, we can fix it by defining scholarship as starting at ‘knowing that other techniques exist and change might not kill you’, with everything below that counting as ‘ignorance of new techniques’.

Now let us consider the impact of both of these bases, in terms of enthusiasm. If someone has any energy at all, then they will be able to apply techniques in the classroom. If they are more energetic then they will apply with more vigour and any effect will be amplified. If these are useful and evidentially supported techniques, then we would expect benefit. If these are folk pedagogies or traditions that have long been discredited then any vigour will be applied to an innately useless or destructive technique. In the case of an inert teacher, neither matters. It is obvious then that the minimum harm is to employ techniques that will reward vigour with sound outcomes: so we must either use validated techniques or explore new techniques that will work.

Now let us look at ability. If a teacher is ‘gifted’ (or profoundly experienced) then he or she will be more likely to carry the class, pretty much regardless. However, what if a teacher is not so much of a star? Then, in this case, we become dependent once again upon the strength of the underlying technique or pedagogy. Otherwise, we risk harming our students by applying bad technique without the ability to correct it. Again, do no harm requires us to provide techniques that will survive the average or worse-than-average teacher, which requires a consideration of load, development level, reliance upon authority and so on – for student and teacher.

I believe that this argues that, yes, we are professionally bound to confirm our techniques and approaches and, if a better approach is available, evaluate it and adopt it. To do anything else risks doing harm and we cannot do this and remain professional. We are intervening with our students all the time – if we didn’t feel that our approach had worth or would change lives then we wouldn’t be doing it. If intervention and guidance are at our core then we must adopt something like the first, do no harm maxim because it gives us a clear signpost on decisions that could affect a student for life.

One of the greatest problems we face is potentially that of people who are highly enthused and deeply undereducated in key areas of modern developments in teaching. As Kurt von Hammerstein-Equord would have said:

One must beware of anyone who is [undereducated] and [very enthusiastic] — [s/he] must not be entrusted with any responsibility because [s/he] will always cause only mischief.

If your best volunteer is also your worst nightmare, how do you resolve this, when doing so requires you to say “This is right but you are wrong”? Can you do so without causing enormous problems that may swamp the benefit of doing so?

What about the legal issues? Do we risk heading into the murky world of compliance if we add a legal layer – will an ethical argument be enough?

What do you think about it?


Enhancing the Reputation of Australian IT Research – by giving it away?

(Update: Gernot has responded to this blog and has found fault with both it and the original article. I have responded to him. You can read his article here and my comment below that, or just look at the comments on this post. Thanks again, Gernot, for the clarifications.)

(Update 2: Gernot has put a further discussion of the points raised both in the previous post and this one, which you can find here. In this one, Gernot clearly explains why approaches were taken the way they were, how NICTA is benefiting from the ongoing work (as are we) and further identifies that the original article didn’t manage to capture a lot of the detail of what had happened. My thanks again to Professor Heiser for taking the time to respond to this so thoroughly and so patiently!

As I noted on his blog post, the article took a tone that I responded to and, with additional information, I can clearly see both the benefit as expressed and the reasons behind such a decision. I have left this and the follow-up posts intact, with these updates, to show the evolution of the discussion. Please make sure that you read both Parts 1 and 2 of Gernot’s response if you’re going to read this!)

I stumbled across this article in the Australian (Australia’s national newspaper), inside their AustralianIT section. It announced that the Australian research body National ICT Australia had sold “groundbreaking technologies” to a US company: virtualisation security software that was used on 1.6 billion mobile devices worldwide. The spun-off company, Open Kernel (OK) Labs, was sold in its entirety and with no provision of royalties back to NICTA. Now, before we go any further, let’s talk about NICTA. NICTA is Australia’s Information and Communication Technology Research Centre of Excellence, employing about 700 people and funded by the Australian government. One of NICTA’s primary goals is to apply the high-impact research it develops to create national benefit and wealth for Australia. Remember this, it’s important.

Now let’s go back to the sale of OK Labs. If you read the article carefully, you’ll see that there is some serious non-discussion of how much money changed hands and whether the Australian government, or NICTA, would receive any payment at all from the sale. The former CTO and co-founder, Professor Gernot Heiser, has stated that, while he couldn’t reveal the cost of the technology, it represented about 25 person-years of development. He then goes on to point out that the original micro-kernel was open source and hence no royalties accrued, although they had received some payments for it. (In the past, I think?) The second kernel was developed after the original OK Labs had been spun off, with NICTA retaining a minority share; NICTA didn’t have any share or role in its development, hence it transferred wholesale to the new US owner and, again, no royalties. The third micro-kernel was a research outcome from NICTA but hadn’t been deployed commercially – this was moot, as OK Labs had received an exclusive licence to use it, then purchased it outright, with NICTA obtaining some equity (without cash changing hands) in OK Labs as a result.

Got that? Now let’s get to the profit sharing. Firstly, there has been no indication whether NICTA would receive any payment back from the sale to balance against the initial investment of taxpayer funds.

Hmm.

Any profit from the deal went to OK Labs investors initially, and “anything left” is distributed to shareholders, which included NICTA. (Remember that they traded valuable, NICTA-developed research for a greater stake of the pie, which will be valuable if “anything is left”.)

Hmmm.

Let me add the final paragraph of the article here, because I can’t do it any better justice:

Professor Heiser said professional bankers were engaged to make the sale “and they didn’t do it for free”. He said the sale of OK Labs enhanced the reputation of Australian IT research.

Financially, this is pretty much what has happened.

I can only hope that this is the worst-written, hatchet-job of an article because, otherwise, I’m flabbergasted. It appears that a government funded body has managed to develop and deploy a technology while systematically ensuring that any actual benefit from IP developed on these monies was distributed to everyone else before a single dollar flowed back in to turn over the research cycle once more. The investors are making money, NICTA traded some valuable IP for magic beans and may not get any money, the bankers are making money and, somehow, in the scope of this operatically complex financial dance, where the private benefit is enormous, Professor Heiser then turns around and sticks a public benefit statement on the end. We’ve enhanced the reputation of Australian IT research.

How does this … situation enhance anyone’s opinion of our research? Who is going to know in a year’s time where that research came from and why will they ever have to know?

The standard shining light in Australian IT from public funding is the CSIRO WiFi patent which is scheduled to attract royalty payments of roughly $1 billion over the next 5-10 years. This is the model that everyone explains to you when you first get into University research and, if you have anything commercialisable, expect a knock on the door from your local research innovation group because everyone wants another CSIRO patent. A billion dollars buys a lot of research.

I don’t know how you can possibly slice up 25 person-years of time and trade that for a peppercorn in potentia – with federal funding and the dominant position of NICTA on the Australian academic research scene – and call this enhancing the reputation of Australian IT research. Why, yes, I’m sure investors will want to come back: get us to pay for it, trade it away, sell it to them with no hope of recouping our investment and then not require royalties. I have no doubt that this may bring more investors, but in the same way that a wounded fish attracts sharks. The enhanced reputation of the fish is a fleeting experience and hardly enjoyable.

If Professor Heiser is reading this, then I welcome any clarification that he can make and, if the Australian has miscast this, then I welcome and will publish any supported correction. I sincerely hope that this is merely a miscommunication, because the alternative is really rather embarrassing for all concerned.