Being a Hypnoweasel and Why That’s a Bad Idea.

I greatly enjoy the television shows and, as it turns out, the writing of Derren Brown. Mr Brown is a successful conjurer, hypnotist and showman who performs stage magic and a range of deceits and experiments, including trying to turn a random member of the public into an assassin or convincing people that they committed a murder.

This is Derren hypnotising you into believing that this is the best post ever.

His combination of trickery, showmanship, claimed psychology/neurolinguistic programming and hypnotism makes for an interesting show – he has been guilty of overclaiming in earlier shows and, these days, focusses on the art of misdirection, with a healthy dose of human influence to tell interesting stories. I am reading his book “Tricks of the Mind” at the moment and the simple tricks he discusses are well informed by the anecdotes that accompany them. However, some of his experiments and discussions of the human aspects of wilful ignorance of probability and statistics are very interesting indeed and I use these as part of my teaching.

In “The System”, Derren shares his “100% successful horse race prediction system” with a member of the public. He also shows how, by force of will alone, he can flip a coin 10 times and have it come up heads – with no camera trickery. I first saw this on a rather dull plane flight and watched with interest as he did a number of things that, characteristically, showed you exactly what he was doing but cleverly indicated that he was doing something else – or let you believe that he was doing something else. “The System” is a great thing to show students because they have to consider what is and what isn’t possible at each stage and then decide how he did it, or how he could have done it. By combining his own skill at sleight of hand, his rather detailed knowledge of how people work and his excellent preparation, “The System” will leave a number of people wondering about the detail, like all good magic should.

The real reason that I am reading Derren at the moment, as well as watching him carefully, is that I am well aware how easy it is to influence people and, in teaching, I would rather not be using influence and stagecraft to manipulate my students’ memories of a teaching experience, even if I’m doing it unconsciously. Derren is, like all good magicians, very, very good at forcing cards onto people or creating situations where they think that they have carried out an act of their own free will, when really it is nothing of the kind. Derren’s productions and writings on creating false memory show how a combination of preparation, language and technique leads to outcomes where participants will swear blind that a certain event occurred when it most certainly did not. This is the flashy cousin of the respectable work on cognition and load thresholds (monkey business illusion, anyone?), but I find it a great way to step back critically and ask myself whether I have been using any of these techniques in showman-like manipulation of my students, making them think that knowledge has been transferred when, really, what they have is the memory of a good lecture experience.

This may seem both overly self-critical and not overly humble but I am quite a good showman and I am aware that my presentation can sometimes overcome the content. There is, after all, a great deal of difference between genuinely being able to manipulate time and space to move cards in a deck, and merely giving the illusion that one can. One of these is a miracle and the other is practice. Looking through the good work on cognitive load and transfer between memory systems, I can shape my learning and teaching design so that the content is covered thoroughly, linked properly and staged well. Reading and watching Derren, however, reminds me how much I could undo all of that good work by not thinking about how easy it is for humans to accept a strangely skewed, personal perspective of what has really happened. I could convince my students that they are learning when, in reality, they are confused and need more clarification. The good news is that, looking back, I’m pretty sure that I do prepare and construct in a way that lets me build upon something good, which is what I want to do, rather than providing an empty but convincing facade over the top of something that is not all that solid. Watching Derren, however, lets me think about the core difference between an enjoyable and valuable learning experience and misdirection.

There are many ways to fool people and these make for good television, but I want my students to be the kind of people who see through such enjoyable games and can quickly apply their properly developed knowledge and understanding of how things really work to determine what is actually happening. There’s an old saying, “Set a thief to catch a thief”, and, in this case, it takes a convincing showman/hypnotist to highlight the pitfalls that open up when you get a little too convincing in your delivery.

Deception is not the basis for good learning and teaching, no matter how noble an educator’s intent.


Planning Spontaneity

Whenever I teach an intensive mode class, as I’ve just finished doing, I have to face the fact that I just don’t have the same level of ‘slack’ time between classes that I’m used to. In a traditional model of 2-3 lectures a week, I usually get a break of up to 5 or so days between each teaching activity to make changes based on student feedback, to rehearse and to plan. When you’re teaching 3 hours on Friday, 6 hours on Saturday and 7 hours on Sunday, and adding a good hour of extra time per session for student questions, you have no slack.

I like to be able to take the class in a wide range of directions, where student questions and comments allow the exploration of the knowledge in a way that recognises how the students appear to be engaging with it. I still get all of the same content across (and it’s in the notes and probably podcasts as well) but we may meander a fair bit on our path through it. I learned, very early on, that being able to be spontaneous like this and still cover everything was not something that I could achieve without planning.

The New Caledonian Crow can spontaneously solve problems without planning. I’m guessing that this doesn’t include teaching Computer Networks to humans.

If I don’t ask students any questions, or they don’t ask me any, then I can predict how the lecture will roll out. I can also predict that most students will end up asleep and that they will learn very little. Not a good solution. I like to be able to try different things, other activities, focus on issues of direct concern to the students but, given that I have almost no reaction time in a tight teaching mode like this, how can I do it? Here are five things that I’ve found are useful. There are more but these are my top five and I hope that they’re helpful – there’s nothing really earth-shattering here but there’s a tweak here and there.

  1. Get any early indications you can of what students are interested in. Use this to identify areas that might get explored more.

    Have you set an assignment on ‘subject X’? Students will be interested in subject X. Has something been in the news? Is there something on the students’ minds? In the previous post, I used a question board to find out what each student was really curious about. A quick scan of that every now and then gave me an idea of what the students would talk about, ask questions about and care about. It also allowed me to do some quick looking up to confirm areas that weren’t in the traditional course but could add more interest.

  2. Review the course and know what can be dropped. Plan not to but have it as a safety valve.

    Ok, this is Teaching 101, but it’s essential in an intensive course. Once it’s over, you’ve missed the chance to add new content and you only have 2.5 days to get it across. I know which areas I can reduce depth on if we’ve gone deep elsewhere but, sometimes, I’m in a section where nothing can be dropped. Therefore I use that knowledge to say “I can’t drop anything” so I have to use a different strategy like…

  3. Have a really good idea of how long everything will take. Be prepared to hold to time if you have to.

    If you run 10 minutes over in a traditional lecture, the next lecturer will grumble, the students will grumble, but the end of the day resets the problem. Do that in intensive mode and you lose an hour for every six lectures. On the course I just did, you’d lose nearly three lectures (worst case). Yes, yes, we’re all rehearsing our content and re-reading it before we present, but intensive mode students have different demands, and may keep asking you questions because they know this is one of their few chances to talk to you face-to-face, which brings me to…

  4. Understand your students.

    My intensive students have full-time jobs when they’re not in my classroom. They’re so dedicated to their studies that they work 5 days, spend Friday night with me, work Saturday morning, and then spend Saturday and Sunday afternoon with me. What does this mean? It means that I can’t just run over on Friday night because I feel like it. I need to respect the demands on their time. Some of them might be late because of public transport or work running over and things like that – because we’re all jamming stuff in. I don’t condone students not caring about things but I do try to understand my students and respect the amount of effort they’re putting in. What else does this mean? I have to be very interesting and very clear in my explanations on Sunday, preferably with lots of interactive activities of one form or another, because everyone is really, really tired by then.

  5. Understand yourself.

    The whole reason I plan really carefully for these activities is that, by Sunday, I’m pretty tired myself. University courses do not usually run at this pace and this is not my usual approach. I’m rounding out a 10-day week at a pace that’s faster than usual. While I will still be quite happily able to teach, interact and work with my students, there’s no way that I’m going to be very creative. If I want to support interesting activities on Sunday, I need to plan them early and check their feasibility on Friday and Saturday. However, I plan with the assumption that they will go ahead.

This Sunday I ran a collaborative activity that I had planned earlier, foreshadowed to the students and used as a driver for thinking about certain parts of the course. It ran, and ran well, but there’s no way that I could have carried it out ‘off the cuff’ and everything good that happened on Sunday had been planned at least a few days in advance, with some of it planned weeks before.

I love being spontaneous in the classroom but it has taken me years to realise that the best opportunities for the kind of spontaneity that builds useful knowledge are almost always very carefully planned.

 


Wall of Questions – Simple Student Involvement

Teaching an intensive mode class can be challenging. Talking to anyone for 6 hours in a row (however you try to break it up) requires you to try to maintain engagement with the students, but the students have to want to become and stay engaged! We’re human, so we’re always more interested in things when they are relevant to our interests – the question now becomes “How can I make students care about what I’m teaching because it is relevant to them?”

I’ve learned a lot from looking at the great work coming out of CS Unplugged, so I decided to take a low-tech approach to getting the students involved in the knowledge construction in the course.

On the Friday night of teaching, I gave my students a simple homework question: “What is your big question about networking?” This could be technical, social or crystal-ball gazing. The next morning, I handed out some large sticky notes in a variety of garish colours and asked them to write their questions on the notes and stick them on the board. This is what it looked like this morning (after about 6 hours of teaching).

The Big Network Question Board

The blue, orange and pink rectangles are questions. The ones on the left are yet to be answered. The ones on the right have been answered. (The green post-its are 2D bit parity as an audience participation magic trick.)

I’ve been answering these questions as fill-ins, where I have gaps, but a lot of them address issues that I was planning to cover anyway. The range is, however, far wider than I would have come up with myself, and it’s given me a chance to address the applications and implications of networking and to directly answer questions that are of interest to the students.

Here are some (not verbatim) examples: What happened to the versions of the Internet Protocol that aren’t 4 or 6? What would happen if we had a human colony on Mars in terms of network implications? Was the IPv4 allocation ‘fair’ in terms of all countries? Could you run WiFi in the underground train network and, if so, what is the impact of the speed of the train? Will increased WiFi coverage give us cancer?

Every student has a question on the board and, now, every student is (at least to a slight degree) involved in the course. A lot of the questions that are left are security questions, and I’ll answer them as part of my security lectures this afternoon.

If you like this and want to try it, I’m not claiming any originality for this, but I can offer some suggestions:

  1. Give the students a little time to think about the question. It’s a good homework assignment.
  2. Get them to fill out the notes in class. As they finish their notes and pop them up to the board, it appears to encourage other people to finish their own notes to get them up. The notes are also shorter because the students want to get it done quickly.
  3. Once the notes are up, quickly review them to see how you can use them and where they fit into your teaching.
  4. When you can, group the notes by theme based on what you are teaching. I left them unordered for a while and I kept having to exhaustively search them, which is irritating.
  5. Be bold and prominent – the board is an eye-catcher and it clearly says “We have questions!” It’s also dynamic because I can easily rearrange it, move it or regroup the notes.

I’m still thinking about what to do with the notes next. I am planning to keep them but am unsure as to whether I want to ‘capture’ answers to them, as that may have a knock-on effect for the next offering of this course.

What pleased me most was seeing students recognise their own questions: their faces lit up as I spoke to their concerns. For a relatively low effort investment, that’s a great reward.

Could I have used an electronic forum? Yes, but then the focus isn’t in the classroom. The board, and your question, are in the classroom. You can go up and look at anyone else’s to see if it’s interesting. Rather than taking the application focus out of the classroom, we’re bringing in the realities and the answers as I go through the teaching.

Is there a risk that they’ll ask something I don’t know? No more than usual, and now I can sneak off and look it up before I answer, because it’s on the board. Being an honest man, I would of course have to say “I had to look this up” but I did warn them that this might happen. If a student can ask a question that has me scratching my head but I can develop an answer, I think that’s a very valuable example and it’s probably a nice moment for the student too.

I’ll certainly be doing this again!

 


Road to Intensive Teaching: Post 1

I’m back on the road for intensive teaching mode again and, as always, the challenge lies in delivering 16 hours of content in a way that will stick and that will allow the students to develop and apply their understanding of the core knowledge. Make no mistake, these are keen students who have committed to being here, but it’s both warm and humid where I am and, after a long weekend of working, we’re all going to be a bit punch-drunk by Sunday.

That’s why there is going to be a heap of collaborative working, questioning, voting, discussion. That’s why there are going to be collaborative discussions of connecting machines and security. Computer Networking is a strange beast at the best of times because it’s often presented as a set of competing models and protocols, with very few actual axioms beyond “never early adopt anything because of a vendor promise” and “the only way to merge two standards is by developing another standard. Now you have three standards.”

There is a lot of serious Computer Science lurking in networking. Algorithmic efficiency is regularly considered in things like routing convergence and the nature of distributed routing protocols. Proofs of correctness abound (or at least are known about) in a variety of protocols that, every day, keep the Internet humming despite all of the dumb things that humans do. It’s good that it keeps going because the Internet is important. You, as a connected being, are probably smarter than you, disconnected. A greater reach for your connectivity is almost always a good thing. (Nyancat and hate groups notwithstanding. Libraries have always contained strange and unpleasant things.)

“If I have seen further, it is by standing on the shoulders of giants” (Newton, quoting Bernard of Chartres) – the Internet brings the giants to you at a speed and a range that dwarfs anything we have achieved previously in terms of knowledge sharing. It’s not just about the connections, of course, because we are also interested in how we connect, to whom we connect and who can read what we’re sharing.

There’s a vast amount of effort going into making the networks more secure and, before you think “Great, encrypted cat pictures”, let me reassure you that every single thing that comes out of your computer could, right now, be secretly and invisibly rerouted to a malicious third party and you would never, ever know unless you were keeping a really close eye (including historical records) on your connection latency. I have colleagues who are striving to make sure that we have security protocols that will make it harder for any country to accidentally divert all of the world’s traffic through itself. That will stop one typing error on a line somewhere from bringing down the US network.

“The network” is amazing. It’s empowering. It is changing the way that people think and live, mostly for the better in my opinion. It is harder to ignore the rest of the world or the people who are not like you, when you can see them, talk to them and hear their stories all day, every day. The Internet is a small but exploding universe of the products of people and, increasingly, the products of the products of people.

This is one of the representations of what the Internet looks like, graphically.

Computer Networking is really, really important for us in the 21st Century. Regrettably, the basics can be a bit dull, which is why I’m looking to restructure this course around interesting problems, which drive the need for comprehensive solutions. In the classroom, we talk about protocols and can experiment with them, but even when we have full labs to practise this, we don’t see the cosmos above, we see the reality below.

Maybe a green light will come on!

Nobody is interested in the compaction issues of mud until they need to build a bridge or a road. That’s actually very sensible because we can’t know everything – even Sherlock Holmes had his blind spots because he had to focus on what he considered to be important. If I give the students good reasons, a grand framing, a grand challenge if you will, then all of the clicking, prodding, thinking and protocol examination suddenly has a purpose. If I get it really right, then I’ll have difficulty getting them out of the classroom on Sunday afternoon.

Fingers crossed!

(Who am I kidding? My fingers have an in-built crossover!)


A Late Post On Deadlines, Amusingly Enough

I’m still under a big cloud at the moment but I’m teaching in Singapore on the weekend, so I’m typing this at the airport. All of my careful plans to have items in the queue have been undermined by a sufficiently protracted spell of illness (to be precise, I’m working at about half speed due to migraine or migraine-level painkillers). I have very good parts of the day where I teach and carry out all of the face-to-face things I need to do, but it drains me terribly and leaves me with no ‘extra’ time, and it was the extra time I was using to do this. I’m confident that I will teach well over this weekend, I wouldn’t be going otherwise, but it will be a blur in the hotel room outside of those teaching hours.

This brings me back to the subject of deadlines. I’ve now been talking about my time banking and elastic time management ideas to a lot of people and I’ve got quite polished in my responses to the same set of questions. Let me distill them for you, as they have relevance to where I am at the moment:

  1. Not all deadlines can be made flexible.

    I completely agree. We have to grant degrees, finalise resource allocations and so on. Banking time is about teaching time management and the deadline is the obvious focal point, but some deadlines cannot be missed. This leads me to…

  2. We have deadlines in industry that are fixed! Immutable! Miss it and you miss out! Why should I grant students flexible deadlines?

    Because not all of your deadlines are immutable, in the same way that not all are flexible. The serious high-level government grants? The once in a lifetime opportunities to sell product X to company YYPL? Yes, they’re fixed. But to meet these fixed deadlines, we move those other deadlines that we can. We shift off other things. We work weekends. We stay up late. We delay reading something. When we learn how to manage our deadlines so that we can make time for those that are both important and immovable, we do so by managing our resources to shift other deadlines around.

    Elastic time management recognises that life is full of management decisions, not mindless compliance. Pretending that some tiny assignment of pre-packaged questions we’ve been using for 10 years is the most important thing in an 18 year old’s life is not really very honest. But we do know that the students will do things if they are important and we provide enough information that they realise this!

I have had to shift a lot of deadlines to make sure that I am ready to teach for this weekend. On top of that I’ve been writing a paper that is due on the 17th of November, as well as working on many other things. How did I manage this? I quickly looked across my existing resources (and remember I’m at half-speed, so I’ve had to schedule half my usual load) and broke things down into: things that had to happen before this teaching trip, and things that could happen after. I then looked at the first list and did some serious re-arrangement. Let’s look at some of these individually.

Blog posts, which are usually prepared 1-2 days in advance, are now written on the day. My commitment to my blog is important. I think it is valuable but, and this is key, no-one else depends upon it. The blog is now allocated after everything else, which is why I had my lunch before writing this. I will still meet my requirement to post every day but it may show up some hours after my usual slot.

I haven’t been sleeping enough, which is one of the reasons that I’m in such a bad way at the moment. All of my deadlines now have to work around me getting into bed by 10pm and not getting out before 6:15am. I cannot lose any more efficiency so I have to commit serious time to rest. I have also built in some sitting around time to make sure that I’m getting some mental relaxation.

I’ve cut down my meeting allocations to 30 minutes, where possible, and combined them where I can. I’ve said ‘no’ to some meetings to allow me time to do the important ones.

I’ve pushed off certain organisational problems by doing a small amount now and then handing them to someone to look after while I’m in Singapore. I’ve sketched out key plans that I need to look at and started discussions that will carry on over the next few days but show progress is being made.

I’ve printed out some key reading for plane trips, hotel sitting and the waiting time in airports.

Finally, I’ve allocated a lot of time to get ready for teaching and I have an entire day of focus, testing and preparation on top of all of the other preparation I’ve done.

What has happened to all of the deadlines in my life? Those that couldn’t be moved, or shouldn’t be moved, have stayed where they are and the rest have all been shifted around, with the active involvement of other participants, to allow me room to do this. That is what happens in the world. Very few people have a world that is all fixed deadline and, if they do, it’s often at the expense of the invisible deadlines in their family space and real life.

I did not learn how to do this by somebody insisting that everything was equally important and that all of their work requirements trumped my life. I am learning to manage my time maturely by thinking about my time as a whole, by thinking about all of my commitments and then working out how to do it all, and to do it well. I think it’s fair to say that I learned nothing about time management from the way that my assignments were given to me but I did learn a great deal from people who talked to me about their processes, how they managed it all and through an acceptance of this as a complex problem that can be dealt with, with practice and thought.

 


Another semester over – what have I learned?

Monday the 29th marks the end of official teaching activities, barring the exam and associated marking, for my grand challenges in Computer Science course. It’s been a very busy time and I’ve worked very hard on it but my students have worked even harder. Their final projects are certainly up where I wanted them to be and I believe that the majority of the course has gone well.

However, I’m running some feedback activities this week and I’ll find out how I can make it better for next year. At this stage it looks like we’re going to have a reasonably large group for next year’s intake – somewhere in the region of 10-20 – and this is going to change how I run the course. Certain things just won’t work at that scale unless I start to take better advantage of group structure. I’ve already learnt a lot about how hard it is to connect students and data and, in our last meeting, I commented that I was thinking about making more data available in advance. “Well, maybe” was the reply from the students – but we learned so much about how data in the world is actually stored and treated.

Hmm. Back to the drawing board maybe – but also I’m going to wait for all of the final feedback.

Do I have students who I would happily put out in front of a class to run it for a while, doubly so for a community involvement project, with the confidence that they’ll communicate confidently, competently and with passion? Well, yes, actually – although there’ll be a small range. (And now I’ve just made at least three people paranoid – that’s what you get for reading my blog.)

There is so much going on that the next two months are going to be pretty frantic. Next year is already shaping up to be a real make-or-break year for my career and that means I need to sit down with a list of things that I want to achieve and a list of things that I am and am not prepared to do in order to achieve things. The achievement list is going to be a while coming, as goal lists always are, but the will/won’t/want list is forming. Here’s a rough draft.

  1. I still want to teach and be pretty involved in teaching. That’s easy, as I’m not senior or research-loaded enough to get out of teaching. (I don’t really have a choice.)
  2. I need to have more time to work on my non-work projects. I’ve just spent all of a Sunday working and the only reason I stopped was that I couldn’t spell constructivist reliably any more. (Yes, that just took three tries.)
  3. I want to have enough time to spend time with my students and not look rushed or feel guilty about the time.
  4. I want to have the time to be able to help out any colleagues who could use my assistance AND I want to have the time to be able to seek help from my colleagues!
  5. I don’t want to take on anything that I have to give up on, or push to the sidelines for next year.

So, obviously, it all boils down to time, planning and allocation of priorities. With that in mind, I’ll wish you a happy Monday or good weekend. I’m going to have some dinner.


I am a potato – heading towards caramelisation. (Programming Language Threshold Concepts Part II)

Following up on yesterday’s discussion of some of the chapters in “Threshold Concepts Within the Disciplines”, I finished by talking about Flanagan and Smith’s thoughts on the linguistic issues in learning computer programming. This led me to the theory of markedness, a useful way to think about some of the syntactic structures that we see in computer programs. Let me introduce the concept of markedness with an example. Consider the pair of opposing concepts big/small. If you ask how ‘big’ something is, then you’re not actually assuming that the thing you’re asking about is ‘big’, you’re asking about its size. However, ask someone how ‘small’ something is and there’s a presumption that it’s actually small (most of the time). The same thing happens for old/young. Asking someone how old they are, bad jokes aside, is not implying that they are old – the word “old” here is standing in for the concept of age. This is an example of markedness in the relationship between lexical opposites: the assumed meaning (the default) is referred to as the unmarked form, while the marked form is more restrictive (in that it doesn’t subsume both concepts) and is generally not the default. You see this in gender and plural forms too. In Lions/Lionesses, Lions is the unmarked form because it’s the default and it doesn’t exclude the Lionesses, whereas Lionesses would not be the general form used (for whatever reasons, good or bad) and excludes the male lions.

Why is this important for programming languages? Because we often have syntactic elements (the structures and the tokens that we type) that take the form of opposing concepts where one is the default, and hence unmarked, form. Many modern languages employ object-oriented programming practices (itself a threshold concept) that allow programmers to specify how the data that they define inside their programs is going to be used, even within that program. These practices include the ability to set access controls that strictly define how you can use your code, how other pieces of code that you write can use your code, and how other people’s code can use it as well. The fundamental access control pair is public and private: one says that anyone can use this piece of code to calculate things or can change this value, the other restricts such use or change to the owner. In the Java programming language, public dominates, by far, and can be considered unmarked. Private, however, changes the way that you can work with your own code and it’s easy for students to get this wrong. (To make it more confusing, there is another type of access control that sits effectively between public and private, which is an even more cognitively complex concept and is probably the least well understood of the lot!) One of the issues with any programming language is that deviating from the default requires you to understand what you are doing, because you are having to type more, think more and understand more of the implications of your actions.
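
As a minimal sketch of how the marked and unmarked forms look in Java (the Account class and its members are invented purely for illustration, not drawn from any course material):

public class Account {
    public int accountNumber;            // the common, effectively unmarked choice: any code can read or change this
    private int balance;                 // marked: only code inside Account can touch the balance

    public void deposit(int amount) {    // public behaviour offered to everyone
        balance = balance + amount;      // code inside the class can still use its own private field
    }

    protected void audit() {             // the 'in-between' modifier: subclasses (and the same package) only
        System.out.println("Balance is " + balance);
    }
}

The private and protected lines are the ones that students have to stop and think about; the public ones read as business as usual.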

However, it gets harder, because we sometimes have marked/unmarked pairs where the unmarked element is completely invisible. If we didn’t have the need to describe how people could use our code then we wouldn’t need the access modifiers – the absence of public, private or protected wouldn’t signify anything. There are some implicit modes of operation in programming languages that can be overridden with keywords, but the introduction of these keywords doesn’t just illustrate a positive/negative asymmetry (as with big/small or private/public); it illustrates an asymmetry between “something” and “nothing”. Now, the presence of a specific and marked keyword makes it glaringly obvious that there has been an invisible assumption sitting in that spot the whole time.

One of these troublesome word/nothing pairs is found in several languages and consists of the keyword static, with no matching keyword. What do you think the opposite (and pair) of static is? If you’re like most humans, you’d think dynamic. However, not only is this not what this keyword actually means but there is no dynamic keyword that balances it. Let’s look at this in Java:

public static void main(String [] args) {...}
public static int numberOfObjects(int theFirst) {...}
public int getValues() {...}

You’ll see that static keyword twice. Where static isn’t used, however, there’s nothing at all, and this (by its absence) also has a definite meaning: it defines what the default expectation of behaviour is in the Java programming language. From a teaching perspective, this means that we now have a default context, with a separation between those tokens and concepts that are marked and unmarked, and it becomes easier to see why students will struggle with instance methods and fields (which is what we call things without static) if we start with static, and struggle with the concept of static if we start the other way around! What further complicates this is that every single program we write must contain at least one static method, because it is the starting point for the program’s execution. Even if you don’t want to talk about static yet, you must use it anyway (unless you want to provide the students with some skeleton code or a harness that removes this – but now we’ve put the wizard behind the curtain even more).
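
To make the “something versus nothing” pair concrete, here is a small sketch of my own (the Counter class is invented for illustration, not taken from Flanagan and Smith):

public class Counter {
    private static int totalCounters = 0;    // static: one value shared across the whole class
    private int count = 0;                   // no keyword at all: each Counter object gets its own copy

    public Counter() {
        totalCounters = totalCounters + 1;   // every new object updates the shared value
    }

    public void increment() {                // an instance method: it needs a particular Counter to act on
        count = count + 1;
    }

    public static void main(String[] args) { // main must be static: it runs before any object exists
        Counter a = new Counter();
        Counter b = new Counter();
        a.increment();
        System.out.println(a.count + " " + b.count + " " + Counter.totalCounters); // prints: 1 0 2
    }
}

Whichever of the two you teach first, the other one looks like the strange case, which is exactly the ordering problem described above.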

One other point I found very interesting in Flanagan and Smith’s chapter was the discussion of barriers and traps in programming languages, from Thimbleby’s critique of Java (1999). Barriers are the limitations on expressiveness that mean that what you want to say in a programming language can only be said in a certain way or in a certain place – which limits how we can explain the language and therefore affects learnability. As students tend to write their lines of code as and when they think of them, at least initially, these barriers will lead the students to make errors because they haven’t developed the locally valid computational idiom. I could ask for food in German as “please two pieces ham thick tasty” and, while I’ll get some looks, I’ll also get ham. Students hitting a barrier get confusing error messages that are given back to them at a time when they barely have enough framework to understand what these messages mean, let alone how to fix them. No ham for them!

THIS IS AN IMPORTANT QUESTION!

Traps are unknown and unexpected problems, such as those caused by not using the right way to compare two things in a program. In short, it is possible in many programming languages to ask “does this equal that” and return an answer of true or false that does not depend upon the values of this or that, but where they are being stored in memory. This is a trap. It is confusing for the novice to try to work out why the program is telling her that two containers that have the value “3” in them are not the same because they are duplicates rather than aliases for the same entity. These traps can seriously trip someone up as they attempt to form a correct mental model and, in the worst case, can lead to magical or cargo-cult thinking once again. (This is not helped by languages that, despite saying that they will take such-and-such an action, take actions that further undermine consistent mental models without being obvious about it. Sekrit Java String munging, I’m looking at you.)
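
The == versus equals() comparison in Java is one concrete instance of this kind of trap; here is a minimal sketch (the class and variable names are my own invention):

public class EqualityTrap {
    public static void main(String[] args) {
        String first = new String("3");    // two separate containers...
        String second = new String("3");   // ...each holding the value "3"

        System.out.println(first == second);      // false: asks "are these the same container in memory?"
        System.out.println(first.equals(second)); // true: asks "do the containers hold the same value?"
    }
}

The novice’s mental model says these are obviously equal; the language answers a different question entirely unless she knows to ask it with equals().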

This way of thinking about languages is of great interest to me because, instead of talking about usability in an abstract sense, we are now discussing concrete benefits and deficiencies in the language. Is it heavily restrictive on what goes where, such as Pascal’s pre-declaration of variables or Java’s package import restrictions? Does the language have a large number of unbalanced marked/unmarked pairs where one of them is invisible and possibly counterintuitive, such as static? Is it easy to turn a simple English statement into a programmatic equivalent that does not do what was expected?

The authors suggested ways of dealing with this, including teaching students about formal grammars for programming languages – effectively treating this as learning a new language, because the grammar, syntax and semantics are very, very different from English. (Suggestions included Wittgenstein’s Sprachspiel, the language game, which will be a post for another time.) Another approach is to start from logic and then work forwards, turning this into forms that will then match the programming languages and giving us a Rosetta stone between English speakers and program speakers.

I have found the whole book very interesting so far and, obviously, so too this chapter. Identifying the problems and their locations, regrettably, is only the starting point. Now I have to think about ways to overcome this, building on what these and other authors have already written.


A Difficult Argument: Can We Accept “Academic Freedom” In Defence of Poor Teaching?

Let me frame this very carefully, because I realise that I am on very, very volatile ground with any discussion that raises the spectre of a right or a wrong way of teaching. The educational literature is equally careful about this and, very sensibly, you read about rates of transfer, load issues, qualitative aspects and quantitative outcomes, without any hard and fast statements such as “You must never lecture again!” or “You must use formative assessment or bees will consume your people!”

Not even your marching bands will be safe!

I am aware, however, that we are seeing a split between those people who accept that educational research has something to tell them, which may possibly override personal experience or industry requirement, and those who don’t. But, and let me tread very carefully indeed, while those of us who accept that the traditional lecture is not always the right approach realise that the odd lecture (or even an entire course of lectures) won’t hurt our students, there is a far more damaging and fundamental disagreement.

Does education transform in the majority of cases or are most students ‘set’ by the time that they come to us?

This is a key question because it affects how we deal with our students. If there are ‘good’ and ‘bad’ students, ‘smart’ and ‘dumb’ or ‘hardworking’ and ‘lazy’, and this is an immutable characteristic, then a lot of what we are doing in order to engage students, to assist them in constructing knowledge and to place them into collaborative environments, is a waste of their time. They will either get it (if they’re smart and hardworking) or they won’t. Putting a brick next to a bee doesn’t double your honey-making capacity or your ability to build houses. Except, of course, that students are not bees or bricks. In fact, there appears to be a vast amount of evidence that says that such collaborative activities, if set up correctly in accordance with the established work in social constructivism and cognitive apprenticeship, will actually have the desired effect and you will see positive transformations in students who take part.

However, there are still many activities and teachers who continue to treat students as if they are always going to be bricks or bees. Why does this matter? Let me digress for a moment.

I don’t care whether vampires, werewolves or zombies actually exist and, for the majority of my life, it is unlikely to make any difference to me. However, if someone else is convinced that she is a vampire and she attacks me and drains my blood, I am just as dead as if she were not a vampire – of course, I now will not rise from the dead, but this is of little import to me. What matters is the impact upon me of someone else’s practice of their beliefs.

If someone strongly believes that students are either ‘smart enough’ to take their courses or not, doesn’t care who fails or how many, and believes that it is purely the role of the student to have or to spontaneously develop this characteristic, then their influence is likely to be strong enough to have a negative impact on at least some students. We know about stereotype threat. We’re aware of inherent bias. In this case, we’re no longer talking about right or wrong teaching (thank goodness), we’re talking about a fundamentally self-fulfilling prophecy as a teaching philosophy. This will have as great an impact on those who fail or withdraw as the transformation pathway does on those who become better students and develop.

It is, I believe, almost never about the bright light of our most stellar successes. Perhaps we should always be held to answer (or at least explain) for the number and nature of those who fall away. I have been looking for statements of student rights across Australia and the Higher Education sites all seem to talk about ‘fair assessment’ and ‘right of appeal’, as well as all of the student responsibilities. The ACARA (Australian Curriculum and Reporting Authority) website talks a lot about opportunities and student needs in schools. What I haven’t yet found is something that I would like to see, along these lines:

“Education is transformational. Students are entitled to be assessed on their own performance, in the context of their opportunities.”

Curve grading, which I’ve discussed before, immediately forces a false division of students into good and bad, merely by ‘better’ students existing. It is hard to think of something that is fundamentally less fair or appropriate to the task if we accept that our goal is improvement to a higher standard, regardless of where people start. In a curve graded system, the ‘best’ person can coast because all they have to do is stay one step ahead of their competition and natural alignment and inflation will do the rest. This is not the motivational framework that we wish to establish, especially when the lowest realise that all is lost.

I am a long distance runner and my performances will never set the world on fire. To come first in a race, I would have to be in a small race with very unfit people. But no-one can take away my actual times for my marathons and it is those times that have been used to allow me to enter other events. You see this in the Olympics, too: qualifying times are used because relative performance does not actually establish any set level of quality. The final race? Yes, we’ve established competitiveness and ranking becomes more important – but then again, entering the final of an Olympic race is an Olympian achievement. Let’s not quibble on this, because this is the equivalent of Nobel and Turing awards.

And here is the problem again. If I believe that education is transformative and set up all of my classes with collaborative work, intrinsic motivation and activities to develop self-regulation, then that’s great but what if it’s in third-year? If the ‘students were too dumb to get it’ people stand between me and my students for the first two years then I will have lost a great number of possibly good students by this stage – not to mention the fact that the ones who get through may need some serious de-programming.

Is it an acceptable excuse that another academic should be free to do what they want, if what they want to do is having an excluding and detrimental effect on students? Can we accept that, if it means that we have to swallow that philosophy? If I do, does it make me complicit? I would like nothing more than to let people do what they want – hey, I like that as much as the next person – but, thinking about the effect of some of the decisions being made, is the notion of personal freedom, in what is ultimately a public service role, still a sufficiently good argument for not changing practice?


Recursive Tutorial: A tutorial on writing a tutorial

I assigned the Grand Challenge students a slightly strange problem for yesterday’s tutorial: “How would you write an R tutorial for Year 11 High School Students?” R is an open source statistics package that is incredibly powerful and versatile but it is nowhere near as friendly to use or accessible as traditional GUI tools such as Microsoft Excel. R has some menus and buttons on it but most of these are used to control the environment, rather than applying the statistical and mathematical functions. R Studio is an associated Integrated Development Environment (IDE) that makes working with R easier but, at its core, R relies upon you knowing enough R to type the right commands.

Discussing this with students, we compared Excel and R to find out what the core differences were and some of them are not important early on but become more important later. Excel, for example, allows you to quickly paste and move around data, apply some functions, draw some graphs and come to a result quickly, mostly by pushing buttons and using on-line help with a little typing. But, and it’s an important but, unless you write a program in Excel (and not that many people do), re-applying all of that manipulation to a new data source requires you to click and push and move across the screen all over again. You have to recreate a long and complicated combination of mechanical and cognitive functions. R, by contrast, requires you to type commands to get things to happen but it remembers them by default and you can easily extract them. Because of how R works, you drag in data (from a file, say) and then execute a set of manipulation steps. If you’re familiar with R then this is straight-forward. If not, then steep learning curve. However, re-using these instructions and manipulations on a new data source is trivial. You change the file and re-run all of the steps.

Why am I talking about new data sources? Because it’s often the case that you want to do the same thing with new data OR you realise that the data you were working with was incomplete or in error. Unless you write a lot of Visual Basic in Excel (and that no longer works on Macs, so it’s not a transferable option), your Excel spreadsheet with changed data requires you to potentially reapply or check the application of everything in the spreadsheet, especially if there is any sorting of data, creation of new columns or summary data – and let’s not even start talking about pivot tables! But, for a single run, for finance, for counting stuff, Excel is almost always going to be easier to teach people to use than R. For scientists, however, R is better to use for two very important reasons: it is less likely to do something that is irreversible to your data and the vast majority of its default choices are sensible.

The students came up with a list of things that Excel does (good and bad): it’s strongly visual, lay-user friendly, tells you what you can do, does what it damn well wants to, and data changes may require manual reapplication. There’s a corresponding list for R: steep learning curve, visual display for the R environment but a command-line interface for commands, does what you tell it to do (except when it’s too smart). I surveyed the class to find out who was using R rather than Excel and the majority of students were using R for their analysis but, and again it’s an important but, only because they had to. In situations where Excel was enough (simple manipulation, straightforward analysis), Excel got used, because Excel is far easier to use and far friendlier.

The big question for the students was “How do I start doing something?” In Excel, you type numbers into the spreadsheet and then can just start selecting things using a relatively good on-line help system. In R you are faced with a blinking prompt and you have to know enough to type streams of commands like this:

newtab <- read.csv("~/days.txt", header=FALSE)  # read the data file into a data frame
plot(seq(1, nrow(newtab)), newtab$V1)           # plot the first column against its row number
boxplot(newtab)                                 # box plot of the data
abline(a=1500, b=0)                             # add a horizontal reference line at 1500
mean(newtab$V1)                                 # mean of the first column

And, with a whole set of other commands, you can get graphs like this. (I realise that this is not a box plot!)

Once you’re used to it, this is meaningful, powerful and re-applicable. I can update the data and re-run this to my heart’s content, analysing vast quantities of data without having to keep mouse clicking into cells. But let’s remember our context. I’m not talking about higher education students, I’m talking about school students and it’s important to remember that teaching people something before they’re ready to use it or before they have an opportunity to use it is potentially not the best use of effort.

My students pointed out that the school students of today are all learning how to use graphing calculators, with giant user manuals, and (in some cases) the students switch on their calculators to see a menu rather than the traditional calculator single line. But the syntax and input modes for calculators vary widely. Some use ( ) for operations like sin, so a student will see sin(30) when they start doing trig, whereas some don’t. This means that some of the students I might want to teach R to have not necessarily got their head around the fact that functions exist, except as something that Excel requires them to do. Let’s go to the why here, because it’s important. Why are students learning how to use these graphing calculators? So they can pass their exams, where the competent and efficient use of these things will help them. Yes, it appears that students may be carrying out the kind of operations I would like them to put into a more powerful tool, but why should they?

If I teach a high school student about Excel then there are many places where they might use this kind of software: micro-budgeting, keeping track of things, the ‘simple’ approximation of a database storing books or things like that. However, the general practice of using Excel is familiarisation with a GUI interface that is very, very common and that most students need experience with. If I teach them R then I might be extending their knowledge but (a) the majority are probably not yet ready for it and (b) they are highly unlikely to need to use it for anything in the near future.

The conclusion that my students reached was that, if we really wanted to provide exposure to an industry-like scientific or engineering tool at this earlier stage, then why not use one that is friendlier and more helpful but still has a scientific focus? They suggested Matlab (as a number of them had been exposed to it) or Mathematica. Now, this whole exercise was designed to get them to practise their thinking about outreach, community, communication and sharing knowledge, so I wasn’t ever actually planning to run an R tutorial at Year 11. But these students thought it through and asked the very important questions:

  • Who is this aimed at?
  • What do they already know?
  • What do they need to know?
  • Why are we doing this?

Of course, I have also learned a great deal from this as well – I had no idea that the calculators had quite got to this point, nor that there were schools where students would have to select through a graphical menu to get to the simple “3+3 EXE” section of the calculator! Don’t tell my Grand Challenge students but I think I’m learning roughly as much as they are!


Authenticity and Challenge: Software Engineering Projects Where Failure is an Option

It’s nearly the end of semester and that means that a lot of projects are coming to fruition – or, in a few cases, are still on fire as people run around desperately trying to put them out. I wrote a while ago about seeing Fred Brooks at a conference (SIGCSE) and his keynote on building student projects that work. The first four of his eleven basic guidelines were:

  1. Have real projects for real clients.
  2. Groups of 3-5.
  3. Have lots of project choices.
  4. Groups must be allowed to fail.

We’ve done this for some time in our fourth year Software Engineering option but, as part of a “Dammit, we’re Computer Science, people should be coming to ask about getting CS projects done” initiative, we’ve now changed our third year SE Group Project offering from a parallel version of an existing project to real projects for real clients, although I must confess that I have acted as a proxy in some of them. However, the client need is real, the brief is real, there are a lot of projects on the go and the projects are so large and complex that:

  1. Failure is an option.
  2. Groups have to work out which part they will be able to achieve in the 12 weeks that they have.

For the most part, this approach has been a resounding success. The groups have developed their team maturity faster, they have delivered useful and evolving prototypes, they have started to develop entire tool suites and solve quite complex side problems because they’ve run across areas that no-one else is working in and, most of all, the pride that they are taking in their work is evident. We have lit the blue touch paper and some of these students are skyrocketing upwards. However, let me not lose sight of one of our biggest objectives: that we be confident that these students will be able to work with clients. In the vast majority of cases, I am very happy to say that I am confident that these students can make a useful, practical and informed contribution to a software engineering project – and they still have another year of projects and development to go.

The freedom that comes with being open with a client about the possibility of failure cannot be overvalued. This gives both you and the client a clear understanding of what is involved – we do not need to shield the students, nor does the client have to worry about how their satisfaction with the software will influence things. We scaffold carefully but we have to allow for the full range of outcomes. We, of course, expect the vast majority of projects to succeed but this experience will not be authentic unless we start to pull away the scaffolding over time and see how the students stand by themselves. We are not, by any stretch, leaving these students in the wilderness. I’m fulfilling several roles here: proxying for some clients, sharing systems knowledge, giving advice, mentoring and, every so often, giving a well-needed hairy eyeball to a bad idea or practice. There is also the main project manager and supervisor, who is working a very busy week to keep track of all of these groups and provide everything that I provide and much, much more. But, despite this, sometimes we just have to leave the students to themselves and it will, almost always, dawn on them that problem solving requires them to solve the problem.

I’m really pleased to see this actually working because it started with me brainstorming my “Why aren’t we being asked to get involved in more local software projects?” question and bouncing it off the main project supervisor, who was desperate for more authentic and diverse software projects. Here is a distillation of our experience so far:

  1. The students are taking more ownership of the projects.
  2. The students are producing a lot of high quality work, using aggressive prototyping and regular consultation, staged across the whole development time.
  3. The students are responsive and open to criticism.
  4. The students have a better understanding of Software Engineering as a discipline and a practice.
  5. The students are proud of what they have achieved.

None of this should come as much of a surprise but, in a 25,000+ person University, there are a lot of little software projects on the 3-person team 12 month scale, which are perfect for two half-year project slots because students have to design for the whole and then decide which parts to implement. We hope to give these projects back to them (or similar groups) for further development in the future because that is the way of many, many software engineers: the completion, extension and refactoring of other people’s codebases. (Something most students don’t realise is that it only takes a very short time for a codebase you knew like the back of your hand to resemble the product of alien invaders.)

I am quietly confident, and hopeful, that this bodes well for our Software Engineers and that we will start to see them all closely bunched towards the high-achieving side of the spectrum in terms of their ability to practise. We’re planning to keep running this in the future because the early results have been so promising. I suppose the only problem now is that I have to go and find a huge number of new projects for people to start on for 2013.

As problems go, I can certainly live with that one!