Another semester, more lessons learned (mostly by me).
Posted: June 16, 2013 Filed under: Education, Opinion | Tags: advocacy, authenticity, collaboration, community, curriculum, design, education, educational problem, educational research, ethics, feedback, Generation Why, higher education, in the student's head, learning, plagiarism, principles of design, reflection, resources, student perspective, teaching, teaching approaches, thinking, tools, universal principles of design

I've just finished the lecturing component for my first year course on programming, algorithms and data structures. As always, the learning has been mutual. I've got some longer posts to write on this at some time in the future but the biggest change for this year was dropping the written examination component down and bringing in supervised practical examinations in programming and code reading. This has given us some interesting results that we look forward to going through, once all of the exams are done and the marks are locked down sometime in late July.
Whenever I put in practical examinations, we encounter the strange phenomenon of students who can mysteriously write code in very short periods of time in a practical situation very similar to the practical examination, but suddenly lose the ability to write good code when they are isolated from the Internet, e-Mail and other people’s code repositories. This is, thank goodness, not a large group (seriously, it’s shrinking the more I put prac exams in) but it does illustrate why we do it. If someone has a genuine problem with exam pressure, and it does occur, then of course we set things up so that they have more time and a different environment, as we support all of our students with special circumstances. But to be fair to everyone, and because this can be confronting, we pitch the problems at a level where early achievement is possible and they are also usually simpler versions of the types of programs that have already been set as assignment work. I’m not trying to trip people up, here, I’m trying to develop the understanding that it’s not the marks for their programming assignments that are important, it’s the development of the skills.
I need those people who have not done their own work to realise that it probably didn't lead to a good level of understanding or the ability to apply the skill as they would in the workforce. However, I need to do so in a way that isn't unfair, so there's a lot of careful learning design that goes in, even to the selection of how much each component is worth. The reminder that you should be doing your own work is not high stakes – 5-10% of the final mark at most – and builds up to a larger practical examination component, worth 30%, that comes after a total of nine practical programming assignments and a previous prac exam. This year, I'm happy with the marks design because it takes fairly consistent failure to drop a student to the point where they are no longer eligible for redemption through additional work. The scope for achievement is across knowledge of course materials (on-line quizzes, in-class scratchy card quizzes and the written exam), programming with reference materials (programming assignments over 12 weeks), programming under more restricted conditions (the prac exams) and even group formation and open problem handling (with a team-based report on the use of queues in the real world). To pass, a student needs to do enough in all of these. To excel, they have to have a good, broad grasp of both the theoretical and the practical. This is what I've been heading towards for this first-year course, a course that I am confident turns out students who are programmers and have enough knowledge of core computer science. Yes, students can (and will) fail – but only if they really don't do enough in more than one of the target areas and then don't focus on that to improve their results. I will fail anyone who doesn't meet the standard but I have no wish to do any more of that than I need to. If people can come up to standard in the time and resource constraints we have, then they should pass. The trick is holding the standard at the right level while you bring up the people – and that takes a lot of help from my colleagues, my mentors and from me constantly learning from my students and being open to changing the learning design until we get it right.
Of course, there is always room for improvement, which means that the course goes back up on blocks while I analyse it. Again. Is this the best way to teach this course? Well, of course, what we will do now is to look at results across the course. We’ll track Prac Exam performance across all practicals, across the two different types of quizzes, across the reports and across the final written exam. We’ll go back into detail on the written answers to the code reading question to see if there’s a match for articulation and comprehension. We’ll assess the quality of response to the exam, as well as the final marked outcome, to tie this back to developmental level, if possible. We’ll look at previous results, entry points, pre-University marks…
And then we’ll teach it again!
The Continuum of Ethical Challenge: Why the Devil Isn’t Waiting in the Alleyway and The World is Harder than Bioshock.
Posted: June 15, 2013 Filed under: Education, Opinion | Tags: advocacy, authenticity, community, curriculum, design, education, educational research, ethics, feedback, Generation Why, higher education, in the student's head, learning, principles of design, reflection, student perspective, teaching, teaching approaches, thinking

This must be a record for a post title but I hope to keep the post itself shortish. Years ago, when I was still at school, a life counsellor (who was also a pastor) came to talk to us about life choices and ethics. He was talking about the usual teen cocktail: sex, drugs and rebellion. However, he made an impression on me by talking about his early idea of temptation. Because of the fire and brimstone preaching he'd grown up with, he half expected temptation to take the form of the Devil, beckoning him into an alleyway to take an illicit drag on a cigarette. As he grew up, and grew wiser, he realised that living ethically was really a constant set of choices, interlocking or somewhat dependent, rather than an easy life periodically interrupted by strictly defined challenges that could be overcome with a quick burst of willpower.
I recently started replaying the game Bioshock, which I have previously criticised elsewhere, and was struck by the facile nature of the much-vaunted ethical aspect to game play. For those who haven’t played it, you basically have a choice between slaughtering or saving little girls – apart from that, you have very little agency or ability to change the path you’re on. In fact, rather than provide you with the continual dilemma of whether you should observe, ignore or attack the inhabitants of the game world, you very quickly realise that there are no ‘good’ people in the world (or there are none that you are actually allowed to attack, they are all carefully shielded from you) so you can reduce your ‘choices’ when encountering a figure crouching over a pram to “should I bludgeon her to death, or set her on fire and shoot her in the head”. (It’s ok, if you try anything approaching engagement, she will try and kill you.) In fact, one of the few ‘innocents’ in the game is slaughtered in front of you while you watch impotently. So your ethical engagement is restricted, at very distinctly defined intervals, to either harvesting or rescuing the little girls who have been stolen from orphanages and turned into corpse scavenging monsters. This is as ridiculous as the intermittent Devil in the alleyway, in fact, probably more so!
I completely agree with that counsellor from (goodness) 30 years ago – it would be a nonsense to assume that tests of our ethics can be conveniently compartmentalised to a time when our resolve is strong and can be so easily predicted. The Bioshock model (or models like it, such as Call of Duty 4, where everyone is an enemy or can't be shot in a way that affects our game beyond a waggled finger and being taken back to a previous save) is flawed because of the limited extent of the impact of the choices you make – in fact, Bioshock is particularly egregious because the 'outcome' of your moral choice has no serious game impact except to show you a different movie at the end. Before anyone says "it's only a game", I agree, but the developers were the ones who imposed the notion that this ethical choice made a difference. Games such as Deus Ex gave you very much un-cued opportunities to intervene or not – with changes to the game world depending on what happened. As a result, people playing Deus Ex had far more moral engagement with the game and everyone I've spoken to felt as if they were making the choices that led to the outcome: autonomy, mastery and purpose, anyone? That was in 2000 – very few games actually see the world as one that you can influence (although some games are now coming up to par on this).
I think about this a lot for my learning design. While my students may recognise ethical choices in the real world, I am always concerned that a learning design that reduces their activities to high-stakes hurdle challenges will mimic the situation where we have, effectively, put the Devil in the alleyway and you can switch on your 'ethical' brain at this point. I posed a question to my students in their sample exam where I proposed that they had commissioned someone to write their software for an assignment – and then asked them to think about the effect that this decision would have on their future self in terms of knowledge development, if we assumed that they would always be better prepared if they did the work themselves. This takes away the focus from the day or so leading up to an individual assignment and starts to encourage continuum thinking, where every action is taken as part of a whole life of ethical actions. I'm a great believer that skills only develop with practice and knowledge only stays in your head when you reinforce it, so any opportunity to encourage further development of ethical thinking is to be encouraged!
“Hi, my name is Nick and I specialise in failure.”
Posted: June 10, 2013 Filed under: Education, Opinion | Tags: advocacy, collaboration, community, curriculum, design, education, educational research, ethics, failure, Generation Why, higher education, in the student's head, learning, measurement, reflection, resources, student perspective, survivorship, teaching, teaching approaches, thinking, tools

I recently read an article on survivorship bias on the "You Are Not So Smart" website, via Metafilter. While the whole story addressed the World War II Statistical Research Group, it focused on the insight contributed by Abraham Wald, a statistician. The Allied bomber losses in World War II were large, very large, and any chance of reducing these losses was incredibly valuable. The question was "How could the US improve their chances of bringing their bombers back intact?" Bombers landing back after missions were full of holes but armour just can't be strapped willy-nilly on to a plane without it becoming land-locked. (There's a reason that birds are so light!) The answer, initially, was obvious – find the places where the most holes were, by surveying the fleet, and patch them. Put armour on the colander sections and, voila, increased survival rate.
No, said Wald. That wouldn’t help.
Wald's logic is both simple and convincing. If a plane was coming back with those holes in place, then the holes in the skin were not leading to catastrophic failure – they couldn't have been if the planes were returning! The survivors were not showing the damage that would have led to them becoming lost aircraft. Wald used the already-collected information on the damage patterns to work out how much damage each component could take, and the likelihood of this occurring during a bombing run, based on the kind of forces the aircraft encountered.
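To make the survivorship effect concrete, here is a minimal simulation sketch in C. The section names, the number of hits and the lethality figures are invented purely for illustration (they are not Wald's data): each plane takes a few hits in random sections, a hit to a more critical section is more likely to bring the plane down, and we then count holes only on the planes that make it home.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define SECTIONS 4   /* 0: engines, 1: cockpit, 2: fuselage, 3: wings (illustrative only) */

    int main(void) {
        const char *names[SECTIONS] = { "engines", "cockpit", "fuselage", "wings" };
        /* Invented probability that a hit to this section downs the aircraft. */
        const double lethality[SECTIONS] = { 0.8, 0.6, 0.1, 0.1 };
        int holes_on_survivors[SECTIONS] = { 0 };

        srand((unsigned) time(NULL));
        for (int plane = 0; plane < 100000; plane++) {
            int hits[SECTIONS] = { 0 };
            int lost = 0;

            /* Each plane takes five hits in uniformly random sections. */
            for (int h = 0; h < 5; h++) {
                int s = rand() % SECTIONS;
                hits[s]++;
                if ((double) rand() / RAND_MAX < lethality[s])
                    lost = 1;
            }

            /* We only ever get to inspect the planes that come back. */
            if (!lost)
                for (int s = 0; s < SECTIONS; s++)
                    holes_on_survivors[s] += hits[s];
        }

        /* The sections with the fewest holes on the survivors are the ones
           where a hit was most likely to be fatal. */
        for (int s = 0; s < SECTIONS; s++)
            printf("%-8s holes seen on returning planes: %d\n", names[s], holes_on_survivors[s]);
        return 0;
    }

Run it and the engines and cockpit – the most dangerous places to be hit in this made-up model – show the fewest holes on the returning aircraft, which are exactly the places the naive survey would have left unarmoured.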
It's worth reading the entire article because it's a simple and powerful idea – attributing magical properties to the set of steps taken by people who have become ultra-successful is not going to be as useful as looking at what happened to take people out of the pathway to success. If you've read Steve Jobs' biography then you're aware that he had a number of interesting traits, only some of which may have led to him becoming as successful as he did. Of course, if you've been reading a lot, you'll be aware of the importance of Paul Jobs, Steve Wozniak, Robert Noyce, Bill Gates, Jony Ive, John Lasseter, and, of course, his wife, Laurene Powell Jobs. So the whole "only eating fruit" thing, the "reality distortion field" thing and the "not showering" thing (some of which he changed, some he didn't) – which of these are the important things? Jobs, like many successful people, failed at some of his endeavours, but never in a way that completely wiped him out. Obviously. Now, when he's not succeeding, he's interesting, because we can look at the steps that took him down and say "Oh, don't do that", assuming that it's something that can be changed or avoided. When he's succeeding, there are so many other things getting in the way – things that depend upon what's happened to you so far, who your friends are, and how many resources you get to play with – that it's hard to give good advice on what to do.
I have been studying failure for some time. Firstly in myself, and now in my students. I look for those decisions, or behaviours, that lead to students struggling in their academic achievement, or to falling away completely in some cases. The majority of the students who come to me with a high level of cultural, financial and social resources are far less likely to struggle because, even when faced with a set-back, they rarely hit the point where they can't bounce back – although, sadly, it does happen, just in far fewer numbers. When they do fall over, it is for the same reasons as my less-advantaged students, who just do so in far greater numbers because they have less resilience to the set-backs. By studying failure, and the lessons learned and the things to be avoided, I can help all of my students and this does not depend upon their starting level. If I were studying the top 5% of students, especially those who had never received a mark less than A+, I would be surprised if I could learn much that I could take and usefully apply to those in the C- bracket. The reverse, however? There's gold to be mined there.
By studying the borderlines and by looking for patterns in the swirling dust left by those departing, I hope that I can find things which reduce failure everywhere – because every time someone fails, we run the risk of not getting them back simply because failure is disheartening. Better yet, I hope to get something that is immediately usable, defensible and successful. Probably rather a big ask for a protracted study of failure!
SIGCSE 2013: Special Session on Designing and Supporting Collaborative Learning Activities
Posted: March 31, 2013 Filed under: Education | Tags: authenticity, community, curriculum, education, educational problem, educational research, feedback, higher education, in the student's head, learning, principles of design, reflection, resources, sigcse, student perspective, teaching, teaching approaches, thinking, tools, universal principles of design

Katrina and I delivered a special session on collaborative learning activities, focused on undergraduates because that's our area of expertise. You can read the outline document here. We worked together on the underlying classroom activities and have both implemented these techniques but, in this session, Katrina did most of the presenting and I presented the collaborative assessment task examples, with some facilitation.
The trick here is, of course, to find examples that are effective both as teaching tools and as examples. The approach I chose to take was to remind everyone in the room of the most important aspects of making this work with students, and I did this by deliberately starting with a bad example. This can be a difficult road to walk because, when presenting a bad example, you need to convince everyone that your choice was deliberate and that you didn't actually just stuff things up.
My approach was fairly simple. I broke people into groups, based on where they were currently sitting, and then immediately went into the question, which had been tailored for the crowd and for my purposes:
“I want you to talk about the 10 things that you’re going to do in the next 5 years to make progress in your career and improve your job performance.”
And why not? Everyone in the room was interested in education and, most likely, had a job at a time when it’s highly competitive and hard to find or retain work – so everyone has probably thought about this. It’s a fair question for this crowd.
Well, it would be, if it wasn’t so anxiety inducing. Katrina and I both observed a sea of frozen faces as we asked a question that put a large number of participants on the spot. And the reason I did this was to remind everyone that anxiety impairs genuine participation and willingness to engage. There were a large number of frozen grins with darting eyes, some nervous mumbles and a whole lot of purposeless noise, with the few people who were actually primed to answer that question starting to lead off.
I then stopped the discussion immediately. “What was wrong with that?” I asked the group.
Well, where do we start? Firstly, it's an individual activity, not a collaborative activity – there's no incentive or requirement for discussion, groupwork or anything like that. Secondly, while we might expect people to be able to answer this, it is a highly charged and personal area, and you may not feel comfortable discussing your five-year plan with people that you don't know. Thirdly, some people know that they should be able to answer this (or at least some supervisors will expect that they can) but they have no real answer and their anxiety will not only limit their participation but it will probably stop them from listening at all while they sweat their turn. Finally, there is no point to this activity – why are we doing this? What are we producing? What is the end point?
My approach to collaborative activity is pretty simple and you can read any amount of Perry, Dickinson, Hamer et al (and now us as well) to look at relevant areas and Contributing Student Pedagogy, where students have a reason to collaborate and we manage their developmental maturity and their roles in the activity to get them really engaged. Everyone can have difficulties with authority and recognising whether someone is making enough contribution to a discussion to be worth their time – this is not limited to students. People, therefore, have to believe that the group they are in is of some benefit to them.
So we stepped back. I asked everyone to introduce themselves, where they came from and give a fact about their current home that people might not know. Simple task, everyone can do it and the purpose was to tell your group something interesting about your home – clear purpose, as well. This activity launched immediately and was going so well that, when I tried to move it on because the sound levels were dropping (generally a good sign that we're reaching a transition), some groups asked if they could keep going as they weren't quite finished. (Monitoring groups spread over a large space can be tricky but, where the activity is working, people will happily let you know when they need more time.) I had been able to stop the first activity completely and nobody wanted it to continue. The second one, where people felt that they could participate and wanted to say something, needed to keep going.
Having now put some faces to names, we then moved to a simple exercise of sharing an interesting teaching approach that you'd tried recently or seen at the conference and it's important to note the different comfort levels we can accommodate with this – we are sharing knowledge but we give participants the opportunity to share something of themselves or something that interests them, without the burden of ownership. Everyone had already discovered that everyone in the group had some areas of knowledge, albeit small, that taught them something new. We had started to build a group where participants valued each other's contribution.
I carried out some roaming facilitation where I said very little, unless it was needed. I sat down with some groups, said ‘hi’ and then just sat back while they talked. I occasionally gave some nodded or attentive feedback to people who looked like they wanted to speak and this often cued them into the discussion. Facilitation doesn’t have to be intrusive and I’m a much bigger fan of inclusiveness, where everyone gets a turn but we do it through non-verbal encouragement (where that’s possible, different techniques are required in a mixed-ability group) to stay out of the main corridor of communication and reduce confrontation. However, by setting up the requirement that everyone share and by providing a task that everyone could participate in, my need to prod was greatly reduced and the groups mostly ran themselves, with the roles shifting around as different people made different points.
We covered a lot of the underlying theory in the talk itself, to discuss why people have difficulty accepting other views, to clarify why role management is a critical part of giving people a reason to get involved and something to do in the conversation. The notion that a valid discursive role is that of the supporter, to reinforce ideas from the proposer, allows someone to develop their confidence and critically assess the idea, without the burden of having to provide a complex criticism straight away.
At the end, I asked for a show of hands. Who had met someone new? Everyone. Who had found out something they didn't know about other places? Everyone. Who had learned about a new teaching technique that they hadn't known before? Everyone.
My one regret is that we didn’t do this sooner because the conversation was obviously continuing for some groups and our session was, sadly, on the last day. I don’t pretend to be the best at this but I can assure you that any capability I have in this kind of activity comes from understanding the theory, putting it into practice, trying it, trying it again, and reflecting on what did and didn’t work.
I sometimes come out of a lecture or a collaborative activity and I’m really not happy. It didn’t gel or I didn’t quite get the group going as I wanted it to – but this is where you have to be gentle on yourself because, if you’re planning to succeed and reflecting on the problems, then steady improvement is completely possible and you can get more comfortable with passing your room control over to the groups, while you move to the facilitation role. The more you do it, the more you realise that training your students in role fluidity also assists them in understanding when you have to be in control of the room. I regularly pass control back and forward and it took me a long time to really feel that I wasn’t losing my grip. It’s a practice thing.
It was a lot of fun to give the session and we spent some time crafting the ‘bad example’, but let me summarise what the good activities should really look like. They must be collaborative, inclusive, achievable and obviously beneficial. Like all good guidelines there are times and places where you would change this set of characteristics, but you have to know your group well to know what challenges they can tolerate. If your students are more mature, then you push out into open-ended tasks which are far harder to make progress in – but this would be completely inappropriate for first years. Even in later years, being able to make some progress is more likely to keep the group going than a brick wall that stops you at step 1. But, let’s face it, your students need to know that working in that group is not only not to their detriment, but it’s beneficial. And the more you do this, the better their groupwork and collaboration will get – and that’s a big overall positive for the graduates of the future.
To everyone who attended the session, thank you for the generosity and enthusiasm of your participation and I’m catching up on my business cards in the next weeks. If I promised you an e-mail, it will be coming shortly.
Expressiveness and Ambiguity: Learning to Program Can Be Unnecessarily Hard
Posted: January 23, 2013 Filed under: Education, Opinion | Tags: advocacy, collaboration, curriculum, design, education, educational problem, feedback, Generation Why, higher education, in the student's head, learning, principles of design, reflection, resources, student perspective, teaching, teaching approaches, thinking, tools

One of the most important things to be able to do in any profession is to think as a professional. This is certainly true of Computer Science, because we have to spend so much time thinking, as a Computer Scientist does, about how the machine will interpret our instructions. For those who don't program, a brief quiz. What is the value of the next statement?
What is 3/4?
No doubt, you answered something like 0.75 or maybe 75% or possibly even “three quarters”? (And some of you would have said “but this statement has no intrinsic value” and my heartiest congratulations to you. Now go off and contemplate the Universe while the rest of us toil along on the material plane.) And, not being programmers, you would give me the same answer if I wrote:
What is 3.0/4.0?
Depending on the programming language we use, you can actually get two completely different answers to this apparently simple question. 3/4 is often interpreted by the computer to mean “What is the result if I carry out integer division, where I will only tell you how many times the denominator will go into the numerator as a whole number, for 3 and 4?” The answer will not be the expected 0.75, it will be 0, because 4 does not go into 3 – it’s too big. So, again depending on programming language, it is completely possible to ask the computer “is 3/4 equivalent to 3.0/4.0?” and get the answer ‘No’.
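A minimal sketch in C makes the point (the behaviour varies by language, but C is a convenient one to show it in):

    #include <stdio.h>

    int main(void) {
        /* Both operands are integers, so '/' performs integer division. */
        printf("3/4     = %d\n", 3 / 4);        /* prints 0 */

        /* Both operands are doubles, so '/' performs floating-point division. */
        printf("3.0/4.0 = %f\n", 3.0 / 4.0);    /* prints 0.750000 */

        /* "Is 3/4 equivalent to 3.0/4.0?" */
        printf("equivalent? %s\n", (3 / 4 == 3.0 / 4.0) ? "Yes" : "No");   /* prints No */
        return 0;
    }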
This is something that we have to highlight to students when we are teaching programming, because very few people use integer division when they divide one thing by another – they automatically start using decimal points. Now, in this case, the different behaviour of the '/' is actually exceedingly well-defined and is not at all ambiguous to the computer or to the seasoned programmer. It is, however, nowhere near as clear to the novice or casual observer.
I am currently reading Stephen Ramsay's excellent "Reading Machines: Toward an Algorithmic Criticism" and it is taking me a very long time to read an 80-page book. Why? Because, to avoid ambiguity and to be as expressive and precise as possible, he has used a number of words and concepts with which I am unfamiliar or that I have not seen before. I am currently reading his book with a web browser and a dictionary because I do not have a background in literary criticism but, once I have the building blocks, I can understand his argument. In other words, I am having to learn a new language in order to read a book written for that new language community. However, rather than being irked that "/" changes meaning depending on the company it keeps, I am happy to learn the new terms and concepts in the space that Ramsay describes, because it is adding to my ability to express key concepts, without introducing ambiguous shadings of language over things that I already know. Ramsay is not, for example, telling me that "book" no longer means "book" when you place it inside parentheses. (It is worth noting that Ramsay discusses the use of constraint as a creative enhancer, a la Oulipo, early on in the book and this is a theme for another post.)
The usual insult at this point is to trot out the accusation of jargon, which is as often a statement that "I can't be bothered learning this" as it is a genuine complaint about impenetrable prose. In this case, the offender in my opinion is the person who decided to provide an invisible overloading of the "/" operator to mean both "division" and "integer division", as they have required us to be aware of a change in meaning that is not accompanied by a change in syntax. While this isn't usually a problem – spoken and written languages are full of these things, after all – in the computing world it forces the programmer to remember that "/" doesn't always mean "/" and then to get it the right way around. (A number of languages solve this problem by providing a distinct operator – this, however, then adds to linguistic complexity and rather than learning two meanings, you have to learn two 'words'. Ah, no free lunch.) We have no tone or colour in mainstream programming languages, for a whole range of good computer grammar reasons, but the absence of the rising tone or rising eyebrow is sorely felt when we encounter something that means two different things. The net result is that we tend to use the same constructs to do the same thing because we have severe limitations upon our expressivity. That's why there are boilerplate programmers, who can stitch together a solution from things they have already seen, and people who have learned how to be as expressive as possible, despite most of these restrictions. Regrettably, expressive and innovative code can often be unreadable by other people because of the gymnastics required to reach these heights of expressiveness, which is often at odds with what the language designers assumed someone might do.
We have spent a great deal of effort making computers better at handling abstract representations, things that stand in for other (real) things. I can use a name instead of a number and the computer will keep track of it for me. It’s important to note that writing int i=0; is infinitely preferable to typing “0000000000000000000000000000000000000000000000000000000000000000” into the correct memory location and then keeping that (rather large number) address written on a scrap of paper. Abstraction is one of the fundamental tools of modern programming, yet we greatly limit expressiveness in sometimes artificial ways to reduce ambiguity when, really, the ambiguity does seem a little artificial.
One of the nastiest potential ambiguities that shows up a lot is “what do we mean by ‘equals'”. As above, we already know that many languages would not tell you that “3/4 equals 3.0/4.0” because both mathematical operations would be executed and 0 is not the same as 0.75. However, the equivalence operator is often used to ask so many different questions: “Do these two things contain the same thing?”, “Are these two things considered to be the same according to the programmer?” and “Are these two things actually the same thing and stored in the same place in memory?”
Generally, however, to all of these questions, we return a simple “True” or “False”, which in reality reflects neither the truth nor the falsity of the situation. What we are asking, respectively, is “Are the contents of these the same?” to which the answer is “Same” or “Different”. To the second, we are asking if the programmer considers them to be the same, in which case the answer is really “Yes” or “No” because they could actually be different, yet not so different that the programmer needs to make a big deal about it. Finally, when we are asking if two references to an object actually point to the same thing, we are asking if they are in the same location or not.
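Here is a small C sketch of those three questions, chosen purely for illustration – other languages split them differently (Java's == versus .equals, for instance), and the prices_equal helper is entirely hypothetical:

    #include <stdio.h>
    #include <string.h>
    #include <math.h>

    /* "Are these two things considered the same by the programmer?" –
       here, two prices count as 'equal' if they differ by less than a cent. */
    int prices_equal(double a, double b) {
        return fabs(a - b) < 0.01;
    }

    int main(void) {
        char a[] = "queue";
        char b[] = "queue";
        char *c  = a;

        /* "Do these two things contain the same thing?" – compare contents. */
        printf("contents of a and b:  %s\n", strcmp(a, b) == 0 ? "Same" : "Different");

        /* "Are these actually the same thing in memory?" – compare addresses. */
        printf("identity of a and b:  %s\n", a == b ? "Same object" : "Different objects");
        printf("identity of a and c:  %s\n", a == c ? "Same object" : "Different objects");

        /* "Same, as far as the programmer cares?" */
        printf("programmer equality:  %s\n", prices_equal(1.001, 1.002) ? "Yes" : "No");
        return 0;
    }

Every one of these answers comes back as a bare yes or no, which is precisely the flattening of meaning described above.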
There are many languages that use truth values, some of them do it far better than others, but unless we are speaking and writing in logical terms, the apparent precision of the True/False dichotomy is inherently deceptive and, once again, it is only as precise as it has been programmed to be and then interpreted, based on the knowledge of programmer and reader. (The programming language Haskell has an intrinsic ability to say that things are “Undefined” and to then continue working on the problem, which is an obvious, and welcome, exception here, yet this is not a widespread approach.) It is an inherent limitation on our ability to express what is really happening in the system when we artificially constrain ourselves in order to (apparently) reduce ambiguity. It seems to me that we have reduced programmatic ambiguity, but we have not necessarily actually addressed the real or philosophical ambiguity inherent in many of these programs.
More holiday musings on the “Python way” and why this is actually an unreasonable demand, rather than a positive feature, shortly.
The Limits of Expressiveness: If Compilers Are Smart, Why Are We Doing the Work?
Posted: January 23, 2013 Filed under: Education, Opinion | Tags: 'pataphysics, collaboration, community, curriculum, data visualisation, design, education, educational problem, higher education, principles of design, programming, reflection, resources, student perspective, teaching, teaching approaches, thinking, tools

I am currently on holiday, which is "Nick shorthand" for catching up on my reading, painting and cat time. Recently, my interests in my own discipline have widened and I am precariously close to that terrible state that academics sometimes reach when they suddenly start uttering words like "interdisciplinary" or "big tent approach". Quite often, around this time, the professoriate will look at each other, nod, and send for the nice people with the butterfly nets. Before they arrive and cart me away, I thought I'd share some of the reading and thinking I've been doing lately.
My reading is a little eclectic, right now. Next to Hooky's account of the band "Joy Division" sits Dennis Wheatley's "They Used Dark Forces" and next to that are four other books, which are a little more academic. "Reading Machines: Toward an Algorithmic Criticism" by Stephen Ramsay; "Debates in the Digital Humanities" edited by Matthew Gold; "10 PRINT CHR$(205.5+RND(1)); : GOTO 10" by Montfort et al; and "'Pataphysics: A Useless Guide" by Andrew Hugill. All of these are fascinating books and, right now, I am thinking through them in order to place a new glass over some of my assumptions from within my own discipline.
“10 PRINT CHR$…” is an account of a simple line of code from the Commodore 64 Basic language, which draws diagonal mazes on the screen. In exploring this, the authors explore fundamental aspects of computing and, in particular, creative computing and how programs exist in culture. Everything in the line says something about programming back when the C-64 was popular, from the use of line numbers (required because you had to establish an execution order without necessarily being able to arrange elements in one document) to the use of the $ after CHR, which tells both the programmer and the machine that what results from this operation is a string, rather than a number. In many ways, this is a book about my own journey through Computer Science, growing up with BASIC programming and accepting its conventions as the norm, only to have new and strange conventions pop out at me once I started using other programming languages.
Rather than discuss the other books in detail, although I recommend all of them, I wanted to talk about specific aspects of expressiveness and comprehension because, if there is one thing I am thinking after all of this reading, it is "why aren't we doing this better?" The line "10 PRINT CHR$…" is effectively incomprehensible to the casual reader, yet if I wrote something like this:
do this forever
pick one of “/” or “\” and display it on the screen
then anyone who spoke English (which used to be a larger number than those who could read programming languages but, honestly, today I’m not sure about that) could understand what was going to happen but, not only could they understand, they could create something themselves without having to work out how to make it happen. You can see language like this in languages such as Scratch, which is intended to teach programming by providing an easier bridge between standard language and programming using pre-constructed blocks and far more approachable terms. Why is it so important to create? One of the debates raging in Digital Humanities at the moment, at least according to my reading, is “who is in” and “who is out” – what does it take to make one a digital humanist? While this used to involve “being a programmer”, it is now considered reasonable to “create something”. For anyone who is notionally a programmer, the two are indivisible. Programs are how we create things and programming languages are the form that we use to communicate with the machines, to solve the problems that we need solved.
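For comparison, here is a rough C sketch of what both the BASIC one-liner and the pseudocode above do – forever, pick one of the two diagonal characters at random and print it (using '/' and '\' as stand-ins for the PETSCII diagonals):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        srand((unsigned) time(NULL));
        for (;;) {                                /* do this forever */
            putchar(rand() % 2 ? '/' : '\\');     /* pick one of the two and display it */
        }
        return 0;                                 /* never reached */
    }

Even this tiny sketch shows how much ceremony the 'serious' language demands before we get to the two lines that actually matter.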
When we first started writing programs, we instructed the machines in simple arithmetic sequences that matched the bit patterns required to ensure that certain memory locations were processed in a certain way. We then provided human-readable shorthand, assembly language, where mnemonics replaced numbers, to make it easier for humans to write code without error. “20” became “JSR” in 6502 assembly code, for example, yet “JSR” is as impenetrably occulted as “20” unless you learn a language that is not actually a language but a compressed form of acronym. Roll on some more years and we have added pseudo-English over the top: GOSUB in Basic and the use of parentheses to indicate function calls in other languages.
However, all I actually wanted to do was to make the same thing happen again, maybe with some minor changes to what it was working on. Think of a sub-routine (method, procedure or function, if we’re being relaxed in our terminology) and you may as well think of a washing machine. It takes in something and combines it with a determined process, a machine setting, powders and liquids to give you the result you wanted, in this case taking in dirty clothes and giving back clean ones. The execution of a sub-routine is identical to this but can you see the predictable familiarity of the washing machine in JSR FE FF?
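A small, purely illustrative C sketch of the same contrast – the 'process' inside is invented, but the point is that the named call site reads like the washing machine it is, while the underlying mechanism is still a jump to a subroutine:

    #include <stdio.h>

    /* The washing machine: takes something in, applies a fixed process,
       hands the result back. (The process here is invented for illustration.) */
    double wash(double dirty) {
        return dirty / 2.0 + 1.0;
    }

    int main(void) {
        /* At the machine level this is still 'jump to subroutine at some address',
           but the name carries the meaning that JSR FE FF hides. */
        printf("%f\n", wash(3.0));
        return 0;
    }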
If you are familiar with 'Pataphysics, or even "Ubu Roi", the most well-known of Jarry's works, you may be aware of the pataphysician's fascination with the spiral – le Grand Gidouille. The spiral, once drawn, defines not only itself but another spiral in the negative space that it contains. The spiral is also a natural way to think about programming because a very well-used programming language construct, the for loop, often either counts up to a value or counts down. It is not uncommon for this kind of counting loop to allow us to advance from one character to the next in a text of some sort. When we define a loop as a spiral, we clearly state what it is and what it is not – it is not retreading old ground, although it may always spiral out towards infinity.
However, for maximum confusion, the for loop may iterate a fixed number of times but never use the changing value that is driving it – it is no longer a spiral in terms of its effect on its contents. We can even write a for loop that goes around in a circle indefinitely, executing the code within it until it is interrupted. Yet, we use the same keyword for all of these.
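In C, for example, the same keyword covers all three shapes (a sketch; the loop bodies are placeholders):

    #include <stdio.h>

    int main(void) {
        /* A spiral: the changing value drives the work and is used by it. */
        for (int i = 0; i < 5; i++) {
            printf("processing character %d\n", i);
        }

        /* A plain counter: we want the repetition, but never use the value. */
        for (int i = 0; i < 5; i++) {
            printf("tick\n");
        }

        /* A circle: the same keyword, looping until something interrupts it. */
        for (;;) {
            break;   /* break immediately so this sketch actually terminates */
        }
        return 0;
    }

Nothing in the keyword itself tells the reader which of the three they are looking at; only the body does.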
In English, the word “get” is incredibly overused. There are very few situations when another verb couldn’t add more meaning, even in terms of shade, to the situation. Using “get” forces us, quite frequently, to do more hard work to achieve comprehension. Using the same words for many different types of loop pushes load back on to us.
What happens is that when we write our loop, we are required to do the thinking as to how we want this loop to work – although Scratch provides a forever, very few other languages provide anything like that. To loop endlessly in C, we would use while (true) or for (;;), but to tell the difference between a loop that is functioning as a spiral, and one that is merely counting, we have to read the body of the loop to see what is going on. If you aren’t a programmer, does for(;;) give you any inkling at all as to what is going on? Some might think “Aha, but programming is for programmers” and I would respond with “Aha, yes, but becoming a programmer requires a great deal of learning and why don’t we make it simpler?” To which the obvious riposte is “But we have special languages which will do all that!” and I then strike back with “Well, if that is such a good feature, why isn’t it in all languages, given how good modern language compilers are?” (A compiler is a program that turns programming languages into something that computers can execute – English words to byte patterns effectively.)
In thinking about language origins, and what we are capable of with modern compilers, we have to accept that a lot of the heavy lifting in programming is already being done by modern, optimising, compilers. Years ago, the compiler would just turn your instructions into a form that machines could execute – with no improvement. These days, put something daft in (like a loop that does nothing for a million iterations), and the compiler will quietly edit it out. The compiler will worry about optimising your storage of information and, sometimes, even help you to reduce wasted use of memory (no, Java, I’m most definitely not looking at you.)
So why is it that C++ doesn't have a forever, a do 10 times, or a spiral to 10 equivalent in there? The answer is complex but is, most likely, a combination of standards issues (changing a language standard is relatively difficult and requires a lot of effort), the fact that other languages do already do things like this, the burden of increasing compiler complexity to handle synonyms like this (although this need not be too arduous) and, above all, the fact that I doubt that many people would see a need for it.
In reading all of these books, and I'll write more on this shortly, I am becoming increasingly aware that I tolerate a great deal of limitation in my ability to solve problems using programming languages. I put up with having my expressiveness reduced, with taking care of some unnecessary heavy lifting in making things clear to the compiler, and I occasionally even allow the programming language to dictate how I write the words on the page itself – not just syntax and semantics (which are at least understandable, socially and technically) but the use of blank lines, white space and line endings.
How are we expected to be truly creative if conformity and constraint are the underpinnings of programming? Tomorrow, I shall write on the use of constraint as a means of encouraging creativity and why I feel that what we see in programming is actually limitation, rather than a useful constraint.
Thanks for the exam – now I can’t help you.
Posted: December 31, 2012 Filed under: Education | Tags: advocacy, authenticity, blogging, community, curriculum, design, education, educational problem, ethics, feedback, Generation Why, grand challenge, higher education, in the student's head, learning, measurement, principles of design, reflection, resources, student perspective, teaching, teaching approaches, thinking, time banking, tools, universal principles of design, vygotsky, workload

I have just finished marking a pile of examinations from a course that I co-taught recently. I haven't finalised the marks but, overall, I'm not unhappy with the majority of the results. Interestingly, and not overly surprisingly, one of the best-answered sections of the exam was based on a challenging essay question I set as an assignment. The question spans many aspects of the course and requires the student to think about their answer and link the knowledge – which most did very well. As I said, not a surprise, but a good reinforcement that you don't have to drill students in what to say in the exam; covering the requisite knowledge and practising the right skills is often helpful.
However, I don’t much like marking exams and it doesn’t come down to the time involved, the generally dull nature of the task or the repetitive strain injury from wielding a red pen in anger, it comes down to the fact that, most of the time, I am marking the student’s work at a time when I can no longer help him or her. Like most exams at my Uni, this was the terminal examination for the course, worth a substantial amount of the final marks, and was taken some weeks after teaching finished. So what this means is that any areas I identify for a given student cannot now be corrected, unless the student chooses to read my notes in the exam paper or come to see me. (Given that this campus is international, that’s trickier but not impossible thanks to the Wonders of Skypenology.) It took me a long time to work out exactly why I didn’t like marking, but when I did, the answer was obvious.
I was frustrated that I couldn’t actually do my job at one of the most important points: when lack of comprehension is clearly identified. If I ask someone a question in the classroom, on-line or wherever, and they give me an answer that’s not quite right, or right off base, then we can talk about it and I can correct the misunderstanding. My job, after all, is not actually passing or failing students – it’s about knowledge, the conveyance, construction and quality management thereof. My frustration during exam marking increases with every incomplete or incorrect answer I read, which illustrates that there is a section of the course that someone didn’t get. I get up in the morning with the clear intention of being helpful towards students and, when it really matters, all I can do is mark up bits of paper in red ink.

Quickly, Jones! Construct a valid knowledge framework! You’re in a group environment! Vygotsky, man, Vygotsky!
A student who, despite my sweeping, and seeping, liquid red ink of doom, manages to get a 50 Passing grade will not do the course again – yet this mark pretty clearly indicates that roughly half of the comprehension or participation required was not carried out to the required standard. Miraculously, it doesn’t matter which half of the course the student ‘gets’, they are still deemed to have attained the knowledge. (An interesting point to ponder, especially when you consider that my colleagues in Medicine define a Pass at a much higher level and in far more complicated ways than a numerical 50%, to my eternal peace of mind when I visit a doctor!) Yet their exam will still probably have caused me at least some gnashing of teeth because of points missed, pointless misstatement of the question text, obscure song lyrics, apologies for lack of preparation and the occasional actual fact that has peregrinated from the place where it could have attained marks to a place where it will be left out in the desert to die, bereft of the life-giving context that would save it from such an awful fate.
Should we move the exams earlier and then use the results to guide the focus areas for assessment, targeting the areas in most need of improvement and developing knowledge there? Should we abandon exams entirely and move to a continuous-assessment, competency-based system, where there are skills and knowledge that must be demonstrated correctly and are practised until this is achieved? We are suffering, as so many people have observed before, from overloading the requirement to grade and classify our students into neatly discretised performance boxes onto a system that ultimately seeks to identify whether these students have achieved the knowledge levels necessary to be deemed to have achieved the course objectives. Should we separate competency and performance completely? I have sketchy ideas as to how this might work but none that survive under the blow-torches of GPA requirements and resource constraints.
Obviously, continuous assessment (practicals, reports, quizzes and so on) throughout the semester provides a very valuable way to identify problems but this requires good, and thorough, course design and an awareness that this is your intent. Are we premature in treating the exam as a closing-off line on the course? Do we work on it the same way that we do any assignment: you get feedback, a mark and then more work to follow up? If we threw resourcing to the wind, could we have a 1-2 week intensive pre-semester program that specifically addressed those issues that students failed to grasp on their first pass? Congratulations, you got 80%, but that means that there's 20% of the course that we need to clarify? (Those who got 100% I'll pay to come back and tutor, because I like to keep cohorts together and I doubt I'll need to do that very often.)
There are no easy answers here and shooting down these situations is very much in the fish/barrel plane, I realise, but it is a very deeply felt form of frustration: I am seeing the most work that any student is likely to put in, yet I cannot now fix the problems that I see. All I can do is mark it in red ink with an annotation that the vast majority will never see (unless they receive the grade of 44, 49, 64, 74 or 84, which are all threshold-1 markers for us).
Ah well, I hope to have more time in 2013 so maybe I can mull on this some more and come up with something that is better but still workable.
Thinking about teaching spaces: if you’re a lecturer, shouldn’t you be lecturing?
Posted: December 30, 2012 Filed under: Education | Tags: blogging, collaboration, community, curriculum, design, education, educational problem, feedback, Generation Why, higher education, in the student's head, learning, measurement, principles of design, reflection, resources, student perspective, teaching, teaching approaches, thinking, tools, universal principles of design, vygotsky

I was reading a comment on a philosophical post the other day and someone wrote this rather snarky line:
He is a philosopher in the same way that (celebrity historian) is a historian – he's somehow got the job description and uses it to repeat the prejudices of his paymasters, flattering them into thinking that what they believe isn't, somehow, ludicrous. (Grangousier, Metafilter article 123174)
Rather harsh words in many respects and it's my alteration of the (celebrity historian)'s name, not his, as I feel that his comments are mildly unfair. However, the point is interesting, as a reflection upon the importance of job title in our society, especially when it comes to the weighted authority of your words. From January the 1st, I will be a senior lecturer at an Australian University and that is perceived differently where I am. If I am in the US, I reinterpret this title into their system, namely as a tenured Associate Professor, because that's the equivalent of what I am – the term 'lecturer' doesn't clearly translate without causing problems, not even dealing with the fact that more lecturers in Australia have PhDs, whereas many lecturers in the US do not. But this post isn't about how people necessarily see our job descriptions, it's very much about how we use them.
In many respects, the title ‘lecturer’ is rather confusing because it appears, like builder, nurse or pilot, to contain the verb of one’s practice. One of the big changes in education has been the steady acceptance of constructivism, where the learners have an active role in the construction of knowledge and we are facilitating learning, in many ways, to a greater extent than we are teaching. This does not mean that teachers shouldn’t teach, because this is far more generic than the binding of lecturers to lecturing, but it does challenge the mental image that pops up when we think about teaching.
If I asked you to visualise a classroom situation, what would you think of? What facilities are there? Where are the students? Where is the teacher? What resources are around the room, on the desks, on the walls? How big is it?
Take a minute to do just this and make some brief notes as to what was in there. Then come back here.
It’s okay, I’ll still be here!
False Dichotomy: If I don’t understand it, then either I am worthless or it is!
Posted: December 29, 2012 Filed under: Education | Tags: authenticity, collaboration, community, curriculum, education, educational research, ethics, higher education, reflection, resources, student perspective, teaching, teaching approaches, thinking, tools

I've been reading an interesting post on Metafilter about "Minima Moralia: Reflections from the Damaged Life" by Theodor Adorno. While the book itself is very interesting, two of the comments on the article caught my eye. An earlier commenter had mentioned that they neither understood nor appreciated this kind of thing, and made the usual throwaway remark about postmodernism being "a scam to funnel money from the productive classes to the parasitical academy" (dydecker). Further down, another commenter, Frowner, gently took this statement to task, starting by noting that Adorno would have been appalled by being labelled a post-modernist, and then discussing why dydecker might have felt the need to attack things in this way. It's very much worth reading Frowner's comments on this post, but I shall distil the first one here:
- Just because a text is difficult or obscure does not mean that it is postmodern. Also, post-modernist is not actually an insult and this may be a politically motivated stance to attack a group of people who are also likely to identify as critical of the status quo or (gasp) Marxist.
- Not all texts need to be accessible to all audiences, nor is something worthless, fake or elitist if it requires pre-readings or some effort to get into. Advanced physics texts can be very difficult for the layperson to comprehend. This does not make Quantum Field Theory wrong or a leftist conspiracy.
- You don’t need to read books that you don’t want to read.
- You don’t need to be angry at difficult books for being difficult. To exactly quote Frowner,
Difficult books only threaten us if we decide to feel guilty and ashamed for not reading them.
If you’re actually studying an area, and read the books that the work relies upon, difficult books can become much clearer, illustrating that it was perhaps not the book that was causing the difficulty.
- Sometimes you won’t like something and this has nothing to do with its quality or worth – you just don’t like it.
- Don’t picture a perfect reader in your head who understands everything and hold yourself to that standard. If you’re reading a hard book then keep plugging away and accept your humanity.
Frowner then goes on to beautifully summarise all of this in a later comment, where he notes that we seem to learn to be angry at, or uncomfortable with, difficult texts, because we are under pressure to be capable of understanding everything of worth. This is an argument of legitimacy: if the work is legitimate and I don't understand it, then I am stupid; however, if I can argue that the work is illegitimate, then this is a terrible con job, I am not stupid for not understanding it and we should attack this work! Frowner wonders about how we are prepared for the world and believes that we are encouraged to see ourselves as inadequate if we do not understand everything for ourselves, hence the forced separation of work into legitimate and illegitimate, with an immediate, and often vicious, attack on those things we define as illegitimate in order to protect our image of ourselves.
I spend a reasonable amount of time in art galleries and I wish I had a dollar for everyone who stood in front of a piece of modern art (anything from the neo-impressionists on, basically) and felt the need to loudly state that they "didn't get it" or that they could "have painted it themselves." (I like Rothko, Mondrian and Klee, among others, so I am often in that part of the gallery.) It is quite strange when you come to think about it – why on earth are people actually vocalising this? Looking more closely, it is (less surprisingly) people in groups of two or more who seem to do this: I don't understand this so, before you ask me about it, I will declare it to be without worth. I didn't get it, therefore this art has failed me. We go back to Frowner's list and look at point 2: Not all art (in this case) is for everyone and that's ok. I can admire Grant Wood's skill and his painting "American Gothic" but the painting doesn't appeal as much to me as does the work of Schiele, for example. That's ok, that doesn't make Schiele better than Wood in some Universal Absolute Fantasy League of Painters (although the Schiele/Klimt tag team wrestling duo, with their infamous Golden Coat Move, would be fun to watch) – it's a matter of preference. I regularly look at things that I don't quite understand but I don't regard it as a challenge or an indication that it or I are at fault, although I do see things that I understand completely and can quite happily identify reasons that I don't like them!

Klee’s “The Goldfish”. Some will see this as art, others will say “my kids could do that”. Unless you are Hans Wilhelm Klee, no, probably not.
I am, however, very lucky, because I have a job and lifestyle where my ability to think about things is a core component: falsely dichotomous thinking is not actually what I’m paid to do. However, I do have influence over students and I need to be very careful in how I present information to them. In my last course, I deliberately referred to Wikipedia among other documents because it is designed to be understood and is usually shaped by many hands until it reaches an acceptable standard of readability. I could have pointed my students at ethics texts but these texts often require more preparation and a different course structure, which may have put students off actually reading and understanding them. If my students go into ethics, or whatever other area they deem interesting, then point 4 becomes valid and their interest, and contextual framing, can turn what would have been a difficult book into a useful book.
I agree with this (effectively) anonymous poster and his or her summary of an ongoing issue: we make it hard for people to admit that they are learning, that they haven’t quite worked something out yet, because we make “not getting something immediately” a sign of slowness (informally), often with negative outcomes (in assessment or course and career progression). We do not have to be experts at everything, nor should we pretend to be. We risk not actually learning some important and beautiful things because we feel obliged to reject them before they reject us – and some things, of great worth that will be long appreciated, take longer to ‘get’ than just the minute or two that we feel we can allocate.
Adelaide Computing Education Conventicle 2012: “It’s all about the people”
Posted: December 27, 2012 Filed under: Education | Tags: acec2012, advocacy, authenticity, blogging, collaboration, community, conventicle, curriculum, design, education, educational problem, educational research, ethics, feedback, Generation Why, grand challenge, higher education, in the student's head, learning, principles of design, reconciliation, reflection, resources, student perspective, teaching, teaching approaches, thinking, universal principles of design 1 Comment
ACEC 2012 was designed to be a cross-University event (that’s the whole point of the conventicles: they bring together people from a region) and we had a paper from the University of South Australia: ‘”It’s all about the people”; building cultural competence in IT graduates’ by Andrew Duff, Kathy Darzanos and Mark Osborne. Andrew and Kathy came along to present and the paper was very well received, because it dealt with an important need and offered a solid solution to address that need, one that was inclusive, insightful and respectful.
For those who are not Australians, it is very important to remember that the original inhabitants of Australia have not fared very well since white settlement and that the apology for what happened under many white governments, up until very recently, was only given in the past decade. There is still a distance between the communities, and the overall process of bringing our communities together is referred to as reconciliation. Our University has a reconciliation statement and certain goals in terms of representation in our staff and student bodies that reflect percentages in the community, to reduce the under-representation of indigenous Australians and to offer them the same opportunities. There are many challenges facing Australia, and the health and social issues in our indigenous communities are often exacerbated by years of poverty and a range of other issues, but some of the communities have a strong vested interest in some large-scale technical, ICT and engineering solutions, areas where indigenous Australians are generally not students. Professor Lester Irabinna Rigney, the Dean of Aboriginal Education, identified the problem succinctly at a recent meeting: when your people live on land that is 0.7m above sea level, a 0.9m sea-level rise starts to become a concern, and he would really like students from his community to be involved in building the sea walls that address this, while we look for other solutions!
Andrew, Kathy and Mark’s aim was to share out the commitment to reconciliation across the student body, making this a whole-of-community participation rather than a heavy burden for a few, under the guiding statement that they wanted to be doing things with the indigenous community, rather than doing things to them. There’s always a risk of premature claiming of expertise, where instead of working with a group to find out what they want, you walk in and tell them what they need. For a whole range of very good and often heartbreaking reasons, the Australian indigenous communities are exceedingly wary when people start ordering them about. This was the first thing I liked about this approach: let’s not make the same mistakes again. The authors were looking for a way to embed cultural awareness and the process of reconciliation into the curriculum as part of an IT program, sharing it so that other people could do it and making it practical.
Their key tenets were:
- It’s all about the diverse people. They developed a program to introduce students to culture, to give them more than the single world view of the dominant culture and to introduce knowledge of the original Australians. It is worth noting that many Australians have no idea how to use certain terms or cultural items from indigenous culture, which of course hampers communication and interaction.
Students were required to put together an IT proposal, working with the indigenous community, that they would implement in the later years of their degree. Thus, it became part of the backbone of their entire program.
- Doing with [people], not to [people]. As discussed, there are many good reasons for this. Reduce the urge to be the expert and, instead, look at existing statements of rights and of how to work with other peoples, such as the UN Declaration on the Rights of Indigenous Peoples and the UniSA graduate attributes. This all comes together in the ICUP – Indigenous Content in Undergraduate Program.
How do we deal with information management in another culture? I’ve discussed before the (to many) quite alien idea that knowledge can reside with one person and that, until that person chooses or needs to hand on that knowledge, that is the person you need. Now, instead of demanding knowledge and conformity to some documentary standard, you have to work with people: talking rather than imposing, getting the client’s genuine understanding of the project and of their need – how does the client feel about this?
Not only were students working with indigenous people in developing their IT projects, they were learning how to work with other peoples, not just other people, and were required to come up with technologically appropriate solutions that met the client need. Not everyone has infinite power and 4G LTE to run their systems, nor can everyone stump up the cash to buy an iPhone or download apps. Much as programming embedded systems shakes students out of the ‘infinite memory, disk and power’ illusion, working with other communities in Australia shakes them out of a single worldview and away from the often disrespectful way that we deal with each other. The core here is thinking about different communities and the fact that different people have different requirements. Sometimes you have to wait to speak to the right person, rather than the available person.
The online forum poses four questions that students have to work through, overseen by an indigenous tutor. The four questions are:
- What does culture mean to you?
- Post a cultural artefact that describes your culture?
- I came here to study Computer Science – not Aboriginal Australians?
- What are some of the differences between Aboriginal and non-Aboriginal Australians?
The first two are amazing questions – what is your answer to question 2? The second pair of questions are more challenging and illustrate the bold, head-on nature of this participative approach to reconciliation. Reconciliation between all of the Australian communities requires everyone to be involved and, being honest, questions 3 and 4 are going to open up some wounds and drag some silly thinking out into the open but, most importantly, they allow us to talk through issues of concern and confusion.
I suspect that many people can’t really answer question 4 without referring back to mid-50s archetypal depictions of Australian Aborigines standing on one leg, looking out over cliffs, and there’s an excellent ACMI (Australian Centre for the Moving Image) exhibit in Melbourne that discusses this cultural misappropriation and stereotyping. One of the things that resonated with me is that asking these questions forces people to think about these things, rather than repeating old mind grooves and received nonsense overheard in pubs, seen on TV and heard in racist jokes.
I was delighted that this paper was able to be presented, not least because the goal of the team is to share this approach in the hope of achieving even greater strides in the reconciliation process. I hope to be able to bring some of it to my Uni over the next couple of years.

