Time to Work and Time to Play

[Image: Now Print, Black, Linocut, © Nick Falkner, 2013]

I do a lot of grounded theory research into student behaviour patterns. It’s a bit Indiana Jones in a rather dry way: hear a rumour of a giant cache of data, hack your way through impenetrable obfuscation and poor data alignment to find the jewel at the centre, hack your way out and try to get it to the community before you get killed by snakes, thrown into a propeller or eaten. (Perhaps the analogy isn’t perfect but how recently have you been through a research quality exercise?) Our students are all pretty similar, from the metrics I have, and I’ve gone on at length about this in other posts: hyperbolic time-discounting and so on. Embarrassingly recently, however, I was introduced to the notion of instrumentality: the capability to see that achieving a task now will reduce the difficulty of completing a goal later. If we can’t see how important this is to getting students to do something, maybe it’s time to have a good sit-down and a think! Husman et al. identify three associated but distinguishable aspects of a student’s appreciation of a task: how much they rate its value, their intrinsic level of motivation, and their appreciation of its instrumentality. From this study, we have a basis for the confusing and often paradoxical presentation of a student who is intelligent and highly motivated – but just not for the task we’ve given them, despite apparently and genuinely being aware of the value of the task. Without the ability to link this task to future goal success, the exponentially approaching deadline horizon can cause a student to artificially inflate the value of something of less final worth, because the actually important goal is out of sight. But rob a student of motivation and we have to put everything into a high-stakes framework, fixed heavily in the present or the almost immediate future, often resorting to extrinsic motivating factors (bribes/threats) to impose value. This may be why everyone who uses a punishment/reward scheme achieves compliance but then has to keep using the mechanism to keep values artificially high. Have we stumbled across an Economy of Pedagogy? I hope not, because I can barely understand basic economics. But we can start to illustrate why the student has to be intrinsically connected to the task and the goal framework – without it, it’s carrot/stick time and, once we do that, it’s always carrot/stick time.

Like almost every teacher I know, I find that all of my students are experts at something, but mining that can be tricky. What quickly becomes apparent, and as McGonigal reflected in “Reality is Broken”, is that people will put far more effort into an activity that they see as play than one which they see as work. I, for example, have taken up linocut printing and, for no good reason at all, have invested days into a painstaking activity where it can take four hours to achieve even a simple outcome of reasonable quality – and it will be years before I’m good at it. Yet the time I spend at the printing studio on Saturdays is joyful, recharging and, above all, playful. If I spent six hours marking assignments, writing a single number out of 10 and restricting my comments to good/bad/try harder, I would feel spent and I would dread starting, putting it off as long as possible. Making prints, I consumed about six hours of effort to scan, photoshop, trim, print, reverse, apply over carbon paper, trace, cut out of lino and then hand-print about four pieces of paper on the press – and I felt like a new man. No real surprises here. In both cases, I am highly motivated. One task has great value to my students and me because it provides useful feedback. The artistic task has value to me because I am exploring new forms of art and artistic thinking, which I find rewarding.

But what of the instrumentality? In the case of the marking, it has to be done at a time when students can get the feedback while they can still use it and, given we have a follow-up activity of the same type for more marks, they need to get that sooner rather than later. If I leave it all until the end of the semester, it makes my students’ lives harder and mine, too, because I can’t do everything at once and every single ‘when is it coming’ query consumes more time. In the case of the art, I have no deadline but I do have a goal – a triptych work to put on the wall in August. Every print I make makes this final production easier. The production of the lino master? Intricate, close work using sharp objects, and it can take hours to get a good result. It should be dull and repetitive but it’s not – but ask me to cut out 10 of the same thing, or very, very similar things, and I think it would be, very quickly. So, even something that I really enjoy becomes mundane when we mess with the task enough or get to the point, in this case, where we start to say “Well, why can’t a machine do this?” Rephrasing this, we get the instrumentality focus back again: “What do I gain in the future from doing this ten times if I will only do this ten times once?” And this is a valid question for our students, too. Why should they write “Hello, World”? It has most definitely and definitively been written. It’s passed on. It is novel no more. Bereft of novelty, it rests on its laurels. If we didn’t force students to write it, there is no way that this particular phrase, which we ‘owe’ to Brian Kernighan, would be introducing anyone to anything that could not have a modicum of creativity added to it by a manual that said “Please type a sentence into this point in the program and it will display it back to you.” It is an ex-program.

I love lecturing. I love giving tutorials. I will happily provide feedback in pracs. Why don’t I like marking? It’s easy to say “Well, it’s dull and repetitive” but, if I wouldn’t ask a student to undertake a task like that, why am I doing it? Look, I’m not suggesting that all marking is like this but, certainly, the manual marking of particular aspects of software does tend to be dull.

Unless, of course, you start enjoying it, and we can do that if we have enough freedom and flexibility to explore playful aspects. When I marked a big group of student assignments recently, I tried to write something new for each student and, while this doesn’t always succeed for small artefacts with limited variability, I did manage to compliment a student on their Spanish variable names, provide personalised feedback to some students who had excelled and, generally, turn a 10 mark program into a place where I thought about each student personally and then (more often than not) said something unique. Yes, sometimes the same errors cropped up and the copy/paste was handy – but by engaging with the task and thinking about how much my future interactions with the students would be helped with a little investment now, the task was still a slog, but I came out of it quite pleased with the overall achievement. The task became more enjoyable because I had more flexibility, but also because I was required to be there as part of the process: I was necessary. It became possible to be (professionally and carefully) playful – which is often how I approach teaching.

Any of you who are required to use standardised tests with manual marking: you already know how desperately dull the grading is – a grinding, rubric-bound, tick/flick scenario that does nothing except consume work. It’s valuable because it’s required and money is money. Motivating? No. Any instrumentality? No, unless giving the test raises the students to the point where you get improved circumstances (personal/school) or you reduce the amount of testing required for some reason. It is, sadly, just as dull for your students to undertake these tests, in this scenario, because they will know how they are marked and they are not going to trigger any of Husman’s three associated but distinguishable variables.

I am never saying that everything has to be fun or easy, because I doubt many areas would be able to convey enough knowledge under these strictures, but providing tasks that have room to encourage motivation, develop a personal sense of task value, and that allow students to play, potentially bringing in some of their own natural enthusiasm for other areas or channelling it here, solves two thirds of the problem in getting students involved. Intentionally grounding learning in play and carefully designing materials to make this work can make things better. It also makes it easier for staff. Right now, as we handle the assignment work of the course I’m currently teaching, other discussions on the student forums include the History of Computing, Hofstede’s Cultural Dimensions, the significance of certain questions in the practical, complexity theory, and we have only just stopped the spontaneous student comparison of performance at a simple genetic algorithms practical. My students are exploring, they are playing in the space of the discipline and, by doing so, are moving more deeply into a knowledge of the taxonomy and lexicon within this space. I am moving from Lion Tamer to Ringmaster, which is the logical step to take, as what I want is citizens who are participating because they can see value, have some level of motivation and are forming their instrumentality. If learning and exploration are fun now, then going further may lead to fun later – the future fun goal is enhanced by achieving tasks now. I’m not sure if this is necessarily the correct first demonstration of instrumentality, but it is a useful one!

However, it requires time on both sides: the staff member needs time to construct and moderate such an environment, especially if you’re encouraging playful exploration of areas on public discussion forums, and the student must have enough time to think about things, make plans and then try again if they don’t pick it all up on the first go. Under strict and tight deadlines, we know that creativity can be impaired when we enforce the deadlines the wrong way, and we reduce the possibility of time for exploration and play – for students and staff.

Playing is serious business and our lives are better when we do more of it – the first enabling act of good play is scheduling that first play date and seeing how it goes. I’ve certainly found it to be helpful, to me and to my students.


SIGCSE 2013: Special Session on Designing and Supporting Collaborative Learning Activities

Katrina and I delivered a special session on collaborative learning activities, focused on undergraduates because that’s our area of expertise. You can read the outline document here. We worked together on the underlying classroom activities and have both implemented these techniques but, in this session, Katrina did most of the presenting and I presented the collaborative assessment task examples, with some facilitation.

The trick here is, of course, to find examples that are effective both as teaching tools and as examples. The approach I chose was to remind everyone in the room of the most important aspects of making this work with students, and I did this by deliberately starting with a bad example. This can be a difficult road to walk because, when presenting a bad example, you need to convince everyone that your choice was deliberate and that you didn’t actually just stuff things up.

My approach was fairly simple. I broke people into groups, based on where they were currently sitting, and then went immediately into the question, which had been tailored for the crowd and for my purposes:

“I want you to talk about the 10 things that you’re going to do in the next 5 years to make progress in your career and improve your job performance.”

And why not? Everyone in the room was interested in education and, most likely, had a job at a time when it’s highly competitive and hard to find or retain work – so everyone has probably thought about this. It’s a fair question for this crowd.

Well, it would be, if it wasn’t so anxiety-inducing. Katrina and I both observed a sea of frozen faces as we asked a question that put a large number of participants on the spot. And the reason I did this was to remind everyone that anxiety impairs genuine participation and willingness to engage. There were a large number of frozen grins with darting eyes, some nervous mumbles and a whole lot of purposeless noise, with the few people who were actually primed to answer that question starting to lead off.

I then stopped the discussion immediately. “What was wrong with that?” I asked the group.

Well, where do we start? Firstly, it’s an individual activity, not a collaborative activity – there’s no incentive or requirement for discussion, groupwork or anything like that. Secondly, while we might expect people to be able to answer this, it is a highly charged and personal area, and you may not feel comfortable discussing your five-year plan with people that you don’t know. Thirdly, some people know that they should be able to answer this (or at least some supervisors will expect that they can) but they have no real answer, and their anxiety will not only limit their participation but will probably stop them from listening at all while they sweat their turn. Finally, there is no point to this activity – why are we doing this? What are we producing? What is the end point?

My approach to collaborative activity is pretty simple and you can read any amount of Perry, Dickinson, Hamer et al. (and now us as well) to look at relevant areas and Contributing Student Pedagogy, where students have a reason to collaborate and we manage their developmental maturity and their roles in the activity to get them really engaged. Everyone can have difficulties with authority and with recognising whether someone is making enough of a contribution to a discussion to be worth their time – this is not limited to students. People, therefore, have to believe that the group they are in is of some benefit to them.

So we stepped back. I asked everyone to introduce themselves, say where they came from and give a fact about their current home that people might not know. A simple task, everyone can do it, and the purpose was to tell your group something interesting about your home – a clear purpose, as well. This activity launched immediately and was going so well that, when I tried to move it on because the sound levels were dropping (generally a good sign that we’re reaching a transition), some groups asked if they could keep going as they weren’t quite finished. (Monitoring groups spread over a large space can be tricky but, where the activity is working, people will happily let you know when they need more time.) I had been able to stop the first activity completely and nobody wanted it to continue. The second one, where people felt that they could participate and wanted to say something, needed to keep going.

Having now put some faces to names, we then moved to a simple exercise of sharing an interesting teaching approach that you’d tried recently or seen at the conference, and it’s important to note the different comfort levels we can accommodate with this – we are sharing knowledge but we give participants the opportunity to share something of themselves or something that interests them, without the burden of ownership. Everyone had already discovered that everyone else in the group had some area of knowledge, albeit small, that taught them something new. We had started to build a group where participants valued each other’s contribution.

I carried out some roaming facilitation where I said very little, unless it was needed. I sat down with some groups, said ‘hi’ and then just sat back while they talked. I occasionally gave a nod or some attentive feedback to people who looked like they wanted to speak, and this often cued them into the discussion. Facilitation doesn’t have to be intrusive and I’m a much bigger fan of inclusiveness, where everyone gets a turn but we do it through non-verbal encouragement (where that’s possible; different techniques are required in a mixed-ability group) to stay out of the main corridor of communication and reduce confrontation. However, by setting up the requirement that everyone share, and by providing a task that everyone could participate in, my need to prod was greatly reduced and the groups mostly ran themselves, with the roles shifting around as different people made different points.

We covered a lot of the underlying theory in the talk itself, to discuss why people have difficulty accepting other views, to clarify why role management is a critical part of giving people a reason to get involved and something to do in the conversation. The notion that a valid discursive role is that of the supporter, to reinforce ideas from the proposer, allows someone to develop their confidence and critically assess the idea, without the burden of having to provide a complex criticism straight away.

At the end, I asked for a show of hands. Who had met someone new? Everyone. Who had found out something they didn’t know about other places? Everyone. Who had learned about a new teaching technique that they hadn’t known before? Everyone.

My one regret is that we didn’t do this sooner because the conversation was obviously continuing for some groups and our session was, sadly, on the last day. I don’t pretend to be the best at this but I can assure you that any capability I have in this kind of activity comes from understanding the theory, putting it into practice, trying it, trying it again, and reflecting on what did and didn’t work.

I sometimes come out of a lecture or a collaborative activity and I’m really not happy. It didn’t gel or I didn’t quite get the group going as I wanted it to – but this is where you have to be gentle on yourself because, if you’re planning to succeed and reflecting on the problems, then steady improvement is completely possible and you can get more comfortable with passing your room control over to the groups, while you move to the facilitation role. The more you do it, the more you realise that training your students in role fluidity also assists them in understanding when you have to be in control of the room. I regularly pass control back and forward and it took me a long time to really feel that I wasn’t losing my grip. It’s a practice thing.

It was a lot of fun to give the session and we spent some time crafting the ‘bad example’, but let me summarise what the good activities should really look like. They must be collaborative, inclusive, achievable and obviously beneficial. Like all good guidelines there are times and places where you would change this set of characteristics, but you have to know your group well to know what challenges they can tolerate. If your students are more mature, then you push out into open-ended tasks which are far harder to make progress in – but this would be completely inappropriate for first years. Even in later years, being able to make some progress is more likely to keep the group going than a brick wall that stops you at step 1. But, let’s face it, your students need to know that working in that group is not only not to their detriment, but it’s beneficial. And the more you do this, the better their groupwork and collaboration will get – and that’s a big overall positive for the graduates of the future.

To everyone who attended the session, thank you for the generosity and enthusiasm of your participation; I’m catching up on my business cards over the next few weeks. If I promised you an e-mail, it will be coming shortly.


Grace.

A friend sent me a link to this excellent piece on the importance of grace, in terms of your own appreciation of yourself and in your role as a teacher. Thank you, A! Here is the link:

The Lesson of Grace in Teaching

“…to hear from my own professor, whom I really love and admire, at a time when I felt ashamed of my intelligence and thus unworthy of his friendship, that I wasn’t just a student in a seat, not just a letter grade or a number on my transcript, but a valuable person who he wants to know on a personal level, was perhaps the most incredible moment of my college career.”

 


Expressiveness and Ambiguity: Learning to Program Can Be Unnecessarily Hard

One of the most important things to be able to do in any profession is to think as a professional. This is certainly true of Computer Science, because we have to spend so much time thinking, as a Computer Scientist would, about how the machine will interpret our instructions. For those who don’t program, a brief quiz. What is the value of the next statement?

What is 3/4?

No doubt, you answered something like 0.75 or maybe 75% or possibly even “three quarters”? (And some of you would have said “but this statement has no intrinsic value” and my heartiest congratulations to you. Now go off and contemplate the Universe while the rest of us toil along on the material plane.) And, not being programmers, you would give me the same answer if I wrote:

What is 3.0/4.0?

Depending on the programming language we use, you can actually get two completely different answers to this apparently simple question. 3/4 is often interpreted by the computer to mean “What is the result if I carry out integer division, where I will only tell you how many times the denominator will go into the numerator as a whole number, for 3 and 4?” The answer will not be the expected 0.75, it will be 0, because 4 does not go into 3 – it’s too big. So, again depending on programming language, it is completely possible to ask the computer “is 3/4 equivalent to 3.0/4.0?” and get the answer ‘No’.

This is something that we have to highlight to students when we are teaching programming, because very few people use integer division when they divide one thing by another – they automatically start using decimal points. Now, in this case, the different behaviour of the ‘/’ is actually exceedingly well-defined and is not at all ambiguous to the computer or to the seasoned programmer. It is, however, nowhere near as clear to the novice or casual observer.
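To make the distinction concrete, here is a minimal C++ sketch of my own (C and Java behave the same way for this example):

#include <iostream>

int main() {
    std::cout << 3 / 4 << '\n';                 // integer division: prints 0
    std::cout << 3.0 / 4.0 << '\n';             // floating-point division: prints 0.75
    std::cout << (3 / 4 == 3.0 / 4.0) << '\n';  // prints 0 (false), because 0 is not 0.75
    return 0;
}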

I am currently reading Stephen Ramsay’s excellent “Reading Machines: Towards an Algorithmic Criticism” and it is taking me a very long time to read an 80-page book. Why? Because, to avoid ambiguity and to be as expressive and precise as possible, he has used a number of words and concepts with which I am unfamiliar or that I have not seen before. I am currently reading his book with a web browser and a dictionary because I do not have a background in literary criticism but, once I have the building blocks, I can understand his argument. In other words, I am having to learn a new language in order to read a book from that new language community. However, rather than being irked that “/” changes meaning depending on the company it keeps, I am happy to learn the new terms and concepts in the space that Ramsay describes, because it is adding to my ability to express key concepts, without introducing ambiguous shadings of language over things that I already know. Ramsay is not, for example, telling me that “book” no longer means “book” when you place it inside parentheses. (It is worth noting that Ramsay discusses the use of constraint as a creative enhancer, à la Oulipo, early on in the book and this is a theme for another post.)

The usual insult at this point is to trot out the accusation of jargon, which is as often a statement that “I can’t be bothered learning this” as it is a genuine complaint about impenetrable prose. In this case, the offender in my opinion is the person who decided to provide an invisible overloading of the “/” operator to mean both “division” and “integer division”, as they have required us to be aware of a change in meaning that is not accompanied by a change in syntax. While this isn’t usually a problem – spoken and written languages are full of these things, after all – in the computing world it forces the programmer to remember that “/” doesn’t always mean “/” and then to get it the right way around. (A number of languages solve this problem by providing a distinct operator – this, however, then adds to linguistic complexity and, rather than learning two meanings, you have to learn two ‘words’. Ah, no free lunch.) We have no tone or colour in mainstream programming languages, for a whole range of good computer grammar reasons, but the absence of the rising tone or raised eyebrow is sorely felt when we encounter something that means two different things. The net result is that we tend to use the same constructs to do the same thing because we have severe limitations upon our expressivity. That’s why there are boilerplate programmers, who can stitch together a solution from things they have already seen, and people who have learned how to be as expressive as possible, despite most of these restrictions. Regrettably, expressive and innovative code can often be unreadable by other people because of the gymnastics required to reach these heights of expressiveness, which is often at odds with what the language designers assumed someone might do.

We have spent a great deal of effort making computers better at handling abstract representations, things that stand in for other (real) things. I can use a name instead of a number and the computer will keep track of it for me. It’s important to note that writing int i=0; is infinitely preferable to typing “0000000000000000000000000000000000000000000000000000000000000000” into the correct memory location and then keeping that (rather large) address written on a scrap of paper. Abstraction is one of the fundamental tools of modern programming, yet we greatly limit expressiveness in sometimes artificial ways to reduce ambiguity when, really, the ambiguity does seem a little artificial.

One of the nastiest potential ambiguities that shows up a lot is “what do we mean by ‘equals’?”. As above, we already know that many languages would not tell you that “3/4 equals 3.0/4.0” because both mathematical operations would be executed and 0 is not the same as 0.75. However, the equivalence operator is often used to ask many different questions: “Do these two things contain the same thing?”, “Are these two things considered to be the same according to the programmer?” and “Are these two things actually the same thing, stored in the same place in memory?”

Generally, however, to all of these questions we return a simple “True” or “False”, which in reality reflects neither the truth nor the falsity of the situation. In the first case, what we are asking is “Are the contents of these the same?”, to which the answer is really “Same” or “Different”. In the second, we are asking if the programmer considers them to be the same, in which case the answer is really “Yes” or “No”, because they could actually be different, yet not so different that the programmer needs to make a big deal about it. Finally, when we ask if two references to an object actually point to the same thing, we are asking whether they are in the same location or not.
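A small C++ sketch of two of these questions – content versus identity – with the caveat that, in C++, the “does the programmer consider them the same?” question is answered by whatever the type’s == operator has been defined to mean:

#include <iostream>
#include <string>

int main() {
    std::string a = "hello";
    std::string b = "hello";
    std::string& c = a;              // c is just another name for a

    // "Do these contain the same thing?" – value equality
    std::cout << (a == b) << '\n';   // 1: the characters match

    // "Are these actually the same thing in memory?" – identity
    std::cout << (&a == &b) << '\n'; // 0: two distinct objects
    std::cout << (&a == &c) << '\n'; // 1: same object, two names
    return 0;
}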

There are many languages that use truth values – some of them do it far better than others – but unless we are speaking and writing in logical terms, the apparent precision of the True/False dichotomy is inherently deceptive and, once again, it is only as precise as it has been programmed to be and then interpreted, based on the knowledge of programmer and reader. (The programming language Haskell has an intrinsic ability to say that things are “undefined” and to then continue working on the problem, which is an obvious, and welcome, exception here, yet this is not a widespread approach.) It is an inherent limitation on our ability to express what is really happening in the system when we artificially constrain ourselves in order to (apparently) reduce ambiguity. It seems to me that we have reduced programmatic ambiguity, but we have not necessarily addressed the real or philosophical ambiguity inherent in many of these programs.

More holiday musings on the “Python way” and why this is actually an unreasonable demand, rather than a positive feature, shortly.


The Limits of Expressiveness: If Compilers Are Smart, Why Are We Doing the Work?

I am currently on holiday, which is “Nick shorthand” for catching up on my reading, painting and cat time. Recently, my interests in my own discipline have widened and I am precariously close to that terrible state that academics sometimes reach when they suddenly start uttering words like “interdisciplinary” or “big tent approach”. Quite often, around this time, the professoriate will look at each other, nod, and send for the nice people with the butterfly nets. Before they arrive and cart me away, I thought I’d share some of the reading and thinking I’ve been doing lately.

My reading is a little eclectic, right now. Next to Hooky’s account of the band “Joy Division” sits Dennis Wheatley’s “They Used Dark Forces” and next to that are four other books, which are a little more academic: “Reading Machines: Towards an Algorithmic Criticism” by Stephen Ramsay; “Debates in the Digital Humanities” edited by Matthew Gold; “10 PRINT CHR$(205.5+RND(1)); : GOTO 10” by Montfort et al.; and “‘Pataphysics: A Useless Guide” by Andrew Hugill. All of these are fascinating books and, right now, I am thinking through all of them in order to place a new glass over some of my assumptions from within my own discipline.

“10 PRINT CHR$…” is an account of a simple line of code from Commodore 64 BASIC, which draws diagonal mazes on the screen. In exploring it, the authors explore fundamental aspects of computing and, in particular, creative computing and how programs exist in culture. Everything in the line says something about programming back when the C-64 was popular, from the use of line numbers (required because you had to establish an execution order without necessarily being able to arrange elements in one document) to the use of the $ after CHR, which tells both the programmer and the machine that what results from this operation is a string, rather than a number. In many ways, this is a book about my own journey through Computer Science, growing up with BASIC programming and accepting its conventions as the norm, only to have new and strange conventions pop out at me once I started using other programming languages.

Rather than discuss the other books in detail, although I recommend all of them, I wanted to talk about specific aspects of expressiveness and comprehension because, if there is one thing I am thinking after all of this reading, it is “why aren’t we doing this better?” The line “10 PRINT CHR$…” is effectively incomprehensible to the casual reader, yet if I wrote something like this:

do this forever
pick one of “/” or “\” and display it on the screen

then anyone who spoke English (which used to be a larger number than those who could read programming languages but, honestly, today I’m not sure about that) could understand what was going to happen and, not only could they understand it, they could create something themselves without having to work out how to make it happen. You can see language like this in languages such as Scratch, which is intended to teach programming by providing an easier bridge between standard language and programming, using pre-constructed blocks and far more approachable terms. Why is it so important to create? One of the debates raging in Digital Humanities at the moment, at least according to my reading, is “who is in” and “who is out” – what does it take to make one a digital humanist? While this used to involve “being a programmer”, it is now considered reasonable to “create something”. For anyone who is notionally a programmer, the two are indivisible. Programs are how we create things and programming languages are the form that we use to communicate with the machines, to solve the problems that we need solved.
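For comparison, a rough C++ rendering of the same idea (my own sketch, not anything from the book) sits somewhere between the BASIC original and the English version – more readable than the former, still noisier than the latter:

#include <cstdlib>
#include <iostream>

int main() {
    // do this forever (stop it with Ctrl-C)
    while (true) {
        // pick one of "/" or "\" and display it on the screen
        std::cout << (std::rand() % 2 ? '/' : '\\');
    }
}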

When we first started writing programs, we instructed the machines in simple arithmetic sequences that matched the bit patterns required to ensure that certain memory locations were processed in a certain way. We then provided a human-readable shorthand, assembly language, where mnemonics replaced numbers, to make it easier for humans to write code without error. “20” became “JSR” in 6502 assembly code, for example, yet “JSR” is as impenetrably occulted as “20” unless you learn a language that is not actually a language but a compressed form of acronym. Roll on some more years and we have added pseudo-English over the top: GOSUB in BASIC and the use of parentheses to indicate function calls in other languages.

However, all I actually wanted to do was to make the same thing happen again, maybe with some minor changes to what it was working on. Think of a sub-routine (method, procedure or function, if we’re being relaxed in our terminology) and you may as well think of a washing machine. It takes in something and combines it with a determined process, a machine setting, powders and liquids to give you the result you wanted, in this case taking in dirty clothes and giving back clean ones. The execution of a sub-routine is identical to this, but can you see the predictable familiarity of the washing machine in JSR FE FF?
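As a toy illustration (the names here are mine, invented purely for the example), the washing-machine view of a sub-routine in a modern language is at least recognisable as a machine that takes something in and hands something back, in a way that JSR FE FF is not:

#include <iostream>
#include <string>

// The washing machine: something goes in, a fixed process is applied,
// and the transformed thing comes back out.
std::string wash(const std::string& dirtyClothes) {
    return "clean " + dirtyClothes;   // stand-in for the determined process
}

int main() {
    std::cout << wash("socks") << '\n';   // calling the sub-routine by name
    return 0;
}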

If you are familiar with ‘Pataphysics, or even “Ubu Roi”, the most well-known of Jarry’s works, you may be aware of the pataphysician’s fascination with the spiral – le Grand Gidouille. The spiral, once drawn, defines not only itself but another spiral in the negative space that it contains. The spiral is also a natural way to think about programming because a very well-used programming language construct, the for loop, often either counts up to a value or counts down. It is not uncommon for this kind of counting loop to allow us to advance from one character to the next in a text of some sort. When we define a loop as a spiral, we clearly state what it is and what it is not – it is not retreading old ground, although it may always spiral out towards infinity.

However, for maximum confusion, the for loop may iterate a fixed number of times but never use the changing value that is driving it – it is no longer a spiral in terms of its effect on its contents. We can even write a for loop that goes around in a circle indefinitely, executing the code within it until it is interrupted. Yet, we use the same keyword for all of these.

In English, the word “get” is incredibly overused. There are very few situations where another verb couldn’t add more meaning, even in terms of shade. Using “get” forces us, quite frequently, to do more hard work to achieve comprehension. Using the same words for many different types of loop pushes the same load back on to us.

What happens is that when we write our loop, we are required to do the thinking as to how we want the loop to work – although Scratch provides a forever, very few other languages provide anything like it. To loop endlessly in C, we would use while (true) or for (;;), but to tell the difference between a loop that is functioning as a spiral and one that is merely counting, we have to read the body of the loop to see what is going on. If you aren’t a programmer, does for(;;) give you any inkling at all as to what is going on? Some might think “Aha, but programming is for programmers” and I would respond with “Aha, yes, but becoming a programmer requires a great deal of learning and why don’t we make it simpler?” To which the obvious riposte is “But we have special languages which will do all that!” and I then strike back with “Well, if that is such a good feature, why isn’t it in all languages, given how good modern language compilers are?” (A compiler is a program that turns programming languages into something that computers can execute – English words to byte patterns, effectively.)
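The three shapes look like this in C++ – the keyword is the same each time, and only the body tells you whether you are looking at a spiral, a bare counter or a circle (a minimal sketch of my own):

#include <cstddef>
#include <iostream>
#include <string>

int main() {
    std::string text = "spiral";

    // A spiral: the counter drives us forward through the text.
    for (std::size_t i = 0; i < text.size(); ++i) {
        std::cout << text[i];
    }

    // A counter that is never used inside the loop: ten identical turns.
    for (int i = 0; i < 10; ++i) {
        std::cout << '.';
    }

    // A circle: it goes around until something inside interrupts it.
    for (;;) {          // equivalently, while (true)
        break;          // interrupted immediately so the example terminates
    }
    return 0;
}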

In thinking about language origins, and what we are capable of with modern compilers, we have to accept that a lot of the heavy lifting in programming is already being done by modern, optimising compilers. Years ago, the compiler would just turn your instructions into a form that machines could execute – with no improvement. These days, put something daft in (like a loop that does nothing for a million iterations) and the compiler will quietly edit it out. The compiler will worry about optimising your storage of information and, sometimes, even help you to reduce wasted use of memory (no, Java, I’m most definitely not looking at you).

So why is it that C++ doesn’t have a forever, a do 10 times, or a spiral to 10 equivalent in there? The answer is complex but is, most likely, a combination of standards issues (changing a language standard is relatively difficult and requires a lot of effort), the fact that other languages already do things like this, the burden of increasing compiler complexity to handle synonyms like this (although this need not be too arduous) and, finally, the fact that I doubt many people would see a need for it.
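To be fair, the language does let you bolt the word on yourself – a trivial, and to many C++ programmers mildly heretical, sketch of my own:

#include <iostream>

// Fake the missing vocabulary with the preprocessor; the name is mine,
// not part of any standard.
#define forever for (;;)

int main() {
    int turns = 0;
    forever {
        if (++turns == 3) break;   // escape hatch so the example terminates
    }
    std::cout << turns << '\n';    // prints 3
    return 0;
}

Which rather makes the point: the capability is there, but nothing in the language invites us to be this expressive.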

In reading all of these books, and I’ll write more on this shortly, I am becoming increasingly aware that I tolerate a great deal of limitation in my ability to solve problems using programming languages. I put up with having my expressiveness reduced, with taking care of some unnecessary heavy lifting in making things clear to the compiler, and I occasionally even allow the programming language to dictate how I write the words on the page itself – not just syntax and semantics (which are at least understandable, socially and technically) but the use of blank lines, white space and ends of lines.

How are we expected to be truly creative if conformity and constraint are the underpinnings of programming? Tomorrow, I shall write on the use of constraint as a means of encouraging creativity and why I feel that what we see in programming is actually limitation, rather than a useful constraint.


Thanks for the exam – now I can’t help you.

I have just finished marking a pile of examinations from a course that I co-taught recently. I haven’t finalised the marks but, overall, I’m not unhappy with the majority of the results. Interestingly, and not overly surprisingly, one of the best answered sections of the exam was based on a challenging essay question I set as an assignment. The question spans many aspects of the course and requires the student to think about their answer and link the knowledge – which most did very well. As I said, not a surprise but a good reinforcement that you don’t have to drill students in what to say in the exam, but covering the requisite knowledge and practising the right skills is often helpful.

However, I don’t much like marking exams and it doesn’t come down to the time involved, the generally dull nature of the task or the repetitive strain injury from wielding a red pen in anger; it comes down to the fact that, most of the time, I am marking the student’s work at a time when I can no longer help him or her. Like most exams at my Uni, this was the terminal examination for the course, worth a substantial amount of the final marks, and was taken some weeks after teaching finished. So what this means is that any areas I identify for a given student cannot now be corrected, unless the student chooses to read my notes in the exam paper or come to see me. (Given that this campus is international, that’s trickier but not impossible thanks to the Wonders of Skypenology.) It took me a long time to work out exactly why I didn’t like marking but, when I did, the answer was obvious.

I was frustrated that I couldn’t actually do my job at one of the most important points: when lack of comprehension is clearly identified. If I ask someone a question in the classroom, on-line or wherever, and they give me an answer that’s not quite right, or right off base, then we can talk about it and I can correct the misunderstanding. My job, after all, is not actually passing or failing students – it’s about knowledge, the conveyance, construction and quality management thereof. My frustration during exam marking increases with every incomplete or incorrect answer I read, which illustrates that there is a section of the course that someone didn’t get. I get up in the morning with the clear intention of being helpful towards students and, when it really matters, all I can do is mark up bits of paper in red ink.

Quickly, Jones! Construct a valid knowledge framework! You’re in a group environment! Vygotsky, man, Vygotsky!

A student who, despite my sweeping, and seeping, liquid red ink of doom, manages to get a 50 Passing grade will not do the course again – yet this mark pretty clearly indicates that roughly half of the comprehension or participation required was not carried out to the required standard. Miraculously, it doesn’t matter which half of the course the student ‘gets’, they are still deemed to have attained the knowledge. (An interesting point to ponder, especially when you consider that my colleagues in Medicine define a Pass at a much higher level and in far more complicated ways than a numerical 50%, to my eternal peace of mind when I visit a doctor!) Yet their exam will still probably have caused me at least some gnashing of teeth because of points missed, pointless misstatement of the question text, obscure song lyrics, apologies for lack of preparation and the occasional actual fact that has peregrinated from the place where it could have attained marks to a place where it will be left out in the desert to die, bereft of the life-giving context that would save it from such an awful fate.

Should we move the exams earlier and then use the results to guide follow-up assessment, so that we can develop knowledge in the areas in most need of improvement? Should we abandon exams entirely and move to a continuous-assessment, competency-based system, where there are skills and knowledge that must be demonstrated correctly and are practised until this is achieved? We are suffering, as so many people have observed before, from overloading the requirement to grade and classify our students into neatly discretised performance boxes onto a system that ultimately seeks to identify whether these students have achieved the knowledge levels necessary to be deemed to have met the course objectives. Should we separate competency and performance completely? I have sketchy ideas as to how this might work but none that survive under the blow-torches of GPA requirements and resource constraints.

Obviously, continuous assessment (practicals, reports, quizzes and so on) throughout the semester provides a very valuable way to identify problems, but this requires good, and thorough, course design and an awareness that this is your intent. Are we premature in treating the exam as a closing-off line on the course? Should we work on it the same way that we do any assignment – you get feedback, a mark and then more work to follow up? If we threw resourcing to the wind, could we have a 1-2 week intensive pre-semester program that specifically addressed those issues that students failed to grasp on their first pass? Congratulations, you got 80%, but that means that there’s 20% of the course that we need to clarify. (Those who got 100% I’ll pay to come back and tutor, because I like to keep cohorts together and I doubt I’ll need to do that very often.)

There are no easy answers here and shooting down these situations is very much in the fish/barrel plane, I realise, but it is a very deeply felt form of frustration that I am seeing the most work that any student is likely to put in, yet I cannot now fix the problems that I see. All I can do is mark it in red ink with an annotation that the vast majority will never see (unless they receive a grade of 44, 49, 64, 74 or 84, which are all threshold-1 markers for us).

Ah well, I hope to have more time in 2013 so maybe I can mull on this some more and come up with something that is better but still workable.


Thinking about teaching spaces: if you’re a lecturer, shouldn’t you be lecturing?

I was reading a comment on a philosophical post the other day and someone wrote this rather snarky line:

He’s is a philosopher in the same way that (celebrity historian) is a historian – he’s somehow got the job description and uses it to repeat the prejudices of his paymasters, flattering them into thinking that what they believe isn’t, somehow, ludicrous. (Grangousier, Metafilter article 123174)

Rather harsh words in many respects, and it’s my alteration of the (celebrity historian)’s name, not his, as I feel that his comments are mildly unfair. However, the point is interesting, as a reflection upon the importance of job title in our society, especially when it comes to the weighted authority of your words. From January the 1st, I will be a senior lecturer at an Australian University and that is perceived differently where I am. If I am in the US, I reinterpret this title into their system, namely as a tenured Associate Professor, because that’s the equivalent of what I am – the term ‘lecturer’ doesn’t clearly translate without causing problems, not even dealing with the fact that more lecturers in Australia have PhDs, whereas many lecturers in the US do not. But this post isn’t about how people necessarily see our job descriptions, it’s very much about how we use them.

In many respects, the title ‘lecturer’ is rather confusing because it appears, like builder, nurse or pilot, to contain the verb of one’s practice. One of the big changes in education has been the steady acceptance of constructivism, where the learners have an active role in the construction of knowledge and we are facilitating learning, in many ways, to a greater extent than we are teaching. This does not mean that teachers shouldn’t teach, because this is far more generic than the binding of lecturers to lecturing, but it does challenge the mental image that pops up when we think about teaching.

If I asked you to visualise a classroom situation, what would you think of? What facilities are there? Where are the students? Where is the teacher? What resources are around the room, on the desks, on the walls? How big is it?

Take a minute to do just this and make some brief notes as to what was in there. Then come back here.

It’s okay, I’ll still be here!



Adelaide Computing Education Conventicle 2012: “It’s all about the people”

acec 2012 was designed to be a cross-University event (that’s the whole point of the conventicles: they bring together people from a region) and we had a paper from the University of South Australia: ‘“It’s all about the people”: building cultural competence in IT graduates’ by Andrew Duff, Kathy Darzanos and Mark Osborne. Andrew and Kathy came along to present and the paper was very well received, because it dealt with an important need and offered a solid solution to address that need, one which was inclusive, insightful and respectful.

For those who are not Australians, it is very important to remember that the original inhabitants of Australia have not fared very well since white settlement and that the apology for what happened under many white governments, up until very recently, was only given in the past decade. There is still a distance between the communities and the overall process of bringing our communities together is referred to as reconciliation. Our University has a reconciliation statement and certain goals in terms of representation in our staff and student bodies that reflect percentages in the community, to reduce the underrepresentation of indigenous Australians and to offer them the same opportunities. There are many challenges facing Australia, and the health and social issues in our indigenous communities are often exacerbated by years of poverty and a range of other issues, but some of the communities have a highly vested interest in some large-scale technical, ICT and engineering solutions, areas where indigenous Australians are generally not students. Professor Lester Irabinna Rigney, the Dean of Aboriginal Education, identified the problem succinctly at a recent meeting: when your people live on land that is 0.7m above sea level, a 0.9m sea-level rise starts to become of concern and he would really like students from his community to be involved in building the sea walls that address this, while we look for other solutions!

Andrew, Kathy and Mark’s aim was to share out the commitment to reconciliation across the student body, making this a whole-of-community participation rather than a heavy burden for a few, under the guiding statement that they wanted to be doing things with the indigenous community, rather than doing things to them. There’s always a risk of premature claiming of expertise, where instead of working with a group to find out what they want, you walk in and tell them what they need. For a whole range of very good and often heartbreaking reasons, the Australian indigenous communities are exceedingly wary when people start ordering them about. This was the first thing I liked about this approach: let’s not make the same mistakes again. The authors were looking for a way to embed cultural awareness and the process of reconciliation into the curriculum as part of an IT program, sharing it so that other people could do it and making it practical.

Their key tenets were:

  1. It’s all about the diverse people. They developed a program to introduce students to culture, to give them more than the single world view of the dominant culture and to introduce knowledge of the original Australians. It’s important to note that many Australians have no idea how to use certain terms or cultural items from indigenous culture, which of course hampers communication and interaction.

    The students themselves were required to put together an IT proposal, working with the indigenous community, that they would implement in the later years of their degree. Thus, it became part of the backbone of their entire program.

  2. Doing with [people], not to [people]. As discussed, there are many good reasons for this. Reduce the urge to be the expert and, instead, look at existing statements of rights and at how to work with other peoples, such as the UN rights of indigenous people and the UniSA graduate attributes. This all comes together in the ICUP – Indigenous Content in Undergraduate Program.

How do we deal with information management in another culture? I’ve discussed before the (to many) quite alien idea that knowledge can reside with one person and, until that person chooses or needs to hand on that knowledge, that is the person that you need. Now, instead of demanding knowledge and conformity to some documentary standard, you have to work with people. Talking rather than imposing, getting the client’s genuine understanding of the project and their need – how does the client feel about this?

Not only were students working with indigenous people in developing their IT projects, they were learning how to work with other peoples, not just other people, and were required to come up with technologically appropriate solutions that met the client need. Not everyone has infinite power and 4G LTE to run their systems, nor can everyone stump up the cash to buy an iPhone or download apps. Much as programming in embedded systems shakes students out of the ‘infinite memory, disk and power’ illusion, working with other communities in Australia shakes them out of the single worldview and from the, often disrespectful, way that we deal with each other. The core here is thinking about different communities and the fact that different people have different requirements. Sometimes you have to wait to speak to the right person, rather than the available person.

The online forum, overseen by an indigenous tutor, poses four questions that students have to work through. The four questions are:

  1. What does culture mean to you?
  2. Post a cultural artefact that describes your culture?
  3. I came here to study Computer Science – not Aboriginal Australians?
  4. What are some of the differences between Aboriginal and non-Aboriginal Australians?

The first two are amazing questions – what is your answer to question number 2? The second pair of questions are more challenging and illustrate the bold, head-on nature of this participative approach to reconciliation. Reconciliation between all of the Australian communities requires everyone to be involved and, being honest, questions 3 and 4 are going to open up some wounds and drag some silly thinking out into the open but, most importantly, they allow us to talk through issues of concern and confusion.

I suspect that many people can’t really answer question 4 without referring back to mid-50s archetypal depictions of Australian Aborigines standing on one leg, looking out over cliffs, and there’s an excellent ACMI (Australian Centre for the Moving Image) exhibit in Melbourne that discusses this cultural misappropriation and stereotyping. One of the things that resonated with me is that asking these questions forces people to think about these things, rather than repeating old mind grooves and received nonsense overheard in pubs, seen on TV and heard in racist jokes.

I was delighted that this paper was able to be presented, not least because the goal of the team is to share this approach in the hope of achieving even greater strides in the reconciliation process. I hope to be able to bring some of it to my Uni over the next couple of years.

 


John Henry Died

Every culture has its myths and legends, especially surrounding those incredible individuals who stand out or tower over the rest of the society. The Ancient Greeks and Romans had their gods, demigods, heroes and, many times, cautionary tales of the mortals who got caught in the middle. Australia has the stories of pre- and post-federation mateship, often anti-authoritarian or highlighting the role of the larrikin. We have a lot of bushrangers (with suspiciously good hearts or reacting against terrible police oppression), Simpson and his donkey (a first world war hero who transported men to an aid station using his donkey, ultimately dying on the battlefield) and a Prime Minister who goes on YouTube to announce that she’s now convinced that the Mayans were right and we’re all doomed – tongue firmly in cheek. Is this the totality of the real Australia? No, but the stylised notion of ‘mateship’, the gentle knock and the “come off the grass, you officious … person” attitude are as much a part of how many Australians see themselves as shrimp on a barbie is to many US folk looking at us. In any Australian war story, you are probably more likely to hear about the terrible hangover the Gunner Suggs had and how he dragged his friend a kilometre over rough stones to keep him safe, than you are to hear about how many people he killed. (I note that this mateship is often strongly delineated over gender and racial lines, but it’s still a big part of the Australian story.)

The stories that we tell, and those that we pass on as part of our culture, strongly shape our culture. Look at Greek mythology and you see stern warnings against hubris – don’t rate yourself too highly or the gods will cut you down. Set yourself up too high in Australian culture and you’re going to get knocked down as well: a ‘tall poppies’ syndrome that is part cultural cringe, inherited from colonial attitudes to the Antipodes, part hubris and part cultural confusion as Anglo, Euro, Asian, African and… well, everyone, come to terms with a country that took even the original inhabitants, the Australian Aboriginal and Torres Strait Islanders, quite a while to adapt to. As someone who wasn’t born in Australia, like so many others who live here and now call themselves Australian, I’ve spent a long time looking at my adopted homeland’s stories to see how to fit. Along the way, because of travel, I’ve had the opportunity to look at other cultures as well: the UK, obviously, as it’s drummed into you at school, and the US, because it interests me.

The stories of Horatio Alger, from the US, fascinate me, because of their repeated statement of the rags to riches story. While most of Alger’s protagonists never become amazingly wealthy, they rise, through their own merits, to take the opportunities presented to them and, because of this, a good man will always rise. This is, fundamentally, the American Dream – that any person can become President, effectively, through the skills that they have and through rolling up their sleeves. We see this Dream become ugly when any of the three principles no longer hold, in a framing I first read from Professor Harlon Dalton:

  1. The notion that we are judged solely on our merits: For this to be true, we must not have any bias – racist, gendered, religious, ageist or otherwise. Given the recent ruling that an attractive person can be sacked purely for being attractive and for providing an irresistible attraction for their boss, we have evidence that not only is this point not holding in many places, it’s not holding in ways that beggar belief.
  2. We will each have a fair opportunity to develop these merits: This assumes equal opportunity in terms of education and jobs, which promptly ignores things like school districts, differing property tax levels and teacher training approaches. Because of the way that school districts work, simply living in a given state or country because your parents live there (and can’t move) can make the difference between a great education and a sub-standard child-minding service. So this doesn’t hold either.
  3. Merit will out: Look around. Are the best, smartest, most talented people running your organisation and filling all of the key positions? Can you locate anyone in the “important people above me” group who is holding that job for reasons other than true, relevant merit?

Australia’s myths are beneficial in some ways and destructive in others. For my students, the notion that we help each other, we question but we try to get things done is a positive interpretation of the mild anti-authoritarian mateship focus. The downside is drinking buddies going on a rampage and covering up for each other, fighting the police when the police are actually acting reasonably and public vandalism because of a desire to act up. The mateship myth hides a lot of racism, especially towards our indigenous community, and we can probably salvage a notion of community and collaboration from mateship, while losing some of the ugly and dumb things.

The tunnel went through.

Horatio Alger myths would give hope, except for the bleak reality that many people face: the three principles above are giant pieces of baloney that people get hit about the head with. If you’re not succeeding, then Horatio Alger reasoning lets us call you lazy or stupid or just not taking the opportunities. You’re not trying to pull yourself up by your bootstraps hard enough. Worse still, trying to live up to this sometimes impossible guideline leads us into John Henryism. John Henry was a steel driver, who hammered and chiselled the rock through the mountains to build tunnels for the railroad. One day the boss brought in a steam-driven hammer and John Henry bet that he could beat it, to show that he and his crew should not be replaced. After a mammoth battle between man and machine, John Henry won, only to die with the hammer in his hand.

Let me recap: John Henry died – and the boss still got a full day’s work that was equal to two steam-hammers. (One of my objections to “It’s a Wonderful Life” is that the rich man gets away with stealing the money – that’s not a fairy tale, it’s a nightmare!) John Henryism occurs when people work so hard to lift themselves up by their bootstraps that they nearly (or do) kill themselves. Men in their 50s with incredibly high blood pressure, ulcers and arthritis know what I’m talking about here. The mantra of the John Henryist is:

“When things don’t go the way that I want them to, that just makes me work even harder.”

There’s nothing intrinsically wrong with this when your goal is actually achievable and you apply the maxim in moderation. At its extreme, and for those people who have others standing on their boot caps, this is a recipe for achieving a great deal for whoever is benefiting from your labour.

And then dying.

As John Henry observes in the ballad (Springsteen version), “I’ll hammer my fool self to death”. The ballad of John Henry is actually a cautionary tale about setting your pace carefully: if you’re going to swing a hammer all day, every day, then you have to do it at a pace that won’t kill you. This is the natural constraint on Horatio Alger, and it balances all of the issues with merit and access to opportunity: don’t kill your “fool self” striving for something that you can’t achieve. It’s a shame, however, that the stories line up like this, because there’s a lot of hopelessness sitting in that junction.

Dealing with students always makes me think very carefully about the stories I tell and the stories I live. Over the next few days, I hope to put together some thoughts on a 21st century myth form that inspires without demanding this level of sacrifice, and that encourages without forcing people into despair when obstacles block them that are beyond their current control to shift. However, on that last point, what I’d really like to come up with is a story that encourages people to talk about obstacles and then work together to lift them out of the way. I do like a challenge, after all. 🙂


Vitamin Ed: Can It Be Extracted?

Mmm. Taste the learnination.

There are a couple of ways to enjoy a healthy, balanced diet. The first is to actually eat a healthy, balanced diet made up of fresh produce from across the range of sources, which requires you to prepare and cook foods, often changing how you eat depending on the season to maximise the benefit. The second is to eat whatever you dang well like and then use an array of supplements, vitamins, treatments and snake oil to try and beat your diet of monster burgers and gorilla dogs into something that will not kill you in 20 years. If you’ve ever bothered to look on the side of those supplements, vitamins, minerals or whatever that most people have in their ‘medicine’ cabinets, you might see statements like “does not substitute for a balanced diet” or nice disclaimers like that. There is, of course, a reason for that. While we can be fairly certain about a range of deficiency disorders in humans, and we can prevent these problems with selective replacement, many other conditions are not as clear-cut – if you eat a range of produce which contains the things that we know we need, you’re probably getting a slew of things that we also need but which don’t make themselves as prominent.

In terms of our diet, while the debate rages about precisely which diet humans should be eating, we can have a fairly good stab at a sound basis from a dietician’s perspective, built out of actual food. Recreating that from raw sugars, protein, vitamin and mineral supplements is technically possible but (a) much harder to manage and (b) nowhere near as satisfying as eating the real food, in most cases. Let’s not forget that very few of us in the western world are so distant from our food that we regard it purely as fuel, with no regard for its presentation, flavour or appeal. In fact, most of us could muster a grimace at the thought of someone telling us to eat something because it was good for us or for some real or imagined medical benefit. In terms of human nutrition, we have the known components that we have to eat (sugars, proteins, fats…) and we can identify specific vitamins and minerals that we need to balance to enjoy good health, yet there is no shortage of additional supplements that we take out of concern for our health, even though they may have little or no demonstrated benefit.

There’s been a lot of work done in trying to establish an evidence base for medical supplements, and far more of the supplements fail this test than pass it. Willow bark, an old remedy for pain relief, has been found to have a reliable effect because it has a chemical basis for working – the evidence demonstrated that and now we have aspirin. Homeopathic memory water? There’s no reliable evidence for this working. Does this mean it won’t work? Well, here we get into the placebo effect, and this is where things get really complicated, because we now have two sets of remedies: those that work for our diet or health because they contain useful chemicals, and those that work because we believe in them.

When we look at education, where it’s successful, we see a lot of techniques being mixed in together in a ‘natural’ diet of knowledge construction and learning. Face-to-face teaching and teamwork sit side-by-side with formative and summative assessment, as part of discussions or ongoing dialogues, whether physical or on-line. Exactly which parts of these constitute the “balanced” educational diet? We already know that a lecture, by itself, is not a complete educational experience, in the same way that a stand-alone multiple-choice question test will not make you a scholar. There is a great deal of work being done to establish an evidence basis for exactly which bits work but, as MIT said in the OCW release, these components do not make up a course. In dietary terms, it might be raw fuel but is it a desirable meal? Not yet, most likely.

Now let’s get into the placebo side of the equation, where students may react positively to something just because it’s a change, not because it’s necessarily a good change. We can control for these effects, if we’re cautious, and we can do it with full knowledge of the students, but I’m very wary of any dependency upon the placebo effect, especially when it’s prefaced with “and the students loved it”. Sorry, students, but I don’t only (or even predominantly) care whether you loved it; I care whether you performed significantly better, attended more, engaged more, retained the information for longer and could achieve more – and all of these things can only be measured when we take the trouble to establish baselines, construct experiments, measure things, analyse with care and then think about the outcomes.
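To make that last point slightly more concrete, here is a minimal, purely illustrative sketch (in Python, with invented numbers and hypothetical variable names – none of this comes from a real study, and it assumes scipy is available) of the sort of baseline-versus-intervention comparison I have in mind, rather than “the students loved it”:

    # A toy sketch, not a real analysis: compare last year's cohort (the
    # baseline) with this year's cohort after a teaching change, on some
    # measured outcome such as exam scores. All numbers are invented.
    from scipy import stats

    baseline_scores = [62, 71, 58, 65, 70, 68, 60, 64]       # pre-change cohort
    intervention_scores = [66, 75, 61, 70, 72, 69, 67, 71]   # post-change cohort

    # Two-sample t-test: is the difference in means more than noise?
    t_stat, p_value = stats.ttest_ind(intervention_scores, baseline_scores)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

    # A small p-value (and a meaningful effect size) is evidence that the
    # change did more than generate enthusiasm.

Even a toy comparison like this assumes the two cohorts are genuinely comparable; real evaluation also needs controls for novelty and placebo effects, which is exactly the point.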

My major concern about the whole MOOC discussion is not whether MOOCs are good or bad, it’s more to do with:

  • What does everyone mean when they say MOOC? (Because there’s variation in what people identify as the components)
  • Are we building a balanced diet or are we constructing a sustenance program with carefully balanced supplements that might miss something we don’t yet value?
  • Have we extracted the essential Vitamin Ed from the ‘real’ experience?
  • Can we synthesise Vitamin Ed outside of the ‘real’ educational experience?

I’ve been searching for a terminological distinction that lets me separate ‘real’/’conventional’ learning experiences from ‘virtual’/’new generation’/’MOOC’ experiences, and none of the candidates is satisfying – one says “Restaurant meal” and the other says “Army ration pack” to me, emphasising the divide. Worse, my fear is that a lot of people don’t regard a MOOC as ever really having Vitamin Ed inside it, as the MIT President clearly believed back in 2001.

I suspect that my search for Vitamin Ed starts from a flawed basis, because it assumes a single silver bullet if we take the term literally, so let me spread the concept out a bit and label Vitamin Ed as the set of essential educational components that define a good learning and teaching experience. Calling it Vitamin Ed gives me a flag to wave and an analogue to use, to explain why we should be seeking a balanced diet for all of our students, rather than a banquet for one and dog food for the other.