CSEDU, Day 3, “Through the Lens of Third Space Theory: Possibilities For Research Methodologies in Educational Technologies”, (#csedu14 #AdelEd)

This talk was presented by Kathy Jordan and Jennifer Elsden-Clifton, both from RMIT University. They discussed educational technologies through a framework borrowed from another area: third space theory. This allows us to describe the complex roles that teachers and students take on in their activities.

A lot of educational research is focused on the use of technology and can be rather theory-light (no arguments from me), leading to technological evangelism that is highly determinist. (I’m assuming that the speakers mean technological determinism, which is the belief that it’s a society’s technology that drives its culture and social structures, after Veblen.) The MOOC argument was discussed again. Today, the speakers were planning to offer an alternative way to think about technology and its use. As always, don’t just plunk technology down in the classroom and expect it to achieve your learning and teaching goals. Old is not always bad and new is not always good, in effect. (I often say this and then present the reverse as well. Binary thinking is for circuits.)

“The real voyage of discovery consists not in seeing new landscapes, but in having new eyes.” (Proust, cited in Canfield, Hanson and Zlkman, 2002)

“With whose eyes were my eyes crafted?” (Castor, 1991)

Basically, we bring ourselves to the landscape and have to think about why we see what we’re seeing. The new methodology proposed moves away from a simplistic, techno-centric approach and towards Third Space Theory. Third Space Theory is used to explore and understand the spaces in between two or more discourses, conceptualisations or binaries (Bhabha, 1994). Thirdspace is thus a “come together” space (Soja, 1996) that combines the first and second spaces and then enmeshes the binaries that characterise these spaces. This also reduces the implicit privileging of one conceptual space over another.

Conceptualisations of the third space include bridges, navigational spaces and transformative spaces. Interestingly, from an editorial perspective, I find that the binary notion of MOOC good/MOOC bad, which we often devolve to, is one of the key problems in discussing MOOCs, because it often forces people into responding to a straw man; I think that this work on Thirdspaces is quite strong without having to refer to a perceived necessity for MOOCs.

Thirdspace theory is used across a variety of disciplines at the moment. Firstspace in our context could be face-to-face learning, the second space is “on-line learning”, and the speakers argue that this binary classification is inherently divisive. Well, yes, it is, but this assumes that you are not perceiving these as naturally overlapping when we consider blended learning, which we’ve really had as a concept since 1999. There are definitely problems when people move through f2f and on-line as if they were exclusive binary aspects of some educational Janus, but I wonder how much of that is lack of experience and exposure rather than a strict philosophical structure. There is no doubt, though, that thinking about these things as a continuum is beneficial and, if Thirdspace theory brings people to it, then hooray!

(As Hugh noted yesterday, MOOC got people interested in on-line learning, which made it worth running MOOCs. Then, hooray!)

A lot of the discussion of technology in education is a collection of shibboleths and “top of the head” solutions that have little maturity or strategy behind them, so a new philosophical approach to this is most definitely welcome and I obviously need to read up more on Thirdspace.

The speakers provided some examples, including some learning fusion around Blackboard Collaborate and the perceived inability of pre-service teachers to move personal technology literacy into their new workplace, due to fear. In the latter case, Thirdspace allowed an analysis of the tensions involved and helped the pre-service teachers to negotiate the “unfamiliar terrain” (Bhabha, 1992) of sanctioned technology frameworks in schools. (An interesting example was having to hand-write an e-mail first before being allowed to enter it electronically, which is an extreme sanctioning of the digital space.)

I like the idea of the lens that Thirdspace provides but wonder whether we are seeing the liminal state that we would normally associate with a threshold concept. Rather than a binary model, we are seeing a layered model, where the in-between is neither stable nor clearly understood, as it is heavily personalised. There is, of course, no guarantee that having a skill in one area makes it transferable to another, because of the inherent contextual differences (hang on, have we wandered into Neo-Piagetian territory?).

Anything that removes the potential classification of any category as a lower value or undesirable other is a highly desirable thing for me. The notion that transitional states, however we define them, are a necessary space that occurs between two extremes, whether they are dependent or opposing concepts, strongly reduces the perceived privilege of the certainty that so many people confuse with being knowledgeable and informed. Our students, delightful dualists that they are, often seek black/white dichotomies and it is part of our job to teach them that grey is not only a colour but an acceptable colour.

I think that labelling the MOOC discussion as a techno-determinist and shallow argument doesn’t really reflect the maturity of the discussion in the contemporary MOOC space and is a bit of a dismissive binary, if I can be so bold. We did discuss this in the questions and the speakers agreed that the discussion of MOOCs has matured and is definitely in advance of the rather binary and outmoded description presented in the first keynote that I railed against. Yes, MOOCs have been presented by evangelists and profit-makers as something, but the educational community has done a lot of work to refine this and very few of the practitioners I know who are still involved in MOOCs are what I would call techno-determinists. Techno-utopians, maybe; techno-optimists, often; but, just as often, techno-skeptics and serious, serious educational theorists who are also techno-optional.

The other potential of Third Space Theory is that it “provides a framework for destabilisation” and for moving beyond past patterns, rather than relying on old binary conceptualisations of new/old, good/bad, updated/outmoded. Projecting any single method onto everything is always challenging and I suspect it’s a little bit of a hay hominid, but the resulting questions clarified that the potential of Thirdspace lies in being able to deliberately reject staid and binary thinking, without introducing a new mode of privilege onto the new Thirdspace model. I’m not sure that I agree with all of the points here but I certainly have a lot to think about.


Humans: We Appear To Be Stuck With Them

I’ve just presented a paper with the ‘lofty’ title of “Computer Science Education: The First Threshold Concept” and the fundamental question I ask is “Why are certain ideas in learning and teaching in Computer Science just not getting any traction?” I frame this in the language of Threshold Concepts, which allows us to talk about certain concepts as being far more threatening than others but far more useful when we accept them. It doesn’t really matter why we say that people aren’t accepting these things, the fact is that they aren’t. Is it because of authority issues, from Perry’s work, where people aren’t ready to accept more than one source of truth? Is it because of poor role management, which leads us to the work of Dickinson? Is it because many people struggle in the pre-operational stages of Neo-Piagetian theory and, even if they can realise some concrete goals, they can’t apply things to the abstract?

It doesn’t matter, really, because we all have colleagues who, on reading the above, would roll their eyes and reject the notion that this is even a valid language of discourse. Why, some will wonder, are we making it so hard when we talk about teaching – “I know how to teach, it’s just sometimes that the students aren’t working hard enough or smart enough”. When I mentioned to a colleague that I was giving this paper, he said “Feeling sensitive, are you?” and what he meant was, possibly with a slightly malign edge, that I was taking all of this criticism personally.

Yes, well, probably I am, but let’s talk about why. It’s because it’s important that students are taught well. It’s because it’s important that students get the best opportunities. It’s important that my assumptions about the world, my presumptions of my own ability and that of my students, do not have a detrimental effect on the way that I do my job. I’m taking money to be a teacher, a researcher and an academic administrator – I should be providing real value for that money.

But I am not, by any stretch, the best ‘anything’ in the world. I am not the best teacher. I am not the best researcher. I am not the best speaker. If you are looking for an expert in this area, look elsewhere, because I am a tolerable channel for the works of much better scholars. And, yes, I’m sensitive about some of this because, like many people I speak to in this community, I’m getting tired of having good, solid, scientific work rejected because people feel threatened by it or are dismissive of it. I’m sick of rubbish statements like “we can’t tell people how to teach” because, well, yes, actually we can but it requires us to define what teaching quality is and what our learning environments should look like – what we are trying to do, what we actually do and what we should be doing. Lots of work has been done here, lots of work is yet to occur, and, let me be clear, I am not now, or ever, saying that the “Nick way” is the only way or the desired way – I’m saying that the discussion is important and that we should be able to say what good teaching is and then we must require this.

In my talk, I mentioned the use of social capital – the investment into our social networks that leads to real and future benefits – and how we spend a lot of time on bonding but too little time on bridging. In other words, we don’t have great ways to reach out and we miss opportunities but, a lot of the time, once we bring someone into the educational community, we can build those relationships. Unfortunately, this is not always true and politics, the curse of academia, too often raises its ugly head and provides too many possible venues for conflict, or excludes people, or drives wedges into the community when we should be bonding. I was saddened to discover that politics was traipsing around my current activity, as I was hoping that this would be a launchpad for more and more collaborative work – now we are in the middle of a field of politics.

*sigh*

So much energy – so much lost opportunity unless we use that energy to connect, build and work together. It’s not as if we don’t have enough people saying “Why are you bothering with that? I don’t see the need therefore it’s not important.” But this is humans, after all. My paper opened with a quote from Terence in 163BC,

“Homo sum, humani nihil a me alienum puto” (I am a [human], nothing human is foreign to me)

and I then proceeded to shoot this down because threshold concept theory says that one of our key problems is that so much is foreign to us that, unless we recognise this, we are in trouble. However, some things are horribly familiar to us and the unpleasantries of academic politics are one that is not foreign to anyone who has spent more than a couple of years post-PhD.

When I looked at the recent ACM/IEEE Curriculum, the obvious omission was any real attempt to provide a grounding for pedagogy in the document. Hundreds, if not thousands, of concepts were presented with hours attached to them as if this was a formal scientific statement of actual time required to achieve the task. I see this as a wasted bridging opportunity to share, with everyone who reads that document, the idea that certain ideas are trickier, however we frame that statement. If we say “You might have some trouble with this”, we give agency to teachers to think about how they prepare and we also give them a licence to struggle with it, without being worried that they are fundamentally flawed as teachers. If we say “Students may find this challenging”, then the teachers can understand that they do not have a class of bad or lazy students, they have a class of humans because some things are harder to learn than others.

My point from the talk was that, however we slice it, we are fighting an uphill battle and need to focus on bringing in more and more people, which means focusing on bridging rather than division and, where possible, bridging with the same vigour as we bond with our current friends and colleagues. As for politics, it will always be with us, so I suppose the question now is how much energy we give to that, when we could be giving it to bridging in new people and consolidating our bridges with other people? Bridges are fundamentally hard to build, because it’s so easy for them to fall down, and that’s why the maintenance, the bonding energy, is so important.

I don’t have a solid answer to this but I hope that someone else has some good ideas and feels like sharing them.


I am a potato – heading towards caramelisation. (Programming Language Threshold Concepts Part II)

Following up on yesterday’s discussion of some of the chapters in “Threshold Concepts Within the Disciplines”, I finished by talking about Flanagan and Smith’s thoughts on the linguistic issues in learning computer programming. This led me to the theory of markedness, a useful way to think about some of the syntactic structures that we see in computer programs. Let me introduce the concept of markedness with an example. Consider the pair of opposing concepts big/small. If you ask how ‘big’ something is, then you’re not actually assuming that the thing you’re asking about is ‘big’, you’re asking about its size. However, ask someone how ‘small’ something is and there’s a presumption that it’s actually small (most of the time). The same thing happens for old/young. Asking someone how old they are, bad jokes aside, is not implying that they are old – the word “old” here is standing in for the concept of age. This is an example of markedness in the relationship between lexical opposites: the assumed meaning (the default) is referred to as the unmarked form, where the marked form is more restrictive (in that it doesn’t subsume both concepts) and it is generally not the default. You see this in gender and plural forms too. In Lions/Lionesses, Lions is the unmarked form because it’s the default and it doesn’t exclude the Lionesses, whereas Lionesses would not be the general form used (for whatever reasons, good or bad) and excludes the male lions.

Why is this important for programming languages? Because we often have syntactic elements (the structures and the tokens that we type) that take the form of opposing concepts where one is the default, and hence unmarked, form. Many modern languages employ object-oriented programming practices (itself a threshold concept) that allow programmers to specify how the data that they define inside their programs is going to be used, even within that program. These practices include the ability to set access controls, which strictly define how you can use your code, how other pieces of code that you write can use your code, and how other people’s code can use it, as well. The fundamental access control pair is public and private: one says that anyone can use this piece of code to calculate things or can change this value, the other restricts such use or change to the owner. In the Java programming language, public dominates, by far, and can be considered unmarked. Private, however, changes the way that you can work with your own code and it’s easy for students to get this wrong. (To make it more confusing, there is another type of access control that sits effectively between public and private, which is an even more cognitively complex concept and is probably the least well understood of the lot!) One of the issues with any programming language is that deviating from the default requires you to understand what you are doing because you are having to type more, think more and understand more of the implications of your actions.
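
The public/private pairing is easy to sketch. The class below is a minimal illustration (the Counter name and its members are my own, not from the talk or chapter): the private field can only be touched by the class’s own code, so all outside use has to go through the public methods.

```java
// A minimal sketch of the public/private access-control pair.
// The Counter class and its names are illustrative.
public class Counter {
    private int value = 0;      // private: only code inside Counter may touch this

    public void increment() {   // public: any code, anywhere, may call this
        value++;
    }

    public int getValue() {     // public accessor mediating the private field
        return value;
    }
}
```

Writing counter.value++ from another class is a compile-time error; the marked private form forces every change through the public interface.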

However, it gets harder, because we sometimes have marked/unmarked pairs where the unmarked element is completely invisible. If we didn’t have the need to describe how people could use our code then we wouldn’t need the access modifiers – the absence of public, private or protected wouldn’t signify anything. There are some implicit modes of operation in programming languages that can be overridden with keywords, but the introduction of these keywords doesn’t just illustrate a positive/negative asymmetry (as with big/small or private/public); it illustrates an asymmetry between “something” and “nothing”. Now, the presence of a specific and marked keyword makes it glaringly obvious that there has been an invisible assumption sitting in that spot the whole time.

One of these troublesome word/nothing pairs is found in several languages and consists of the keyword static, with no matching keyword. What do you think the opposite (and pair) of static is? If you’re like most humans, you’d think dynamic. However, not only is this not what this keyword actually means but there is no dynamic keyword that balances it. Let’s look at this in Java:

public static void main(String [] args) {...}
public static int numberOfObjects(int theFirst) {...}
public int getValues() {...}

You’ll see that static keyword twice. Where static isn’t used, however, there’s nothing at all, and this absence also has a definite meaning: it defines the default expected behaviour in the Java programming language. From a teaching perspective, this means that we now have a default context, with a separation between those tokens and concepts that are marked and unmarked, and it becomes easier to see why students will struggle with instance methods and fields (which is what we call things without static) if we start with static, and struggle with the concept of static if we start the other way around! What further complicates this is that every single program we write must contain at least one static method, because it is the starting point for the program’s execution. Even if you don’t want to talk about static yet, you must use it anyway (unless you want to provide the students with some skeleton code or a harness that removes this – but now we’ve put the wizard behind the curtain even more).
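
To make the static/instance asymmetry concrete, here is a small sketch (the Tally class and its names are my own illustration): the static members belong to the class as a whole, while the unmarked members quietly belong to individual objects.

```java
// Sketch of the static / (invisible) instance pairing.
// The Tally class and its names are illustrative.
public class Tally {
    static int totalObjects = 0;    // static: one copy, shared across the whole class
    int myValue;                    // no keyword at all: an instance field, one per object

    Tally(int v) {
        myValue = v;
        totalObjects++;             // constructing any Tally bumps the shared count
    }

    static int numberOfObjects() {  // callable with no Tally object in existence
        return totalObjects;
    }

    int getValue() {                // needs a particular Tally to run against
        return myValue;
    }
}
```

Tally.numberOfObjects() works before a single object exists; getValue() does not, and that invisible difference is exactly what the missing keyword encodes.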

One other point I found very interesting in Flanagan and Smith’s chapter was the discussion of barriers and traps in programming languages, from Thimbleby’s critique of Java (1999). Barriers are the limitations on expressiveness that mean that what you want to say in a programming language can only be said in a certain way or in a certain place – which limits how we can explain the language and therefore affects learnability. As students tend to write their lines of code as and when they think of them, at least initially, these barriers will lead the students to make errors because they haven’t developed the locally valid computational idiom. I could ask for food in German as “please two pieces ham thick tasty” and, while I’ll get some looks, I’ll also get ham. Students hitting a barrier get confusing error messages that are given back to them at a time when they barely have enough framework to understand what these messages mean, let alone how to fix them. No ham for them!

Traps are unknown and unexpected problems, such as those caused by not using the right way to compare two things in a program. In short, it is possible in many programming languages to ask “does this equal that” and return an answer of true or false that does not depend upon the values of this or that, but where they are being stored in memory. This is a trap. It is confusing for the novice to try to work out why the program is telling her that two containers that have the value “3” in them are not the same because they are duplicates rather than aliases for the same entity. These traps can seriously trip someone up as they attempt to form a correct mental model and, in the worst case, can lead to magical or cargo-cult thinking once again. (This is not helped by languages that, despite saying that they will take such-and-such an action, take actions that further undermine consistent mental models without being obvious about it. Sekrit Java String munging, I’m looking at you.)
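
In Java, this trap is the == versus equals() distinction, and a sketch makes the “duplicates, not aliases” point. (The class and method names are my own; new String(...) is used deliberately to force two separate containers, since plain string literals can be interned and hide the trap.)

```java
// The aliasing trap: == compares where two objects live in memory,
// while equals() compares their contents. Names are illustrative.
public class EqualityTrap {
    static boolean sameReference(String x, String y) {
        return x == y;        // true only if x and y are the very same object
    }

    static boolean sameContent(String x, String y) {
        return x.equals(y);   // true if the characters match
    }

    public static void main(String[] args) {
        String a = new String("3");   // two distinct containers...
        String b = new String("3");   // ...holding identical contents
        System.out.println(sameReference(a, b)); // false: duplicates, not aliases
        System.out.println(sameContent(a, b));   // true: same value
    }
}
```

The novice sees two “3”s and a program insisting they are not the same; without the duplicate/alias distinction, that answer looks like magic.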

This way of thinking about languages is of great interest to me because, instead of talking about usability in an abstract sense, we are now discussing concrete benefits and deficiencies in the language. Is it heavily restrictive on what goes where, such as Pascal’s pre-declaration of variables or Java’s package import restrictions? Does the language have a large number of unbalanced marked/unmarked pairs where one of them is invisible and possibly counterintuitive, such as static? Is it easy to turn a simple English statement into a programmatic equivalent that does not do what was expected?

The authors suggested ways of dealing with this, including teaching students about formal grammars for programming languages – effectively treating this as learning a new language, because the grammar, syntax and semantics are very, very different from English. (Suggestions included Wittgenstein’s Sprachspiel, the language game, which will be a post for another time.) Another approach is to start from logic and then work forwards, turning this into forms that will then match the programming languages and giving us a Rosetta stone between English speakers and program speakers.

I have found the whole book very interesting so far and, obviously, so too this chapter. Identifying the problems and their locations, regrettably, is only the starting point. Now I have to think about ways to overcome this, building on what these and other authors have already written.


Imagine that you are a raw potato…

Tuber or not tuber.

The words in the title of this post, surprisingly, are the first words in the Editors’ Preface to Land, Meyer and Smith’s 2008 edited book “Threshold Concepts within the Disciplines”. Our group has been looking at the penetration of certain ideas through the discipline, examining how much the theory of social constructivism accompanies the practice of group work, for example, or, as in this case, seeing how many people identify threshold concepts in what they are trying to teach. Everyone who teaches first year Computer Science knows that some ideas seem to be sticking points and Meyer and Land’s two papers on “Threshold Concepts and Troublesome Knowledge” (2003 and 2005) provide a way of describing these sticking points by characterising why these particular aspects are hard – but also by identifying the benefits when someone actually gets it.

Threshold concept theory, in the words of Cousin, identifies “the kind of complicated transitions learners undergo” and identifies portals that change the way that you think about a given discipline. This is deeply related to our goal of “thinking as a discipline practitioner” because we must assume that a sound practitioner has passed through these portals and has transformed the way that they think in order to be able to practise correctly. Put simply, being a mathematician is more than plugging numbers into formulae.

As you can read, and I’ve mentioned in a previous post, threshold concepts are transformative, integrative, irreversible and (unfortunately) troublesome. Once you have cleared the hurdle, a new vista opens up before you but, my goodness, sometimes that’s a steep hurdle and, unsurprisingly, this is where many students fall.

The potato example in the preface describes the irreversible chemical process of cooking and how the way that we can use the potato changes at each stage. Potatoes, thankfully unaware, have no idea of what is going on nor can they oscillate on their pathway to transformation. Students, especially in the presence of the challenging, can and do oscillate on their transformational road. Anyone who teaches has seen this where we make great strides on one day and, the next, some of the progress ebbs away because a student has fallen back to a previous way of thinking. However, once we have really got the new concept to stick, then we can move forward on the basis of the new knowledge.

Threshold concepts can also be thought of as marking the boundary of areas within a discipline and, in this regard, have special interest to teachers and learners alike. Being able to subdivide knowledge into smaller sections to develop mastery that then allows further development makes the learning process easier to stage and scaffold. However, the looming and alien nature of the portal between sections introduces a range of problems that will apply to many of our students, so we have to be ready to assist at these key points.

The book then provides a collection of chapters that discuss how these threshold concepts manifest inside different disciplines and in what forms the alien and troublesome nature can appear. It’s unsurprising again, for anyone teaching Computer Science or programming, that there are a large number of fundamental concepts in programming that are considered threshold concepts. These include the notion of program state, the collection of data that describes the information within a program. While state is an everyday concept (the light is on, the lift is on level 4), the concentration on state, the limitations and implications of manipulation and the new context raise this banal and everyday concept into the threshold area. A large number of students can happily tell you which floor the lift is on, but cannot associate this physical state with the corresponding programmatic state in their own code.
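
The lift makes a nice toy model of that gap. Here is a sketch of the corresponding programmatic state (the Lift class is my own illustration, not an example from the book):

```java
// The everyday state "the lift is on level 4" made explicit as program state.
// The Lift class is an illustrative toy.
public class Lift {
    private int currentFloor = 0;   // the program's record of the physical state

    public void moveTo(int floor) {
        currentFloor = floor;       // manipulating state: the program now "knows" a new floor
    }

    public int getCurrentFloor() {
        return currentFloor;
    }
}
```

The student who can read the floor indicator has to see that currentFloor is the same fact, held in a variable, and that every manipulation of it changes what the program knows.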

Until students master some of these concepts, their questions will always appear facile, potentially ill-formed and (regrettably) may be interpreted as lazy. Flanagan and Smith raise an interesting point in that programming languages, which are written in pseudo-English with a precise but alien grammar, may be creating a linguistic problem, where the translation to a comprehensible form is one of the first threshold concepts that a student faces. As an example, consider this simple English set of instructions:

There are 10 apples in the basket.
Take each apple out of the basket, polish it, and place it in the sink.

Now let’s look at what the ‘take each apple’ instruction looks like in the C programming language.

for (int i = 0; i < numberOfApples; i++) {
  // commands here
}

This is second nature for me to read, but a number of you have just looked at that and gone ‘huh?’. If you don’t learn what each piece does, understand its importance and can then actually produce it when asked, the risk is that you will just reproduce this template whenever I ask you to count apples. However, there are two situations that humans understand readily: “do something so many times” and “do something UNTIL something happens”. In programs we write these two cases differently – but it’s a linguistic distinction that, from Flanagan and Smith’s work “From Playing to Understanding”, correlates quite well with an ability to pick the more appropriate way of writing the program. If the language itself is the threshold, and for some students it certainly appears that it is, then we cannot even assume that the students will reach the first stage of ‘local thresholds’ found within the subdomain itself; they are stuck on the outside, reading a menu in a foreign language, trying to work out if it says “this way to the toilet”.
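
The two readily-understood cases map onto two different loop forms. A sketch in Java, whose loop syntax matches the C above (the class and method names are my own illustration):

```java
// "Do something so many times" versus "do something UNTIL something happens".
// The class and method names are illustrative.
public class Loops {
    // Counted repetition: we know up front how many times to act.
    static int polishCounted(int numberOfApples) {
        int polished = 0;
        for (int i = 0; i < numberOfApples; i++) {
            polished++;  // stands in for "take, polish, place in sink"
        }
        return polished;
    }

    // Condition-driven repetition: act UNTIL the basket is empty.
    static int polishUntilEmpty(int applesInBasket) {
        int polished = 0;
        while (applesInBasket > 0) {
            applesInBasket--;   // one apple leaves the basket
            polished++;
        }
        return polished;
    }
}
```

Both polish ten apples, but the first encodes “ten times” and the second encodes “until empty”; choosing the form that matches the English statement is exactly the distinction being discussed.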

Such linguistic thresholds will make students appear very, very slow and this is a problem. If you ask a student a question and the words make no sense in the way that you’re presenting them, then they will either not respond (if they have a choice), because they don’t know what you asked, answer a different question (by taking a stab at the meaning), or ask you what you mean. If someone asks you what you mean when, to you, the problem is very simple, we run the risk of throwing up a barrier between teacher and learner: the teacher assuming that the learner is stupid or lazy, the student assuming that the teacher either doesn’t know what they’re saying or doesn’t care about them.

I’ll write more on the implications of all of this tomorrow.


The Earth Goes Around the Sun or the Sun Goes Around the Earth: Your Reaction Reflects Your Investment

There is a rather good new BBC version of Sherlock Holmes, called Sherlock because nobody likes confusion, where Holmes is played by Benedict Cumberbatch. One of the key points about Holmes’ focus is that it comes at a very definite cost. At one point, Cumberbatch’s Holmes is being lightly mocked because he was unaware that the Earth goes around the Sun. He is completely unfazed by this (he may have known it but he deleted it) because it’s not important to him. This extract is from the episode “The Great Game”:

Sherlock Holmes: Listen: [gets up and points to his head] This is my hard-drive, and it only makes sense to put things in there that are useful. Really useful. Ordinary people fill their heads with all kinds of rubbish, and that makes it hard to get at the stuff that matters! Do you see?

John Watson: [brief silence; looks at Sherlock incredulously] But it’s the solar system!

Sherlock Holmes: [extremely irritated by now] Oh, hell! What does that matter?! So we go around the sun! If we went around the moon or round and round the garden like a teddy bear, it wouldn’t make any difference! All that matters to me is the work!

Sherlock’s (self-described) sociopathy and his focus on his work make heliocentricity an irrelevant detail. But this clearly indicates his level of investment in his work. All the versions of Sherlock have extensive catalogues of tobacco types, a detailed knowledge of chemistry and an unerring eye for detail. If someone had walked up to him and said “Captain Ross smokes Greenseas tobacco” and they were wrong then Sherlock’s agitation (and derision) would be directed at them: worse if he had depended upon this fact to draw a conclusion.

We are all well aware that such indifference to whether Sun or Earth occupies the centre of the Solar System has not always been received so sanguinely. As it turns out, while there is widespread acceptance of the fact of heliocentricity, there is still considerable opposition in some quarters and, in the absence of scientific education, it is easy to see why people would naturally assume by simple (unaided) observation that the Sun is circling us, rather than the reverse. You have to accept a number of things before heliocentricity moves from being a sound mathematical model for calculation (as Cardinal Bellarmine did when discussing it with Galileo, because it so well explains things hypothetically) to the acceptance of it as the model of what actually occurs (as it makes the associated passages of scripture much harder to deal with). And the challenge of accepting this often lies in the degree to which that acceptance will change your world.

Your reaction reflects your investment.

Sherlock didn’t care either way. His world was not shaken by which orbited what because it was not a key plank of his being, nor did it force him to revise anything that he cared about. Cardinal Bellarmine, in discussions with Galileo, had a much greater investment, acting as he was on behalf of the Church and, one can only assume, firm in his belief in scripture while retaining his sensibilities to be able to work in science (Bellarmine was a Jesuit and worked predominantly in theology). As he is quoted:

If there were a real proof that the Sun is in the center of the universe, that the Earth is in the third sphere, and that the Sun does not go round the Earth but the Earth round the Sun, then we should have to proceed with great circumspection in explaining passages of Scripture which appear to teach the contrary, and we should rather have to say that we did not understand them than declare an opinion false which has been proved to be true. But I do not think there is any such proof since none has been shown to me.

It’s easy to think that these battles are over but, of course, as we deal with one challenging issue, another arises. This battle is not actually over. The 2006 General Social Survey showed that 18.3% of those people surveyed thought that the Sun went around the Earth, and 8% didn’t know. (0.1% refused. I think I’ve read his webpage.) (If you’re interested, all of the GSS data and its questions are available here. I hope to run the more recent figures to see how this has trended but I’ve run out of time this week.) That’s a survey run in 2006 in the US.
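Adding those survey categories together is where the “quarter” figure comes from. A quick sketch of the arithmetic (the response labels below are my paraphrases, not the GSS codebook’s actual wording, and the “correct” share is simply the remainder):

```python
# Tally the 2006 GSS heliocentricity responses, using the percentages
# quoted above. Category labels are paraphrased, not the official coding.
responses = {
    "earth around sun": 73.6,   # remainder: the correct answer
    "sun around earth": 18.3,   # the geocentric answer
    "don't know": 8.0,
    "refused": 0.1,
}

wrong_or_unsure = responses["sun around earth"] + responses["don't know"]
print(f"Wrong or unsure: {wrong_or_unsure:.1f}%")  # 26.3% - a shade over a quarter
```

So “nearly a quarter” is, if anything, an undercount: 26.3% of respondents either gave the geocentric answer or didn’t know.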

Why do nearly a quarter of the US population (or why did, given that this is 2006) not know about the Earth going around the Sun? As an educator, I have to look at this because if it’s because nobody told them, then, boy, do we have some ‘splaining to do. If it’s because they deleted it like Sherlock, then we have some seriously focused people or a lot of self-deleting sociopaths. (This is not an especially likely conjecture.) If it’s because someone told them that believing this meant that they had to spit in the face of one god or another, then we are seeing the same old combat between reaction and investment. There are a number of other correlations that, fortunately, indicate that this might come down to poor education, as knowledge of heliocentricity appears to correlate with the number of words that people got correct in the vocabulary test. The number of people who didn’t accept heliocentricity also decreased with increasing education. (Yes, that can be skewed culturally as well, but the major religions with large representation in the survey embrace education.)
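The education correlation above is essentially a cross-tabulation: the share of correct answers at each level of schooling. A minimal sketch of that analysis, using made-up illustrative records rather than actual GSS responses:

```python
# Share of correct heliocentricity answers by years of education.
# The records below are invented for illustration only - with the real
# GSS microdata you would tabulate the actual responses the same way.
from collections import defaultdict

records = [
    # (years_of_education, answered_correctly)
    (10, False), (10, True), (12, True), (12, False),
    (14, True), (14, True), (16, True), (16, True),
]

by_edu = defaultdict(lambda: [0, 0])  # years -> [correct, total]
for edu, correct in records:
    by_edu[edu][1] += 1
    if correct:
        by_edu[edu][0] += 1

for edu in sorted(by_edu):
    n_correct, total = by_edu[edu]
    print(f"{edu} years: {n_correct}/{total} correct ({n_correct / total:.0%})")
```

If the proportion correct rises monotonically with education, as the survey suggests, that supports the “poor education, not entrenched rejection” reading.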

So, and it’s a weird straw to clutch at and I need to dig more, it doesn’t appear that heliocentricity is, in the majority of cases, being rejected because of a strong investment in an antithetical stance; it appears to be a simple lack of education or retention of that information. So, maybe we can put this one down, give more money to science teachers and move on.

But let me get to the meat of my real argument here, which is that a suitably alien or counter-intuitive proposition will be met with hostility, derision and rejection. When things matter, for whatever reason, we take them more seriously. When we take things so seriously that they shape how we live, consciously or not, then there is a problem when those underpinnings are challenged. We can make empty statements like “well, I suppose that works in theory” when the theory forces us to accept that we have been wrong, or at least walking on the less righteous path. When someone says to me “well, that’s fine in theory” I know what they are really saying. I’ve heard it before from Cardinal Bellarmine and it has gained no more weight since then. So it’s hard? Our job is hard. Constantly questioning is hard, tiring and often unrewarding. Yet, without it, we would have achieved very, very little.

People of all colours and races are equal? Unthinkable! Against our established texts! Supported by pseudo-science and biased surveys! They appear to be more similar than we thought! But they can’t marry! Wait, they can! They are equal! How can you think that they’re not?!

How many times do we have to go through this? We are playing out the same argument over and over again: when it matters enough (or too much), we resist to the point where we are being stubborn and (often) foolish.

And that, I believe, is where we stand in the middle of all of these revelations of unconscious and systematic bias against women that I referred to in my last post. People who have considered themselves fair and balanced, objective and ethical, now have to question whether they have been operating in error over all these years – if they accept the research published in PNAS and all of the associated areas. Suddenly, positive discrimination hiring policies make sense: they allow the hiring of candidates who appear equivalent on paper but whom the evidence now says have most likely been undervalued. This isn’t disadvantaging a man; this is being fair to the best candidate.

When presented with something challenging, I find it helpful to switch the focus or the person involved. Would I be so challenged if it happened to someone else? If the new revelation concerned these people or those people? How would I feel about it if I read it in the paper? Would it matter if someone I trusted said it to me? Where are my human frailties, and how can I account for them?

But, of course, as an educator, I have to think about how to frame my challenging and heretical information so that I don’t cause a spontaneous rejection that will prevent further discussion. I have to provide an atmosphere that exemplifies good practice, a world where people eventually wonder why this part of the world seems to be better, fairer and more reasonable than that part of the world. Then, with any luck, they take their questioning and new thinking to another place and we seed better things.


More on Computer Science Education as a fundamentally challenging topic.

“Homo sum, humani nihil a me alienum puto” (I am a [human]; nothing human is foreign to me), Terence, 163 BC

While this is a majestic sentiment, we are constantly confronted by how many foreign ideas and concepts there are in our lives. In the educational field, Meyer and Land have identified threshold concepts: a set of concepts that are transformative once understood but troublesome and alien before they are comprehended. The existence of these often counter-intuitive concepts gives the lie to Terence’s quote, as it appears that certain concepts will be extremely foreign and hard to communicate or comprehend until we understand them. (I’ve discussed this before in my write-up of the ICER Keynote.)

“Terry” to his friends.

Reading across the fields of education, educational psychology and Computer Science education research, it rapidly becomes apparent that some ideas have been described repeatedly over decades, but have gained little traction. Dewey’s disgust at the prison-like school classroom was recorded in 1938, yet you can walk onto any campus in the world and find the same “cells”, arrayed in ranks. The lecture is still the dominant communication form in many institutions, despite research support for the far greater efficacy of different approaches. For example, the benefits of social constructivism, including the zone of proximal development, are well known and extensively studied, yet even where group work is employed, it is not necessarily designed or facilitated to provide the most effective outcomes. The majority of course design and implementation shows little influence of any of the research conducted in the last 20 years, let alone the cognitive development stages of Piaget, the reliance upon authority found in Perry or even the existence of threshold concepts themselves. Why?

From a personal perspective, I was almost completely ignorant of the theoretical underpinnings of educational practice until very recently and I still rate myself as a rank novice in the area. I write here to be informed, not to be seen as an expert, and I learn from thinking and writing about what I’m doing. I am also now heavily involved in a research group that focuses on this so I have the peer support and time to start learning in the fascinating area of Computer Science Education. Many people, however, do not, and it is easy to see why one would not confront or even question the orthodoxy when one is unaware of any other truth.

Of course, as we all know, it is far harder to see that anything needs fixing when, instead of considering that our approach may be wrong, we identify our students as the weak link in the chain. It’s easy to do and, because we are often not scrupulously scientific in our recollection of events (because we are human), our anecdotal evidence dominates our experience. “Good” students pass, “bad” students fail. If we then define a bad student as “someone who fails”, we have a neat (if circular) definition that shields us from any thoughts on changing what we do.

When I found out how much I had to learn, I initially felt very guilty about some of the crimes that I had perpetrated against my students in my ignorance. I had bribed them with marks, punished them for minor transgressions with no real basis, talked at them for 50 minutes and assumed that any who did not recall my words just weren’t paying attention. At the same time, I carried out my own tasks with no bribery, negotiated my own deadlines and conditions, and checked my mail whenever possible in any meetings in which I felt bored. The realisation that, even through ignorance and human frailty, you have let your students down is not a good feeling, especially when you realise that you have been a hypocrite.

I lament the active procrastinator, who does everything except the right work and thus fails anyway with a confused look on their face, and I feel a great sympathy for the caring educator who, through lack of exposure or training, has no idea that what they are doing is not the best thing for their students. This is especially true when the educators have been heavily acculturated by their elders and superiors, at a vulnerable developmental time, and now not only have to question their orthodoxy, they must challenge their mentors and friends.

Scholarship in Computer Science learning and teaching illuminates one’s teaching practice. Discovering tools, theories and methodologies that can explain the actions of our students is of great importance to the lecturer and transforms the way that one thinks about learning and teaching. But transformative and highly illuminative mechanisms often come at a substantial cost in terms of the learning curve, and we believe that this explains why there is a great deal of resistance from those members of the community who have not yet embraced the scholarship of learning and teaching. Combine this with a culture where you may be telling esteemed and valued colleagues that they have been practising poorly for decades, and the resistance becomes even more understandable. We must confront the fact that resistance in the field may stem from exactly the effects we would carefully address in our students – their ongoing struggles with threshold concepts – yet we expect our colleagues to simply accept these alien, challenging and unsettling ideas merely because we are right.

The burden of proof does not, I believe, lie with us. We have 70 years of studies in education and over 100 years of study in work practices to establish the rightness of our view. However, I wonder how we can approach our colleagues who continue to question these strange, counter-intuitive and frightening new ideas, and help them to understand and eventually adopt these new concepts.



ICER 2012 Day 1 Keynote: How Are We Thinking?

We started off today with a keynote address from Ed Meyer, from the University of Queensland, on the Threshold Concepts Framework (Also Pedagogy, and Student Learning). I am, regrettably, not as conversant with threshold concepts as I should be, so I’ll try not to embarrass myself too badly. Threshold concepts are central to the mastery of a given subject and are characterised by some key features (Meyer and Land):

  1. Grasping a threshold concept is transformative because it changes the way that we think about something. These concepts become part of who we are.
  2. Once you’ve learned the concept, you are very unlikely to forget it – it is irreversible.
  3. This new concept allows you to make new connections and allows you to link together things that you previously didn’t realise were linked.
  4. These new concepts have boundaries – an area over which they apply. You need to be able to question within that area to work out where the concept holds. (Ultimately, this may identify the borders between schools of thought in an area.)
  5. Threshold concepts are ‘troublesome knowledge’. This knowledge can be counter-intuitive, even alien, and will make no sense to people until they grasp the new concept. This is one of the key problems with discussing these concepts: people will wish to apply their intuitive understanding, and fighting this tendency may take considerable effort.

Meyer then discussed how we see with new eyes after we integrate these concepts. It can be argued that concepts such as these give us a new way of seeing that, because of inter-individual differences, students will experience in varying degrees as transformative, integrative, and (look out) provocative and troublesome. For this final one, a student experiences this in many ways: the world doesn’t work as I think it should! I feel lost! Helpless! Angry! Why are you doing this to me?

How do you introduce a student to one of these troublesome concepts and, more importantly, how can you describe what you are going to talk about when the concept itself is alien: what do you put in the course description given that you know that the student is not yet ready to assimilate the concept?

Meyer raised a really good point: how do we get someone to think inside the discipline? Do they understand the concept? Yes. Does this mean that they think along the right lines? Maybe, maybe not. If I don’t think like a Computer Scientist, I may not understand why a CS person sees a certain issue as a problem. We have plenty of evidence that people who haven’t dealt with the threshold concepts in CS Education find it alien to contemplate that the lecture is not the be-all and end-all of teaching – their resistance and reliance upon folk pedagogies is evidence of this wrestling with troublesome knowledge.

A great deal to think about from this talk, especially the idea that key aspects of CS education research are themselves threshold concepts – concepts causing as much trouble for our colleagues who are not oriented towards educational research as for our students.