ICER 2013 – San Diego!
Posted: September 11, 2012 Filed under: Education | Tags: blogging, education, higher edu, icer, icer2012, icer2013, vernor vinge
We’re in the closing phases of ICER 2012 and we’ve just learned the location of ICER 2013: San Diego. Fittingly, we’re currently hearing about Vernor Vinge, one of my favourite authors, whose book “Rainbows End” is set on the UCSD campus.
So, I guess I’ll try to get to ICER 2013, end of July/beginning of August, in San Diego. See some of you there!
A side post on MOOCs: angrymath Hates Statistics 101
Posted: September 11, 2012 Filed under: Education | Tags: blogging, community, education, educational problem, feedback, higher education, moocs, teaching approaches, udacity
A friend just forwarded me a rather scathing critique of one of the Udacity courses. The rather aptly named angrymath has published “Udacity Statistics 101”. To forewarn you, this is one of the leading quotes:
In brief, here is my overall assessment: the course is amazingly, shockingly awful.
As one of the commenters put it, hopefully the problems are growing pains and the iteration towards perfection will continue. I haven’t seen the course in question, so I can’t comment, merely present it here.
ICER 2012 Day 1: Discussion Papers Session 1
Posted: September 11, 2012 Filed under: Education | Tags: blogging, community, education, educational research, higher education, icer, icer2012, measurement, principles of design, student perspective, teaching approaches, tools, universal principles of design
ICER contains a variety of sessions: research papers, discussion papers, lightning talks and elevator pitches. The discussion papers allow people to present ideas and early work in order to get the feedback of the community. This is a very vocal community, so opening yourself up to discussion is going to be a bit like drinking from the firehose: sometimes you quench your thirst for knowledge and sometimes you’re being water-cannonned.
Web-scale Data Gathering with BlueJ
Ian Utting, Neil Brown, Michael Kölling, Davin McCall and Philip Stevens
BlueJ is a very long-lived and widely used Java development environment, designed to assist with the learning and teaching of object-oriented programming, as well as of Java itself. The BlueJ project is now adding automated instrumentation to every single BlueJ installation, and students can opt in to a data reporting mechanism that will allow the collection and formation of a giant data repository: Project Blackbox. (As a note, that’s a bit of a supervillain name, guys.)
Evaluating an Early Software Engineering Course with Projects and Tools from Open Source Software
Robert McCartney, Swapna Gokhale and Therese Smith
We tend to give Software Engineering students a project that requires them to undertake design and then, as a group, produce a large software artefact from scratch. In this talk, Robert discussed using existing projects that exercise a range of skills directly relevant to one of the most common activities our students will carry out in industry: maintenance and evolution.
Under a model of developing new features in an open-source system, the instructors provide a pre-selected set of projects and then each two-person team:
- picks a project
- learns to comprehend code
- proposes enhancements
- describes and documents
- implements and presents
A Case Study of Environmental Factors Influencing Teaching Assistant Job Satisfaction
Elizabeth Patitsas
Elizabeth presented some interesting work on the impact of lecture theatres on what our TAs do. If the layout is hard to work with then, unsurprisingly, the TAs are less inclined to walk around and more inclined to disengage, sitting down the front checking e-mail. When we say ‘less inclined’, we mean that in closed lab layouts TAs spend 40% of their time interacting with students, versus 76% in an open layout. However, these effects are also seen in windowless spaces: make a space unpleasant and you reduce the time that people spend answering questions and engaging.
The value of a pair of TAs was stressed: a pair gives you a backup but doesn’t lead to decision problems when coming to consensus. However, the importance of training was also stressed, as already clearly identified in the literature.
Education and Research: Evidence of a Dual Life
Joe Miró Julià, David López and Ricardo Alberich
ICER 2012 General Note
Posted: September 11, 2012 Filed under: Education | Tags: blogging, education, educational research, higher education, icer, icer2012
Once again, we’re so full of interesting content that I don’t really have the time to put together some longer posts, although I’m going to try and get something out over the tea break and lunch. In short, if you get a chance, COME TO ICER.
I will however note, while I can transcribe a lot of speakers almost as fast as they can deliver interesting talks, my top speed is asymptotically bound at an upper limit that I am officially designating One Guzdial.
ICER 2012 Day 1 Keynote: How Are We Thinking?
Posted: September 10, 2012 Filed under: Education | Tags: community, curriculum, education, educational problem, educational research, higher education, icer, icer 2012, in the student's head, reflection, teaching, teaching approaches, thinking, threshold concepts, tools, workload
We started off today with a keynote address from Ed Meyer, from the University of Queensland, on the Threshold Concepts Framework (Also Pedagogy, and Student Learning). I am, regrettably, not as conversant with threshold concepts as I should be, so I’ll try not to embarrass myself too badly. Threshold concepts are central to the mastery of a given subject and are characterised by some key features (Meyer and Land):
- Grasping a threshold concept is transformative because it changes the way that we think about something. These concepts become part of who we are.
- Once you’ve learned the concept, you are very unlikely to forget it – it is irreversible.
- This new concept allows you to make new connections and allows you to link together things that you previously didn’t realise were linked.
- This new concept has boundaries – an area over which it applies. You need to be able to question within that area to work out where the concept holds. (Ultimately, this may identify the boundaries between schools of thought in a field.)
- Threshold concepts are ‘troublesome knowledge’. This knowledge can be counter-intuitive, even alien and will make no sense to people until they grasp the new concept. This is one of the key problems with discussing these concepts with people – they will wish to apply their intuitive understanding and fighting this tendency may take some considerable effort.
Meyer then discussed how we see with new eyes after we integrate these concepts. It can be argued that concepts such as these give us a new way of seeing that, because of inter-individual differences, students will experience in varying degrees as transformative, integrative, and (look out) provocative and troublesome. For this final one, a student experiences this in many ways: the world doesn’t work as I think it should! I feel lost! Helpless! Angry! Why are you doing this to me?
How do you introduce a student to one of these troublesome concepts and, more importantly, how can you describe what you are going to talk about when the concept itself is alien: what do you put in the course description given that you know that the student is not yet ready to assimilate the concept?
Meyer raised a really good point: how do we get someone to think inside the discipline? Do they understand the concept? Yes. Does this mean that they think along the right lines? Maybe, maybe not. If I don’t think like a Computer Scientist, I may not understand why a CS person sees a certain issue as a problem. We have plenty of evidence that people who haven’t dealt with the threshold concepts in CS Education find it alien to contemplate that the lecture is not the be-all and end-all of teaching – their resistance and reliance upon folk pedagogies is evidence of this wrestling with troublesome knowledge.
A great deal to think about from this talk, especially in dealing with key aspects of CS Ed as the threshold concept that is causing many of our non-educational research oriented colleagues so much trouble, as well as our students.
ICER 2012: So Good I Don’t Have Time To Blog!
Posted: September 10, 2012 Filed under: Education | Tags: education, educational research, higher education, icer, icer 2012
I’m going to try and post when I can, but the conference is so good that there’s nothing I can skip. Apologies, I shall try and dump my notes from today when I have a chance!
ICER 2012: Day 0 (Workshops)
Posted: September 10, 2012 Filed under: Education | Tags: collaboration, community, design, education, educational problem, educational research, feedback, Generation Why, higher education, icer, icer 2012, in the student's head, learning, principles of design, student perspective, teaching, teaching approaches, workload
Well, it’s Sunday so it must be New Zealand (or at least it was Sunday yesterday). I attended that rarest of workshops, one where every session was interesting and made me think – a very good sign for the conference to come.
We started with an on-line workshop on Bloom’s taxonomy, classifying exam questions, with Raymond Lister from UTS. One of the best things about this for me was the discussion about the questions where we disagreed: is this application or synthesis? It really made me think about how I write my examinations and how they could be read.
We then segued into a fascinating discussion of neo-Piagetian theory, where we see the development stages that we usually associate with children in adults as they learn new areas of knowledge. In (very rough) detail, we look at whether we have enough working memory to carry out a task and, if not, weird things happen.
Students can indulge in some weird behaviours when they don’t understand what’s going on. For example, permutation programming, where they just type semi-randomly until their program compiles or works. Other examples include shotgun debugging and voodoo programming, and what these amount to is the student not having a good, consistent model of what works; as a result, they are basically dabbling in a semi-magical approach.
My notes from the session contain this following excerpt:
“Bizarro” novice programmer behaviours are actually normal stages of intellectual development. Accept this and then work with this to find ways of moving students from pre-op, to concrete op, to formal operational. Don’t forget the evaluation. Must scaffold this process!
What this translates to is that the strange things we see are just indications that students have not yet moved to what we would normally associate with an ‘adult’ (formal operational) understanding of the area. This shoots several holes in the old “You’re born a programmer” fallacy. Those students who are more able early may just have moved through the stages more quickly.
There was also an amount of derisive description of folk pedagogy: those theories that arise during pontification in the tea room, with no basis in educational theory and no grounding in truly empirical study. Yet these folk pedagogies are very hard to shake and are one of the most frustrating things to deal with if you are in educational research. One “I don’t think so” can apparently ignore the 70 years since Dewey called classrooms prisons.
The worst thought is that, if we’re not actively trying to help students to transition, then the move to concrete operational thinking may be happening despite us instead of because of us – a sobering realisation.
I thought that Ray Lister finished the session with a really good thought about why students sometimes struggle:
The problem is not a student’s swimming skill, it’s the strength of the torrent.
As I’ve said before, making hard things easier to understand is part of the job of the educator. Anyone will fail, regardless of their ability, if we make it hard enough for them.
Post 300 – 2012, the Year of the Plague
Posted: September 9, 2012 Filed under: Education, Opinion | Tags: 300, advocacy, authenticity, community, education, ethics, higher education, measurement, teaching approaches, vaccination
As it turns out, this is post 300 and I’m going to use it to make a far more opinionated point than usual. I’m currently in Auckland, New Zealand, and there is a warning up on the wall about a severe outbreak of measles. This is one of the most outrageously stupid signs to see on a wall, anywhere, given that we have had a solid vaccine since 1971 and, despite ill-informed and unscientific studies that try to contradict this, the overall impact of the MMR vaccine is overwhelmingly positive. There is no reasonable excuse for the outbreak of an infectious, dangerous disease 40 years after the development of a reliable (and overwhelmingly safe) vaccine.
My fear is that, rather than celebrating the elimination of measles and polio (the latter down to under 200 cases this year so far, according to the records I’ve seen) in the same way that we eradicated smallpox, we will be seeing more and more of these signs identifying outbreaks of eradicable and controllable diseases, because ignorance is holding sway.
Be in no doubt, if we keep going down this path, the risk increases rapidly that a disease will finish us off because we will not have the correct mental framing and scientific support to quickly respond to a lethal outbreak or mutation. The risk we take is that, one day, our cities lie empty with signs like this up all over the place, doors sealed with crosses on them, a quiet end to a considerable civilisation. All attributable to a rejection of solid scientific evidence and the triumph of ignorance. We have survived massive outbreaks before, even those with high lethality, but we have been, for want of a better word, lucky. We live in much denser environments and are far more connected than we were before. I can step around the world in a day and, with every step, a disease can follow my footsteps.
One of my students recently plotted 2009 flu cases relative to air routes. While disease used to rely upon true geographical contiguity, we now connect the world with the false adjacency of the air route. Outbreaks in isolated parts of the world map beautifully onto the air hubs, their importance and their utilisation: more people, more disease.
So, in short, it’s not just the way that we control the controllable diseases that is important, it is accepting that the lower risk of vaccination is justifiable in the light of the much greater risk of infection and pandemic. This fights the human tendency to completely misunderstand probability, our susceptibility to fallacious thinking, and our desperate desire to do no harm to our children. I get this but we have to be a little bit smarter or we are putting ourselves at a much higher risk – regrettably, this is a future risk so temporal discounting gets thrown into the mix to make it ever harder for people to make a good decision.
Here’s what the Smallpox Wikipedia page says: “Smallpox was an infectious disease unique to humans” (emphasis mine). This is one of the most amazing things that we have achieved. Let’s do it again!
I talk a lot about education, in terms of my thoughts on learning and teaching, but we must never forget why we educate. It’s to enlighten, to inform, to allow us to direct our considerable resources to solving the considerable problems that beset us. It’s helping people to make good decisions. It’s being aware of why people find it so hard to accept scientific evidence: because they’re scared, because someone lied to them, because no-one has gone to the trouble to actually try and explain it to them properly. Ignorance of a subject is the state that we occupy before we become informed and knowledgeable. It’s not a permanent state!
That sign made me angry. But it underlined the importance of what it is that we do.
Conference Blogging! (Redux)
Posted: September 8, 2012 Filed under: Education | Tags: blogging, education, educational problem, educational research, feedback, Generation Why, higher education, icer, icer 2012, in the student's head, learning, measurement, student perspective, teaching, teaching approaches, time banking, workload
I’m about to head off to another conference and I’ve taken a new approach to my blogging. Rather than my traditional “Pre-load the queue with posts” activity, which tends to feel a little stilted even when I blog other things around it, I’ll be blogging in direct response to the conference and not using my standard posting time.
I’m off to ICER, which is only my second educational research conference, and I’m very excited. It’s a small but highly regarded conference and I’m getting ready for a lot of very smart people to turn their considerably weighty gaze upon the work that I’m presenting. My paper concerns the early detection of at-risk students, based on our analysis of over 200,000 student submissions. In a nutshell, our investigations indicate that paying attention to a student’s initial behaviour gives you some idea of future performance, as you’d expect, but it is the negative (late) behaviour that is the most telling. While there are no astounding revelations in this work, if you’ve read across the area, putting it all together with a large data corpus allows us to approach some myths and gently deflate them.
Our metric is timeliness, or how reliably a student submitted their work on time. Given that late penalties apply (without exception, usually) across the assignments in our school, late submission amounts to an expensive and self-defeating behaviour. We tracked over 1,900 students across all years of the undergraduate program and looked at all of their electronic submissions (all programming code is submitted this way, as are most other assignments.) A lot of the results were not that unexpected – students display hyperbolic temporal discounting, for example – but some things were slightly less expected.
For example, while 39% of my students hand in everything on time, 30% of people who hand in their first assignment late then go on to have a blemish-free future record. However, students who hand up that first assignment late are approximately twice as likely to have problems – which moves this group into a weakly classified at-risk category. Now, I note that this is before any marking has taken place, which means that, if you’re tracking submissions, one very quick and easy way to detect people who might be having problems is to look at the first assignment submission time. This inspection takes about a second and can easily be automated, so it’s a very low burden scheme for picking up people with problems. A personalised response, with constructive feedback or a gentle question, in the zone where the student should have submitted (but didn’t), can be very effective here. You’ll note that I’m working with late submitters not non-submitters. Late submitters are trying to stay engaged but aren’t judging their time or allocating resources well. Non-submitters have decided that effort is no longer worth allocating to this. (One of the things I’m investigating is whether a reminder in the ‘late submission’ area can turn non-submitters into submitters, but this is a long way from any outcomes.)
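The flag itself is trivial to automate. As a minimal sketch (none of this code is from the paper; the student IDs, timestamps and deadlines below are invented purely for illustration), “at-risk” at this first stage just means the first submission’s timestamp falls after the deadline:

```python
from datetime import datetime

def flag_at_risk(first_submission, deadline):
    """Return True if the first assignment was submitted late.

    A late first submission roughly doubled the odds of later
    problems in our data, so it serves as a cheap early-warning flag
    long before any marking has taken place.
    """
    return first_submission > deadline

# Hypothetical records: (student_id, submitted_at, due_at)
submissions = [
    ("s001", datetime(2012, 3, 16, 9, 0), datetime(2012, 3, 16, 17, 0)),
    ("s002", datetime(2012, 3, 17, 2, 30), datetime(2012, 3, 16, 17, 0)),
]

# Collect the students whose first submission missed the deadline.
at_risk = [sid for sid, sub, due in submissions if flag_at_risk(sub, due)]
print(at_risk)  # → ['s002']
```

In practice this would run against the submission system’s logs, which is why the inspection costs effectively nothing per student.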
I should note that the type of assignment work is important here. Computer programs, at least in the assignments that we set, are not just copied in from a text. The students are not remembering material or demonstrating understanding; they are using the information in new ways to construct solutions to problems. In Bloom’s revised taxonomic terms, this is the “Applying” phase, and it requires that the student be sufficiently familiar with the work to understand how to apply it.
I’m not measuring my students’ timeliness in terms of their ability to show up to a lecture and sleep or to hand up an essay of three paragraphs that barely meets my requirements because it’s been Frankenwritten from a variety of sources. The programming task requires them to look at a problem, design a solution, implement it and then demonstrate that it works. Their code won’t even compile (turn into a form that a machine can execute) unless they understand enough about the programming language and the problem, so this is a very useful indication of how well the student is keeping up with the demands of the course. By focusing on an “Applying” task, we require the student to undertake a task that is going to take time and the way in which they assess this resource and decide on its management tells us a lot about their metacognitive skills, how they are situated in the course and, ultimately, how at-risk they actually are.
Looking at assignment submission patterns is a crude measure, unashamedly, but it’s a cheap measure, as well, with a reasonable degree of accuracy. I can determine, with 100% accuracy, if a student is at-risk by waiting until the end of the course to see if they fail. I have accuracy but no utility, or agency, in this model. I can assume everyone is at risk at the start and then have the inevitable problem of people not identifying themselves as being in this area until it’s too late. By identifying a behaviour that can lead to problems, I can use this as part of my feedback to illustrate a concrete issue that the student needs to address. I now have the statistical evidence to back up why I should invest effort into this approach.
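The accuracy-versus-utility trade-off can be made concrete with a toy comparison (entirely invented numbers, not from the study): the hindsight “classifier” that waits for the final result is perfectly accurate but arrives too late to act on, while the week-one lateness flag is imperfect but actionable:

```python
# Hypothetical student records: was assignment 1 late, and did the
# student ultimately fail? (Invented for illustration only.)
students = [
    {"late_first": True,  "failed": True},
    {"late_first": True,  "failed": False},
    {"late_first": False, "failed": False},
    {"late_first": False, "failed": True},
]

def hindsight(student):
    # 100% "accurate" by construction, but only available after the
    # course ends, when intervention is no longer possible.
    return student["failed"]

def early_flag(student):
    # Imperfect, but available in week one, while help is still useful.
    return student["late_first"]

hindsight_correct = sum(hindsight(s) == s["failed"] for s in students)
early_correct = sum(early_flag(s) == s["failed"] for s in students)
print(hindsight_correct, early_correct)  # → 4 2
```

Perfect accuracy with no agency versus a noisy signal you can actually act on: the second is the one worth investing in.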
Yes, you get a lot of excuses as to why something happened, but I have derived a great deal of value from asking students questions like “Why did you submit this late?” and then, when they give me their excuse, asking them “How are you going to avoid it next time?” I am no longer surprised at the slightly puzzled look on the student’s face as they realise that this is a valid and necessary question – I’m not interested in punishing them, I want them to not make the same mistake again. How can we do that?
I’ll leave the rest of this discussion for after my talk on Monday.
And more on the Harvard Scandal: Scandal? Apparently it’s not?
Posted: September 7, 2012 Filed under: Education | Tags: advocacy, blogging, community, curriculum, education, educational research, ethics, higher education, in the student's head, plagiarism, reflection, student perspective, teaching, teaching approaches
I’ve just read a Salon article regarding the Harvard cheating issue. Apparently, according to Farhad Manjoo, these students should be “celebrated for collaborating”.
Note that word? It’s the one that I picked on in the Crimson article, and I did so because it’s a very mild word, and a very positive one at that. However, while acknowledging that the students were prohibited from any such sharing, Manjoo then asks, to me somewhat disingenuously, “What’s the point of prohibiting these students from working together?”
Urm, well, for most of the course, they don’t prohibit it. At the end of the course, when they want to see how much each individual knows, they attempt to test them individually. That’s not an unusual pattern.
Manjoo’s interpretation of the other articles goes well beyond anything else that I’ve seen, including lumping all of the plagiarism claims together as group work and tutor consultation. I can’t speak to this as I don’t have his sources but, given that this was explicitly forbidden anyway, he’s making an empty argument. It doesn’t matter how you slice it: if students worked together, they did something that they weren’t supposed to do. However Manjoo argues that their actions were justified, I’m not sure that the argument itself is.
The author obviously disagrees with the nature of the open-book test and, to my reading, has no real idea of what he’s talking about. Sentences like “But if you want to determine how well students think, why force them to think alone?” are almost completely self-defeating. This view also ignores the need to build knowledge in a way that functions when the group isn’t there. We don’t use social constructivism in the assumption that we will always be travelling in packs; we do it to assist the construction of knowledge inside the individual by leveraging the advantages of the social structure. To evaluate how well it has happened, and to isolate group effects so that we can see the individual performing, we use rules such as those Harvard clearly defined to set these boundaries.
Manjoo waxes rhetorical in this essay. “Rather than punishing these students, shouldn’t we be praising them for solving these problems the only way they could?” Well, no, I think that we shouldn’t. There were many ways that, if they thought this approach was unreasonable or unfair, they could have legitimately protested. I note that, judging by the number suspected, half the class managed not to cheat during this test – what do we say about these people? Are they worthy of double-plus-praise for somehow transcending the impossible test, or are they fools for not collaborating?
I’m not sure why these articles are providing so much padding for these students, if they have actually done nothing wrong (I hasten to add that they are merely suspected at the moment but if they are to be martyrs then let us assume a bleak outcome). At least, unlike the writers in the Crimson, Manjoo is a Cornell alumnus so he has some distance. I do note that he has a book called “True Enough: Learning to Live in a Post-Fact Society” which, according to the reviews, is about the media establishing views of reality that aren’t necessarily the facts so he’s aware of the impact that his words have on how people will see this issue. He is also writing in a column with, among its bylines, “The Conventional Wisdom Debunked”, so it’s not surprising that this article is written this way.
Manjoo has created (another) Harvard bogeyman: scared of collaboration, unfair to students, and out of step with reality. However, his argument is ultimately a series of misdirections and opinions that don’t address the core issue: if these students worked with each other, they shouldn’t have. Until he accepts this, and that theirs was not a legitimate course of action, I’m not sure that his arguments carry much weight with me.


