The Illusion of a Number
Posted: January 10, 2016 Filed under: Education, Opinion | Tags: authenticity, beauty, curve grading, design, education, educational problem, educational research, ethics, grading, higher education, in the student's head, learning, rapaport, reflection, resources, teaching, teaching approaches, thinking, tools, wittgenstein
Rabbit? Duck? Paging Wittgenstein!
I hope you’ve had a chance to read William Rapaport’s paper, which I referred to yesterday. He proposed a great, simple alternative to traditional grading that reduces confusion about what is signalled by ‘grade-type’ feedback, as well as making things easier for students and teachers. Being me, after saying how much I liked it, I then finished by saying “… but I think that there are problems.” His approach was that we could break all grading down into: did nothing, wrong answer, some way to go, pretty much there. And that, I think, is much better than a lot of the nonsense that we pretend we hand out as marks. But, yes, I have some problems.
I note that Rapaport’s exceedingly clear and honest account of what he is doing includes this statement. “Still, there are some subjective calls to make, and you might very well disagree with the way that I have made them.” Therefore, I have license to accept the value of the overall scholarship and the frame of the approach, without having to accept all of the implementation details given in the paper. Onwards!
I think my biggest concern with the approach given is not in how it works for individual assessment elements. In that area, I think it shines, as it makes clear what has been achieved. A marker can quickly place the work into one of four boxes if there are clear guidelines as to what has to be achieved, without having to worry about one or two percentage points here or there. Because the grade bands are so distinct, as Rapaport notes, it is very hard for the student to make the 'I only need one more point' argument that is so clearly indicative of a focus on the grade rather than the learning. (I note that such emphasis is often what we have trained students for; there is no pejorative intention here.) I agree this is consistent and fair, and time-saving (after Walvoord and Anderson), and it avoids curve grading, which I loathe with a passion.
However, my problems start when we are combining a number of these triaged grades into a cumulative mark for an assignment or for a final letter grade, showing progress in the course. Sections 4.3 and 4.4 of the paper detail the implementation of assignments that have triage graded sub-tasks. Now, instead of receiving a “some way to go” for an assignment, we can start getting different scores for sub-tasks. Let’s look at an example from the paper, note 12, to describe programming projects in CS.
- Problem definition: 0, 1, 2, 3
- Top-down design: 0, 1, 2, 3
- Documented code
  - Code: 0, 1, 2, 3
  - Documentation: 0, 1, 2, 3
- Annotated output
  - Output: 0, 1, 2, 3
  - Annotations: 0, 1, 2, 3

Total possible points = 18
Remember my hypothetical situation from yesterday? I provided an example of two students who managed to score enough marks to pass by knowing the complement of each other's course knowledge. Looking at the above example, it appears possible (although perhaps not easy to arrange) for this situation to occur, with both students receiving 9/18 for entirely different aspects of the work. But I have some more pressing questions:
- Should it be possible for a student to receive full marks for output, if there is no definition, design or code presented?
- Can a student receive full marks for everything else if they have no design?
The first question indicates what we already know about task dependencies: if we want to build them into numerical grading, we have to be pedantically specific and provide rules on top of the aggregation mathematics. But, more subtly, by aggregating these measures, we no longer have an ‘accurately triaged’ grade to indicate if the assignment as a whole is acceptable or not. An assignment with no definition, design or code can hardly be considered to be a valid submission, yet good output, documentation and annotation (with no code) will not give us the right result!
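To make the aggregation problem concrete, here is a minimal Python sketch against the note-12 rubric above, with invented triage scores for two hypothetical students whose work is exactly complementary. Both aggregate to 9/18 while sharing no achievement at all:

```python
# Sub-tasks from the note-12 rubric; all scores (0-3) are invented
# for illustration.
subtasks = ["definition", "design", "code",
            "documentation", "output", "annotations"]

# Student A does only the first half of the work, Student B only the second.
student_a = {"definition": 3, "design": 3, "code": 3,
             "documentation": 0, "output": 0, "annotations": 0}
student_b = {"definition": 0, "design": 0, "code": 0,
             "documentation": 3, "output": 3, "annotations": 3}

total_a = sum(student_a.values())
total_b = sum(student_b.values())
print(total_a, total_b)  # 9 9: identical aggregates

# Sub-tasks on which both students demonstrated anything at all: none.
overlap = [t for t in subtasks if student_a[t] > 0 and student_b[t] > 0]
print(overlap)  # []
```

The identical 9/18 totals are indistinguishable once aggregated, even though Student B has, absurdly, submitted full-marks output and documentation with no code behind them.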
The second question is more for those of us who teach programming and it's a question we all should ask. If a student can get a decent grade for an assignment without submitting a design, then what message are we sending? We are, implicitly, saying that although we talk a lot about design, it's not something you have to do in order to be successful. Rapaport does go on to talk about weightings and how we can emphasise these issues, but we are still faced with an ugly reality: unless we weight our key aspects at 50-60% of the final aggregate, students will be able to side-step them and still perform to a passing standard. Every assignment should be doing something useful, modelling the correct approaches and demonstrating correct techniques. How do we capture that?
Now, let me step back and say that I have no problem with identifying the sub-tasks and clearly indicating the level of performance using triage grading, but I disagree with using it for marks. For feedback it is absolutely invaluable: triage grading on sub-tasks will immediately tell you where the majority of students are having trouble. That then lets you know an area that is more challenging than you thought or one that your students were not prepared for, for some reason. (If every student in the class is struggling with something, the problem is more likely to lie with the teacher.) However, I see three major problems with sub-task aggregation and, thus, with final grade aggregation from assignments.
The first problem is that I think this is the wrong kind of scale to try to aggregate in this way. As Rapaport notes, agreement on clear, linear intervals in grading is never going to be achieved and is, very likely, not even possible. Recall that there are four fundamental types of scale: nominal, ordinal, interval and ratio. The scales in use for triage grading are not interval scales (the intervals aren't predictable or equidistant) and thus we cannot expect to average them and get sensible results. What we have here are, to my eye, ordinal scales, with no objective distance but a clear ranking of best to worst. The clearest indicator of this is the construction of a B grade for final grading, where no such concept exists in the triage marks for assessing assignment quality. We have created a "some way to go but sometimes nearly perfect" that shouldn't really exist. Think of it like runners: you win one race and you come third in another. You never actually came second in any race, so averaging your placings to 'second' makes no sense.
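As a minimal illustration of the ordinal problem, with the four triage bands labelled as in the paper and two invented sub-task grades:

```python
# The four triage bands; the two sub-task grades below are invented.
labels = {0: "did nothing", 1: "wrong answer",
          2: "some way to go", 3: "pretty much there"}

grades = [3, 1]  # one task 'pretty much there', one task a 'wrong answer'
avg = sum(grades) / len(grades)
print(labels[int(avg)])  # some way to go
```

The mean lands on "some way to go", a level of performance that was never demonstrated on any task: the averaged label describes neither piece of work, just as the runner never came second.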
The second problem is that aggregation masks the beauty of triage in terms of identifying if a task has been performed to the pre-determined level. In an ideal world, every area of knowledge that a student is exposed to should be an important contributor to their learning journey. We may have multiple assignments in one area but our assessment mechanism should provide clear opportunities to demonstrate that knowledge. Thus, their achievement of sufficient assignment work to demonstrate their competency in every relevant area of knowledge should be a necessary condition for graduating. When we take triage grading back to an assignment level, we can then look at our assignments grouped by knowledge area and quickly see if a student has some way to go or has achieved the goal. This is not anywhere near as clear when we start aggregating the marks because of the mathematical issues already raised.
Finally, reducing triage to a mathematical approximation weakens our ability to specify which areas of an assessment are really valuable. Weighting is a reasonable approximation to this, but a formula that needs more and more 'fudge factors' (a term Rapaport himself uses) to hold together is just a little too fragile.
To summarise, I really like the thrust of this paper. I think what is proposed is far better, even with all of the problems raised above, at giving a reasonable, fair and predictable grade to students. But I think that the clash with existing grading traditions and the implicit requirement to turn everything back into one number is causing problems that have to be addressed. These problems mean that this solution is not, yet, beautiful. But let’s see where we can go.
Tomorrow, I’ll suggest an even more cut-down version of grading and then work on an even trickier problem: late penalties and how they affect grades.
Assessment is (often) neither good nor true.
Posted: January 9, 2016 Filed under: Education, Opinion | Tags: advocacy, aesthetics, beauty, community, design, education, ethics, higher education, in the student's head, kohn, principles of design, rapaport, reflection, student perspective, teaching, teaching approaches, thinking, tools, universal principles of design, work/life balance, workload

If you've been reading my blog over the past years, you'll know that I have a lot of time for thinking about assessment systems that encourage and develop students, with an emphasis on intrinsic motivation. I'm strongly influenced by the work of Alfie Kohn, unsurprisingly given I've already shown my hand on Foucault! But there are many other writers who are… reassessing assessment: why we do it, why we think we are doing it, how we do it, what actually happens and what we achieve.

In my framing, I want assessment to be as all other aspects of education: aesthetically satisfying, leading to good outcomes, and being clear about what it is and what it is not. Beautiful. Good. True. There are some better and worse assessment approaches out there and there are many papers discussing this. One of these that I have found really useful is Rapaport's paper on a simplified assessment process for consistent, fair and efficient grading. Although I disagree with some aspects, I consider it to be both good, as it is designed to clearly address a certain problem to achieve good outcomes, and true, because it is very honest about providing guidance to the student as to how well they have met the challenge. It is also highly illustrative and honest in representing the struggle of the author in dealing with the collision of novel and traditional assessment systems. However, further discussion of Rapaport is for the near future. Let me start by demonstrating how broken things often are in assessment, by taking you through a hypothetical situation.
Thought Experiment 1
Two students, A and B, are taking the same course. There are a number of assignments in the course and two exams. A and B, by sheer luck, end up doing no overlapping work. They complete different assignments, half each, and achieve the same (cumulative bare pass overall) marks. They then manage to score bare pass marks in both exams, but one answers only the even questions and the other only the odd. (And, yes, there are an even number of questions.) Because of the way the assessment was constructed, they have managed to avoid any common answers in the same area of course knowledge. Yet both end up scoring 50%, a passing grade in the Australian system.
Which of these students has the correct half of the knowledge?
I had planned to build up to Rapaport but, if you're reading the blog comments, he's already been mentioned, so I'll summarise his 2011 paper before I get to my main point. In 2011, William J. Rapaport, SUNY Buffalo, published a paper entitled "A Triage Theory of Grading: The Good, the Bad and the Middling" in Teaching Philosophy. This paper summarised a number of thoughtful and important authors, among them Perry, Wolff, and Kohn. Rapaport starts by asking why we grade, moving through Wolff's taxonomic classification of assessment into criticism, evaluation, and ranking. Students are trained, by our world and our education systems, to treat grades as a measure of progress and, in many ways, a proxy for knowledge. But this brings us into conflict with Perry's developmental stages, where students start with a deep need for authority and the safety of a single right answer. It is only when students are capable of understanding that there are, in many cases, multiple right answers that we can expect them to understand that grades can have multiple meanings. As Rapaport notes, grades are inherently dual: a representative symbol attached to a quality measure to which, in his words, "ethical and aesthetic values are attached" (emphasis mine). In other words, a B is a measure of progress (not quite there) that also has a value of being … second-tier, if an A is our measure of excellence. A is not A, as it must be contextualised. Sorry, Ayn.
When we start to examine why we are grading, Kohn tells us that the carrot and stick is never as effective as the motivation that someone has intrinsically. So we look to Wolff: are we critiquing for feedback, are we evaluating learning, or are we providing handy value measures for sorting our product for some consumer or market? Returning to my thought experiment above, we cannot provide feedback on assignments that students don’t do, our evaluation of learning says that both students are acceptable for complementary knowledge, and our students cannot be discerned from their graded rank, despite the fact that they have nothing in common!
Yes, it’s an artificial example but, without attention to the design of our courses and in particular the design of our assessment, it is entirely possible to achieve this result to some degree. This is where I wish to refer to Rapaport as an example of thoughtful design, with a clear assessment goal in mind. To step away from measures that provide an (effectively) arbitrary distinction, Rapaport proposes a tiered system for grading that simplifies the overall system with an emphasis on identifying whether a piece of assessment work is demonstrating clear knowledge, a partial solution, an incorrect solution or no work at all.
This, for me, is an example of assessment that is pretty close to true. The difference between a 74 and a 75 is, in most cases, not very defensible (after Haladyna) unless you are applying some kind of 'quality gate' that really reduces a percentile scale to, at most, 13 different outcomes. Rapaport's argument is that we can reduce this further and this will reduce grade clawing, identify clear levels of achievement and reduce marking load on the assessor. That last point is important. A system that buries the marker under load is not sustainable. It cannot be beautiful.
There are issues in taking this approach and turning it back into the grades that our institutions generally require. Rapaport is very open about the difficulties that he has turning his triage system into an acceptable letter grade and it's worth reading the paper to see that discussion alone, because it quite clearly shows what happens when a novel assessment scheme collides with traditional institutional requirements.
Rapaport's scheme clearly defines which of Wolff's criteria he wishes his assessment to achieve. The scheme, for individual assessments, is no good for ranking (although we can fashion a ranking from it) but it is good for identifying weak areas of knowledge (as transmitted or received), for evaluation of progress, and also for providing elementary critique. It says what it is and it pretty much does it. It sets out to achieve a clear goal.
The paper ends with a summary of the key points of Haladyna’s 1999 book “A Complete Guide to Student Grading”, which brings all of this together.
Haladyna says that "Before we assign a grade to any students, we need:

- an idea about what a grade means,
- an understanding of the purposes of grading,
- a set of personal beliefs and proven principles that we will use in teaching and grading,
- a set of criteria on which the grade is based, and, finally,
- a grading method, which is a set of procedures that we consistently follow in arriving at each student's grade." (Haladyna 1999: ix)
There is no doubt that Rapaport's scheme meets all of these criteria and, yet, for me, we have not yet gone far enough in search of the most beautiful, most good and most true extent that we can take this idea. Is point 3, which could be summarised as aesthetics, not enough for me? Apparently not.
Tomorrow I will return to Rapaport to discuss those aspects I disagree with and, later on, discuss both an even more trimmed-down model and some more controversial aspects.
Beauty Attack I: Assessment
Posted: January 8, 2016 Filed under: Education, Opinion | Tags: aesthetics, beauty, design, discipline, education, educational research, ethics, foucault, higher education, in the student's head, principles of design, punishment, reflection, resources, teaching, teaching approaches

For the next week, I'm going to be applying an aesthetic lens to assessment and, because I'm in Computer Science, I'll be focusing on the assessment of Computer Science knowledge and practice.
How do we know if our students know something? In reality, the best way is to turn them loose, come back in 25 years and ask the people in their lives, their clients, their beneficiaries and (of course) their victims, the same question: “Did the student demonstrate knowledge of area X?”
This is not available to us as an option because my Dean, if not my Head of School, would probably peer at me curiously if I were to suggest that all measurement of my efficacy be moved a generation from now. Thus, I am forced to retreat to the conventions and traditions of assessment: it is now up to the student to demonstrate to me, within a fixed timeframe, that he or she has taken a firm grip of the knowledge.
We know that students who are prepared to learn and who are motivated to learn will probably learn, often regardless of what we do. We don’t have to read Vallerand et al to be convinced that self-motivated students will perform, as we can see it every day. (But it is an enjoyable paper to read!) Yet we measure these students in the same assessment frames as students who do not have the same advantages and, thus, may not yet have the luxury or capacity of self-motivation: students from disadvantaged backgrounds, students who are first-in-family and students who wouldn’t know auto-didacticism if it were to dance in front of them.
How, then, do we fairly determine what it means to pass, what it means to fail and, even more subtly, what it means to pass or fail well? I hesitate to invoke Foucault, especially when I speak of “Discipline and Punish” in an educational setting, but he is unavoidable when we gaze upon a system that is dedicated to awarding ranks, graduated in terms of punishment and reward. It is strange, really, that were many patients to die under the hand of a surgeon for a simple surgery, we would ask for an inquest, but many students failing under the same professor in a first-year course is merely an indicator of “bad students”. So many of our mechanisms tell us that students are failing but often too late to be helpful and not in a way that encourages improvement. This is punishment. And it is not good enough.

Foucault: thinking about something very complicated, apparently.
Our assessment mechanisms are not beautiful. They are barely functional. They exist to provide a rough measure to separate pass from fail, with a variety of other distinctions that owe more to previous experience and privilege in many cases than any higher pedagogical approach.
Over the next week, I shall conduct an attack upon the assessment mechanisms that are currently used in my field, including my own, in the hope of arriving at a mechanism of design, practice and validation that is pedagogically pleasing (the aesthetic argument again) and will lead to outcomes that are both good and true.
Dewey’s Pedagogic Creed
Posted: January 6, 2016 Filed under: Education, Opinion | Tags: aesthetics, authenticity, beauty, design, dewey, education, educational problem, educational research, ethics, higher education, in the student's head, john c. dewey, learning, pragmatism, reflection, teaching, thinking

As I've noted, the space I'm in is not new, although some of the places I hope to go with it are, and we have records of approaches to education that I think fit well into an aesthetic framing.
As a reminder, I'm moving beyond 'sensually pleasing' in the usual sense and extending this to the wider definition of aesthetics: characteristics that define an approach or movement. Under this definition, we can see a Cubist work as both traditionally aesthetically pleasing and also beautiful because of its adherence to the Cubist aesthetic. To draw on this, where many art viewers find a large distance between themselves and an art work, it is often attributable to a conflict over how beauty is defined in that context. As Hegel noted, beauty is not objective; it is our perspective and our understanding of its effect upon us (after Kant) that contributes greatly to the experience.

John Dewey. Psychologist, philosopher, educator, activist and social critic. Also, inspiration.
Dewey’s Pedagogic Creed was published in 1897 and he sought to share his beliefs on what education was, what schools were, what he considered the essential subject-matter of education, the methods employed, and the essential role of the school in social progress. I use the word ‘beliefs’ deliberately as this is what Dewey published: line after line of “I believe…” (As a note, this is what a creed is, or should be, as a set of beliefs or aims to guide action. The word ‘creed’ comes to us from the Latin credo, which means “I believe”.) Dewey is not, for the most part, making a religious statement in his Creed although his personal faith is expressed in a single line at the end.
To my reading, and you know that I seek characteristics that I can use to form some sort of object to guide me in defining beautiful education, many of Dewey’s points easily transfer to characteristics of beauty. For example, here are three lines from the work:
- "I believe that education thus conceived marks the most perfect and intimate union of science and art conceivable in human experience."
- "I believe that with the growth of psychological science, giving added insight into individual structure and laws of growth; and with growth of social science, adding to our knowledge of the right organization of individuals, all scientific resources can be utilized for the purposes of education."
- "I believe that under existing conditions far too much of the stimulus and control proceeds from the teacher, because of neglect of the idea of the school as a form of social life."
Dewey was very open about what he thought the role of school was: he saw it as the "fundamental method of social progress and reform". I believe that he saw education, when carried out correctly, as being a thing that was beautiful, good and true, and his displeasure with what he encountered in the schools and colleges of the late 19th/early 20th Century is manifest in his writings. He writes in reaction to an ugly, unfair, industrialised and mechanistic system and he wants something that conforms to his aesthetics. From the three lines above, he seeks education that is grounded in the arts and science, he wants to use technology in a positive way and he wants schools to be a vibrant and social community.
And this is exactly what the evidence tells us works. The fact that Dewey arrived at this through a focus on equity, opportunity, his work in psychology and his own observations is a testament to his vision. Dewey was rebelling against the things he could see were making children hate education.
I believe that next to deadness and dullness, formalism and routine, our education is threatened with no greater evil than sentimentalism.
John Dewey, School Journal vol. 54 (January 1897), pp. 77-80
Here, sentimentalism is where we try to evoke emotions without associating them with an appropriate action: Dewey seeks authenticity and a genuine expression. But look at the rest of that list: dead, dull, formal and routine. Dewey would go on to talk about schools as if they were prisons and over a hundred years later, we continue to line students up into ranks and bore them.
I have a lot of work to do as I study Dewey and his writings again with my aesthetic lens in place but, while I do so, it might be worth reading the creed. Some things are dated. Some ideas have been improved upon with more research, including his own, and we will return to these issues. But I find it hard to argue with this:
I believe that the community’s duty to education is, therefore, its paramount moral duty. By law and punishment, by social agitation and discussion, society can regulate and form itself in a more or less haphazard and chance way. But through education society can formulate its own purposes, can organize its own means and resources, and thus shape itself with definiteness and economy in the direction in which it wishes to move.
ibid.
Maximise beauty or minimise …?
Posted: January 4, 2016 Filed under: Education, Opinion | Tags: advocacy, authenticity, beauty, charlie brown, education, educational research, ethics, higher education, in the student's head, peanuts, peppermint patty, principles of design, reflection, teaching, teaching approaches

There is a Peanuts comic from April 16, 1972 where Peppermint Patty asks Charlie Brown what he thinks the secret of living is. Charlie Brown's answer is "A convertible and a lake." His reasoning is simple. When it's sunny you can drive around in the convertible and be happy. When it's raining you can think "oh well, the rain will fill up my lake." Peppermint Patty asks Snoopy the same question and, committed sensualist that he is, he kisses her on the nose.

This is the Amphicar. In the 21st century, no philosophical construct will avoid being reified.
Charlie Brown, a character written to be constantly ground down by the world around him, is not seeking to maximise his happiness, he is seeking to minimise his unhappiness. Given his life, this is an understandable philosophy.
But what of beauty and, in this context, beauty in education? I’ve already introduced the term ‘ugly’ as the opposite of beauty but it’s hard for me to wrap my head around the notion of ‘minimising ugliness’; ugly is such a strong term. It’s also hard to argue that any education, when it covers any aspects to which we would apply that label, is ever totally ugly. Perhaps, in the educational framing, the absence of beauty is plainness. We end up with things that are ordinary, rather than extraordinary. I think that there is more than enough range between beauty and plainness for us to have a discussion on the movement between those states.
Is it enough for us to accept educational thinking that is acceptably plain? Is that a successful strategy? Many valid concerns about learning at scale focus on the innate homogeneity and lack of personalisation inherent in such an approach: plainness is the enemy. Yet there are many traditional and face-to-face approaches where plainness stares us in the face. Banality in education is, when identified, always rejected, yet it so often slips by without identification. We know that there is a hole in our slippers, yet we only seem to notice when that hole directly affects us or someone else points it out.
My thesis here is that a framing of beauty should lead us to a strategy of maximising beauty, rather than minimising plainness, as it is only in that pursuit that we model that key stage of falling in love with knowledge that we wish our students to emulate. If we say “meh is ok”, then that is what we will receive in return. We model, they follow as part of their learning. That’s what we’re trying to make happen, isn’t it?
What would Charlie Brown’s self-protective philosophy look like in a positive framing, maximising his joy rather than managing his grief? I’m not sure but I think it would look a lot like a dancing beagle who kisses people on the nose. We may need more than this for a sound foundation to reframe education!
Educator’s Statement: Nick Falkner
Posted: October 11, 2015 Filed under: Education, Opinion | Tags: advocacy, authenticity, blogging, community, education, educational problem, ethics, higher education, learning, teaching, thinking

An artist's educator's statement (or artist educator statement) is an artist's educator's written description of their work. The brief verbal representation is for, and in support of, his or her own work to give the viewer the student/a peer/an observer/questioning parents/unconvinced politicians/citizens/history understanding. As such it aims to inform, connect with artistic/scientific/educational/societal/intellectual/political contexts, and present the basis for the work; it is therefore didactic, descriptive, or reflective in nature. (Wikipedia + Nick Falkner)
Fear thrives in conditions of ignorance and deprivation. Ignorance is defeated by knowledge. Deprivation is defeated by fairness, equality and equity.
Education shares knowledge and provides the basis for more knowledge. Education attacks ignorance, fights fear, champions equality and saves the world.
If I am always learning then I can model learning for my students and adapt my practice to reflect changes in education as my knowledge increases. Who are my students? What do they need to know? How can I teach them? When will I know if they have the knowledge that they need? What do I need to do today, tomorrow and the day after that?
I have made mistakes but I will try not to make the same mistakes again. The essence of education is that we pass on what we have learned and keep developing knowledge so that we do not have to make the same mistakes again.
That is why I am an educator.
Designing a MOOC: how far did it reach? #csed
Posted: June 10, 2015 Filed under: Education, Opinion | Tags: advocacy, authenticity, blogging, collaboration, community, computer science education, constructivist, contributing student pedagogy, curriculum, data visualisation, design, education, educational problem, educational research, ethics, feedback, higher education, in the student's head, learning, measurement, MOOC, moocs, principles of design, reflection, resources, students, teaching, teaching approaches, thinking, tools

Mark Guzdial posted over on his blog on "Moving Beyond MOOCS: Could we move to understanding learning and teaching?" and discusses aspects (that still linger) of MOOC hype. (I've spoken about MOOCs done badly before, as well as recording the thoughts of people like Hugh Davis from Southampton.) One of Mark's paragraphs reads:
“The value of being in the front row of a class is that you talk with the teacher. Getting physically closer to the lecturer doesn’t improve learning. Engagement improves learning. A MOOC puts everyone at the back of the class, listening only and doing the homework”
My reply to this was:
“You can probably guess that I have two responses here, the first is that the front row is not available to many in the real world in the first place, with the second being that, for far too many people, any seat in the classroom is better than none.
But I am involved in a, for us, large MOOC so my responses have to be regarded in that light. Thanks for the post!”
Mark, of course, called my bluff and responded with:
“Nick, I know that you know the literature in this space, and care about design and assessment. Can you say something about how you designed your MOOC to reach those who would not otherwise get access to formal educational opportunities? And since your MOOC has started, do you know yet if you achieved that goal — are you reaching people who would not otherwise get access?”
So here is that response. Thanks for the nudge, Mark! The answer is a bit long but please bear with me. We will be posting a longer summary after the course is completed, in a month or so. Consider this the unedited taster. I'm putting this here, early, prior to the detailed statistical work, so you can see where we are. All the numbers below are fresh off the system, to drive discussion and to answer Mark's question at, pretty much, a conceptual level.
First up, as some background for everyone, the MOOC team I’m working with is the University of Adelaide‘s Computer Science Education Research group, led by A/Prof Katrina Falkner, with me (Dr Nick Falkner), Dr Rebecca Vivian, and Dr Claudia Szabo.
I’ll start by noting that we’ve been working to solve the inherent scaling issues at the front of the classroom for some time. If I had a class of 12 then there’d be no problem engaging with everyone, but I keep finding myself in rooms of 100+, which forces some people to sit away from me and also limits the number of meaningful interactions I can have with individuals in one setting. While I take Mark’s point about the front of the classroom, and the associated research is pretty solid on this, we encountered an inherent problem when we identified that students were better off down the front… and yet we kept teaching to rooms with more students than front. I’ll go out on a limb and say that this is actually a moral issue that we, as a sector, have had to look at and ignore in the face of constrained resources. The nature of large spaces and people, coupled with our inability to hover, means that we can either choose to have a row of students effectively in a semi-circle facing us, or we accept that, after a relatively small number of students or rows, we have constructed a space that is inherently divided by privilege and will lead to disengagement.
So, Katrina’s and my first foray into this space was dealing with the problem in the physical lecture spaces that we had, with the 100+ classes that we had.
Katrina and I published a paper on “contributing student pedagogy” in Computer Science Education 22 (4), 2012, to identify ways for forming valued small collaboration groups as a way to promote engagement and drive skill development. Ultimately, by reducing the class to a smaller number of clusters and making those clusters pedagogically useful, I can then bring the ‘front of the class’-like experience to every group I speak to. We have given talks and applied sessions on this, including a special session at SIGCSE, because we think it’s a useful technique that reduces the amount of ‘front privilege’ while extending the amount of ‘front benefit’. (Read the paper for actual detail – I am skimping on summary here.)
We then got involved in the support of the national Digital Technologies curriculum for primary and middle school teachers across Australia, after being invited to produce a support MOOC (really a SPOC, small, private, on-line course) by Google. The target learners were teachers who were about to teach or who were teaching into, initially, Foundation to Year 6 and thus had degrees but potentially no experience in this area. (I’ve written about this before and you can find more detail on this here, where I also thanked my previous teachers!)
The motivation of this group of learners was different from a traditional MOOC because (a) everyone had both a degree and probable employment in the sector, which reduced opportunistic registration to a large extent, and (b) Australian teachers are required to have a certain number of professional development (PD) hours a year. Through a number of discussions across the key groups, we had our course recognised as PD and this meant that doing our course was considered to be valuable, although almost all of the teachers we spoke to were furiously keen for this information anyway and my belief is that the PD was very much ‘icing’ rather than ‘cake’. (Thank you again to all of the teachers who have spent time taking our course – we really hope it’s been useful.)
To discuss access and reach, we can measure teachers who’ve taken the course (somewhere in the low thousands) and then estimate the number of students potentially assisted and that’s when it gets a little crazy, because that’s somewhere around 30-40,000.
In his talk at CSEDU 2014, Hugh Davis identified the student groups who get involved in MOOCs as follows. The majority of people undertaking MOOCs were life-long learners (older, degreed, M/F 50/50), people seeking skills via PD, and those with poor access to Higher Ed. There is also a group of Uni ‘tasters’, but it is very, very small. (I think we can agree that tasting a MOOC is not tasting a campus-based Uni experience. Less ivy, for starters.) The three approaches to the course once inside were auditing, completing and sampling, and it’s this final one that I want to emphasise because it brings us to one of the differences of MOOCs. We are not in control of when people decide that they are satisfied with the free education that they are accessing, unlike our strong gatekeeping on traditional courses.
I am in total agreement that a MOOC is not the same as a classroom but, also, that it is not the same as a traditional course, where we define how the student will achieve their goals and how they will know when they have completed. MOOCs function far more like many people’s experience of web browsing: they hunt for what they want and stop when they have it, thus the sampling engagement pattern above.
(As an aside, does this mean that a course that is perceived as ‘all back of class’ will rapidly be abandoned because it is distasteful? This makes the student-consumer a much more powerful player in their own educational market and is potentially worth remembering.)
Knowing these different approaches, we designed the individual subjects and overall program so that it was very much up to the participant how much they chose to take and individual modules were designed to be relatively self-contained, while fitting into a well-designed overall flow that built in terms of complexity and towards more abstract concepts. Thus, we supported auditing, completing and sampling, whereas our usual face-to-face (f2f) courses only support the first two in a way that we can measure.
As Hugh notes, and we agree through growing experience, marking/progress measures at scale are very difficult, especially when automated marking is not enough or not feasible. Based on our earlier work in contributing collaboration in the classroom, for the F-6 Teacher MOOC we used a strong peer-assessment model where contributions and discussions were heavily linked. Because of the nature of the cohort, geographical and year-level groups formed who then conducted additional sessions and produced shared material at a slightly terrifying rate. We took the approach that we were not telling teachers how to teach but we were helping them to develop and share materials that would assist in their teaching. This reduced potential divisions and allowed us to establish a mutually respectful relationship that facilitated openness.
(It’s worth noting that the courseware is creative commons, open and free. There are people reassembling the course for their specific take on the school system as we speak. We have a national curriculum but a state-focused approach to education, with public and many independent systems. Nobody makes any money out of providing this course to teachers and the material will always be free. Thank you again to Google for their ongoing support and funding!)
Overall, in this first F-6 MOOC, we had higher than usual retention of students and higher than usual participation, for the reasons I’ve outlined above. But this material was for curriculum support for teachers of young students, all of whom were pre-programming, and it could be contained in videos and on-line sharing of materials and discussion. We were also in the MOOC sweet-spot: existing degreed learners, a PD driver, and their PD requirement depended on progressive demonstration of goal achievement, which we recognised post-course with a pre-approved certificate form. (Important note: if you are doing this, clear up how the PD requirements are met and how they need to be reported back, as early on as you can. It meant that we could give people something valuable in a short time.)
The programming MOOC, Think. Create. Code on EdX, was more challenging in many regards. We knew we were in a more difficult space and would be more in what I shall refer to as ‘the land of the average MOOC consumer’. No strong focus, no PD driver, no geographically guaranteed communities. We had to think carefully about what we considered to be useful interaction with the course material. What counted as success?
To start with, we took an image-based approach (I don’t think I need to provide supporting arguments for media-driven computing!) where students would produce images and, over time, refine their coding skills to produce, and understand how to produce, more complex images, building towards animation. People who have not had good access to education may not see why we would use programming to build more complex systems, but our goal was to make images, and that is a fairly universally understood idea, with a short production timeline and a very clear indication of achievement: “Does it look like a face yet?”
In terms of useful interaction, if someone wrote a single program that drew a face, for the first time – then that’s valuable. If someone looked at someone else’s code and spotted a bug (however we wish to frame this), then that’s valuable. I think that someone writing a single line of correct code, where they understand everything that they write, is something that we can all consider to be valuable. Will it get you a degree? No. Will it be useful to you in later life? Well… maybe? (I would say ‘yes’ but that is a fervent hope rather than a fact.)
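To make that “first valuable program” concrete, here is a minimal sketch in the Processing syntax the course environment supports. This is an illustration I have made up for this post, not actual course material, so treat the specifics (coordinates, colours) as arbitrary:

```processing
// A face from a handful of drawing statements - the sort of short,
// self-evidently successful first program described above.
// (Illustrative example only, not taken from Think. Create. Code.)
void setup() {
  size(300, 300);               // the canvas, in pixels
  background(255);              // white background
  fill(255, 220, 150);
  ellipse(150, 150, 200, 220);  // head
  fill(255);
  ellipse(110, 120, 40, 30);    // left eye
  ellipse(190, 120, 40, 30);    // right eye
  fill(0);
  ellipse(110, 120, 10, 10);    // left pupil
  ellipse(190, 120, 10, 10);    // right pupil
  noFill();
  arc(150, 190, 80, 50, 0, PI); // smile
}
```

Every statement maps directly to something visible on screen, which is what gives the “Does it look like a face yet?” feedback loop its power.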
So our design brief was that it should be very easy to get into programming immediately, with an active and engaged approach, and that we have the same “mostly self-contained week” approach, with lots of good peer interaction and mutual evaluation to identify areas that needed work to allow us to build our knowledge together. (You know I may as well have ‘social constructivist’ tattooed on my head so this is strongly in keeping with my principles.) We wrote all of the materials from scratch, based on a 6-week program that we debated for some time. Materials consisted of short videos, additional material as short notes, participatory activities, quizzes and (we planned for) peer assessment (more on that later). You didn’t have to have been exposed to “the lecture” or even the advanced classroom to take the course. Any exposure to short videos or a web browser would be enough familiarity to go on with.
Our goal was to encourage as much engagement as possible, taking into account the fact that any number of students over 1,000 would be very hard to support individually, even with the 5-6 staff we had to help out. But we wanted students to be able to develop quickly, share quickly and, ultimately, comment back on each other’s work quickly. From a cognitive load perspective, it was crucial to keep the number of things that weren’t relevant to the task to a minimum, as we couldn’t assume any prior familiarity. This meant no installers, no linking, no loaders, no shenanigans. Write program, press play, get picture, share to gallery, winning.
As part of this, our support team (thanks, Jill!) developed a browser-based environment for Processing.js that integrated with a course gallery. Students could save their own work easily and share it trivially. Our early indications show that a lot of students jumped in and tried to do something straight away. (Processing is really good for getting something up, fast, as we know.) We spent a lot of time testing browsers, testing software, and writing code. All of the recorded materials used that development environment (this was important as Processing.js and Processing have some differences) and all of our videos show the environment in action. Again, as little extra cognitive load as possible – no implicit requirement for abstraction or skills transfer. (The AdelaideX team worked so hard to get us over the line – I think we may have eaten some of their brains to save those of our students. Thank you again to the University for selecting us and to Katy and the amazing team.)
The actual student group, about 20,000 people over 176 countries, did not have the “built-in” motivation of the previous group although they would all have their own levels of motivation. We used ‘meet and greet’ activities to drive some group formation (which worked to a degree) and we also had a very high level of staff monitoring of key question areas (which was noted by participants as being very high for EdX courses they’d taken), everyone putting in 30-60 minutes a day on rotation. But, as noted before, the biggest trick to getting everyone engaged at the large scale is to get everyone into groups where they have someone to talk to. This was supposed to be provided by a peer evaluation system that was initially part of the assessment package.
Sadly, the peer assessment system didn’t work as we wanted it to and we were worried that it would form a disincentive, rather than a supporting community, so we switched to a forum-based discussion of the works on the EdX discussion forum. At this point, a lack of integration between our own UoA programming system and gallery and the EdX discussion system allowed too much distance – the close binding we had in the F-6 MOOC wasn’t there. We’re still working on this because everything we know and all evidence we’ve collected before tells us that this is a vital part of the puzzle.
In terms of visible output, the amount of novel and amazing art work that has been generated has blown us all away. The degree of difference is huge: armed with approximately 5 statements, the number of different pieces you can produce is surprisingly large. Add in control statements and repetition? BOOM. Every student can write something that speaks to her or him and show it to other people, encouraging creativity and facilitating engagement.
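As a made-up illustration of why control statements change the game (again, my own sketch, not course material): the same handful of drawing statements, wrapped in two loops, produce a grid of varying colour and shape rather than a single figure.

```processing
// The same primitive statements as before, but repetition and a
// position-dependent fill give a much richer result.
// (Illustrative example only, not taken from Think. Create. Code.)
void setup() {
  size(300, 300);
  background(0);
  noStroke();
  for (int x = 20; x < 300; x += 40) {
    for (int y = 20; y < 300; y += 40) {
      fill(x % 255, y % 255, (x + y) % 255); // colour varies with position
      ellipse(x, y, 30, 30);
    }
  }
}
```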
From the stats side, I don’t have access to the raw stats, so it’s hard for me to give you a statistically sound answer as to who we have or have not reached. This is one of the things with working with a pre-existing platform and, yes, it bugs me a little because I can’t plot this against that unless someone has built it into the platform. But I think I can tell you some things.
I can tell you that roughly 2,000 students attempted quiz problems in the first week of the course and that over 4,000 watched a video in the first week – no real surprises, registrations are an indicator of interest, not a commitment. During that time, 7,000 students were active in the course in some way – including just writing code, discussing it and having fun in the gallery environment. (As it happens, we appear to be plateauing at about 3,000 active students but time will tell. We have a lot of post-course analysis to do.)
It’s a mistake to focus on the “drop” rates because the MOOC model is different. We have no idea if the people who left got what they wanted or not, or why they didn’t do anything. We may never know but we’ll dig into that later.
I can also tell you that only 57% of the students currently enrolled have explicitly declared themselves to be male, and that is the most likely indicator that we are reaching students who might not usually be in a programming course, because the other 43%, of whom 33% have self-identified as women, is far higher than we ever see in classes locally. If you want evidence of reach then it begins here, as part of the provision of an environment that is, apparently, more welcoming to ‘non-men’.
We have had a number of student comments that reflect positive reach and, while these are not statistically significant, I think that this also gives you support for the idea of additional reach. Students have been asking how they can save their code beyond the course and this is a good indicator: ownership and a desire to preserve something valuable.
For student comments, however, this is my favourite.
I’m no artist. I’m no computer programmer. But with this class, I see I can be both. #processingjs (Link to student’s work) #code101x .
That’s someone for whom this course had them in the right place in the classroom. After all of this is done, we’ll go looking to see how many more we can find.
I know this is long but I hope it answered your questions. We’re looking forward to doing a detailed write-up of everything after the course closes and we can look at everything.
EduTech AU 2015, Day 2, Higher Ed Leaders, “Innovation + Technology = great change to higher education”, #edutechau
Posted: June 3, 2015 Filed under: Education | Tags: education, higher education, learning, teaching, measurement, teaching approaches, educational problem, collaboration, resources, tools, design, principles of design, advocacy, mit, educational research, grand challenge, community, ethics, thinking, students, edutechau, edutech2015, olpc, one laptop per child, nicholas negroponte, mit media lab, seymour papert, Nicholas, connectivity, market forces, public education Leave a comment
Big session today. We’re starting with Nicholas Negroponte, founder of the MIT Media Lab and of One Laptop Per Child (OLPC), an initiative to create and provide affordable educational devices for children in the developing world. (Nicholas is coming to us via video conference – hooray, 21st Century – so this may or may not translate well to blogging. Please bear with me if it’s a little disjointed.)
Nicholas would rather be here but he’s bravely working through his first presentation of this type! It’s going to be a presentation with some radical ideas so he’s hoping for conversation and debate. The presentation is broken into five parts:
- Learning learning. (Teaching and learning as separate entities.)
- What normal market forces will not do. (No real surprise that standard market forces won’t work well here.)
- Education without curricula. (Learning comes from many places and situations. Understanding and establishing credibility.)
- Where do new ideas come from? (How do we get them, how do we not get in the way.)
- Connectivity as a human right. (Is connectivity a human right or a means to rights such as education and healthcare? Human rights are free, so that raises a lot of issues.)
Nicholas then drilled down into “Learning learning”, starting with a reference to Seymour Papert, and reflecting sadly, from a personal perspective, on Seymour’s serious accident and its effect on his health. Nicholas referred to Papert’s and Minsky’s work on trying to understand how children and machines, respectively, learned. In 1968, Seymour started thinking about this and, on April 9, 1970, he gave a talk on his thoughts. Seymour realised that thinking about programs gave insight into thinking itself, relating to the deconstruction and stepwise solution building (algorithmic thinking) that novice programmers, such as children, have to go through.
These points were up on the screen as Nicholas spoke:
- Construction versus instruction
- Why reinventing the wheel is good
- Coding as thinking about thinking
How do we write code? Write it, see if it works, see which behaviours don’t count as working, change the code (in an informed way, with any luck) and try again. (It’s a little more complicated than that but that’s the core.) We’re now into the area of transferable skills – it appeared that children writing computer programs learned a skill that transferred over into their ability to spell, potentially from the methodical application of debugging techniques.
Nicholas talked about a spelling bee system where you would focus on the 8 out of 10 you got right and ignore the 2 you didn’t get. The ‘debugging’ kids would talk about the ones that they didn’t get right because they were analysing their mistakes, as a peer group and as individual reflection.
Nicholas then moved on to the failure of market forces. Why does Finland do so well when they have no tests, no homework, the shortest school hours per day and the fewest school days per year? One reason? No competition between children. No movement of core resources into the private sector (education as a poorly functioning profit machine). Nicholas identified the core difference between the mission and the market, which beautifully summarises my thinking.
The OLPC program started in Cambodia for a variety of reasons, including someone associated with the lab being a friend of the King. OLPC laptops could go into areas where the government wasn’t providing schools for safety reasons, as those areas needed minesweeping and the like. Nicholas’ son came to Cambodia from Italy to connect the school up to the Internet. What would the normal market not do? Telecoms would come and get cheaper. Power would come and get cheaper. Laptops? Hmm. The software companies were pushing the hardware companies, so they were both caught in a spiral of increasing power consumption for utility. Where was the point at which we could build a simple laptop, as a mission of learning, with a smaller energy footprint, bringing laptops and connectivity to billions of people?
This is one of the reasons why OLPC is a non-profit – you don’t have to sell laptops to support the system, you’re supporting a mission. You didn’t need to sell or push to justify staying in a market, as the production volume was already at a good price. Why did this work well? You can make partnerships that weren’t possible otherwise. It derails the “ah, you need food and shelter first” argument because you can change the “why do we need a laptop?” argument to “why do we need education?”, at which point education leads to improved societal conditions. Why laptops? Tablets are more consumer-focused than construction-focused. (Certainly true of how I use my tech.)
(When we launched the first of the Digital Technologies MOOCs, the deal we agreed upon with Google was that it wasn’t a profit-making venture at all. It never will be. Neither we nor Google make money from the support of teachers across Australia so we can have all of the same advantages as they mention above: open partnerships, no profit motive, working for the common good as a mission of learning and collegial respect. Highly recommended approach, if someone is paying you enough to make your rent and eat. The secret truth of academia is that they give you money to keep you housed, clothed and fed while you think. )
Nicholas told a story of kids changing from being scared or bored of school to using an approach that brings kids flocking in. A great measure of success.
Now, onto Education without curricula, starting by talking public versus private. This is a sensitive subject for many people. The biggest problem for public education in many cases is the private educational system, dragging caring educators out into a closed system. Remember Finland? There are no private schools and their educational system is astoundingly good. Nicholas’ points were:
- Public versus private
- Age segregation
- Stop testing. (Yay!)
The public sector is losing the imperative of the civic responsibility for education. Nicholas thinks it doesn’t make sense that we still segregate by ages as a hard limit. He thinks we should get away from breaking it into age groups, as it doesn’t clearly reflect where students are at.
Oh, testing. Nicholas correctly labelled the parental complicity in the production of the testing pressure cooker. “You have to get good grades if you’re going to Princeton!” The testing mania is dominating institutions and we do a lot of testing to measure and rank children, rather than determining competency. Oh, so much here. Testing leads to destructive behaviour.
So where do new ideas come from? (A more positive note.) Nicholas is interested in Higher Ed as sources of new ideas. Why does HE exist, especially if we can do things remotely or off campus? What is the role of the Uni in the future? Ha! Apparently, when Nicholas started the MIT media lab, he was accused of starting a sissy lab with artists and soft science… oh dear, that’s about as wrong as someone can get. His use of creatives was seen as soft when, of course, using creative users addressed two issues to drive new ideas: a creative approach to thinking and consulting with the people who used the technology. Who really invented photography? Photographers. Three points from this section.
- Children: our most precious natural resource
- Incrementalism is the enemy of creativity
- Brain drain
On the brain drain, we lose many, many students to other places. Unis are a place to solve huge problems rather than small, profit-oriented problems. The entrepreneurial focus leads to small-problem solutions, which sucks a lot of big thinking out of the system. The app model is leading to a human resource deficit because the start-up phenomenon is ripping away some of our best problem solvers.
Finally, to connectivity as a human right. This is something that Nicholas is very, very passionate about. Not content. Not laptops. Being connected. Learning, education, and access to these, from early in life to the end of life – connectivity is the end of isolation. Isolation comes in many forms and can be physical, geographical and social. Here are Nicholas’ points:
- The end of isolation.
- Nationalism is a disease (oh, so much yes.) Nations are the wrong taxonomy for the world.
- Fried eggs and omelettes.
Fried eggs and omelettes? In general, the world had crisp boundaries, yolk versus white. At work/at home. At school/not at school. We are moving to a more blended, less dichotomous approach because we are mixing our lives together. This is both bad (you’re getting work in my homelife) and good (I’m getting learning in my day).
Can we drop kids into a reading environment and hope that they’ll learn to read? Reading is only 3,500 years old, versus our language skills, so it has to be learned. But do we have to do it the way that we did it? Hmm. Interesting questions. This is where the tablets were dropped into illiterate villages without any support. (Does this require a seed autodidact in the group? There’s a lot to unpack here.) Nicholas says he made a huge mistake in naming the village in Ethiopia, which has corrupted the experiment, but at least the kids are getting to give press conferences!
Another massive amount of interesting information – sadly, no question time!
EduTECH AU 2015, Day 1, Higher Ed Leaders, “Revolutionising the Student Experience: Thinking Boldly” #edutechau
Posted: June 2, 2015 Filed under: Education | Tags: AI, artificial intelligence, blogging, collaboration, community, data visualisation, deakin, design, education, educational research, edutech2015, edutecha, edutechau, ethics, higher education, learning, learning analytics, machine intelligence, measurement, principles of design, resources, student perspective, students, teaching, thinking, tools, training, watson Leave a comment
Lucy Schulz, Deakin University, came to speak about initiatives in place at Deakin, including the IBM Watson initiative, which is currently a world first for a University. How can a University collaborate to achieve success on a project in a short time? (Lucy thinks that this is the more interesting question. It’s not about the tool, it’s how they got there.)
Some brief facts on Deakin: 50,000 students, 11,000 of whom are on-line. Deakin’s question: how can we make the on-line experience as good as, if not better than, face-to-face, and how can on-line make face-to-face better?
Part of Deakin’s Student Experience focus was on delighting the student. I really like this. I made a comment recently that our learning technology design should be “Everything we do is valuable” and I realise now I should have added “and delightful!” The second part of the student strategy is for Deakin to be at the digital frontier, pushing on the leading edge. This includes understanding the drivers of change in the digital sphere: cultural, technological and social.
(An aside: I’m not a big fan of the term disruption. Disruption makes room for something but I’d rather talk about the something than the clearing. Personal bug, feel free to ignore.)
The Deakin Student Journey has a vision to bring students into the centre of Uni thinking, at every level and facet – students can be successful and feel supported in everything that they do at Deakin. There is a Deakin personality, an aspirational set of “Brave, Stylish, Accessible, Inspiring and Savvy”.
Not feeling this as much but it’s hard to get a feel for something like this in 30 seconds so moving on.
What do students want in their learning? Easy to find and to use, it works and it’s personalised.
So, on to IBM’s Watson, the machine that won Jeopardy, thus reducing the set of games that humans can win against machines to Thumb Wars and Go. We then saw a video on Watson featuring a lot of keen students who coincidentally had a lot of nice things to say about Deakin and Watson. (Remember, I warned you earlier, I have a bit of a thing about shiny videos but ignore me, I’m a curmudgeon.)
The Watson software is embedded in a student portal that all students can access, which has required a great deal of investigation into how students communicate, structurally and semantically. This forms the questions and guides the answer. I was waiting to see how Watson was being used and it appears to be acting as a student advisor to improve student experience. (Need to look into this more once day is over.)
Ah, yes, it’s on a student home page where they can ask Watson questions about things of importance to students. It doesn’t appear that they are actually programming the underlying system. (I’m a Computer Scientist in a faculty of Engineering, I always want to get my hands metaphorically dirty, or as dirty as you can get with 0s and 1s.) From looking at the demoed screens, one of the shiny student descriptions of Watson as “Siri plus Google” looks very apt.
Oh, it has cheekiness built in. How delightful. (I have a boundless capacity for whimsy and play but an inbuilt resistance to forced humour and mugging, which is regrettably all that the machines are capable of at the moment. I should confess Siri also rubs me the wrong way when it tries to be funny as I have a good memory and the patterns are obvious after a while. I grew up making ELIZA say stupid things – don’t judge me! 🙂 )
Watson has answered 26,000 questions since February, with an 80% accuracy for answers. The most common questions change according to time of semester, which is a nice confirmation of existing data. Watson is still being trained, with two more releases planned for this year and then another project launched around course and career advisors.
What they’ve learned – three things!
- Student voice is essential and you have to understand it.
- Have to take advantage of collaboration and interdependencies with other Deakin initiatives.
- Gained a new perspective on developing and publishing content for students. Short. Clear. Concise.
The challenges of revolution? (Oh, they’re always there.) Trying to prevent students falling through the cracks and making sure that this tool helps students feel valued and stay in contact. The introduction of new technologies has to be recognised in terms of what they change and what they improve.
Collaboration and engagement with your University and student community are essential!
Thanks for a great talk, Lucy. Be interesting to see what happens with Watson in the next generations.

