Four-tier assessment

We’ve looked at a classification of evaluators that matches our understanding of the complexity of the assessment tasks we could ask students to perform. If we want to look at this from an aesthetic framing then, as Dewey notes:

“By common consent, the Parthenon is a great work of art. Yet it has aesthetic standing only as the work becomes an experience for a human being.”

John Dewey, Art as Experience, Chapter 1, The Live Creature.

A classification of evaluators cannot be appreciated aesthetically unless we provide a way for it to be experienced. Our aesthetic framing demands an implementation that makes use of such an evaluator classification, applies it to a problem where we can use a pedagogical lens and then, finally, lets us start to ask how aesthetically pleasing it is.

And this is what brings us to beauty.

A systematic allocation of tasks to these different evaluators should provide valid and reliable marking, assuming we’ve carried out our design phase correctly. But what about fairness, motivation or relevancy, the three points that we did not address previously? To be able to satisfy these aesthetic constraints, and to confirm the others, it now matters how we handle these evaluation phases, because it’s not enough to be aware that some things are going to need different approaches; we have to create a learning environment that provides fairness, motivation and relevancy.

I’ve already argued that arbitrary deadlines are unfair, that extrinsic motivational factors are grossly inferior to those found within, and, in even earlier articles, that we too often insist on the relevancy of the measurements that we have, rather than designing for relevancy and insisting on the measurements that we need.

To achieve all of this and to provide a framework that we can use to develop a sense of aesthetic satisfaction (and hence beauty), here is a brief description of a four-tier, penalty free, assessment.

Let’s say that, as part of our course design, we develop an assessment item, A1, that is one of the elements to provide evaluation coverage of one of the knowledge areas. (Thus, we can assume that A1 is not required to be achieved by itself to show mastery but I will come back to this in a later post.)

Recall that the marking groups are: E1, expert human markers; E2, trained or guided human markers; E3, complex automated marking; and E4, simple and mechanical automated marking.

A1 has four inbuilt course deadlines but, rather than these being arbitrary reductions of mark, they reflect the availability of evaluation resources, a real limitation as we’ve already discussed. When the teacher sets this course up, she develops an evaluation scheme for the most advanced aspects (E1, which is her in this case), an evaluation scheme that could be used by other markers or by her (E2), an E3 acceptance test suite and some simple E4 tests. She matches the aspects of the assignment to these evaluation groups, building from simple to complex, concrete to abstract, definite to ambiguous.

The overall assessment of work consists of the evaluation of four separate areas, associated with each of the evaluators. Individual components of the assessment build up towards the most complex; for example, a student should usually have completed at least some of the E4-evaluated work before attempting E3.
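As a minimal sketch of this structure (the component names and the gating rule are my invention for illustration, not part of any real course), the tiers and their simple-to-complex ordering might look like this:

```python
# Sketch: mapping the components of an assessment item like A1 to
# evaluator tiers. The component names below are invented examples.
A1_COMPONENTS = {
    "files_compile_and_run": "E4",     # simple, mechanical checks
    "acceptance_tests_pass": "E3",     # complex automated marking
    "code_style_and_design": "E2",     # rubric-guided human marking
    "abstraction_and_argument": "E1",  # expert human evaluation
}

# Build from simple to complex: a lower tier should normally be
# cleared before the next tier up is worth attempting.
TIER_ORDER = ["E4", "E3", "E2", "E1"]

def attemptable_tiers(passed):
    """Return the tiers a student can sensibly attempt, given the
    set of tiers they have already passed."""
    allowed = []
    for tier in TIER_ORDER:
        allowed.append(tier)
        if tier not in passed:
            break  # clear this tier before moving up
    return allowed
```

A student with nothing passed can only sensibly work at E4; clearing E4 opens E3, and so on up to the expert-evaluated work.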

Here’s a diagram of the overall pattern for evaluation and assessment.

Scheme

The first deadline for the assignment is the point at which all evaluation is available. If students provide their work by this time, the E1 evaluator will look at the work, after executing the automated mechanisms (first E4, then E3) and applying the E2 rubrics. If the student has actually answered some E1-level items, then the “top tier” E1 evaluator will look at that work and evaluate it. Regardless of whether there is E1 work or not, students who get their work in at this point receive human-written feedback from the lecturer on everything, including things that would be of help at all other levels. This is the richest form of feedback; it is the most useful to the students and, if we are going to use measures of performance, this is the point at which the most opportunities to demonstrate performance can occur.

This feedback will be provided in enough time that the students can modify their work to meet the next deadline, which marks the availability of E2 markers. Now TAs or casuals are marking instead, or the lecturer is doing easier evaluation from a simpler rubric. These human markers still start by running the automated scripts, E4 then E3, to make sure that there is something to mark in E2. They also provide feedback on everything in E2 to E4, sent out in time for students to make changes for the next deadline.
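The marking order at the human stages (automated E4 checks first, then E3 acceptance tests, then the rubric) could be sketched like this; the callables are placeholders for whatever a real course would plug in:

```python
# Sketch of the evaluation order at the human-marked stages: run the
# E4 scripts first, then the E3 acceptance tests, and only apply the
# human rubric once the mechanical checks pass.

def evaluate_submission(submission, run_e4, run_e3, apply_rubric):
    """Each automated checker returns (passed, feedback); we stop
    early when a tier finds nothing markable above it."""
    feedback = {}
    passed, feedback["E4"] = run_e4(submission)
    if not passed:
        return feedback  # nothing to mark until the basics pass
    passed, feedback["E3"] = run_e3(submission)
    if not passed:
        return feedback
    feedback["E2/E1"] = apply_rubric(submission)  # human feedback
    return feedback
```

The point of the ordering is that the scarce human marker only spends time once the cheap checks have passed, which is exactly the resource-protecting sequence described above.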

Now note carefully what’s going on here. Students will get useful feedback, which is great, but because we have these staggered deadlines, we can pass on important messages as we identify problems. If the class is struggling with key complex or more abstract elements, harder to fix and requiring more thought, we know about it quickly because we have front-loaded our labour.

Once we move down to the fully automated systems, we’re losing opportunities for rich and human feedback to students who have not yet submitted. However, we have a list of students who haven’t submitted, which is where we can allocate human labour, and we can encourage them to get work in, in time for the E3 “complicated” script. This E3 marking script remains open for the rest of the semester, to encourage students to do the work sometime ahead of the exam. At this point, the discretionary allocation of labour for feedback is possible, because the lecturer has done most of the hard work in E1 and E2 and should, with any luck, have far fewer evaluation activities for this particular assignment. (Other things may intrude, including other assignments, but we have time bounds on this one, which is better than we often have!)

Finally, at the end of the teaching time (in our parlance, a semester’s teaching will end then we will move to exams), we move the assessment to E4 marking only, giving students the ability (if required) to test their work to meet any “minimum performance” requirements you may have for their eligibility to sit the exam. Eventually, the requirement to enter a record of student performance in this course forces us to declare the assessment item closed.
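Putting the four deadlines together, the schedule logic is very simple; in this sketch the dates are invented placeholders, not real course deadlines:

```python
from datetime import date

# Sketch: which evaluator tiers remain available at submission time.
# The dates below are invented placeholders for a semester.
TIER_DEADLINES = [
    (date(2016, 3, 7),  ["E1", "E2", "E3", "E4"]),  # full expert feedback
    (date(2016, 3, 21), ["E2", "E3", "E4"]),        # TA/rubric marking
    (date(2016, 6, 17), ["E3", "E4"]),              # complex scripts stay open
    (date(2016, 6, 30), ["E4"]),                    # minimum-performance checks
]

def available_tiers(submitted):
    """Return the richest evaluation still available on the day of
    submission; an empty list means the item has been closed."""
    for deadline, tiers in TIER_DEADLINES:
        if submitted <= deadline:
            return tiers
    return []
```

Note that nothing here deducts marks: missing a deadline changes which feedback is still available, not the score.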

This is totally transparent and it’s based on real resource limitations. Our restrictions have been put in place to improve student feedback opportunities and give students more guidance. We have also improved our own ability to predict our workload and to guide our resource requests, and we can reuse some elements of the automated scripts between assignments without having to regurgitate entire assignments. These deadlines are not arbitrary. They are not punitive. We have improved feedback and provided supportive approaches to encourage more work on assignments. We are able to get better insight into what our students are achieving, against our design, in a timely fashion. We can now see fairness, intrinsic motivation and relevance.

I’m not saying this is beautiful yet (I think I have more to prove to you) but I think this is much closer than many solutions that we are currently using. It’s not hiding anything, so it’s true. It does many things we know are great for students so it looks pretty good.

Tomorrow, we’ll look at whether such a complicated system is necessary for early years and, spoilers, I’ll explain a system for first year that uses peer assessment to provide a similar, but easier to scale, solution.


Joi Ito on Now-ists

This is a great TED talk. Joi Ito, director of the MIT Media Lab, talks about the changes that technological innovation has made to the ways that we can work on problems and work together.

I don’t agree with everything, especially the pejorative cast on education, but I totally agree that the way that we construct learning environments has to take into account the way that our students will work, rather than trying to prepare them for the world that we (or our parents) worked in. Pretending that many of our students will have to construct simple things by hand, when that is what we were doing fifty years ago, takes up time that we could be using for more authentic and advanced approaches that cover the same material. Some foundations are necessary. Some are tradition. Being a now-ist forces us to question which is which and then act on that knowledge.

Your students will be able to run enterprises from their back rooms that used to require the resources of multinational companies. It’s time to work out what they actually need to get from us and, once we know that, deliver it. There is a place for higher education but it may not be the one that we currently have.

A lot of what I talk about on this blog looks as if I’m being progressive but, really, I’m telling you what we already know to be true right now. And what we have known to be true for decades, if not centuries. I’m not a futurist, at all. I’m a now-ist with a good knowledge of history who sees a very bleak future if we don’t get better at education.

(Side note: yes, this is over twelve minutes long. Watch out around the three-minute mark for someone reading documents on an iPad up the back, rather than watching him talk. I think this is a little long and staged, when it could have been tighter, but that’s the TED format for you. You know what you’re getting into and, because it’s not being formally evaluated, it doesn’t matter as much if you recall high-level rather than detail.)


Four tiers of evaluators

We know that we can, and do, assess different levels of skill and knowledge. We know that we can, and do, often resort to testing memorisation, simple understanding and, sometimes, the application of the knowledge that we teach. We also know that the best evaluation of work tends to come from the teachers who know the most about the course and have the most experience, but we also know that these teachers have many demands on their time.

The principles of good assessment can be argued but we can probably agree upon a set much like this:

  1. Valid, based on the content. We should be evaluating things that we’ve taught.
  2. Reliable, in that our evaluations are consistent and return similar results for different evaluators, that re-evaluating would give the same result, that we’re not unintentionally raising or lowering difficulty.
  3. Fair.
  4. Motivating, in that we know how much influence feedback and encouragement have on students, so we should be maximising the motivation and, we hope, this should drive engagement.
  5. Finally, we want our assessment to be as relevant to us, in terms of being able to use the knowledge gained to improve or modify our courses, as it is to our student. Better things should come from having run this assessment.

Notice that nothing here says “We have to mark or give a grade”, yet we can all agree on these principles, and on any scheme that adheres to them, as a good set of characteristics to build upon. Let me label these as aesthetics of assessment; now let’s see if I can make something beautiful. Let me put together my shopping list.

  • Feedback is essential. We can see that. Let’s have lots of feedback and let’s put it in places where it can be the most help.
  • Contextual relevance is essential. We’re going to need good design: work out what we want to evaluate and then make sure we locate our assessment in the right place.
  • We want to encourage students. This means focusing on intrinsics and support, as well as well-articulated pathways to improvement.
  • We want to be fair and honest.
  • We don’t want to overload either the students or ourselves.
  • We want to allow enough time for reliable and fair evaluation of the work.

What are the resources we have?

  • Course syllabus
  • Course timetable
  • The teacher’s available time
  • TA or casual evaluation time, if available
  • Student time (for group work or individual work, including peer review)
  • Rubrics for evaluation.
  • Computerised/automated evaluation systems, to varying degree.

Wait, am I suggesting automated marking belongs in a beautiful marking system? Why, yes, I think it has a place, if we are going to look at those things we can measure mechanistically. Checking to see if someone has ticked the right box for a Bloom’s “remembering” level activity? Machine task. Checking to see if an essay has a lot of syntax or grammatical errors? Machine task. But we can build on that. We can use human markers and machine markers, in conjunction, to the best of their strengths and to overcome each other’s weaknesses.

Some cast-iron wheels and gears, connected with a bicycle chain.

We’ve come a long way in terms of machine-based evaluation. It doesn’t have to be steam-driven.

If we think about it, we really have four separate tiers of evaluators to draw upon, who have different levels of ability. These are:

  1. E1: The course designers and subject matter experts who have a deep understanding of the course and could, possibly with training, evaluate work and provide rich feedback.
  2. E2: Human evaluators who have received training or are following a rubric provided by the E1 evaluators. They are still human-level reasoners but are constrained in terms of breadth of interpretation. (It’s worth noting that peer assessment could fit in here, as well.)
  3. E3: High-level machine evaluation of work, which could include structural, sentiment or topic analysis, as well as running complicated acceptance tests that look for specific results, coverage of topics or, in the case of programming tasks, certain output in response to given input. The E3 evaluation mechanisms will require some work to set up but can provide evaluation of large classes in hours, rather than days.
  4. E4: Low-level machine evaluation, checking for conformity in terms of length of assignment, names, type of work submitted, plagiarism detection. In the case of programming assignments, E4 would check that the filenames were correct, that the code compiled and also may run some very basic acceptance tests. E4 evaluation mechanisms should be quick to set up and very quick to execute.
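To make E4 concrete, here is a minimal sketch of such a checker for a programming submission; the expected filename and the compiler invocation (`cc`) are assumptions for illustration, not any real course’s setup:

```python
import os
import subprocess

# Sketch of an E4-level check for a programming submission: required
# files present, code compiles, nothing more. The expected filename
# and the "cc" compiler invocation are illustrative assumptions.

def e4_check(submission_dir, expected_files=("main.c",)):
    """Run the cheapest mechanical checks; return (passed, messages)."""
    messages = []
    for name in expected_files:
        if not os.path.exists(os.path.join(submission_dir, name)):
            messages.append(f"missing required file: {name}")
    if messages:
        return False, messages  # no point compiling without the files
    result = subprocess.run(
        ["cc", "-o", os.path.join(submission_dir, "a.out"),
         os.path.join(submission_dir, "main.c")],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        messages.append("does not compile:\n" + result.stderr)
        return False, messages
    return True, ["all E4 checks passed"]
```

Checks at this level cost essentially nothing to run, which is why they can stay open for an entire semester while the human tiers cannot.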

This separation clearly shows us a graded increase of expertise that corresponds to an increase in time spent and, unfortunately, a decrease in time available. E4 evaluation is very easy to set up and carry out but it’s not fantastic for detailed feedback or higher Bloom’s levels. Yet we have an almost infinite amount of this marking time available. E1 markers will (we hope) give the best feedback but they take a long time and this immediately reduces the amount of time available for other things. How do we handle this and select the best mix?

While we’re thinking about that, let’s see if we are meeting the aesthetics.

  1. Valid? Yes. We’ve looked at our design (we appear to have a design!) and we’ve specifically set up evaluation into different areas while thinking about outcomes, levels and areas that we care about.
  2. Reliable? Looks like it. E3 and E4 are automated and E2 has a defined marking rubric. E1 should also have guidelines but, if we’ve done our work properly in design, the majority of marks, if not all of them, are going to be assigned reliably.
  3. Fair? We’ve got multiple stages of evaluation but we haven’t yet said how we’re going to use this so we don’t have this one yet.
  4. Motivating? Hmm, we have the potential for a lot of feedback but we haven’t said how we’re using that, either. Don’t have this one either.
  5. Relevant to us and the students? No; for the same reasons as 3 and 4, we haven’t yet shown how this can be useful to us.

It looks like we’re half-way there. Tomorrow, we finish the job.

 


What are we assessing? How?

How can we create a better assessment system, without penalties, that works in a grade-free environment? Let’s provide a foundation for this discussion by looking at assessment today.


Bloom’s Revised Taxonomy

We have many different ways of understanding exactly how we are assessing knowledge. Bloom’s taxonomy allows us to classify the objectives that we set for students, in that we can determine if we’re just asking them to remember something, explain it, apply it, analyse it, evaluate it or, having mastered all of those other aspects, create a new example of it. We’ve also got Biggs’ SOLO taxonomy to classify levels of increasing complexity in a student’s understanding of subjects. Now let’s add in threshold concepts, learning edge momentum, neo-Piagetian theory and …

Let’s summarise and just say that we know that students take a while to learn things, can demonstrate some convincing illusions of progress that quickly fall apart, and that we can design our activities and assessment in a way that acknowledges this.

I attended a talk by Eric Mazur, of Peer Instruction fame, and he said a lot of what I’ve already said about assessment not working with how we know we should be teaching. His belief is that we rarely rise above remembering and understanding, when it comes to testing, and he’s at Harvard, where everyone would easily accept their practices as, in theory, being top notch. Eric proposed a number of approaches but his focus on outcomes was one that I really liked. He wanted to keep the coaching role he could provide separate from his evaluator role: another thing I think we should be doing more.

Eric is in Physics but all of these ideas have been extensively explored in my own field, especially where we start to look at which of the levels we teach students to and then what we assess. We do a lot of work on this in Australia and here is some work by our groups and others I have learned from:

  • Szabo, C., Falkner, K. & Falkner, N. 2014, ‘Experiences in Course Design using Neo-Piagetian Theory’
  • Falkner, K., Vivian, R., Falkner, N., 2013, ‘Neo-piagetian Forms of Reasoning in Software Development Process Construction’
  • Whalley, J., Lister, R.F., Thompson, E., Clear, T., Robbins, P., Kumar, P. & Prasad, C. 2006, ‘An Australasian study of reading and comprehension skills in novice programmers, using Bloom and SOLO taxonomies’
  • Gluga, R., Kay, J., Lister, R.F. & Teague, D. 2012, ‘On the reliability of classifying programming tasks using a neo-piagetian theory of cognitive development’

I would be remiss to not mention Anna Eckerdal’s work, and collaborations, in the area of threshold concepts. You can find her many papers on determining which concepts are going to challenge students the most, and how we could deal with this, here.

Let me summarise all of this:

  • There are different levels at which students will perform as they learn.
  • It takes careful evaluation to separate students who appear to have learned something from students who have actually learned something.
  • We often focus too much on memorisation and simple explanation, without going to more advanced levels.
  • If we want to assess advanced levels, we may have to give up the idea of trying to grade these additional steps, as objectivity is almost impossible, as is task equivalence.
  • We should teach in a way that supports the assessment we wish to carry out. The assessment we wish to carry out is the right choice to demonstrate true mastery of knowledge and skills.

If we are not designing for our learning outcomes, we’re unlikely to create courses to achieve those outcomes. If we don’t take into account the realities of student behaviour, we will also fail.

We can break our assessment tasks down by one of the taxonomies or learning theories and, from my own work and that of others, we know that we will get better results if we provide a learning environment that supports assessment at the desired taxonomic level.

But, there is a problem. The most descriptive, authentic and open-ended assessments incur the most load in terms of expert human marking. We don’t have a lot of expert human markers. Overloading them is not good. Pretending that we can mark an infinite number of assignments is not true. Our evaluation aesthetics are objectivity, fairness, effectiveness, timeliness and depth of feedback. Assignment evaluation should be useful to the students, to show progress, and useful to us, to show the health of the learning environment. Overloading the marker will compromise the aesthetics.

Our beauty lens tells us very clearly that we need to be careful about how we deal with our finite resources. As Eric notes, and we all know, if we test the simpler aspects of student learning, we can throw machines at it, and we have a near-infinite supply of machines. I cannot easily produce more experts like me. (Snickers from the audience.) I can recruit human evaluators from my casual pool and train them to mark to something like my standard, using a rubric or an approximation of my approach.

Thus I have a framework of assignments, divided by level, and I appear to have assignment evaluation resources. The more expert and human the marker, the more … for want of a better word … valuable the resource, and the better the feedback it can produce. Yet the more valuable the resource, the less of it I have, because it takes time to develop evaluation skills in humans.

Tune in tomorrow for the penalty free evaluation and feedback that ties all of this together.


Aside: My New York Story

I’m a story-teller. It infuses my work. I share the stories and realities of my life in order to explain points that I think other people could appreciate. Today, I’m telling you my New York story. It’s not rags-to-riches and I’m only triumphing over my own stupidity rather than terrible obstacles. I fight to give opportunity to others but my own tales are infused with my own levels of privilege. If that bothers you, probably best to stop reading now.

Like many New York stories, this one is all about how I never lived in New York. Losing both Bowie and Alan Rickman in a few days has made me think about that city again. This is the story of how I loved a city but we never ended up together.

Aerial perspective photo of a lot of tall buildings in midtown Manhattan, facing north to Central Park.

Oh, Oh! New York!

Even in the home counties towns in the UK, the adults spoke about New York as if it were Olympus, Shangri-La and a wild west town all rolled into one. You could be incredibly cool in Hampshire just by having been to America. If you had been to New York, and survived, you were some kind of god. They had different music, different cars, different money. In a Britain collapsing under the 1970s, America was a golden place.

I grew up in the 70s and was fortunate enough to hear Bowie hit the UK scene hard, to see early Doctor Who, to be mentally invigorated by very demanding progressive UK kids’ TV, to hear a Police album when they were just starting out and then, even more fortunately, we left the home counties and, because my Mum was amazingly brave and strong, we made it across the sea to Australia, where a better life awaited us.

I shrugged off Britain in weeks but New York never left me.

I continued to grow up in one of the Australian state capitals (Australian population is mostly concentrated into cities on the ocean) and it was nice, but it was no New York. I’m not sure I’ve told anyone this but, in my head, I was always going to America. That’s what success was defined as when I was a boy: you were a traveller, you did interesting things and that meant America. When every other Brit was going to the Costa del Sol and baking their skin, the interesting people were pale, thin and had walked around in magical places like Central Park, Times Square and along Broadway. They knew about music and understood what the Tarkus artwork meant on that ELP album. They even knew what “prog-rock” meant. They were art, life and wonder.

When I finished University, I started to think seriously about America. I had visited by then, and that is a story in itself, and finally seen New York. Mid-winter. Just after Christmas. Quiet and grey, on the cusp of 1995 and 1996. It did not quite amaze but it did not disappoint, even though I was walking around with a bung knee after slipping in unfamiliar snow. I began to think about how I could get there.

But one thing became clear as I thought about it. I didn’t know how to get to the New York that I wanted to be part of. I was creative but I wasn’t an amazing artist or musician, and the New York I wanted to be part of was the bubbling, creative, amazing community of the 1970s. I was a middling rhythm guitarist, a karaoke-tolerable singer, an abstract artist (I still can’t draw very well), an enthusiastic poet, but, mostly, I was a computer guy. I could go and get a job but all I would be doing would be living in (or near, most likely) New York and never becoming part of my vision of NYC. Tech support in Shangri-La was not what I wanted.

I had kept an idea of where I wanted to go in my head but I had never turned that into an intention to actually go there. A goal with no plans had turned into a life without much direction. (I was lucky enough to fall in love and start a real journey and adventure locally, just as I realised that I had never set my cap for New York, but that’s another story.)

There’s a poem I love, Ithaka by C. P. Cavafy, and it talks of setting out on a magnificent journey, full of adventures and monsters, in search of the island of Ithaka as part of a glorious ancient Greek adventure. But the most important part of the poem, for me, is the end:

Ithaka gave you the marvellous journey.
Without her you would not have set out.
She has nothing left to give you now.

And if you find her poor, Ithaka won’t have fooled you.
Wise as you will have become, so full of experience,
you will have understood by then what these Ithakas mean.

As Cavafy and all of the Greek authors of myth before him note, the journey is the thing. When you arrive at your goal, if you have thought about it for long enough, then you may find it was not as good as you thought it would be. Memory may have tricked you, things may have changed. But if it caused you to start a wonderful journey, then it was worthwhile. No, not just worthwhile, it was magical. It was transcendence and enlightenment.

But, by not seeing New York as a goal and leaving it as a dream, I never set myself upon the journey and thus I deprived myself of both the adventure and the chance of achieving my dream one day.

Let’s be realistic, I was never going to live in the 1970s New York that I idolised, unless I found a time machine, but if I had actually made some career and life choices that would have seen me head to America in the 90s, I still would have made it there. It would have seen me experiencing a New York that, while not the one I thought of, would have been a capstone to an amazing journey. But, because I didn’t align myself to realise my New York dream, it didn’t happen.

My long-time love affair with New York was a fantasy and we’re both lucky that we never moved it beyond that point. I would have ended up being bitter and resentful; New York didn’t need another tech support person pretending to be an artist. You need more than attraction and the frisson of distance to have a relationship.

This year, I have reassessed my goals and dreams. I am deciding which of these will define my journey and give my life structure for the next decade or two. Where can I find new wisdom? Where can I find the experiences that will take me to new and amazing places, physically or mentally? I have been successful in a number of things but I really need to focus on the goal to make sure that I get the most out of the journey.

I’m not the same person I was back in the 90s. A lot of thinking has happened, a lot of growing has happened, a lot of love has happened. I’m more comfortable with the softer definition of myself as a communicator, an educator, an artist and even a philosopher. This small journey was triggered by the realisation that I had never chosen what to do or where to go. There’s a natural pause at this stage and it’s time to set a new heading. Where do I go from here?

The point of my New York Story is a simple one: assuming that nothing else gets in the way, you’re unlikely to get somewhere unless you actually set out for it. We often mistake what we’re doing with what we want to do, the necessary aims of our work with our real goals in life. We do something today because we did it yesterday and that means we’ll do it again tomorrow. Perhaps we should only do it tomorrow if it’s the best thing to do.

There are many things in my life that I don’t want (and don’t need) to change. But I look at Cavafy’s poem and I can smell the sea winds, hear the sails fill, and the helm is asking me where to go next.

 


Can we do this? We already have.

How does one actually turn everything I’ve been saying into a course that can be taught? We already have examples of this working, whether in the performance/competency-based models found in medical schools around the world or in mastery-learning-based approaches where we do not measure anything except whether a student has demonstrated sufficient knowledge or skill to show an appropriate level of mastery.

An absence of grades, or student control over their grades, is not as uncommon as many people think. MIT in the United States gives students their entire first semester with no grades more specific than pass or fail. This is a deliberate decision to ease the transition of students who have gone from being leaders at their own schools to the compressed scale of MIT. Why compressed? If we were to assess all school students then we would need a scale that could measure all levels of ability, from ‘not making any progress at school’ to ‘transcendent’. The tertiary entry band is somewhere between ‘passing school studies’ and ‘transcendent’ and, depending upon the college that you enter, can shift higher and higher as your target institution becomes more exclusive. If you look at the MIT entry requirements, they are a little coy about ‘per student’ adjustments, but when the 75th percentile for the SAT components is 800, 790 and 790, and 800, 800, 800 would be perfect, we can see that any arguments about how demotivating simple pass/fail grades must be for excellent students have not just withered, they have caught fire and the ash has blown away. When the target is MIT, it appears the freshmen get their heads around a system that is even simpler than Rapaport’s.


Pictured: A highly prestigious University with some of the most stringent entry requirements in the world, which uses no grades in first semester.

Other universities, such as Brown, deliberately allow students to choose how their marks are presented, as they wish to deemphasise the numbers in order to focus on education. It is not a cakewalk to get into Brown, as these figures attest, and yet Brown have made a clear statement that they have changed their grading system in order to change student behaviour – and the world is just going to have to deal with that. It doesn’t seem to be hurting their graduates, judging from quotes on the website such as “Our 85% admission rate to medical school and 89% admission rate to law school are both far above the national average.”

And, returning to medical schools themselves, my own University runs a medical program where the usual guidelines for grading do not hold. The medical school is running on a performance/competency scheme, where students who wish to practise medicine must demonstrate that they are knowledgeable, skilful and safe to practise. Medical schools have identified the core problem in my thought experiment, where two students could have the opposite sets of knowledge or skills, and they have come to the same logical conclusion: decide what is important and set up a scheme that works for it.

When I was a soldier, I was responsible for much of the Officer Training in my home state for the Reserve. We had any number of things to report on for our candidates, across knowledge and skills, but one of them was “Demonstrate the qualities of an officer” and this single item could fail an otherwise suitable candidate. If a candidate could not be trusted to one day be in command of troops on the battlefield, based on problems we saw in peacetime, then they would be counselled to see if it could be addressed and, if not, let go. (I can assure you that this was not used often and it required a large number of observations and discussions before we would pull that handle. The power of such a thing forced us to be responsible.)

We know that limited-scale, mastery-based approaches are not just working in the vocational sector but in allied sectors (such as the military), in the Ivy League (Brown) and in highly prestigious non-Ivy League institutions such as MIT. But we also know of examples such as Harvey Mudd, who proudly state that only seven students since 1955 have earned a 4.0 GPA and who have a post on the career blog devoted to “explaining why your GPA is so low”. And, be in no doubt, Harvey Mudd is an excellent school, especially for my discipline. I’m not criticising their program, I’ve only heard great things about them, but when you have to put up a page like that? You’re admitting that there’s a problem, but you are pushing it onto the student to fix it. Contrast that with Brown, who say to employers “look at our students, not their grades” (at least on the website).

Feedback to the students on their progress is essential. Being able to see what your students are up to is essential for the teacher. Being able to see what your staff and schools are doing is important for the University. Employers want to know who to hire. Which of these is the most important?

The students. It has to be the students. Doesn’t it? (Arguments for the existence of Universities as a self-sustaining bureaucracy system in the comments, if you think that’s a thing you want to do.)

This is not an easy problem but, as we can see, we have pieces of the solution all over the place. Tomorrow, I’m going to put in place a cornerstone of beautiful assessment that I haven’t seen provided elsewhere or explained in this way. (Then all of you can tell me which papers I should have read to get it from, I can publish the citation, and we can all go forward.)

 


Not just videos!


Just a quick note that on-line learning is not just videos! I am a very strong advocate of active learning in my face-to-face practice and am working to compose on-line systems that will be as close to this as possible: learning and doing and building and thinking are all essential parts of the process.

Please, once again, check out Mark’s CACM blog on the 10 myths of teaching computer science. There’s great stuff there that extends everything I’m talking about with short video sequences and attention spans. I wrote something ages ago about not turning ‘chalk and talk’ into ‘watch and scratch (your head)’. It’s a little dated but I include it for completeness.