The Year(s) of Replication #las17ed L@S 2017

I was at Koli Calling in 2016 when a paper was presented (“Replication in Computing Education Research: Researcher Attitudes and Experiences”) on the issue of replicating previous studies. Why replicate previous work? Because a large number of problems have emerged in psychology and the medical sciences, where important work could not be replicated. Perhaps the initial analysis was underpowered, perhaps the researchers had terribly bad luck with their sample, and perhaps there were… other things going on. Whatever the reason, we depend upon replication as a validation tool, and being unable to replicate work raises a red flag.


After the paper, I had follow-up discussions with Andrew Petersen, from U Toronto, and we talked about the many problems. If we do choose to replicate studies, which ones do we choose? How do we get the replication result disseminated, given that it’s fundamentally not novel work? When do we stop replicating? What the heck do we do if we invalidate an entire area of knowledge? Andrew suggested a “year of replication” as a starting point but it’s a really big job: how do we start a year of replication studies or commit to doing this as a community?

This issue was raised again at Learning@Scale 2017 by Justin Reich, from MIT, among others. One of the ideas discussed in that session was that we could start allocating space at the key conferences in the field for replication studies. The final talk at L@S was “Learning about Learning at Scale: Methodological Challenges and Recommendations”, which discussed general problems that span many studies and then made recommendations as to how we could make our studies better and reduce the risk of failed future replications. Justin followed up with comments (which he described as a rant, but he’s being harsh on himself) about leaving room to make our work easier to replicate and being open to this kind of examination: we’re now thinking about making our current studies easier to replicate and better from the outset, but how can we go back and verify all of the older work effectively?

I love the idea of setting aside a few slots in every conference for replication studies. The next challenge is picking the studies but, given that each conference has an organising committee, a central theme, and reviewers, perhaps each conference could suggest a set and the community could then identify which ones it will look at. We want to minimise unnecessary duplication, after all, so some tracking is probably a good idea.

There are several problems to deal with: some political, some scheduling, some scientific, some are just related to how hard it is to read old data formats. None of them are necessarily insurmountable but we have to be professional, transparent and fair in how we manage them. If we’re doing replication studies to improve confidence in the underlying knowledge of the field, we don’t want to damage the community in doing it.

Let me put out a gentle call to action, perhaps for next year, perhaps for the year after. If you’re involved with a conference, why not consider allocating a few slots to replication studies for the key studies in your area, if they haven’t already been replicated? Even the opportunity to have a community discussion about which studies have been effectively replicated will help identify what we can accept as well as showing us what we could fix.

Does your conference have room for a single track, keynote-level session, to devote some time to replication? I’ll propose a Twitter hashtag of #replicationtrack to discuss this and, hey, if we get a single session in one conference out of this, it’s more than we had.


Voices – LATICE 2017

[Edit: The conference is now being held in Hong Kong. I don’t know the reason behind the change but the original issue has been addressed. I have been accepted to Learning @ Scale so will not be able to attend anyway, as it turns out, as the two conferences overlap by two days and even I can’t be in the US and Hong Kong at the same time.]

There is a large amount of discussion in the CS Ed community right now over the LATICE 2017 conference, which is going to be held in a place where many members of the community will be effectively reduced to second-class citizenship and placed under laws that would allow them to be punished for the way that they live their lives. This affected group includes women and people who identify with QUILTBAG (“Queer/Questioning, Undecided, Intersex, Lesbian, Trans (Transgender/Transsexual), Bisexual, Asexual, Gay”). Conferences should be welcoming. This is not a welcoming place for a large percentage of the CS Ed community.

There are many things I could say here but what I would prefer you to do is to look at who is commenting on this and then understand those responses in the context of the author. For once, it matters who said what, because not everyone will be as affected by the decision to host this conference where it is.

From what I’ve seen, a lot of men think this is a great opportunity to do some outreach. A lot has been written, predominantly by men, about how every place has its problems and so on and so forth.

But let’s look at other voices. The female and QUILTBAG voices do not appear to share this support. Asking them to have their rights temporarily reduced or suspended for this ‘amazing opportunity’ is too much. In response, I’ve seen the classic diminishment of genuine issues that is far too familiar. Concerns over the reduction of rights are referred to as ‘comfort zone’ issues. This pattern is recognisable to anyone who is actually tracking the maltreatment and diminution of non-male voices over time. You may as well say “Stop being so hysterical” and at least be honest and own your sexism.

Please go and read through all of the comments and see who is saying what. I know what my view of this looks like, as it is quite clear that the men who are not affected by this are very comfortable with such a bold quest and the people who would actually be affected are far less comfortable.

This is not a simple matter of how many people said X or Y; it’s about how much discomfort one group has to suffer before we take their concerns seriously. Once again, it appears that we are asking a group of “not-men”, in a reductive sense, to endure more and I cannot be part of that. I cannot condone it.

I will not be going. I will have to work out if I can cite this conference, given that I can see that it will lead to discrimination and a reduction of participation along gender and sexuality lines, intentionally or not. I have genuine ethical concerns about using this research that I would usually reserve for historical research. But that is for me to worry about. I have to think about my ongoing commitment to this community.

But you shouldn’t care what I think. Go and read what the people who will be affected by this think. For once, please try to ignore what a bunch of vocal guys want to tell you about how non-male groups should be feeling.


Beautiful decomposition

Now there’s a title that I didn’t expect to write. In this case, I’m referring to how we break group tasks down into individual elements. I’ve already noted that groups like team members who are hard-working, able to contribute and dependable, but we also have the (conflicting) elements of the ideal group, where the common goal is more important than individual requirements, and this may require people to perform tasks that they are either not comfortable with or not ideally suited for.


Kevin was nervous. The group’s mark depended upon him coming up with a “Knock Knock joke” featuring eyes.

How do we assess this fairly? We can look at what a group produces and we can look at what a group does but, to see the individual contribution, there has to be some allocation of sub-tasks to individuals. There are several (let’s call them interesting) ways that people divide up the tasks that we set. Here are three.

  1. Decomposition into dependent sub-tasks.
  2. Decomposition into isolated sub-tasks (if possible).
  3. Decomposition into different roles that spread across different tasks.

Part of working with a group is knowing whether tasks can be broken down, how that can be done successfully, being able to identify dependencies and then putting the whole thing back together to produce a recognisable task at the end.

What we often do with assignment work is to give students identical assignments and they all solemnly go off and solve the same problem (and we punish them if they don’t do enough of this work by themselves). Obviously, then, a group assignment that can be decomposed into isolated sub-tasks with no dependencies and no assembly requirement is functionally equivalent to an independent assessment, except with the semantic burden of illusory group work.

If we set assignments that have dependent sub-tasks, we aren’t distributing work pressure fairly as students early on in the process have more time to achieve their goals but potentially at the expense of later students. But if the tasks aren’t dependent then we have the problem that the group doesn’t have to perform as a group, they’re a set of people who happen to have a common deadline. Someone (or some people) may have an assembly role at the end but, for the most part, students could work separately.

The ideal way to keep the group talking and working together is to drive such behaviour through necessity, which would require role separation and involvement in a number of tasks across the lifespan of the activity. Nothing radical about that. It also happens to be the hardest form to assess as we don’t have clear task boundaries to work with. However, we also have provided many opportunities for students to demonstrate their ability and to work together, whether as mentor or mentee, to learn from each other in the process.

For me, the most beautiful construction of a group assessment task is found where groups must work together to solve the problem. Beautiful decomposition is, effectively, not a decomposition process but an identification strategy that can pinpoint key tasks while recognising that they cannot be totally decoupled without subverting the group work approach.

But this introduces grading problems. A fluid approach to task allocation can quickly blur neat allocation lines, especially if someone occupies a role that has less visible outputs than another. Does someone get equal recognition for driving ideas, facilitating, the (often dull) admin work or do you have to be on the production side to be seen as valuable?

I know some of you have just come down heavily on one side or the other reading that last line. That’s why we need to choose assessment carefully here.

If you want effective group work, you need an effective group. They have to trust each other, they have to work to individual strengths, and they must be working towards a common goal which is the goal of the task, not a grading goal.

I’m deep in opinion territory now but I’ve always wondered how many student groups fall apart because we jam together people who just want a pass with people who would kill a baby deer for a high distinction. How do these people find common ground, common values, or the ability to build a mutual trust relationship?

Why do people who just want to go out and practice have to raise themselves to the standards of a group of students who want to get academic honours? Why should academic honours students have to drop their standards to those of people who are happy to scrape by?

We can evaluate group work but we don’t have to get caught up on grading it. The ability to work in a group is a really useful skill. It’s heavily used in my industry and I support it being used as part of teaching but we are working against most of the things we know about the construction of useful groups by assigning grades for knowledge and skill elements that are strongly linked into the group work competency.

Look at how teams work. Encourage them to work together. Provide escape valves, real tasks, things so complex that it’s a rare person who could do it by themselves. Evaluate people, provide feedback, build those teams.

I keep coming back to the same point. When so many students dislike group work but many of them start to enjoy it later in life, we must be doing something wrong. Random groups? They’re still there. Tight deadlines? Complex tasks? Insufficient instructions? They’re all still there. What matters to people is being treated fairly, being recognised and respected, and having the freedom to act in a way to make a contribution. Administrative oversight, hierarchical relationships and arbitrary assessment sap the will, undermine morale and impair creativity.

If your group task can be decomposed badly, it most likely will be. If it’s a small enough task that one keen person could do it, one keen person probably will because the others won’t have enough of a task to do and, unless they’re all highly motivated, it won’t be done. If a group of people who don’t know each other also don’t have a reason to talk to each other? They won’t. They might show up in the same place if you can trigger a bribe reaction with marks but they won’t actually work together that well.

The will to work together has to be fostered. It has to be genuine. That’s how good things get done by teams.

Valuable tasks make up for poor motivation. Working with a group helps to practise and develop your time management. Combine this with a feeling of achievement and there’s some powerful intrinsic motivation there.

And that’s the fuel that gets complex tasks done.


Aesthetics of group work

What are the characteristics of group work and how can we define these in terms that allow us to form a model of beauty about them? We know what most people want from their group members. They want them to be:

  1. Honest. They do what they say and they only claim what they do. They’re fair in their dealings with others.
  2. Dependable. They actually do all of what they say they’re going to do.
  3. Hard-working. They take a ‘reasonable’ time to get things done.
  4. Able to contribute a useful skill.
  5. A communicator. They let the group know what’s going on.
  6. Positive, possibly even optimistic.

A number of these are already included in the Socratic principles of goodness and truth. Truth, in the sense of being honest and transparent, covers 1, 2 and possibly even 5. Goodness, in the sense that what we set out to do is what we do (and this leads to beauty), covers 3 and 4, and I think we can stretch it to 6.

But what about the aesthetics of the group itself? What does a beautiful group look like? Let’s ignore the tasks we often use in group environments and talk about a generic group. A group should have at least some of these characteristics:

  1. Common goals.
  2. Participation from every member.
  3. A focus on what people do rather than who they are.
  4. A focus on what happened rather than what people intended.
  5. The ability to discuss and handle difference.
  6. A respectful environment with some boundaries.
  7. The capability to work beyond authoritarianism.
  8. An accommodation of difference, while understanding that this may be temporary.
  9. The awareness that what group members want is not always what they get.
  10. The realisation that hidden conflict can poison a group.

Note how few of these are actually related to the task itself. In fact, of all of the things I’ve listed, none of the group competencies has anything at all to do with a task, and we can measure and assess these directly by observation and by peer report.

How many of these are refined by looking at some arbitrary discipline artefact? If anything, by forcing students to work together on a task ‘for their own good’, are we in direct violation of number 7, the capability of a group to work beyond strict hierarchies?


“I’m carrying my whole team here!”

I’ve worked in hierarchical groups in the Army. The Army’s structure exists for a very specific reason: soldiers die in war. Roles and relationships are strictly codified to drive skill and knowledge training and to ensure smooth interoperation with a minimum of acclimatisation time. I think we can be bold and state that such an approach is not required for third- or fourth-year computer programming, even at the better colleges.

I am not saying that we cannot evaluate group work, nor am I saying that I don’t believe such training to be valuable for students entering the workforce. I just don’t happen to accept that mediating the value of a student’s skills and knowledge through their ability to carry out group competencies is either fair or honest. Item 9, where group members may have to adopt a role that they have identified as not optimal, is grossly unfair when final marks depend upon how the group work channel mediates the perception of your contribution.

There is a vast amount of excellent group work analysis and support being carried out right now, in many places. The problem occurs when we try to turn this into a mark that is re-contextualised into the knowledge frame. Your ability to work in groups is a competency and should be clearly identified as such. It may even be a competency that you need to display in order to receive industry-recognised accreditation. No problems with that.

The hallmarks of traditional student group work are resentment at having to do it, fear that either their own contributions won’t be recognised or someone else’s will dominate, and a deep-seated desire to get the process over with.

Some tasks are better suited to group solution. Why don’t we change our evaluation mechanisms to give students the freedom to explore the advantages of the group without the repercussions that we currently have in place? I can provide detailed evaluation to a student on their group role and tell a lot about the team. A student’s inability to work with a randomly selected team on a fake project with artificial timelines doesn’t say anything that I would be happy to allocate a failing grade to. It is, however, an excellent opportunity for discussion and learning, assuming I can get beyond the tyranny of the grade to say it.


Challenge accepted: beautiful groupwork

You knew it was coming. The biggest challenge of any assessment model: how do we handle group-based assessment?


Come out! We know that you didn’t hand it in on time!

There’s a joke that says a lot about how students feel when they’re asked to do group work:

When I die I want my group project members to lower me into my grave so they can let me down one more time.

Everyone has horror stories about group work and they tend to fall into these patterns:

  1. Group members X and Y didn’t do enough of the work.
  2. I did all of the work.
  3. We all got the same mark but we didn’t do the same work.
  4. Person X got more than I did and I did more.
  5. Person X never even showed up and they still passed!
  6. We got it all together but Person X handed it in late.
  7. Person W said that he/she would do task T but never did and I ended up having to do it.

Let’s consolidate these. People are concerned about a fair division of work and fair recognition of effort, especially where this falls into an allocation of grades. (Point 6 only matters if there are late penalties or opportunities lost by not submitting in time.)

This is totally reasonable! If someone is getting recognition for doing a task then let’s make sure it’s the right person and that everyone who contributed gets a guernsey. (Australian football reference to being a recognised team member.)

How do we make group work beautiful? First, we have to define the aesthetics of group work: which characteristics define the activity? Then we maximise those, as we have done before, to find beauty. But in order for the activity to be both good and true, it has to achieve the goals that define it and we have to be open about what we are doing. Let’s start, even before the aesthetics, and ask about group work itself.

What is the point of group work? This varies by discipline but, usually, we take a task that is too large or complex for one person to achieve in the time allowed and that mimics (or is) a task you’d expect graduates to perform. This task is then attacked through some sort of decomposition into smaller pieces, many of which are dependent in a strict order, and these are assigned to group members. By doing this, we usually claim to be providing an authentic workplace or task-focused assignment.

The problem that arises, for me, is when we try to work out how we measure the success of such a group activity. Being able to function in a group has a lot of related theory (psychological, behavioural, and sociological, at least) but we often don’t teach that. We take a discipline task that we believe can be decomposed effectively and we then expect students to carve it up. The actual group dynamics may feature in the assessment, but we often measure the outputs associated with the task to determine how effective group formation and management were. However, the discipline task has a skill and knowledge dimension, while the group activity elements have a competency focus. What’s more problematic is that unsuccessful group work can overshadow task achievement and lead to a discounting of skill and knowledge success, through mechanisms that are associated but not necessarily correlated.

Going back to competency-based assessment, we assess competency by carrying out direct observation, indirect measures and through professional reports and references. Our group members’ reports on us (and our reports on them) function in the latter area and are useful sources of feedback, identifying group and individual perceptions as well as work progress. But are these inherently markable? We spend a lot of time trying to balance peer feedback, minimise bullying, minimise over-claiming, and get a realistic view of the group through such mechanisms but adding marks to a task does not make it more cognitively beneficial. We know that.

For me, the problem with most group work assessment is that we are looking at the output of the task and competency based artefacts associated with the group and jamming them together as if they mean something.

Much as I argue against late penalties changing the grade you received, which formed a temporal market for knowledge, I’m going to argue against trying to assess group work through marking a final product and then dividing those grades based on reported contributions.

We are measuring different things. You cannot just add red to melon and divide it by four to get a number and, yet, we are combining different areas, with different intentions, and dragging it into one grade that is more likely to foster resentment and negative association with the task. I know that people are making this work, at least to an extent, and that a lot of great work is being done to address this but I wonder if we can channel all of the energy spent in making it work into getting more amazing things done?

Just about every student I’ve spoken to hates group work. Let’s talk about how we can fix that.


More on the #ATAR, @birmo looks to the Higher Education Standards Panel

The Federal Education Minister, Senator Simon Birmingham, appears to be as concerned over the disconnect between the Australian Tertiary Admission Rank (ATAR) and university entry as I am. In his own words, students must have “a clear understanding of what they need to do to get into their course of choice and realising what will be expected of them through their further study.”

This article covers some of the issues already raised over transparency and having a system that people work with rather than around.

“While universities determine their own admission requirements, exploring greater transparency measures will ensure that Australian students are provided real information on what they need to do to be admitted to a course at a particular institution and universities are held to account for their public entry requirements,” Senator Birmingham said.

The ATAR is supposed to ensure that students are ready for university, but it’s increasingly obvious that this role is not supported or validated through the current system for a large number of students. I look forward to seeing what comes from the standards panel and I hope that it’s to everyone’s benefit, not just a rejigging of a number that was probably never all that representative to start with.


Senator Birmingham, Minister Pyne, Professor Bebbington (VC of Adelaide) and A/Prof Katrina Falkner with one of the Bright Spark participants.

Perhaps I should confess that I would like a system where any student could get into University (or VET or TAFE or whatever tertiary program they want to) but that we build our preparatory and educational systems to support this happening, rather than just bringing people in to watch them fail. Oh, but a boy can dream.


It’s not just GPAs

If you’re watching the Australian media on higher education, you’ll have seen a lot of discussion regarding the validity of the Australian Tertiary Admission Rank (ATAR) as a measure of a student’s future performance and as an accurate summary of the previous years of education.


When you are being weighed in the balance, you probably want to know a lot about the scale and the measures.

This article, talking about students being admitted below the cut-offs, contains a lot of discussion of the issue. Not all of the discussion is equally valuable, in my opinion: when the question is the validity of the measure, being concerned about ‘standards slipping’ when a lower number is used isn’t that relevant. The interesting parts of the discussion are which mechanisms we should be using and how to make them transparent so that all students are on a level playing field.

The fact is that students are being admitted to, and passing, courses that have barriers in place which should clearly indicate their chances of success. Yet students are being admitted via other pathways, using additional measures such as portfolios, and this makes a bit of a mockery of the apparent simplicity of the ATAR system.

My own analysis of student ATAR versus GPA is revelatory: the mapping is a very noisy correlation and, apart from the very highest ATARs, we see people succeed or fail in ways that do not match their representative ATAR. Yes, there are rough ‘buckets’, but at a granularity of fewer than five buckets rather than the thousand or so we’re pretending to have.

“Reducing six years of education to a single ranking is simplistic, let’s have a constructive debate about what could replace the ATAR alone as a fairer, more comprehensive and contextual measure of academic potential”.

Iain Martin, from this linked opinion piece.

I couldn’t agree more!


Dances with GPAs


The trick to dancing with dragons is to never lose your grip on the tail.

If we are going to try and summarise a complicated, long-term process with a single number, and I don’t see such shortcuts going away anytime soon, then it helps to know:

  • Exactly what the number represents.
  • How it can be used.
  • What the processes are that go into its construction.

We have conventions as to what things mean but, when we want to be precise, we have to be careful about our definition and our usage of the final value. As a simple example, one thing that often surprises people who are new to numerical analysis is that there is more than one way of calculating the average value of a group of numbers.

While average in colloquial language would usually mean that we take the sum of all of the numbers and divide it by their count, this is more formally referred to as the arithmetic mean. What we usually want from the average is some indication of what the typical value for this group would be. If you weigh ten bags of wheat and the average weight is 10 kilograms, then that’s what many people would expect the weight to be for future bags, unless there was clear early evidence of high variation (some 500 grams, some 20 kilograms, for example).

But the mean is only one way to measure central tendency in a group of numbers. We can also measure the median, the number that separates the highest half of the data from the lowest, or the mode, the value that is the most frequently occurring value in the group.

(This doesn’t even get into the situation where we decide to aggregate the values in a different way.)

If you’ve got ten bags of wheat and nine have 10 kilograms in there, but one has only 5 kilograms, which of these ways of calculating the average is the one you want? The mode is 10kg but the mean is 9.5kg. If you tried to distribute the bags based on the expectation that everyone gets 9.5, you’re going to make nine people very happy and one person unhappy.
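The wheat-bag example can be checked directly with Python’s standard `statistics` module (a sketch; the bag weights are the ones from the example above):

```python
from statistics import mean, median, mode

# Ten bags of wheat: nine weigh 10 kg, one weighs only 5 kg.
weights = [10, 10, 10, 10, 10, 10, 10, 10, 10, 5]

print(mean(weights))    # 9.5 -- dragged down by the single light bag
print(median(weights))  # 10  -- the value separating the top half from the bottom
print(mode(weights))    # 10  -- the most frequently occurring value
```

All three are “averages” in some sense, but only the mode and median match what you would get from a typical bag.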

Most Grade Point Average calculations are based on a simple arithmetic mean of all available grades, with points allocated from 0 to an upper bound based on the grade performance. As a student adds more courses, these contributions are added to the calculation.

In yesterday’s post, I mused on letting students control which grades go into a GPA calculation and, to explore that, I now have to explain what I mean and why that would change things.

As it stands, because a GPA is an average across all courses, any lower grades will permanently drop the GPA contribution of any higher grades. If a student gets a 7 (A+ or High Distinction) for 71 of her courses and then a single 4 (a Passing grade) for one, her GPA will be roughly 6.96. It can never return to 7. The clear performance band of this student is at the highest level, given that just under 99% of her marks are at the highest level, yet the inclusion of all grades means that a single underperformance, for whatever reason, in three years has cost her standing with those people who care about this figure.
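A quick sketch of that arithmetic, assuming the simple arithmetic-mean GPA on the 0–7 scale described above:

```python
from statistics import mean

# 71 High Distinctions (7 points) and a single Pass (4 points).
grades = [7] * 71 + [4]

gpa = mean(grades)  # (71 * 7 + 4) / 72
print(round(gpa, 2))  # 6.96 -- one Pass drags the GPA below 7, permanently
```

However many further 7s are appended to `grades`, the mean can only approach 7 asymptotically; it can never reach it again.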

My partner and I discussed some possible approaches to GPA that would be better and, by better, we mean approaches that encourage students to improve, that clearly show what the GPA figure means, and that are much fairer to the student. There are too many external factors contributing to resilience and high performance for me to be 100% comfortable with the questionable representation provided by the GPA.

Before we even think about student control over what is presented, we can easily think of several ways to make a GPA reflect what you have achieved, rather than what you have survived.

  1. We could only count a percentage of the courses for each student. Even having 90% counted means that students who stumble a little once or twice do not have this permanently etched into a dragging grade.
  2. We could allow a future attempt at a course with an improved grade to replace the previous one. Before we get too caught up in the possibility of ‘gaming’, remember that students would have to pay for this (even if payment is delayed) in most systems and it will add years to their degree. If a student can reach achievement level X in a course then it’s up to us to make sure that the grade does correspond to the achievement level!
  3. We could only count passes. Given that a student has to assemble sufficient passing grades to be awarded a degree, why then would we include the courses that do not count in a calculation of GPA?
  4. We could use the mode and report the most common mark the student receives.
  5. We could do away with it totally. (Not going to happen any time soon.)
  6. We could pair the GPA with a statistical accompaniment that tells the viewer how indicative it is.

Options 1 and 2 are fairly straightforward. Option 3 is interesting because it compresses the measurement band to a range of (in my system) 4-7 and this implicitly recognises that GPA measures for students who graduate are more likely to be in this tighter range: we don’t actually have the degree of separation that we’d assume from a range of 0-7. Option 4 is an interesting way to think about the problem: which grade is the student most likely to achieve, across everything? Option 5 is there for completeness but that’s another post.
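Options 1, 3 and 4 are simple enough to sketch in a few lines. This is illustrative only: the function names, the 0–7 scale and the pass mark of 4 are my assumptions, not an institutional standard.

```python
from statistics import mean, mode

def gpa_top_fraction(grades, fraction=0.9):
    """Option 1: count only the best `fraction` of a student's grades."""
    keep = max(1, round(len(grades) * fraction))
    return mean(sorted(grades, reverse=True)[:keep])

def gpa_passes_only(grades, pass_mark=4):
    """Option 3: count only passing grades (the 4-7 band)."""
    passes = [g for g in grades if g >= pass_mark]
    return mean(passes) if passes else 0.0

def gpa_modal(grades):
    """Option 4: report the most common grade the student receives."""
    return mode(grades)

grades = [7, 7, 7, 6, 7, 2, 7]  # one failure among strong results

print(round(mean(grades), 2))  # 6.14 -- the standard arithmetic mean
print(round(gpa_top_fraction(grades), 2))
print(round(gpa_passes_only(grades), 2))
print(gpa_modal(grades))
```

Each variant answers the single failure differently: the top-fraction and passes-only figures discard it, while the modal GPA simply reports the grade this student most often achieves.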

Option 6 introduces the idea that we stop treating the GPA as a bare number and instead carefully and accurately contextualise it. A student who receives all high distinctions in first semester still has a number of known hurdles to get over. The GPA of 7 that this student holds now is not as clear an indicator of facility with the academic system as a GPA of 7 at the end of a degree, whatever other GPA adjustment systems are in play.

More evidence makes it clearer what is happening. If we can accompany a GPA (or similar measure) with evidence, then we make the process apparent and we make the number mean something. It also allows us to let students control what goes into their calculation, from the grades that they have, because a clear statement of the relevance of the resulting measure can travel with it.

But this doesn’t have to be a method of avoidance; it can be a useful focusing device. If a student did really well in, say, Software Engineering but struggled with an earlier, unrelated stream, why can’t we construct a GPA for Software Engineering that clearly states its area of relevance and how much information it is based on? Isn’t that what employers and people interested in SE actually want to know?
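A focused GPA like this pairs naturally with option 6: the number can carry its own statement of how much information it is based on. A hypothetical sketch, where the tagging scheme and transcript data are entirely mine:

```python
from statistics import mean

def area_gpa(transcript, area):
    # Average only the courses tagged with the given area, and report
    # how many courses inform the number (its degree of information).
    relevant = [grade for tags, grade in transcript if area in tags]
    return (mean(relevant), len(relevant)) if relevant else (None, 0)

# Each entry: (set of area tags, grade) -- hypothetical transcript.
transcript = [
    ({"SE"}, 7), ({"SE"}, 6), ({"theory"}, 3), ({"SE", "project"}, 7),
]
gpa, n = area_gpa(transcript, "SE")
print(f"SE GPA {gpa:.2f} from {n} courses")  # the theory grade never enters
```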

Handing over an academic transcript seems to allow anyone to do this but human cognitive biases are powerful, subtle and pervasive. It is harder for most humans to recognise positive progress in the areas that they are interested in, if there is evidence of less stellar performance elsewhere. I cite my usual non-academic example: Everyone thought Anthony La Paglia’s American accent was too fake until he stopped telling people he was Australian.

If we have to use numbers like this, then let us think carefully about what they mean and, if they don’t mean that much, then let’s either get rid of them or make them meaningful. These should, at a fundamental level, be useful to the students first, us second.


Brief reflection on a changing world

Like most published academics, I regularly receive invitations to propose books or book chapters from publishers. Today, one of the larger groups contacted me and mentioned that they would also be interested in any proposals for a video lecture sequence.

And so the world changes.


Something something radio star.


What do we want? Passing average or competency always?

I’m at the Australasian Computer Science Week at the moment and I’m dividing my time between attending amazing talks, asking difficult questions, catching up with friends and colleagues and doing my own usual work in the cracks.  I’ve talked to a lot of people about my ideas on assessment (and beauty) and, as always, the responses have been thoughtful, challenging and helpful.

I think I know what the basis of my problem with assessment is, taking into account all of the roles that it can take. In an earlier post, I discussed Wolff’s classification of assessment tasks into criticism, evaluation and ranking. I’ve also made earlier (grumpy) notes about ranking systems and their arbitrary nature. One of the interesting talks I attended yesterday discussed the fragility and questionable accuracy of post-University exit surveys, which are used extensively in formal and informal rankings of Universities, yet don’t actually seem to meet many of the statistical or sensible guidelines for efficacy that we already have.

But let’s put aside ranking for a moment and return to criticism and evaluation. I’ve already argued (successfully I hope) for a separation of feedback and grades from the criticism perspective. While they are often tied to each other, they can be separated and the feedback can still be useful. Now let’s focus on evaluation.

Remind me why we’re evaluating our students? Well, we’re looking to see if they can perform the task, apply the skill or knowledge, and reach some defined standard. So we’re evaluating our students to guide their learning. We’re also evaluating our students to indirectly measure the efficacy of our learning environment and us as educators. (Otherwise, why is it that there are ‘triggers’ in grading patterns to bring more scrutiny on a course if everyone fails?) We’re also, often accidentally, carrying out an assessment of the innate success of each class and socio-economic grouping present in our class, among other things, but let’s drill down to evaluating the student and evaluating the learning environment. Time for another thought experiment.

Thought Experiment 2

There are twenty tasks aligned with a particular learning outcome. It’s an important outcome and we evaluate it in different ways, but the core knowledge or skill is the same. Each of these tasks can receive a ‘grade’ of 0, 0.5 or 1: 0 means unsuccessful, 0.5 acceptable, 1 excellent. Student A attempts all twenty tasks and is acceptable in 19, unsuccessful in 1. Student B attempts the first 10 tasks, receives excellent in all of them, and stops. Student C sets up a pattern of excellent, unsuccessful, excellent, unsuccessful, and so on, to receive 10 “excellent”s and 10 “unsuccessful”s. When we form an aggregate grade, A receives 47.5%, B receives 50% and C also receives 50%. Which of these students is the most likely to successfully complete the task?
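The three aggregates can be checked in a few lines. The 0/0.5/1 encoding is from the thought experiment; treating unattempted tasks as zero is my assumption about how the aggregate is formed:

```python
# Grades per task: 0 = unsuccessful, 0.5 = acceptable, 1 = excellent.
A = [0.5] * 19 + [0.0]      # acceptable in 19 of 20, unsuccessful in 1
B = [1.0] * 10              # excellent in 10, then stops
C = [1.0, 0.0] * 10         # alternating excellent / unsuccessful

def aggregate(grades, total_tasks=20):
    # Unattempted tasks contribute nothing to the aggregate.
    return sum(grades) * 100 / total_tasks

print(aggregate(A), aggregate(B), aggregate(C))  # 47.5 50.0 50.0
```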

This framing allows us to look at the evaluation of the student in a meaningful way. “Who will pass the course?” is not the question we should be asking; it’s “Who will be able to reliably demonstrate mastery of the skills or knowledge that we are imparting?” Passing the course has a naturally discrete focus: do well enough on n assignments and m exams and you pass. Continual demonstration of mastery is a different goal. This framing also allows us to examine the learning environment because, without looking at the design, I can’t tell you whether B and C’s behaviour is problematic or not.


A has undertaken the most tasks to an acceptable level, but an artefact of grading (or bad luck) has dropped the mark below 50%, which would be a fail (aggregate less than acceptable) in many systems. B has performed excellently on every task attempted but, being aware of the marking scheme, can strategically optimise and walk away. (Many students who perform at this level wouldn’t, I’m aware, but we’re looking at the implications.) C has a troublesome pattern that produces the same outcome as B but with half the success rate.

Before we answer the original question (which is most likely to succeed), I can nominate C as the most likely to struggle because C has the most “unsuccessful”s. From a simple probabilistic argument, 10/20 success is worse than 19/20. It’s a bit trickier comparing 10/10 and 10/20 (because of confidence intervals), but 10/20 has an Adjusted Wald range of +/- 20% and 10/10 is -14%, so the highest possible ‘real’ measure for C is 14/20 and the lowest possible ‘real’ measure for B is (scaled) 15/20. They don’t overlap, so we can say that B appears to be more successful than C as well.
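The interval comparison can be reproduced with the Agresti-Coull “Adjusted Wald” interval. This is a sketch using a two-sided 95% interval; the exact bounds depend on the z value and the variant of the adjustment, so they may differ a little from the rounded figures quoted above:

```python
from math import sqrt

def adjusted_wald(successes, n, z=1.96):
    # Add z^2/2 pseudo-successes and z^2/2 pseudo-failures, then
    # compute an ordinary Wald interval on the adjusted proportion.
    n_adj = n + z * z
    p_adj = (successes + z * z / 2) / n_adj
    margin = z * sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

for label, k, n in [("A", 19, 20), ("B", 10, 10), ("C", 10, 20)]:
    lo, hi = adjusted_wald(k, n)
    print(f"{label}: {k}/{n} -> [{lo:.3f}, {hi:.3f}]")
```

The 10/20 case comes out at roughly +/- 20%, as quoted, while the small-n, all-success cases (B) show why intervals at the extremes are awkward: the raw upper bound exceeds 1 and has to be clipped.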

From a learning design perspective, do our evaluation artefacts have an implicit design that explains C’s pattern? Is there a difference we’re not seeing? Setting aside any ranking of who is most likely to pass, C’s pattern is so unusual (high success combined with a lack of overall progress) that we learn something immediately from it, whether that’s that C is struggling or that we need to review mechanisms we thought were equivalent!

But who is more likely to succeed out of A and B? 19/20 and 10/10 are barely distinguishable in statistical terms! The question for us now is how many evaluations of a given skill or piece of knowledge are required for us to be confident of competence. This totally breaks the discrete model of cramming for exams and focusing on assignments, because all of our science is built on the notion that evidence is accumulated through observation and analysis of what occurred, in order to construct models that predict future behaviour. In this case, our goal is to see if our students are competent.

I can never be 100% sure that my students will be able to perform a task but what is the level I’m happy with? How many times do I have to evaluate them at a skill so that I can say that x successes in y attempts constitutes a reliable outcome?

If we say that a student has to reliably succeed 90% of the time, we face the problem that just testing them ten times isn’t enough for us to be sure that they’re hitting 90%.

But the amount of evidence we need in order to be confident is quite daunting. Looking at the statistics, if we provide a student with 150 opportunities to demonstrate knowledge and they succeed 143 times, then it is very likely that their real success level is at least 90%.
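Using the same Adjusted Wald lower bound (a sketch, two-sided 95%; other interval choices will shift these numbers slightly), the 150-attempt figure checks out, and it is easy to see why ten attempts cannot establish a 90% success rate:

```python
from math import sqrt

def wald_lower(successes, n, z=1.96):
    # Lower bound of the Agresti-Coull adjusted Wald 95% interval.
    n_adj = n + z * z
    p_adj = (successes + z * z / 2) / n_adj
    return p_adj - z * sqrt(p_adj * (1 - p_adj) / n_adj)

print(wald_lower(143, 150))  # ~0.905: plausibly at least 90%
print(wald_lower(9, 10))     # ~0.574: nowhere near establishing 90%
```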

If we say that competency is measured by a success rate greater than 75%, then a student who achieves 10/10 has immediately met that level, but even succeeding at 9/9 doesn’t.

What this tells us (and reminds us) is that our learning environment design is incredibly important and it must start from a clear articulation of what success actually means, what our goals are and how we will know when our students have reached that point.

There is a grade separation between A and B, but it’s artificial. I noted that it is hard to distinguish A and B statistically, but there is one important difference: the lower bounds of their confidence intervals. A’s is less than 75%; B’s is slightly above.

Now we have to deal with the fact that A and B were both competent (if not identical) for the first ten tasks, and A had actually demonstrated more than B right up until the failed 20th task. This has enormous implications for how we structure evaluation, how many successful repetitions define success, and how many ‘failures’ we can tolerate while still saying that A and B are competent.

Confused? I hope not but I hope that this is making you think about evaluation in ways that you may not have done so before.